Just two weeks after Samsung’s CEO described the company’s vision for devices equipped with artificial intelligence, Samsung’s decision to let employees use ChatGPT in their work proved highly unwise.
Consulted on all sorts of topics, ChatGPT allegedly exposed confidential data that Samsung employees fed into it on at least three occasions. Once stored in ChatGPT’s database, that information becomes part of the “experience” the AI learns from, meaning any user searching for semiconductor information could potentially surface some of Samsung’s sensitive data.
AI chatbots like ChatGPT assimilate the experience of every chat session conducted with users, using that data to improve and expand their knowledge across domains. Everything provided to the chatbot is stored and helps it answer other users’ questions. The problem is that the AI is not good at keeping secrets: Samsung employees made a serious mistake by divulging confidential company data to ChatGPT, because once “absorbed,” that data could remain there indefinitely. Samsung could, of course, request that the data be removed from ChatGPT’s database by contacting OpenAI.
The first leak occurred when an employee in the semiconductor and device solutions department entered source code into ChatGPT, asking the AI to fix a problem in the semiconductor equipment measurement database. The second incident also involved code disclosure: an employee asked ChatGPT to optimize code to make it easier to analyze. The third occurred when a Samsung employee asked ChatGPT to generate a summary of an internal company meeting.
Rather than discontinuing ChatGPT access, Samsung asked employees to be more cautious about the information they disclose to it. The company also capped each ChatGPT prompt at 1,024 bytes, hoping to restrict interactions to short exchanges rather than large blocks of text pasted from internal documents.
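Samsung has not published how the 1,024-byte cap is enforced, but a limit like this would typically be checked before a prompt ever leaves the corporate network. The sketch below is purely illustrative (the function name and limit-checking logic are assumptions, not Samsung's actual implementation); it shows how a client or proxy might reject prompts that exceed the byte budget:

```python
MAX_PROMPT_BYTES = 1024  # the per-entry cap Samsung reportedly imposed


def prompt_allowed(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Return True if the prompt fits within the byte limit.

    Measured in UTF-8 bytes, not characters, so non-ASCII text
    (e.g. Korean) consumes the budget faster than plain English.
    """
    return len(prompt.encode("utf-8")) <= limit


# A short question passes; a pasted source file or meeting
# transcript of several kilobytes would be rejected.
short = "How do I optimize this SQL query?"
long_paste = "x" * 5000  # stand-in for a large pasted document

print(prompt_allowed(short))       # True
print(prompt_allowed(long_paste))  # False
```

Note that measuring bytes rather than characters matters here: in UTF-8, Korean text uses three bytes per syllable block, so 1,024 bytes allows only a few hundred characters of Korean versus roughly a thousand of English.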