Artificial intelligence has become one of the defining topics in the tech world in recent months. Not least, the release of GPT-4 has fueled ChatGPT's meteoric rise.
However, with its growing popularity among the general public, the chatbot is also becoming a target for cybercriminals.
In the past twelve months alone, hackers have managed to obtain more than 100,000 sets of credentials, which are now being offered for sale on darknet marketplaces. This is reported by the IT security researchers at Group-IB in a recent analysis.
Infostealer Trojans as a gateway for data theft
The leaked credentials were captured by Trojans belonging to the infostealer family. These include the variants Raccoon, Vidar, and RedLine.
According to the Group-IB analysis, Raccoon captured the most login data: a total of 78,348 accounts were stolen via this Trojan.
According to the security researchers, the data leak is dangerous not only because of the compromised accounts themselves. After all, more and more employees use ChatGPT to streamline their work.
Since the chatbot saves chat histories, including prompts and responses, by default, sensitive company data could also fall into the hands of cybercriminals.
The growing success of ChatGPT is also reflected in the stolen data over time. While only 74 accounts stolen via such Trojans were recorded in June 2022, the number rose continuously from month to month.
The peak so far came in May 2023, when 26,802 ChatGPT accounts were compromised.
Asia as the main target of ChatGPT attackers
Geographically, most of the stolen ChatGPT credentials come from the Asia-Pacific region: of the more than 100,000 leaked accounts, almost 41,000 are from Asia.
In Europe, the damage is at least comparatively limited: around 17,000 accounts from the European region appear in the darknet leak.
Accordingly, we do not assume that your ChatGPT account is in serious danger, especially since your computer would first have to be infected by one of the infostealer Trojans.
Still, like the Group-IB security researchers, we recommend taking certain precautions for your accounts, whether for ChatGPT or any other service.
These include regularly changing your passwords and activating two-factor authentication (2FA), which puts you on the much safer side. A password manager for generating and managing secure login credentials can also help.
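If you are curious what generating such a password looks like in practice, here is a minimal sketch using Python's standard secrets module; the 20-character length and the character set are merely assumptions and can be adapted to a service's password policy.

```python
import secrets
import string

# Character pool: letters, digits, and punctuation (assumed; adjust to the service's rules).
ALPHABET = string.ascii_letters + string.digits + string.punctuation


def generate_password(length: int = 20) -> str:
    """Return a cryptographically strong random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))


if __name__ == "__main__":
    print(generate_password())
```

In everyday use, a password manager does exactly this job for you and, unlike the snippet above, also remembers the result.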
Do you actively use ChatGPT? Which other AI tools are part of your toolkit? And what tips and tricks do you use to make your passwords as secure as possible? Let us know in the comments!