Stolen data from more than 100,000 ChatGPT chatbot users has surfaced on dark web marketplaces

According to a report by Group-IB, an international cybersecurity company, the data of more than 100,000 users of the ChatGPT chatbot leaked onto the internet and ended up on darknet trading platforms. The leaks were recorded between June 2022 and May 2023.

Image source: The Hacker News

Experts note that the largest volume of compromised data appeared on the dark web in May 2023.

“The number of available logs containing compromised ChatGPT accounts peaked at 26,802 in May 2023. The past year’s highest concentration of ChatGPT credentials for sale was in the Asia-Pacific region,” the company said in its report.

By country, most of the leaked ChatGPT user data during the observation period came from India (12,632 records), Pakistan (9,217 records), and Brazil (6,531 records). Data belonging to chatbot users from Vietnam, Egypt, the USA, France, Morocco, Indonesia, and Bangladesh also appeared on the dark web.

Image source: Group-IB

Group-IB experts note that logs containing compromised user information are being actively sold on darknet trading platforms. Among other things, this data includes the domains and IP addresses of affected users.

The analysis also revealed that the majority of the records (78,348) were stolen using Raccoon, an information stealer offered as malware-as-a-service. It was followed by Vidar, a Windows information stealer, which was used to compromise 12,984 ChatGPT accounts. In third place (6,773 records stolen) was the RedLine trojan, which steals cookies, usernames and passwords, credit cards stored in web browsers, and FTP credentials and files from infected devices.

It should be clear that this isn’t just about stealing personal information. A ChatGPT account can contain both personal and professional content, from company trade secrets that shouldn’t be there to personal diaries.

“Corporate employees can, for example, enter confidential information into a chatbot query or use the bot to optimize proprietary code. Given that ChatGPT’s default configuration retains all queries, this data could give an attacker access to valuable information,” Group-IB said in a comment.

About the author

Robbie Elmers

Robbie Elmers is a staff writer for Tech News Space, covering software, applications and services.
