Users are increasingly deceiving ChatGPT into revealing forbidden information

OpenAI’s popular chatbot ChatGPT can be asked any question, but it will not answer all of them, at least not on the first attempt. According to Bloomberg, a request for instructions on picking a lock yields none; instead, the bot replies that such information could be used for illegal purposes and that it has no intention of sharing it. Advanced users, however, can chain together elaborate prompts to bypass these limitations.

Image Credit: ROBIN WORRALL/

So-called “jailbreaks” allow users to bypass the restrictions that creators build into AI systems. Since nearly all modern systems of this kind are restricted from providing objectionable content or instructions for committing illegal acts, plenty of people want to get around those limits. These “hackers,” who operate purely through the power of words, include both IT professionals and hobbyists drawn in by the challenge.

One student created a website where he posts both his own methods for fooling chatbots and similar “recipes” found on Reddit and other resources. A dedicated newsletter covering the field, The Prompt Report, has also appeared and counts thousands of subscribers.

Such activity helps identify the limits of AI use and the vulnerabilities in protection systems that allow restricted information to be extracted. For example, when ChatGPT could not be directly persuaded to give lock-opening instructions, users found a workaround: inviting the AI to play the role of a hero’s evil accomplice, who, as part of the role, spoke at length about the use of master keys and other tools. This is, of course, just one example.

Experts stress that techniques that work on one system may be useless on another: security systems are constantly being improved, and user techniques grow more sophisticated in response. As one expert put it, the activity resembles a video game, where overcoming each limitation is like reaching a new level.

According to experts, such experiments serve as a warning that AI can be used in ways entirely different from what was intended, which makes the “ethical behavior” of these services a matter of great importance. In just a few months ChatGPT has gained millions of users, and today the bot is used for a variety of tasks, from searching for information and doing homework to writing malicious code. People are also already using these tools to handle real-world errands, such as booking tickets or reserving tables at restaurants.

In the future, the range of artificial intelligence applications will be much broader. According to some reports, OpenAI is considering launching a bug bounty program in which white-hat hackers would search for vulnerabilities in the system and receive rewards for doing so.

About the author

Robbie Elmers

Robbie Elmers is a staff writer for Tech News Space, covering software, applications and services.
