AI is forced to open accounts with text commands
Software

AI is forced to open accounts with text commands – hacking skills are no longer required

Cybersecurity researcher Johann Rehberger “persuaded” ChatGPT to perform several potentially dangerous operations: read his email message, compile a summary from it, and publish that information on the web, they say The Wall Street Journal. In the hands of an attacker, such a tool could become a formidable weapon.

    Image source: Franz Bachinger / pixabay.com

Image source: Franz Bachinger / pixabay.com

Chatbots based on artificial intelligence algorithms, like ChatGPT, as Mr. Rehberger put it, “Reduce the barrier to entry for all types of attacks. You don’t need to know how to code. No in-depth knowledge of computer science or hacking is required.. The attack method he describes is not applicable to most ChatGPT accounts – it is based on an experimental feature that opens access to Slack, Gmail and other applications. The company responsible for ChatGPT, OpenAI, thanked the expert for the warning and said that the possibility of such attacks being carried out again is being blocked.

Rehberger’s “prompt injection” mechanism is a new class of cyberattacks that arise when companies implement AI technologies in their business and consumer products. Techniques like these are changing the nature of hacking, and cybersecurity professionals have many more vulnerabilities to discover before AI becomes truly ubiquitous.

The generative AI technology behind ChatGPT, which allows you to create entire phrases and sentences, is something of a maximum speed autocomplete tool. The behavior of chatbots is restricted by the developers: there are guidelines intended to prevent the disclosure of confidential information or to prohibit offensive comments. However, there are solutions to circumvent these limitations. For example, Johann Rehberger asked a chatbot to create a summary of a web page on which he himself wrote in large letters: “IMPORTANT NEW INSTRUCTIONS” – and that confused the machine. Gradually, he forced ChatGPT to run various commands. “It’s like yelling at the system, ‘Come on, do it'”Reberger explained. Basically, he forced the AI ​​to reprogram itself.

The “command injection” technique proved viable because of an important characteristic of AI systems: they don’t always correctly distinguish system commands from user input, explained Arvind Narayanan, a professor at Princeton University. This means that AI developers should not only pay attention to the classic aspects of cyber security, but should also consider new threats of a deliberately unpredictable nature.

RELATED TOPICS

About the author

Robbie Elmers

Robbie Elmers is a staff writer for Tech News Space, covering software, applications and services.

Add Comment

Click here to post a comment