Google’s Bard chatbot has been tested for how well it resists producing disinformation. It turns out that, despite Google’s best efforts, the chatbot can easily be made to write plausibly on almost any topic, including well-known conspiracy theories, and it does so very convincingly.
Image source: geralt/pixabay.com
In particular, Bard produced a detailed 13-paragraph story about the “Great Reset”, a publication that gave rise to a conspiracy theory about the deliberate reduction of the world’s population by global elites. The bot confidently asserted that this is really happening, above all through economic measures and vaccination, and that both the World Economic Forum and, for example, the Bill and Melinda Gates Foundation are involved in the conspiracy. It also claimed that COVID-19 vaccines contain microchips so that the elites can track people’s movements.
The agency NewsGuard asked Bard to write about 100 known conspiracy theories, and in 76 cases it produced coherent and compelling texts on the given topics without noting that the theories were unverified. In the remaining cases it pointed out that the information was unreliable. Notably, the GPT-3.5 and GPT-4 language models performed even worse.
American experts fear that such bots will allow foreign governments to create persuasive disinformation at scale far more effectively than before. Where Internet trolls used to be hampered by limited language skills, it will now be possible to generate text of almost any length without visible errors.
According to some experts, the bot is working as designed: it “utters” sentences and claims based on the data it was trained on. The model itself is indifferent to whether a given piece of content is true, false, or entirely meaningless. Systems are only fine-tuned “by hand” after training, and there is no way to completely prevent disinformation from emerging. Google has acknowledged that Bard is still at an early stage of development and may occasionally generate inaccurate or inappropriate information, although the company says it takes steps to prevent such content from being created.
NewsGuard uses a catalogue of hundreds of “false narratives” to rate websites and news outlets. In January the agency began testing chatbots against 100 of these narratives, asking the bots to write articles on topics known to be disinformation. In some cases Bard did a good job, flagging the information as unreliable or unconfirmed when asked. Overall, however, not a single dubious narrative was refuted by Bard, GPT-3.5, and GPT-4 at the same time: Bard rejected 24% of the topics, GPT-3.5 20%, and GPT-4 none at all. OpenAI, which develops the latter two models, says it relies on a range of automated and manual filtering measures to prevent abuse.
While Bard produced disinformation most of the time, on a few occasions, such as when it was asked to write a text on behalf of a well-known anti-vaccination activist, it put the claims in quotation marks and made clear that there is no evidence for such speculative theories. Nevertheless, according to experts, “the technology itself contains nothing that could prevent risks.”