Speaking on a panel at the World Economic Forum in Geneva, Microsoft chief economist Michael Schwarz said that without clear rules limiting the use of artificial intelligence technologies, they will inevitably harm humanity. In drafting such rules, however, it is important not to go too far; otherwise society will not reap enough of AI's benefits.
Microsoft has invested heavily in OpenAI, the startup behind ChatGPT, the chatbot that has surged in popularity over the past year. Microsoft has embedded the AI system in its own Bing search engine, pointing to wide distribution of such chatbots. This week, US Vice President Kamala Harris will meet with executives from Microsoft, Alphabet and OpenAI to discuss ways to mitigate the risks associated with artificial intelligence technologies.
Microsoft is already working on rules that would define the scope of safe use of artificial intelligence. “The principle should be simple: the benefit to society should outweigh the potential harm,” Schwarz explained. According to the Microsoft economist, AI will inevitably fall into the hands of bad actors and cause real harm. Spam and election meddling are just some of the uses of AI that could harm humanity.
Microsoft warns authorities against directly regulating the datasets used to train artificial intelligence systems. Such legislative intervention would have “devastating consequences” for the entire industry, Schwarz said.
At the same time, AI will radically change working conditions in many industries over the long term, Microsoft's chief economist emphasizes: “I like to say that AI changes nothing in the short run, but it changes everything in the long run.” In his view, this is true of any man-made technology. According to experts speaking at the WEF, the global job market will be hit hard by artificial intelligence, which is expected to seriously affect more than a quarter of existing jobs.