Microsoft has taught the GPT-4-based Security Copilot system to deal with hacker attacks

AI systems are already used to create graphics, power chatbots, and even control smart homes. Now Microsoft has entrusted AI with one of the most critical areas of modern life – protection against cyber threats. The Security Copilot tool is designed to spot a cyberattack before it becomes obvious and to help eliminate it.

    Image source: Sigmund Avatar/

When a computer's security has been compromised, for whatever reason, Security Copilot is meant to help determine what happened, what to do about it, and how to prevent similar incidents from recurring. The new solution is based on the GPT-4 model and helps enterprise customers deal with threats.

So far, the tool is only available to corporate customers. Behind Security Copilot is the same large language model that underlies apps like Bing Chat – but in this case, a variant specially trained on the materials and technical terminology used by IT security professionals. Microsoft has already connected Copilot to its other security tools, and the company promises that third-party security solutions will be supported over time.

While most GPT-4-based consumer applications were trained on somewhat outdated datasets, Security Copilot receives new information in real time by examining the trillions of threat signals Microsoft collects every day. This is the model's key advantage: Security Copilot can detect hidden indicators before an attack becomes obvious, allowing threats to be identified and eliminated in good time.

At the same time, it has long been known that AI systems such as ChatGPT, Bing Chat, and Google Bard can experience “hallucinations”, in which entirely unreliable information is presented as the basis of an argument. In the security field, this can be a very dangerous phenomenon, and Microsoft has already acknowledged that Security Copilot “doesn’t always get everything right”. Fortunately, the product includes a user feedback mechanism that helps the system provide increasingly relevant answers over time.

So far, Microsoft hasn’t said what might happen when defensive AI collides with malicious AI created to attack users and businesses. In any case, the company says corporate customers can already test Security Copilot with a small proportion of their users. If the experiment proves successful, the tool may eventually help regular users as well.

About the author

Robbie Elmers

Robbie Elmers is a staff writer for Tech News Space, covering software, applications and services.
