Microsoft first brought generative AI to bear in search, then in its productivity apps, and now it is bringing the new technology to its security practice with Security Copilot.
The new offering follows Microsoft’s general strategy of bringing an AI natural language assistant to its main user interfaces. But security may be a dangerous place to deploy AI technology that “hallucinates.”
Security Copilot is powered by OpenAI’s GPT-4 large language model and Microsoft’s own security-focused model, which encodes the company’s proprietary knowledge about security threats. Microsoft says the security model ingests 65 trillion signals from the threat environment daily. The Security Copilot service runs within Microsoft’s Azure cloud.
A security pro might encounter a suspicious-looking signal within the company’s systems, then call on the assistant for help in analyzing it and communicating a potential threat. They can quickly call up support materials, URLs, or code snippets about past exploits and ongoing vulnerabilities and feed them to the assistant, or request information about incidents and alerts from other security tools. Any new information or analysis generated is stored for future investigations.
Microsoft says the security assistant can learn as it encounters more threat information, developing new skills. This, the company says, might help a security analyst detect and respond to threats faster.
Microsoft says high up in its blog post that Security Copilot “doesn’t always get everything right” and that it can generate mistakes. As you might expect, dropping an unpredictable generative AI technology into the exacting environment of a security team could be problematic. Generative AI models are notorious for “hallucinating”—generating fiction in the guise of facts. When a security analyst is responding to a perceived threat such as a DDoS or ransomware attack, every second counts, and they might not have time to sift through an AI-generated threat summary to see if it contains fictional information, says Gartner distinguished VP analyst Avivah Litan.
“I was just on the phone with a major security operator and they said they’re going to push back on using these products until they can be assured that the models are generating accurate information,” she says.
Litan adds that security pros may now need a new class of tools to police the accuracy of the content generated by tools like Security Copilot.
Microsoft says it built into Copilot’s user interface a way for users to give feedback on the assistant’s responses, so that the company can continue working to make the tool more coherent and useful. But security environments may make bad sandboxes, and security people may not have time to help Microsoft conduct R&D on its products. “Microsoft is just using the security domain to advance its plan to put generative AI into all its products,” Litan says.
Microsoft adds that a customer’s proprietary knowledge base of security threats and responses remains with the customer and is not used to train Microsoft’s AI models. The company says Copilot can also integrate with other Microsoft security products, and that in the future it will connect with third-party security products, too.
As AI chatbots evolve, they will be given more access to the “ground truth” information contained in proprietary company databases and AI models. Large language models will likely be used to wrap this kind of data in an easily digestible natural-language layer, deferring to the proprietary knowledge bases for factual information. But as long as they’re allowed to hallucinate within serious business applications, their reliability and usefulness may be limited.