
Microsoft unleashes generative AI in cybersecurity

As part of its ongoing quest to inject generative AI into all of its products, Microsoft has presented Security Copilot, a new tool that aims to "summarize" and "make sense" of threat intelligence.

In a sparse announcement, Microsoft introduced Security Copilot as a way to correlate attack data and prioritize security incidents. Many tools already do this. But Microsoft argues that Security Copilot, which integrates with its existing portfolio of security products, improves on them thanks to OpenAI's generative AI models, specifically the recently released, text-generating GPT-4.

"Advancing the state of security requires both people and technology: human ingenuity combined with the most advanced tools that help apply human expertise at speed and scale," said Microsoft Security executive vice president Charlie Bell in a canned statement. "With Security Copilot we are building a future where every defender has the tools and technologies they need to make the world a safer place."

Microsoft didn't disclose exactly how Security Copilot incorporates GPT-4, oddly enough. Instead, it highlighted a custom trained model, perhaps based on GPT-4, that powers Security Copilot and "incorporates a growing set of security-specific skills" and "deploys skills and queries" related to cybersecurity.

Microsoft emphasizes that the model is not trained on customer data, addressing a common criticism of services built on language models.

This custom model helps "capture what other approaches might miss," Microsoft says, by answering security-related questions, advising on the best course of action, and summarizing events and processes. But given the tendency of text-generating models to produce convincing falsehoods, it's unclear how effective such a model will be in production.

Microsoft itself admits that the custom Security Copilot model doesn't always get everything right. "AI-generated content may contain errors," the company writes. "As we continue to learn from these interactions, we are adjusting its responses to create more coherent, relevant, and helpful answers."
