Microsoft Creates GPT-4 AI Assistant for Cybersecurity

You just can’t get away from artificial intelligence at the moment. Microsoft, which recently announced the release of a machine learning assistant for Office apps, has added to its repertoire this week with Security Copilot.

The tool leverages generative AI models to summarize and “make sense” of threat intelligence, which Microsoft hopes will help prevent data breaches.

Automated security protection is nothing new, of course, but Security Copilot has the tantalizing prospect of combining Microsoft's immense cybersecurity tooling with the highly touted AI models produced by OpenAI that threaten to revolutionize the way information is generated.

How does it work?

Microsoft says that Security Copilot feeds security data through GPT-4 to study breaches and identify patterns.

The tech giant didn’t explain exactly how it incorporates GPT-4, which is most often used to generate text (and, occasionally, code), instead highlighting its trained custom model that “incorporates a growing set of security-specific skills” and “deploys skills and queries” related to cybersecurity.

Security Copilot looks like many of the other chatbot interfaces that we have surely all now experimented with in the past few months, but the data it has been trained on relates specifically to cyber threat intelligence.

“We don’t think of this as a chat experience. We really think of it as more of a notebook experience than a freeform chat or general purpose chatbot,” explained Chang Kawaguchi, an AI security architect at Microsoft, in an interview with The Verge.

That means you won’t be able to ask it ridiculous questions or get it to write essays for you. Instead, you might ask it “what are all the security incidents in my enterprise?” or ask it to summarize a particular vulnerability.

You can also feed it files, URLs or code snippets for analysis – or ask for incident and alert information from other security tools, Kawaguchi explains. Plus, all prompts and responses are saved, so there’s a full audit trail for investigators.
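Microsoft hasn't published how that audit trail is implemented, but the idea — every prompt and every response appended to an immutable log that investigators can replay — is straightforward. Here is a minimal sketch in Python using an append-only JSONL file; the function names and file layout are illustrative assumptions, not Security Copilot's actual API.

```python
import json
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def log_interaction(audit_path: Path, prompt: str, response: str) -> None:
    """Append one prompt/response pair to an append-only JSONL audit log.

    Hypothetical sketch: each record carries a UTC timestamp so the
    trail can be replayed in order during an investigation.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
    }
    with audit_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def read_audit_trail(audit_path: Path) -> list[dict]:
    """Return every logged interaction, oldest first."""
    with audit_path.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f]

# Example: record two analyst queries, then replay the full trail.
audit_file = Path(tempfile.mkdtemp()) / "audit.jsonl"
log_interaction(audit_file, "Summarize CVE-2023-23397.",
                "An Outlook elevation-of-privilege vulnerability...")
log_interaction(audit_file, "List open incidents.",
                "3 incidents are currently open.")
trail = read_audit_trail(audit_file)
print(len(trail))  # 2
```

Appending one JSON object per line (rather than rewriting a single file) means earlier entries are never modified, which is what makes the log useful as evidence.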

Results can be stored in a shared workspace, so colleagues can work on the same threat analysis and investigations. “This is like having individual workspaces for investigators and a shared notebook with the ability to promote things you’re working on,” Kawaguchi adds.

What does it all mean?

The meteoric rise of generative AI – machine learning models that produce content – since the release of ChatGPT in November 2022 has split people into two camps.

There are the technophiles and business leaders who see the ways that tasks can be automated and productivity enhanced, and the skeptics who fear for their jobs and the prospect of content that’s churned out without context or insight.

The stakes are especially high here, because Security Copilot isn’t just designed to produce words. It has the potential to automate security processes in an area that’s already rife with false positives and deliberate misdirection from malicious attackers.

Mistakes could have significant cybersecurity implications and result in familiar knock-on effects, from privacy breaches and regulatory penalties to operational and financial disruption.

Microsoft has attempted to satisfy both sides by emphasizing that the tool is designed to assist in security analysts’ work rather than replace it.

It says that its custom model can “catch what other approaches might miss,” but recognizes that it still needs human intervention.

“We know sometimes these models get things wrong, so we’re offering the ability to make sure we have feedback,” says Kawaguchi.

“I don’t think anyone can guarantee zero hallucinations [a false or misleading security alert], but what we are trying to do through things like exposing sources, providing feedback, and grounding this in the data from your own context is ensuring that it’s possible for folks to understand and validate the data they’re seeing,” he adds.

“In some of these examples there’s no correct answer, so having a probabilistic answer is significantly better for the organization and the individual doing the investigation.”

What happens next?

Security Copilot isn’t Microsoft’s first involvement with machine learning. Microsoft 365 Copilot, which helps users navigate the company’s Office suite – including Word, Excel, and PowerPoint – has been widely applauded, with some speculating that it “will change Office documents forever.”

Its latest tool is more ambitious, and the firm appears committed to the power of generative AI. Microsoft said that it is starting to preview Security Copilot with “a few customers,” but it doesn’t have a date set for rolling out the technology more broadly.

“We’re not yet talking about timeline for general availability,” says Kawaguchi.

“So much of this is about learning and learning responsibly, so we think it’s important to get it to a small group of folks and start that process of learning and to make this the best possible product and make sure we’re delivering it responsibly.”