The stratospheric rise of artificial intelligence over the past six months has been a sight to behold. Following the release of ChatGPT in October 2022, a platform that produces content at a whim, ranging from essay answers, blog posts, and sort-of-functional code, practically every major player in the tech industry has thrown their hat into the ring.
Tools have ranged from fellow chatbots to more specialized uses for AI, with one of the most robust being automated tools to help organizations detect cybersecurity threats.
Google is the latest tech firm to release such technology, announcing its Cloud Security AI Workbench at the RSA Conference 2023.
At the event, held in San Francisco’s Moscone Centre, Google unveiled the cybersecurity suite, which is powered by an AI language model called Sec-PaLM.
Adapted from data previously collected by Google to support existing AI programs, such as its ChatGPT rival Bard, Sec-PaLM is “fine-tuned for security use cases.”
It incorporates security intelligence such as software vulnerabilities, malware, threat indicators, and behavioral threat actor profiles.
One of the main benefits of the Cloud Security AI Workbench, Google says, is the breadth of resources at its disposal. It combines a range of new AI-powered tools, such as Mandiant’s Threat Intelligence AI, which will use Sec-PaLM to find, summarize, and respond to security threats.
Mandiant is owned by Google, as is VirusTotal, which will also use Sec-PaLM to help subscribers analyze and explain malicious scripts.
Sec-PaLM will also reportedly help customers of Google’s Cloud cybersecurity service Chronicle to search for security events. The tool aims to use the language skills used in the likes of ChatGPT and Bard to interact with users in a conversational manner.
Meanwhile, users of Google’s Security Command Center AI will get “human-readable” explanations of attack exposure, including affected assets, recommended mitigation strategies and risk summaries for security, compliance, and privacy findings.
Commenting on its Cloud Security AI Workbench, Google’s vice president and general manager, Sunil Potti, said: “While generative AI has recently captured the imagination, Sec-PaLM is based on years of foundational AI research by Google and DeepMind, and the deep expertise of our security teams.
“We have only just begun to realize the power of applying generative AI to security, and we look forward to continuing to leverage this expertise for our customers and drive advancements across the security community.”
Is this the future?
Google’s Cloud Security AI Workbench follows Microsoft’s entry into the cybersecurity machine learning landscape last month. Its Security Copilot works alongside Office apps to summarize and “make sense” of threat intelligence, which Microsoft hopes will help prevent data breaches.
Microsoft said that Security Copilot uses information fed to it through GPT-4 to study data breaches and find patterns.
The tech giant didn’t explain exactly how it incorporates GPT-4, instead highlighting its trained custom model that “incorporates a growing set of security-specific skills” and “deploys skills and queries” related to cybersecurity.
Security Copilot looks like many of the other chatbot interfaces that we have surely all now experimented with in the past few months, but the data that it’s been taught with relates specifically to cyber threat intelligence.
“We don’t think of this as a chat experience. We really think of it as more of a notebook experience than a freeform chat or general purpose chatbot,” explained Chang Kawaguchi, an AI security architect at Microsoft, in an interview with The Verge.
These developments suggest a bright future for a cybersecurity sector that is struggling in its fight against increasingly sophisticated criminal hacking endeavors. But before we crown such tools as the savior of the cybersecurity industry, we must consider how ambitious the proclamations from the likes of Google and Microsoft are.
As IT Governance Consultant William Gamble recently noted, technological solutions are only as useful as the people using them.
Looking at algorithmic predictions specifically, he noted how susceptible people are to misinterpreting its capabilities. “We see it as providing accurate, objective answers,” Gamble wrote, “but model quality choice of algorithm predictions is a matter of probability.”
In other words, we always need experts who can examine and parse the information that AI tools provide.
Further problems
In his article, Gamble also pointed to the difficulties that organizations will face maintaining these systems. “The real question for the implementation of AI is whether it will work in your organization. Even if it does, it may not be popular with employees or customers,” he wrote.
“For example, a recent study of an AI-based clinical decision support program for treatment of diabetics found that only 14% of the clinicians would recommend it. They gave it a score of 11 out of 100.
“Implementation of AI is not like implementing other software. It is not a set-it-and-forget-it process. Trained AI systems need corrective maintenance due to data, model, or concept drift.”
Another issue relates to the inherent security problems of generative AI. The godfather of these platforms, ChatGPT, found itself in regulatory hot water recently, after the Italian data protection watchdog found several problems in the way it uses personal data.
The compliance problems stem from the fact that ChatGPT’s operator, OpenAI, trained its language model on 570GB of data from the Internet, including webpages, books, and other material.
In a strict regulatory sense this isn’t necessarily a problem; any information that’s available online is not within the GDPR’s scope because it’s already considered public data.
But the Italian data protection watchdog, the Garante, noted that OpenAI failed to verify the age of its users, potentially exposing minors to inappropriate content.
The Garante also drew attention to a data breach that ChatGPT suffered on 20 March. A bug caused some users to see others’ chat titles, the first message of active users’ chat history, payment details, and other information of subscribers who were active during a nine-hour window.
Plus, as Kyle Wiggers, a senior reporter at TechCrunch, noted in his report on Google’s Cloud Security AI Workbench, all AI language models make mistakes, no matter how cutting edge they are. In particular, he referenced their susceptibility to attacks such as prompt injections, which can cause them to behave in ways that their creators didn’t intend.
“In truth, generative AI for cybersecurity might turn out to be more hype than anything,” Wiggers concluded, noting the lack of studies on its effectiveness. “We’ll see the results soon enough with any luck, but in the meantime, take Google’s and Microsoft’s claims with a healthy grain of salt.”
Others are less skeptical of AI language models transformational powers – whether that’s in the cybersecurity sector or other industries, such as creative fields – but even the most optimistic individuals must understand that these tools provide incremental rather than transformational improvements.
When used correctly, AI language models can provide enormous value, but it will take time, proper personnel, and training.