The shortcomings of artificial intelligence (AI) tools in the cybersecurity world have drawn a lot of attention. But does the bad press mean that AI isn’t working? Or is AI just getting slammed for failing to meet overinflated expectations?
It’s time to take a hard look at what AI is accomplishing before kicking it to the curb.
Where Cyber AI Is Winning
There has never been a superhero who hasn’t gone to the dark side or fallen off their pedestal. AI is no different. But if you know where AI performs well, you’ll have a better idea of how to test vendors’ AI claims.
“Machine learning [and] AI technologies have been influencing information security for a long time,” says Alexandra Murzina, a machine learning engineer and data scientist at cybersecurity firm Positive Technologies. “Spam detection or preventing fraudulent transactions are just two of many examples of successful AI applications in security today.”
The seasoned security pros we interviewed for this story praised AI for its successes in tasks such as the following (the list is far from exhaustive):
Back-end event processing. AI is performing well here but hasn’t yet been loosed to take care of business on its own. “AI is performing well in back-end processing of security events, allowing for automation and speed of use-case development,” says Doug Saylors, partner and cybersecurity co-lead with global technology research and advisory firm ISG. “However, the linkage between the analytics capability and immediate action controlled solely by AI hasn’t matured enough for wide adoption across industries.”
Super-secret, in-your-face invisible stuff. “AI is playing an integral role in cybersecurity, but that role may be a bit more understated or even invisible than the hype around AI might suggest,” says Fred Cate, professor of law and adjunct professor of informatics and computing at Indiana University.
Cate advises you to look around to spot where AI is operating well but quietly, such as biometrics on mobile phones, catching fraudulent charges on a credit card or fraudulent network log-in attempts, or blocking phishing messages on an email service.
Detecting novel malicious code. “An example metric we have is that file-based classifiers built 34 months ago and without any updates are, on average, able to detect most high-profile malware samples that emerge today,” says Travis Rosiek, chief technology and strategy officer for BluVector, a Comcast-owned cyberthreat detection company. “Imagine what else security teams could do with less emphasis on pushing and validating malware signature updates on a regular basis across a complex enterprise.”
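Part of why feature-based classifiers age better than signatures is that a signature matches one exact artifact, while a model scores general properties of a file. As a purely illustrative sketch (not BluVector's actual classifier), one classic static feature is byte entropy: packed or encrypted payloads tend toward near-maximal entropy, while ordinary text and many benign binaries do not. The `looks_packed` threshold below is an assumption chosen for the example, not a production-tuned value:

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_packed(data: bytes, threshold: float = 7.2) -> bool:
    """Toy heuristic: flag near-maximal entropy as possibly packed/encrypted.

    Real file classifiers combine many such features in a trained model;
    this single-feature cutoff is only meant to show the idea.
    """
    return byte_entropy(data) >= threshold
```

A feature like this keeps working on malware samples that have never been seen before, which is the property the quote above is describing; it also misfires on legitimate compressed data, which is why real systems combine many features.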
Permission management. Permission management is an obstacle to business users and often a vulnerability. “AI shows its efficacy here through several vendor offerings,” says Joel Fulton, CEO of Lucidum, an asset discovery and management platform provider. “When a user attempts an action and is stymied, AI can reason just as a human permission manager might.”
Cyber asset attack surface management (CAASM). These systems identify, track, and monitor all the places in an organization where data is stored, processed, or transmitted. AI can catch and analyze attacks on the fly. This is crucial because “in modern environments, ephemeral cloud assets turn on and off in minutes, work-from-home devices are hidden from view, and data centers are full of dusty corners,” says Rosiek.
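At its core, tracking those ephemeral assets means continuously diffing discovery snapshots: anything that newly appears (or silently vanishes) is worth a look. The sketch below shows that core loop in its simplest form, with hypothetical asset names; commercial CAASM tools layer correlation, enrichment, and ML-based risk ranking on top of it:

```python
def diff_inventory(previous: set[str], current: set[str]) -> tuple[set[str], set[str]]:
    """Compare two asset-discovery snapshots.

    Returns (appeared, disappeared): appeared assets may be unsanctioned or
    ephemeral cloud resources; disappeared assets may have been decommissioned
    without cleanup, or simply fallen out of view.
    """
    return current - previous, previous - current

# Hypothetical snapshots from two discovery scans
yesterday = {"vm-web-01", "vm-db-01", "laptop-ann"}
today = {"vm-web-01", "vm-db-01", "vm-batch-7f3"}

appeared, disappeared = diff_inventory(yesterday, today)
# appeared: the short-lived "vm-batch-7f3"; disappeared: "laptop-ann"
```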
Extended detection and response (XDR). AI is still evolving here, but it’s holding its own. “In what’s being called XDR, AI/ML is just another tool in the toolbox to find anomalies — methods of attack that aren’t caught by traditional defense-in-depth technologies,” says Patrick Orzechowski, vice president and distinguished engineer at managed cybersecurity vendor Deepwatch.
Anything simple, repetitious, and done at a huge scale. Only a fool would profess they can protect Internet of Things (IoT) threat surfaces with grit and a few ordinary tools. “In cybersecurity, this is best reflected in areas such as intrusion detection and network monitoring — it’s fairly safe for administrators to allow AI to discover activity that is an outlier and may be malicious in these cases,” says Sean O’Brien, founder and lead researcher at Privacy Lab at Yale and CSO at privacy-focused chat company Panquake. “Even then, however, I would caution admins to implement manual, human review into their processes.”
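The pattern O'Brien describes — let a model flag outliers, let humans decide — can be illustrated with even the simplest statistical baseline. The sketch below uses a modified z-score built on the median and median absolute deviation (a standard robust-statistics technique, chosen here for illustration, not any vendor's method); hosts and traffic figures are hypothetical, and the flagged hosts are candidates for human review, not automatic blocking:

```python
import statistics

def flag_for_review(samples: dict[str, float], k: float = 5.0) -> list[str]:
    """Flag labels whose modified z-score exceeds k.

    Uses median and MAD rather than mean and standard deviation, so the
    baseline isn't dragged toward the very outliers we want to catch.
    Output is a review queue for a human analyst, not a block list.
    """
    values = list(samples.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:  # all values (nearly) identical: nothing stands out
        return []
    return [label for label, v in samples.items()
            if 0.6745 * abs(v - med) / mad > k]

# Hypothetical bytes-per-minute per host; one host far above its peers
traffic = {"h1": 120, "h2": 130, "h3": 125, "h4": 118, "h5": 122, "h6": 5000}
suspects = flag_for_review(traffic)  # only "h6" lands in the review queue
```

The robust baseline matters at scale: with a plain mean and standard deviation, a single extreme host inflates the deviation enough to hide itself.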
It’s All in the Implementation
In the final analysis, buyers should beware of any cybersecurity product touting “AI inside.” But don’t shy away from AI — every cybersecurity team needs that kind of reach and scale to deal with an ever-expanding attack surface.
“So far AI hasn’t been as much of a game-changer as a game-enhancer. But I wouldn’t at all give up on the promise for a bigger impact in the future,” says Cate.
Just don’t expect AI to work without effort from you and your team. Cyber AI is “very hard,” warns Aaron Sant-Miller, chief data scientist at consulting firm Booz Allen Hamilton, but it is key to building effective defenses.
“It’s very important for organizations to be patient with AI efforts as they identify the required steps to building viable, sustainable, and impactful AI capabilities,” he says. “This will require additional work from cyberteams as both groups work together to identify use cases, refine how AI can be embedded into existing tools, and provide feedback to AI systems as they begin to make detections. Buy-in is critical and continuous participation is essential to creating impactful, operational cyber AI.”