The use of artificial intelligence (AI) has skyrocketed across almost every industry vertical, and cybersecurity is no different. On trade show floors, every cyber solutions provider seems to be offering some sort of AI solution for both companies and individuals.
Recently, ICS Pulse sat down with Lesley Carhart to discuss the proliferation of AI in cybersecurity. Carhart is the director of ICS cybersecurity incident response at Dragos, where they lead incident response and proactively hunt for threats in customer ICS environments. They’re retired from the U.S. Air Force Reserves and are an in-demand speaker and dedicated teacher.
“AI is kind of this ambiguous, imposing buzzword right now that’s being thrown around everywhere in sales and in technology. Really, where you see it today is in our tooling for cybersecurity. First of all, in terms of machine learning and algorithmic detection of bad stuff,” said Carhart. “Machine learning usually involves big data and a lot of mathematics to get to a result, to figure out what’s most common, least common, what stands out statistically, the things that are outliers, because those can be interesting to detect bad stuff.”
To put this in layman’s terms, many of the touted “AI” solutions aren’t AI at all: They’re machine learning, or even just automation. According to Carhart, it’s using computers to detect “computer stuff” at a scale larger than human brains can process quickly.
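The statistical outlier detection Carhart describes can be illustrated with a minimal sketch. This is not code from any vendor’s product; the data, function name and threshold are hypothetical, shown only to make the idea concrete: compute what is statistically normal, then flag what deviates from it.

```python
# Minimal sketch of statistical outlier detection, the kind of
# machine-learning building block Carhart describes: flag events
# whose values deviate far from the norm. Data are illustrative.

def flag_outliers(values, threshold=3.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean (a simple z-score test)."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    if std == 0:  # all values identical: nothing stands out
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mean) / std > threshold]

# Hypothetical daily login counts per host; one host is anomalous.
logins = [12, 15, 11, 14, 13, 12, 16, 250, 14, 13]
print(flag_outliers(logins, threshold=2.0))  # index of the 250-login host
```

Real security tooling works on far larger data sets and richer features, but the principle is the same: the rare, statistically surprising events are the interesting ones.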
How threat actors can use AI
With the rise of AI, there has also been a growing concern about how threat actors will use it.
“It’s always been there in the background in the last decade of me working in cybersecurity,” said Carhart.
Cybersecurity deals with a lot of data: detections, behaviors, emails and people using their computers in different ways, all of which generates a ton of forensic information, according to Carhart. It’s difficult as a single human being, or even as a small team of humans, to go through all that information and find what’s abnormal. Computers have always helped process this data, but now adversaries are starting to leverage those tools, too. According to Carhart, that’s expected: It’s a constant cat-and-mouse game between defenders and attackers, and attackers are using the same tools to manipulate and process data.
“We’re also seeing things in terms of, like, ChatGPT being used in attacks because it’s good at generating computer stuff in a very general, functional way,” said Carhart. “What we’ve seen in the industrial space is attackers starting to explore generating malicious configurations for industrial equipment they might not personally understand because that data is out there floating on the internet somewhere.”
Part of this has to do with how similar the ladder logic and process programming are across various systems, which makes it easier for threat actors to do something nefarious. According to Carhart, people are starting to ask, “Can we tell this process to do something nefarious without actually knowing how to program the PLCs in that process ourselves?” Generating functional code from all those bits of data floating around the internet is a very viable approach.
AI and industrial control systems
Even more importantly, AI is making the threat actor’s job easier when attacking industrial control systems.
“I think there’s a confluence of things right now that are lowering the barrier to entry in industrial cyberattacks,” said Carhart.
Part of that is the normalization of processes and technology in operational technology (OT). People are using a lot of the same PLCs from the same manufacturers and making the same errors across verticals, according to Carhart. This has created a more normalized, homogenous attack surface. We also see new sources of information, from both AI and repositories like Shodan on the internet, that are making it easier for people to learn how to conduct attacks against processes.
“The challenge hasn’t been that it’s hard to program a PLC. You can go out and learn how to do that tomorrow. The challenge is that processes are complicated. Generating power is complicated. Distributing power and oil and gas, very complicated,” said Carhart. “Understanding how to tamper with those processes in a way that actually causes an impact is very hard. But, again, with the confluence of things that are happening, it’s getting a little bit easier to do those types of things.”
For the whole interview, be sure to watch the video above.