Social engineering attacks have been around for a long time, but they used to be relatively easy to detect. When you got a “Nigerian prince” email, it was usually riddled with mistakes, and it was logical to wonder why a Nigerian prince would be emailing you anyway. The social engineering attacks of the future are likely to be much more complicated and much harder to detect, thanks to artificial intelligence (AI).
Recently, the ICS Pulse Podcast talked to Ethan Schmertzler of Dispel about how new advances in AI are impacting industrial cybersecurity. The following has been edited for clarity. To listen to the podcast, click here.
ICS Pulse: Tell us a little bit about your background and what brought you to cybersecurity and industrial cybersecurity.
Ethan Schmertzler: I’ve been running Dispel for the last eight or nine years now. My background before that was as a software developer, working on front-end interfaces, and then before that, in a class of technologies called C4ISR (command, control, communications, computers, intelligence, surveillance and reconnaissance). The question was, “How do you build communication networks that are very, very hard for even a sophisticated machine learning system, even 10 years ago, to be able to locate, identify and find?” That evolved into a technology platform that’s now being used for industrial control systems around the world because those are fundamentally physical assets that we have to keep hidden, even though they’re communicating over the internet.
ICSP: Was cybersecurity something you were always interested in, or is this something you found your way into over the years from software development?
Schmertzler: I’ve always been fascinated by the idea of security controls, though not necessarily the world of industrial control system cybersecurity. I think what drew me into that was that it was really just greenfield. So much money and effort had been spent on information technology (IT) security, and everyone was trying to basically take those IT tools and then smash them into operational technology (OT) environments, and they really don’t work very well. They don’t translate well because OT environments have their own unique circumstances.
What was fascinating about it was taking all the experience we’ve had in IT about how to optimize those things and asking: How do you actually bring that ease of use, the Apple-esque experience, to 30-year-old industrial control systems?
ICSP: Let’s jump into AI. I’m going to start with the big question here: What does the future of AI and cybersecurity look like? What do you think we can expect in the next three to five years?
Schmertzler: The current generation of really popular AI systems that people have seen have been about the creation of really convincing text. So I think we’re going to see that come out both on the offense capability — creating much more sophisticated attacks to try to target people, human-based attacks — and then we’re also going to see them on the support side of things, and being able to identify and respond to questions and help people that are working inside of organizations protect themselves against those kinds of attacks.
The other thing that we’re taking a look at is training these sorts of models on other kinds of targeting data. A lot of the stuff that people think is fun and interesting to look at has been the text-based stuff. But teaching similar models how to do target acquisition and reconnaissance, which is typically a very expensive manual step for humans in the attack chain, is another matter. If attackers can automate that, it’s only going to accelerate the kinds of risks that organizations are facing.
ICSP: Can you elaborate on some of the pros and cons of AI from an industrial control system standpoint?
Schmertzler: A lot of industrial control systems are still very much a manual process. Humans are directly involved in controlling individual machines — programmable logic controllers, for example — that actually actuate individual changes inside of a factory. The realm of AI won’t necessarily get into that. What we are going to see is the ability for these tools to be able to, one, improve the kind of training we have available for us, improve the ability to pass on knowledge and information to new generations of technical controllers, and then, ideally, start being able to identify problems before they’re detected by human operators.
What’s been interesting is that a lot of the tools that exist already to do predictive maintenance, for example, and do supply chain engineering, those have already been automated in many respects. Those are fairly complicated but straightforward sets of data and statistics that have existed for a number of years. That’s just how Amazon and Walmart have optimized their warehouses. It’s not humans making these decisions, necessarily. It’s feeding data into algorithms and having those give responses back.
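The “feeding data into algorithms” pattern Schmertzler describes can be as simple as a statistical outlier check on sensor readings. Here is a minimal Python sketch of that idea; the function name, threshold and z-score approach are illustrative assumptions, not any vendor’s actual predictive-maintenance algorithm:

```python
import statistics

def anomalies(readings, threshold=3.0):
    """Flag readings whose z-score exceeds the threshold.

    A minimal sketch of the kind of statistical check behind
    predictive maintenance: no human in the loop, just data in,
    flagged indices out.
    """
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / stdev > threshold]
```

A vibration sensor that reads a steady 10.0 and then spikes to 100.0 would have only the spike flagged, which is the sort of problem such a system could surface before a human operator notices it.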
Where we’re seeing the downside come in, to your question, is the safety link. Oftentimes in industrial control systems, the problem is either that they haven’t implemented the security controls they should have, because they just haven’t spent the time or money to do it, or they have and there’s still a human risk: A human believes they’re genuinely doing the right thing because of the information they’re getting. Then, they make a mistake, and they create a security risk for the organization.
ICSP: Given the risks of integrating AI and ICS, what are some steps we can take to help mitigate these risks?
Schmertzler: I think it’s taking humans out of the loop, actually. The problem with AI, and what makes it really impressive, is that, as it gets better and better, it’s like going through the uncanny valley in animation. For a while, you could tell that a machine was producing something. You were reading it, and it wasn’t quite human. You could intuitively understand that there was something not quite there. As the weights get better in these AI models, you’re going to see, or we’re starting to see, even more human-like responses to conversations. So the volume of large-scale, sophisticated, human-targeted attacks is going to increase because the cost of doing them will decrease. It’s a relatively inexpensive way of going after organizations.
Figuring out a way to breach a very sophisticated cybersecurity system is really expensive. Getting a human being to make a mistake, open a link and get malware on the computer, that’s easy, relatively speaking. The solution may not be that we’re going to train our way out of this. Training is still going to matter, but it’s also, in part, taking the knowledge a human can give up by mistake and making it obsolete, or protecting it, so that even if a human is tricked and hands that information over, or gets malware onto their device, it doesn’t matter because it can’t affect the industrial control system.
There are a couple of ways you can do that. One is obscuring the knowledge of how humans connect to these industrial control systems. That’s using something like a moving target defense network, where the underlying knowledge of the IP infrastructure that they’re connecting to isn’t relevant to what they’re actually going to. It’s also things like network isolation and sandboxing, so making sure that we never allow an endpoint that’s connected to the internet to make a direct connection to a supervisory control and data acquisition (SCADA) system. They have to go through a disposable or compostable intermediary component, which is really pushing that demilitarized zone (DMZ) out into that environment. We can monitor it, and then we can destroy it after that session’s done so that we don’t transmit any malware through those environments.
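The disposable-intermediary pattern can be sketched in a few lines of Python. This is a conceptual model only, not Dispel’s implementation: every session gets a fresh, uniquely identified broker that sits between the endpoint and the control system, and the broker is destroyed when the session ends, taking any malware it picked up with it.

```python
import uuid

class EphemeralIntermediary:
    """A throwaway session broker, created per session and destroyed after.

    Conceptual sketch of a disposable DMZ component; class and method
    names are illustrative assumptions.
    """
    def __init__(self):
        self.id = uuid.uuid4().hex  # fresh identity for every session
        self.alive = True

    def relay(self, command):
        # The endpoint never talks to the SCADA system directly;
        # everything passes through (and can be monitored at) the broker.
        if not self.alive:
            raise RuntimeError("intermediary already destroyed")
        return f"relayed:{command}"

    def destroy(self):
        # Destroying the intermediary after the session means malware
        # on it never persists into the OT environment.
        self.alive = False

def run_session(commands):
    broker = EphemeralIntermediary()
    results = [broker.relay(c) for c in commands]
    broker.destroy()
    return broker.id, results
```

Because a new broker identity is minted for each session, there is no stable infrastructure for an attacker to learn and target, which is the same goal a moving target defense network pursues at the network layer.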
ICSP: A huge percentage of attacks are coming in through human error, whether malicious or not. Somebody clicks a link. The old social engineering, or phishing, attacks were pretty easy to detect. There were mistakes in grammar, etc. How will new attacks that are using tools like ChatGPT force the cybersecurity landscape to evolve?
Schmertzler: They’re going to look a lot more like what you’d expect one of your colleagues to send you. They’re going to feel much more real. As humans, we’ve been able to say, “Oh, I’m reading this. This looks like something from a Nigerian prince. That seems interesting but odd.” Whereas something saying, “It was great to see you guys last weekend. We saw you guys at RSA. I know your team had asked for some pictures from the event for their marketing stuff. Didn’t get their email address. Shooting you this link here,” might feel very real. That might make a lot of sense. So they’re going to get in under the radar.
The way we’re going to deal with that, and I think one of the simplest ways, is to stop having people conduct trusted communications on what is essentially a public inbox. Email systems are not where you should be having those sorts of correspondences. We should probably be migrating them to private channels that are protected by multifactor authentication, so I actually know that you are who you say you are.
Because we’re probably not going to get every single person into their own independent Slack channels with everyone else, this might be the thing that actually makes email encryption become an adopted platform. It drives legitimacy for people to get their own PKI, or public key infrastructure, so that it can encrypt and verify that an email coming from Ethan is actually from Ethan, and an email from Gary or from Tyler actually is from Gary and Tyler. That, historically, is something people haven’t done. The technology has been available for years, but email encryption and signing are a good guarantee that the email you’re getting is actually from the person it says it’s from.
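The verification idea behind signed email can be illustrated with a short, runnable sketch. Real email PKI (S/MIME or PGP) uses asymmetric key pairs; to keep this self-contained with the standard library alone, a shared-secret HMAC stands in for the key material. The key, message and function names are hypothetical, but the core property is the same: a message that was altered, or signed with the wrong key, fails verification.

```python
import hashlib
import hmac

# Hypothetical key material: in real PKI this would be Ethan's private
# key, and verification would use his public key instead.
ETHANS_KEY = b"ethans-key"

def sign(message: bytes, key: bytes) -> str:
    """Produce a signature over the message (HMAC-SHA256 stand-in)."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str, key: bytes) -> bool:
    """Check the signature in constant time to avoid timing leaks."""
    return hmac.compare_digest(sign(message, key), signature)

msg = b"Great to see you at RSA. Pictures attached."
sig = sign(msg, ETHANS_KEY)

assert verify(msg, sig, ETHANS_KEY)          # genuine message passes
assert not verify(b"tampered", sig, ETHANS_KEY)  # altered message fails
```

This is what “an email from Ethan is actually from Ethan” means mechanically: the recipient recomputes the check, and only a message signed with Ethan’s key passes, no matter how convincing the prose inside it is.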