The Ethical Considerations of AI in Cybersecurity: Ethan Schmertzler, Dispel, ICS Pulse Podcast


Whether we’re ready for it or not, artificial intelligence (AI) is coming to industrial networks. Many manufacturers are already using it to help with efficiency gains and cost reduction. But what are the ethical ramifications of using AI in cybersecurity, and should the government be regulating it more?

The ICS Pulse Podcast recently sat down with Ethan Schmertzler of Dispel to talk about AI’s impact on industrial cybersecurity and moving target defense. The following has been edited for clarity.

To listen to the complete podcast, click here. To read part one of the interview, click here, or read part two here.

ICS Pulse: I know the government and the regulatory bodies are always chasing the attackers a little bit. But are there any specific regulations or standards right now that govern the use of AI in cybersecurity, or is it just open field?

Ethan Schmertzler: I don’t know the answer to that question. I know legislatures are certainly holding a number of summits and hearings around AI, but to my knowledge, nothing has actually passed yet.

ICSP: It will be interesting to see where this develops over time, as people start using it, as it becomes more of a threat. Are there ethical considerations that we need to take into account when deploying artificial intelligence in cybersecurity?

Schmertzler: Sure. AI only knows what it’s been fed. AI works because it has consumed vast amounts of data; that’s what the weights are trained on. So there are a couple of things. One, what is it being fed? What data is it looking at? We’ve already seen that you can have AIs that are trained to show certain kinds of biases. They’re not actually giving you factual information. They sound very authoritative. It’s a thoughtfully written document it gives back to you, but it’s not actually fact-checked by anything. Whatever you feed into it is what it’s going to spit back out at you. It’s the old saying: Garbage in, garbage out. If you give it bad data to feed on, whether it’s inaccurate, racist or just bizarre, it’s going to give you back whatever you gave it.

Going back to this: if you’re feeding it information from the internet, it has read a lot of weird stuff on the internet. So the ethical risks are, first, that humans rely on things they think are true because a machine told them, and it sounds very thoughtful but it’s utter garbage. That’s going to be a problem because humans are going to make bad judgment calls from it. Second, who’s making the decisions about what kind of data we’re feeding these things? If we start feeding inaccurate data that deals with personal information into these systems, that’s going to get caught up in that web. So when we open these systems up to gather data and be trained by the general public, we have to be thoughtful about screening out information that could be damaging or could be personal. That shouldn’t be in these databases.

ICSP: We’re a media company. When ChatGPT came out, we immediately started running experiments to see how it would work if we wanted to have it write an article for us. So I would put things in like, “Tell me about an industrial cyberattack on the oil and gas industry.” It would spit out a well-written piece that sounded very authoritative. Then I’d start checking it. Is this true? Is this true? Is this true? About 40% of it was accurate. The other 60% was like, “Boy, that sounded really good, but I can’t find any other source in the world that corroborates that information.”

Schmertzler: Yeah. It puts it out in a really thoughtful way. You read it and go, “That’s interesting.”

ICSP: Which is a scary thing, like you said. How many people are now going to look at that and go, “Sounds accurate. Looks accurate. Must be accurate.”

Schmertzler: Absolutely correct.

ICSP: I know what we’re going to run into as we start to get more guardrails, especially on the U.S. side, with AI regulation. But, of course, threat actors don’t have guardrails. They don’t have to be constrained by laws. When we’re talking about things like deepfakes and AI cloning, or AI voice cloning specifically, what impact are those going to have on social engineering attacks?

Schmertzler: It’s going to get really ugly. Humans like to trust people. Even the biggest cynics among us want to trust other folks and establish relationships. We’re very, very social creatures. As these fakes get better, it’s going to get harder. AI is going to push the internet toward requiring stronger, authoritative verification of who you actually are, because most of our interactions with the public are done through email. Email encryption, actually tied to identities, is going to become much, much more critical. That’s anathema to what the internet was originally about, which was that it should be open, it should be free, and we should do whatever we want on it. The problem is that that required an inherent level of trust on those networks, that people wouldn’t abuse them.

It wasn’t that long ago that you could get into a command line and just tell an email server that a message came from whomever you wanted. It would accept that and say, “That’s fair. I’ll file that email as coming from that sender.” We’re going to have to get away from that level of anonymous trust, I think, especially in the way we communicate, and especially in the way we can send messages to people’s devices. People want to go onto Reddit and deal with whatever’s there. Fine, that’s its own contained space. But from a corporate governance perspective of protecting networks, having a vetted system based on some kind of certificate authority, not just the SEO of a website or something, is going to be really critical.
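
The command-line spoofing Schmertzler describes is easy to make concrete: the SMTP protocol itself never authenticates the sender a client claims, which is exactly the anonymous trust he is talking about. Here is a minimal Python sketch of that historical weakness; the server name and addresses are placeholders, and layers like SPF, DKIM and DMARC were later added precisely because early servers took the claimed sender at face value.

    import smtplib
    from email.message import EmailMessage

    # SMTP lets a client declare any sender it likes; older open relays
    # accepted the claim at face value. SPF, DKIM and DMARC now exist
    # on top of the protocol to catch exactly this.
    msg = EmailMessage()
    msg["From"] = "ceo@example.com"   # claimed identity; SMTP does not verify it
    msg["To"] = "target@example.net"
    msg["Subject"] = "Urgent: wire transfer"
    msg.set_content("Nothing in the protocol proves who really sent this.")

    # "mail.example.net" is a placeholder; a well-configured modern server
    # would refuse to relay this or flag the mismatched sender.
    with smtplib.SMTP("mail.example.net", 25) as server:
        server.send_message(msg)

The certificate-authority-style vetting he proposes closes that gap by binding a claimed identity to a verifiable cryptographic credential, rather than taking the protocol’s word for it.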
