How do you know what to trust? It can be difficult to know who or what is above board, especially when it comes to technology. In the cybersecurity community, who you choose to trust can be the difference between keeping your systems safe and suffering a massive breach. The philosophy of zero trust stems from this trust problem and urges companies to follow the “never trust, always verify” approach.
Recently, ICS Pulse interviewed cybersecurity practitioner Dennis Hackney about the complexities, benefits and real-world challenges of implementing zero trust, especially in OT systems. To listen to the podcast, click here. To read part one of the podcast, click here.
The following has been edited for clarity.
ICS Pulse: A lot of new cybersecurity directives have been coming out of the Biden administration, whether that’s the new National Cybersecurity Strategy that was released a few weeks ago or some of the others. Those point toward zero trust as a recommended practice, especially in critical infrastructure. Do you see it as a good thing that the government, when looking at critical infrastructure, is considering zero trust as a preferred method?
Dennis Hackney: In our government, you’ll see that we have the Cybersecurity and Infrastructure Security Agency (CISA), and we have some of these funded programs that do a lot of research. They work with Idaho National Laboratory, which really knows OT well. They helped to design and write up these things. In theory, it does make sense, but I was in a conversation at a very large conference last year where a small group, a type of coalition per se, that’s working on lobbying and helping to update some of these standards from the outside, had this discussion. Most of it was very esoteric: “I don’t trust it. I don’t think this will happen. Or this doesn’t make a lot of sense. Or the government doesn’t know what they’re talking about.”
It seems like there’s a disconnect between what the world — operators, owners and system users — is actually doing to protect its systems and what that regulation is steering toward. But I will give you a specific example, and that’s the security directives that came out from the TSA for pipelines, in which case they did specifically call out something like zero trust. In theory, zero trust might be a good thing, because the reason that pipeline went down was that there was no clean way to disconnect the OT from the IT side of the network. If you had zero trust, obviously, you could do that with those technologies, but when the organizations tried to apply some of these things, they had to sidestep that requirement. There’s not a way to do zero trust there yet.
Go back to the National Cybersecurity Center of Excellence (NCCoE) and see how they’re doing it. They’re still trying to figure it out even on the IT side. You can’t put something in a document if it’s that aspirational, because there’s no way to do it yet. If you can say, “Lean toward software-defined networks,” then we get our vendors out there, and we start applying or developing different ways to install software-defined networks.
Or if you say, “Go toward this type of active directory, or make sure that you have that type of authentication, use multifactor authentication, but make sure you have a way to update it,” then it makes a lot more sense. I think what’s happening is some of these recommendations are aspirational, and there’s a big disconnect between who wrote that and the people that are doing it.
ICSP: Has zero trust been implemented in the real world of IT, and where does it stand with OT right now?
Hackney: On the IT side of the house, we see organizations that say they’ve done it on a larger scale in their data centers and large enterprise networks, or even their point-of-sale systems. We can see it in customer-facing interfaces, so it’s been done to varying levels. I mentioned the NCCoE earlier. They are going through evaluations with multiple service organizations and product organizations. That’s documented in the NIST Special Publication 1800-35 series. That’s a good read if you want to see how some of these networks look. It starts with cloud services and how you’ve got Azure AD or AWS directory services.
It talks through how you can develop that type of policy enforcement engine in those environments using solutions from vendors that are working through that with the NCCoE and NIST. That’s a good place to start if you want to read up on service providers, product providers and U.S. government organizations working together, with some actual examples of architectures.
When it comes to OT, there’s been a large campaign around zero trust in OT. After that coalition meeting last year, I started getting a little bit more interested in zero trust, too, because I was like, “These people don’t know what they’re talking about.” Some of them are vendors, they’re OEMs, and some of them are part of the government. I was like, “This doesn’t make any sense. I don’t trust it. Oh, that’ll never work. Oh, I’d like to learn more about it.” Of course, they had a month to prepare, but it just seemed like something I’d be interested in. I did start to research it, and I was like, “Well, a lot of the stuff might work.”
In operational technology networks, even where I am, we’re already looking at software-defined networks. Where the virtualization exists, we’re looking at micro-segmentation, and we’re looking at technologies that help us manage that virtualized environment from the perspective that we’re micro-segmenting the virtual environment. That’s possible in most of these environments.
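The micro-segmentation Hackney describes can be thought of as a default-deny rule set between groups of virtualized workloads: traffic between segments passes only if an explicit rule allows it. This is a minimal illustrative sketch; the segment names and port rules are hypothetical, not from any specific product.

```python
# Sketch of default-deny micro-segmentation between virtualized
# workload groups. Segment names and allowed ports are illustrative.

ALLOW_RULES = {
    # (source segment, destination segment): set of allowed TCP ports
    ("eng_workstations", "scada_servers"): {443},
    ("scada_servers", "historians"): {1433},
}

def is_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default deny: traffic passes only if an explicit rule permits it."""
    return port in ALLOW_RULES.get((src_segment, dst_segment), set())

print(is_allowed("eng_workstations", "scada_servers", 443))  # True
print(is_allowed("historians", "scada_servers", 443))        # False
```

In a real deployment these rules would live in the virtualization platform’s distributed firewall rather than application code, but the default-deny logic is the same.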
ICSP: How can these OT models work, and do they relate to other standards like zones and conduits?
Hackney: Zones and conduits look a lot like the ISA-62443 model, and I will give you an overview of what that is. ISA-62443-3-2 gives you the requirements around zones and conduits, but let’s talk about that just for a little bit because I can relate it to the zero-trust model. In zones and conduits, the idea is you’re stepping away from that historic model that we used to call the Purdue model, and you’re looking at systems based on functionality, though it could also be physical or logical connectivity and location.
You’re identifying if there’s a way to compartmentalize everything that’s required to make that system run into one module, and then other systems into other modules, and then you’re very specifically connecting through conduits between those modules. You can show all communications. The idea there is if I have a zone over here that includes my distributed control system or my control system, I can island it off from some other untrusted zones. That’s the idea between zones and conduits in theory.
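The zone-and-conduit idea described above can be sketched as data: each zone bundles the assets one system needs, and every permitted communication path is an explicitly declared conduit, with everything else denied. This is an illustrative model under assumed zone names and protocols, not the ISA-62443 specification itself.

```python
# Illustrative zones-and-conduits model. Zone names, assets and
# protocols are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Zone:
    name: str
    assets: set = field(default_factory=set)

@dataclass
class Conduit:
    src: str        # source zone name
    dst: str        # destination zone name
    protocols: set = field(default_factory=set)

# The control system zone is islanded from the untrusted enterprise zone.
zones = {
    "dcs": Zone("dcs", {"controller-01", "hmi-01"}),
    "enterprise": Zone("enterprise", {"erp-01"}),
}

# The only declared conduit: historian data flowing out of the DCS zone.
conduits = [Conduit("dcs", "enterprise", {"opc-ua"})]

def communication_permitted(src_zone: str, dst_zone: str, protocol: str) -> bool:
    """Inter-zone traffic must match a declared conduit; all else is denied."""
    return any(c.src == src_zone and c.dst == dst_zone and protocol in c.protocols
               for c in conduits)

print(communication_permitted("dcs", "enterprise", "opc-ua"))  # True
print(communication_permitted("enterprise", "dcs", "opc-ua"))  # False
```

Because every path in and out of a zone must appear in the conduit list, you can show all communications, which is the property Hackney highlights.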
It’s very difficult to identify these zones, especially in a very large OT network, because at the end of the day, it starts looking like it’s all one big zone. That’s what I’ve run into, at least in my experience. Where it relates to zero trust is once you get all those zones identified (we call that a system under consideration), then you’ve adequately defined your enterprise resource. That enterprise resource is what attackers, let’s just say digital and human identities or whatever you want to call them, are trying to hack into.
You have a very specific connection going in and out of that zone called a conduit, which you can manage very clearly and very concisely in a way that you know everything going in and out of it. That’s why that model works well. Now, from a zero-trust perspective, we call that sandboxing because what we’re doing is we’re not really implementing zero trust to every asset inside of that zone. We’re just treating that zone as if it’s an enterprise resource. There will be some access that takes place within that zone that we’re not really managing. It’s really a hybrid model at the end of the day.
But additionally, when I was talking about the virtualized resources, there are examples of zero-trust models where you can sandbox your applications. You install this policy enforcement point and decision point as an application, or it could be a separate application on a separate server if you want to control it from a security perspective. Once you micro-segment your virtual environment, your SCADA (supervisory control and data acquisition) control servers from your historians, from your Active Directory, from your alarm systems, engineering workstations, what have you, then you can control that remotely with your policy enforcement point.
If something tries to gain access from the Active Directory server to an engineering workstation, and that subject shouldn’t be doing that because it doesn’t follow the standard behaviors, then you can cut it off. From that perspective, the hybrid approach is possible. That’s what we’re working toward.
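The access decision Hackney describes, a policy decision point (PDP) denying a subject whose request falls outside its expected behavior so the policy enforcement point (PEP) can cut the session, can be sketched roughly like this. The subject names and behavior baseline are hypothetical.

```python
# Sketch of a policy decision point (PDP) that a policy enforcement
# point (PEP) consults on every request. Baselines are hypothetical.

EXPECTED_BEHAVIOR = {
    # subject: resources it normally accesses
    "ad-server": {"dns", "ldap-replica"},
    "engineer-01": {"eng-workstation", "scada-server"},
}

def decide(subject: str, resource: str) -> str:
    """Deny by default; allow only requests matching the subject's baseline."""
    if resource in EXPECTED_BEHAVIOR.get(subject, set()):
        return "allow"
    return "deny"

# An AD server reaching into an engineering workstation is off-baseline,
# so the PEP would drop the session.
print(decide("ad-server", "eng-workstation"))   # deny
print(decide("engineer-01", "scada-server"))    # allow
```

A production PDP would also weigh dynamic signals such as device posture and time of day; this sketch shows only the deny-by-default core of the decision.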