Boosting Cybersecurity Collaboration: Cyber Awareness Podcast, Victor Atkins, 1898 and Co.


For years, as technologies have advanced, cybersecurity has been more or less an afterthought, something to be tacked on after new products have already been developed. But there has been rising demand for design engineers, system planners and cybersecurity professionals to collaborate on planning and designing new critical infrastructure projects so that they account for cyber risk from the beginning.

In the fifth episode of our Cybersecurity Awareness Month podcast series, we were joined by Victor Atkins, global director of executive advisory services for industrial cybersecurity at 1898 and Co. He talked about why you should start by patching known vulnerabilities in critical industries, how cybersecurity collaboration and communication are on the rise, and why information technology (IT) and operational technology (OT) teams need to be talking more. Listen to the full podcast here.

The following has been edited for clarity.

Gary Cohen: Cybersecurity Awareness Month always highlights some key behaviors, like updating software, multifactor authentication, password protection and recognizing phishing. What do you think people should be focusing on this month?

Victor Atkins: This is a great question. I think start simple. At the very least, people should patch known vulnerabilities. Security professionals are always worried about zero days, which are basically exploits specifically tailored to attack a system vulnerability that’s never been seen in the wild before. But in 2022, the National Security Agency, the Cybersecurity and Infrastructure Security Agency, which we call CISA, and the FBI all released advisories about the top common vulnerabilities and exposures actively exploited by Chinese cyber actors since 2020. I don’t recall the number exactly, but when I was at the Department of Energy tracking cyber intrusions in the energy sector, a high percentage of successful cyber incidents exploited these known vulnerabilities. So just think of the headaches that could have been avoided if everybody had addressed the vulnerabilities in their systems with patches that already existed and were already widely available. This is still a problem today.
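To make the patching point concrete, here is a minimal Python sketch of a known-vulnerability triage check: it pulls CISA’s public Known Exploited Vulnerabilities (KEV) catalog and flags any CVEs from a local asset inventory that appear in it. This is not anything Atkins described specifically; the feed URL, the JSON field names and the inventory.csv format are illustrative assumptions.

```python
"""Sketch: flag inventory CVEs that appear in CISA's KEV catalog."""
import csv
import json
import urllib.request

# Assumption: the public KEV feed URL and its "vulnerabilities"/"cveID"
# fields match the format CISA currently publishes.
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def load_kev_cve_ids(url: str = KEV_URL) -> set[str]:
    """Fetch the KEV catalog and collect the CVE IDs it lists."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        catalog = json.load(resp)
    return {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}

def flag_actively_exploited(inventory_csv: str) -> list[dict]:
    """Return inventory rows whose CVE is known to be actively exploited.

    Assumes a hypothetical inventory.csv with 'asset' and 'cve' columns.
    """
    kev = load_kev_cve_ids()
    with open(inventory_csv, newline="") as f:
        return [row for row in csv.DictReader(f) if row.get("cve") in kev]

if __name__ == "__main__":
    for row in flag_actively_exploited("inventory.csv"):
        print(f"Patch first: {row['asset']} has {row['cve']} (in KEV catalog)")
```

The design choice here mirrors Atkins’ advice: rather than chasing zero days, prioritize the patches for vulnerabilities that are already known to be exploited.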

And if people really wanted to get a gold star for effort, I would say that organizations, particularly those with critical infrastructure that have to manage both IT and OT networks, should make every effort to know what’s even in their environment and how those networks are configured. So many organizations don’t have an accurate inventory of hardware and software within their environment, nor do they have detailed network maps of how their technology is configured or connected. So I take the gloomy position that for most of the critical infrastructure we rely on every day — whether that’s in communications, transportation, water, energy, etc. — we should assume our enemies have already compromised these systems.

So if that’s true, in many cases cyber actors with malicious intent may already know more about these networks than the owners themselves. If folks made the investment in resources and time to accurately map and inventory their environment — and this is not actually a very expensive thing to do — then they could know what they have. They could address the vulnerabilities in those networks with what they own. And if they did that, they could eliminate the cyber footholds that many of these adversaries have already established and go a long way toward keeping them out in the future. Or at least make it more difficult for them the next time.
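As a sketch of what “know what you have and how it’s connected” can look like in practice, the snippet below keeps a documented inventory and an expected-connection map, then diffs observed traffic against it. The data model, addresses and the observed-flows input are invented for illustration; a real environment would feed this from a maintained asset database and passive network monitoring.

```python
"""Sketch: diff observed network flows against a documented inventory/map."""
from dataclasses import dataclass

@dataclass(frozen=True)
class Asset:
    name: str
    ip: str
    role: str      # e.g., "PLC", "HMI", "historian"
    firmware: str  # tracked so patch status can be checked later

# Hypothetical documented inventory and the connections we expect to see.
INVENTORY = {
    "10.0.5.10": Asset("plc-01", "10.0.5.10", "PLC", "v2.3.1"),
    "10.0.5.20": Asset("hmi-01", "10.0.5.20", "HMI", "v8.0"),
}
EXPECTED_FLOWS = {("10.0.5.20", "10.0.5.10")}  # HMI talks to PLC, nothing else

def audit(observed_flows: set[tuple[str, str]]) -> None:
    """Print any observed connection the documented map doesn't explain.

    An unexplained flow means either the map is stale or something
    unexpected has a foothold; both are worth chasing down.
    """
    for src, dst in sorted(observed_flows - EXPECTED_FLOWS):
        asset = INVENTORY.get(src)
        label = asset.name if asset else "UNKNOWN DEVICE"
        print(f"Unexpected flow: {label} ({src}) -> {dst}")

# Example: an unknown host reaching the PLC gets flagged.
audit({("10.0.5.20", "10.0.5.10"), ("10.0.9.99", "10.0.5.10")})
```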

Tyler Wall: We are coming up on the end of the year here and about to enter 2024, which feels absolutely wild honestly. But as we’re starting to head into 2024, what trends or developments in cybersecurity are you particularly excited about?

Atkins: I think I’m excited mostly by the trend of the increased collaboration between different disciplines that takes cybersecurity and cyber risk into account when operating cyber physical systems. So what does that mean? There’s a lot of cyber thrown in there.

For some time, people in critical infrastructure industries have been talking about what they call IT/OT integration, which basically just means that the people responsible for operating and securing business and IT networks are doing more work with control system engineers and operators of OT systems to secure these networks as a whole. This is great. Typically, they’ve been separated within an organization. Now, they’re working together more, and we should see more of this, especially as we add more automation and more data-driven analytics that rely on connectivity into these OT networks. The more these people talk and share responsibility for making these systems more secure, the better off we’re all going to be.

But what really seems new, in addition to that, is the rising demand for design engineers, system planners and cybersecurity professionals to collaborate on planning and designing new critical infrastructure projects so that they incorporate cyber risk along with all the other risk factors they usually consider when they design a system. In other words, cyber perspectives are being driven into the early design and front-end planning phases of new projects instead of the typical approach we’ve always seen, where cybersecurity technologies are bolted on after everything’s built and stood up, which is way more expensive and less effective in the long run. For example, in May of this year, the North American Electric Reliability Corporation, which we also call NERC and which is responsible for assuring a reliable and resilient bulk power system for North America, released a paper called the Cyber-Informed Transmission Planning Framework, an elaborate document that challenges the electricity sector to bring cybersecurity into earlier transmission planning efforts for the future.

OK, so why does that matter? As we bring more renewables and more renewable generation sources onto the grid, they come with a lot of digital components and exposure to cyberattacks. So this call for collaboration on cyber in the front-end planning phase could not come at a better time for our nation. I’m really excited about the prospects there. Recently, I’ll add, I participated in a cyber-informed engineering practitioners’ workshop hosted by the McCrary Institute at Auburn University, in collaboration with Idaho National Laboratory, where we discussed with a couple hundred people how to get these types of collaborations moving, where cybersecurity people and engineering professionals can work together to create more cyber-resilient infrastructure projects. This is going to be difficult because cyber-informed perspectives and front-end planning efforts require the evolution of existing engineering standards of care, and other design practices, that in some cases have been around for over a hundred years.

We all know that engineers are not the earliest adopters of new ideas and technologies. But we also know that cyber is here to stay, especially in the control of industrial processes, which I think your audience obviously appreciates. So engineering design practitioners have to take cyber risks into account, especially since a cyberattack can make those systems fail. If we can do that, we’ll make a huge difference in reducing risk to public health and safety and even our national security. I’m really excited to help drive those efforts forward into the future.

Cohen: That’s a great answer. The fact that collaboration between these groups is happening more is really heartening. Not asking you to name names or throw anybody under the bus, but can you share a memorable experience or case from your career that really highlighted the importance of cybersecurity for you?

Atkins: As you highlighted in my intro, I’ve spent most of my career in the government and the intelligence community, which meant most of it was in classified environments. And I can already feel your audience’s eyes roll when I say that. Unfortunately, I can’t share any details about the incidents that I observed or had to help resolve in that capacity. But even so, I can say that when we talk about cybersecurity, we often talk about initiatives in terms of people, process and technology, and there’s a lot of work and attention on establishing new processes and on being compliant with existing standards and policies. There’s a lot of investment in creating and installing whizbang technology and new solutions that everybody claims can provide elegant security controls against whatever threats are out there. And that’s all good, and everybody should keep doing it, but in just about every one of those really stressful cybersecurity incident situations in my career, the root cause, in the final analysis, was on the people side.

That sounds very personal and human, but human behaviors and tendencies are often the source of all of our problems, not just in cybersecurity. You can have a great technical security control program in place, but it doesn’t mean anything if people in key roles bypass those controls for their own convenience. Policies that may be perfect are meaningless if they’re not implemented, maintained or enforced because doing so is too complicated or costly, or because, like we talked about earlier, an entrenched operational culture doesn’t want to see security changes. In my experience, we can have run-of-the-mill carelessness, where people with the best of intentions make stupid mistakes or do dumb things. We can harp on the obvious cases of poor password management or opening that phishing email, but I’ve also seen a lot of cases where control systems are poorly configured, or engineers bypass the security and controls in place because they want to make it easier to do their job, like installing a home router at a substation so they don’t have to get out of their truck, things like that.

These are human errors, and hackers can always count on them to break into systems. In fact, I would argue that’s probably the thing they rely on the most. So everybody should keep training their people. They should practice good cyber hygiene. They should put better processes in place. They should keep up with compliance. They should keep buying that whizbang technology. But they also need to recognize that the people in their system will always be the weakest link. What can you do? You can’t get rid of the people, obviously, but you can make your system more resilient by doing things like knowing your inventory, eliminating cyber functions and features you don’t need, and identifying and focusing on the security of your most critical functions so that you protect those things first. If you do all that, you really reduce the risk from the people problem, so that when people do dumb things, it may not hurt so badly.

Wall: In recent years, a lot of different cyberattacks have come to light, whether that’s because there’s just more of a focus on what is happening in the cyber world now or because there are actually more attacks happening in general. What have we learned from those attacks?

Atkins: I’m trying to learn along with attacks as they evolve because I don’t want people to constantly fight the last war. As is obvious in my responses, my interest is in threats against critical infrastructure, cyber physical attacks in particular. So the recent threat that has my attention, and that I think we can learn a lot from, was first reported by Microsoft Security in May of this year and later posted in an alert by a number of government agencies. It’s attributed to a Chinese intrusion set called Volt Typhoon. For most of my career, I focused on Russian cyberattacks, but the Russians have their hands full right now. We haven’t seen as much cyber activity from them, but China is really on the uptick. And this group, Volt Typhoon, is believed to be trying to disrupt communications between the U.S. and our regional bases and allies so that they could leverage that attack position to disrupt communications during a future crisis, like a conflict over Taiwan, if you can imagine that.

The hallmark of this kind of attack that we need to learn from is that they use fileless malware techniques, which we often call living off the land. That’s basically where they take advantage of features in the operating system itself without dropping any malicious payload onto the system. So they basically look like an authorized user. In many cases, they don’t look any different than anybody else. And if done well, they don’t trigger any antivirus scanning, and they don’t leave traces that can be discovered through forensic analysis. In other words, it’s a very difficult threat to detect, and it can be very effective in achieving some outcome, like a physical effect in the world. So what does this tell us? It tells us they’re getting more sophisticated. They’re always going to get more sophisticated, but they also seem to be capitalizing on, and evolving with, all the trends in our industries that only increase opportunities for threats like this.
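Because living-off-the-land activity uses legitimate system tools, hunting for it tends to mean looking at how those tools are invoked rather than scanning for a payload. Below is a heavily simplified Python sketch of that idea. The log format and the pattern list are assumptions made for illustration; real detection would rely on EDR or Sysmon telemetry and baselining of normal administrator behavior, not a handful of regexes.

```python
"""Sketch: hunt for suspicious invocations of legitimate built-in tools."""
import csv
import re

# Simplistic heuristics for suspicious use of living-off-the-land binaries.
SUSPICIOUS = [
    re.compile(r"powershell.*-enc", re.I),               # encoded commands
    re.compile(r"certutil.*-urlcache", re.I),            # file download via certutil
    re.compile(r"wmic.*process\s+call\s+create", re.I),  # remote process creation
    re.compile(r"ntdsutil", re.I),                       # credential database access
]

def hunt(log_csv: str) -> None:
    """Scan a hypothetical process-creation log export for suspicious commands.

    Assumes columns: timestamp, user, cmdline.
    """
    with open(log_csv, newline="") as f:
        for row in csv.DictReader(f):
            if any(p.search(row["cmdline"]) for p in SUSPICIOUS):
                print(f"{row['timestamp']} {row['user']}: {row['cmdline']}")

if __name__ == "__main__":
    hunt("process_log.csv")
```

The point of the sketch is the shift in mindset: when the attacker looks like an authorized user, detection becomes a question of anomalous usage, not malicious files.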

In all those critical infrastructure sectors like communications, technology, energy production and delivery, water distribution, transportation, maritime shipping, you name it, companies are increasing automation. They’re increasing remote control and monitoring. They’re using analytics that pull data from those systems and networks. These trends only increase the attack surface. Remember the rule of thumb: any network connection that a company uses for control and visibility for good purposes can also be hijacked by a capable adversary using these living-off-the-land techniques.

But I’m not all doom and gloom. I’m told I’m always doom and gloom, so I should be a little more positive, and I’m trying to think about what’s positive here. One thing that I think we’ve learned that’s really positive, all joking aside, is that we even know about this at all. It wasn’t long ago that we would’ve expected this to be first discovered by the government, probably from some classified source or method. If that information was ever shared, it would’ve been so highly redacted and delayed that it would have had little use for any network defense strategy. But Volt Typhoon was first reported by a private-sector threat intelligence entity well before the government released anything. Microsoft has been doing this for a while; they were also key to reporting Russian cyber activities in the early days of the war in Ukraine.

I think this shows that the private threat intelligence community is really getting very good at discovering nation-state-level threats, and they aren’t waiting for the government to take action. In fact, just by looking at the way this was reported, it looks like they coordinated and collaborated with the government on the timing and the manner in which this was all put out. If we get the emergence of a strong and capable collaborative government and private threat intelligence ecosystem, I think it really makes us stronger. And I think if we can continue to detect and report on these activities, we’ll deter these adversaries, which ultimately makes us safer in the long run. I think that’s a really positive outcome of all of this.

Cohen: I love that you brought us back around to a positive there. So this field can move pretty fast. What emerging technologies do you see impacting cybersecurity in the near future?

Atkins: I think everybody in this field is contractually obligated now to talk about artificial intelligence and machine learning. It’s the easy one, so why not take it? I don’t claim to be an expert here, but in my role at the Department of Energy, I oversaw a lot of research and development investments at the National Laboratories on AI and machine learning to determine how they might be leveraged for cybersecurity. There’s a lot of promise here. AI could conceivably recognize and react to cyberattacks with greater accuracy and speed than any human could, and maybe even predict those behaviors or catch them at early stages. Cyberattackers may also be able to use AI for their operations, but imagine if they confronted a well-trained and reliable AI cybersecurity platform that could block or mitigate activities as they’re happening, sandbox those events, analyze them forensically to determine where they came from and then share that information broadly throughout the ecosystem so that other people could mitigate or prevent the attack.

If all that can be done at machine speed, and if we’re really good and use generative AI to do the predictive analysis to identify and defend against future threats, that would be a very formidable front for any cyberattack force to overcome, even if it was also leveraging AI. So I think there’s a lot of promise. A lot of capabilities have already been developed here, but I also think there’s a long way to go before we can achieve this vision technically. We also have to recognize, again — I keep coming back to this culture problem, this people problem — that many operational cultures are just very conservative and not inclined to adopt new technologies they don’t directly control. So if we’re talking about OT security, I think it will take even longer to bring AI into that environment, even after the technology has been proven effective.
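As a toy illustration of the machine-speed detection idea, here is a short Python sketch using scikit-learn’s IsolationForest to learn a baseline of “normal” network-flow features and score new flows as anomalies. The features and numbers are invented; this stands in for the far richer, validated platforms Atkins is describing, not any specific product.

```python
"""Toy sketch: anomaly detection on network-flow features with scikit-learn."""
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Invented features per flow: [bytes sent, duration (s), distinct ports]
normal_flows = rng.normal(loc=[500.0, 2.0, 1.0],
                          scale=[100.0, 0.5, 0.2],
                          size=(1000, 3))

# Learn a baseline of "normal"; flows scoring as outliers are flagged.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

new_flows = np.array([
    [520.0, 2.1, 1.0],      # looks like baseline traffic
    [90000.0, 45.0, 30.0],  # huge, long, many-port flow: anomalous
])
for flow, label in zip(new_flows, model.predict(new_flows)):
    print(flow, "ANOMALY" if label == -1 else "ok")
```

An unsupervised detector like this needs no labeled attack data, which is part of the appeal, but as the interview notes, getting such tools trusted and deployed in conservative OT environments is a separate challenge from making them work.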

But, again, trying to be positive: I’m always surprised by how rapidly this world we live in is changing, and I can’t predict how fast or how readily this stuff will be adopted. So I could be surprised. These technologies may be here before we know it. If that does occur, there won’t be any need for people like me, and I’ll have to go find another job. But until that day, I’m hopeful that we can make progress on that front.

Wall: Ending this off with the most important question of your entire career, what is your favorite movie or TV show that has to do with cybersecurity?

Atkins: Well, this podcast obviously, right?

Wall: Correct.

Atkins: But separately from that, I actually crowdsourced this one because there are a lot of options, as you mentioned. But in my circle of friends and colleagues who are also paranoid about critical infrastructure attack scenarios, the clear favorite when I asked around was “Live Free or Die Hard.” I hope we’re past the need for spoiler alerts for a movie that came out in 2007, but for those who aren’t familiar with the plot, it involves a maniacal, Bond-villain-like character who attacks the critical infrastructure of the U.S. for revenge and some monetary gain. You really have to suspend your disbelief on how some of the cyberattacks are actually achieved. If I recall, there’s a scene where a kid with a Nokia phone hijacks a SATCOM network within about 10 seconds off of … I don’t even know how he got access. But those kinds of things are phantasmagoric, which is the only word that comes to mind. You do have to suspend your disbelief.

But what makes the movie cool from a cyber perspective, I think, is that it really does highlight the concept of social panic and what might actually happen if we had widespread, sustained outages of the critical services people have come to rely on for everyday life. I think those threats actually are realistic today. I’ve given talks, and I speak about this all the time: There’s all kinds of literature on Chinese and Russian threat capabilities that could make those things realistic. I think the movie really does highlight the impact of that kind of threat if it did happen. And you get Bruce Willis. He crashes a helicopter with a car. That’s cool. If you just want to see stuff like that, it’s worth checking out.

Wall: That one’s my go-to as well when this conversation comes up because it’s a very good one. After watching, I spent so much time trying to find actual cases where hackers took over the traffic system and all the lights, because that would’ve been so cool to write about, but it’s actually not that common at all. But who doesn’t like a good Bruce Willis movie?

Atkins: There are some realistic things there, like they do note that you’d have to have physical access to a lot of the control systems to have an effect, things like that. And that is true, but I do believe that these systems, as we talked about earlier, are evolving rapidly to become more digitally connected. As we become more reliant on those technologies, we also become more open to attack. I hope it doesn’t happen, but that’s why we’re all here: to try to make sure it doesn’t, right?
