Managing upstream vs. downstream supply chain attacks


Supply chain insights

  • Companies that have to comply with a cybersecurity framework or standard tend to have basic hygiene better covered.
  • Big manufacturing and the supply chain tend to be planned years in advance. That doesn’t work well for cybersecurity, where you have to be able to react and respond quickly.
  • Most supply chain attacks happen upstream. Typically, they don’t compromise the download site and upload malicious software. It’s an attack against the components or the development environment.
  • It’s important to understand the composition of software. A software bill of materials, or SBOM, is like an ingredients list for your software product.

One of the biggest cybersecurity concerns over the last few years has been the supply chain, especially the software supply chain. The attack on SolarWinds made it clear just how pernicious supply chain attacks can be. Just one third-party vendor with poor cybersecurity can put hundreds of connected companies at risk. But these risks can be different for upstream and downstream organizations.

John Deskurakis, chief product security officer at Carrier, and Tony Turner, vice president of Fortress Labs with Fortress Information Security, discussed how to protect against supply chain attacks and offered two very different perspectives — manufacturing and security — in this partial transcript from the May 6, 2022, RCEP PDH webcast (archived for one year), “How to Protect Against Supply Chain Attacks.” Check out Part 1 of their discussion on why supply chain attacks create such a target-rich environment and Part 2 on why supply chain attacks and ransomware are a toxic mix.

This article has been edited for clarity.

ICS Pulse: Let’s talk a little more about strategies to mitigate supply chain attacks.

John Deskurakis: From the manufacturing perspective, for us, it starts with governance, because there are a lot of huge manufacturing lines that go into pushing product out the door. Even when the product is simply software, a lot of hardware comes delivered with embedded firmware and the software that supports it. For us, it’s governance. By governance, I mean policies, processes [and] procedures: the framework by which you operate. So not only are you building something that’s designed for security, but you’re also thinking about how you respond and react, because cybersecurity is not static.

It’s a very dynamic landscape. Attackers are very smart; every time we think we have something solved or understood, they evolve the way they come after the supply chain and manufacturers’ products and services. We have to be dynamic and evolve, too. When I say governance, I mean the framework for how you deal with these things.

ICSP: Tony, from your perspective, how do you see it in terms of working with the downstream supply chain?

Tony Turner: I’m going to answer that question from both of our vantage points. Since we work both upstream and downstream, we sit in the middle with the work that we do, so I’ll answer from what we see in the manufacturing community as well as what we see downstream in the asset owner community.

One of the things that has been interesting to us, especially as we work with product teams across some of the very large original equipment manufacturers (OEMs), is that many of them will have a product family or line of products that sits under a completely different legal entity within the organization, with a completely different set of processes and controls. We do assessments of these folks for the asset owners. If we don’t assess every individual group, product family one versus two versus three versus four, then we look at one and make assumptions about the controls across the whole organization based on our analysis of that one product group.

It’s not accurate. In some instances, there is strong governance across the organization. I built application security programs in manufacturing in a prior life, when I worked at Arrow Electronics. The hardest problem I’ve found when joining very large organizations, especially global ones with multiple product teams, is getting that governance layer, that uniformity of control and understanding of the risks, and applying those things universally across the organization in a way that makes sense. Sometimes there are very good reasons why groups do different things: They may support different verticals and have to align with different control frameworks. But when organizations have to align to those different verticals and control frameworks, we find a more effective approach is taking the high-water mark across all the frameworks they need to deal with and applying controls universally across the organization.

It does increase expenses to apply the controls that way, but it also reduces the human factor and the gaps that can creep in across the manufacturing conversation. The other thing I’ll say, on the downstream side, [is that] we see the same kind of decentralized approach inside many of our customers. For instance, we do a lot of work with electric power utilities, and the ones with more unified governance processes and more centralized security and control functions tend to be a little more mature, as opposed to the ones that run very decentralized. They have different operating companies that service gas, or service electric power in one market versus another. Not having that layer of governance across the top can really contribute to inconsistencies in how they deal with risk.

Deskurakis: What you’re saying really resonates with me. The company I work for is the parent company of 80-plus subsidiary brand businesses operating in multiple verticals. We have businesses that create fire suppression products, others that make physical security products that are mostly software based, and others that make HVAC. Essentially, anything you could find in a commercial building, we manufacture somewhere. There are also different regional and jurisdictional issues we have to deal with. The way one implements encryption in France is not the same way we would do it in the U.S., based on local regulations and laws. The way we manufacture products for China is not the same as the way we would do it for India. It’s a global company. We’re competing in every country.

The debate over centralization versus decentralization is something I’ve been engaged in at my current company [and] at the last couple of manufacturing companies I’ve worked for. What we’ve come to realize is that there are benefits to both models, so we have instituted a hybrid, which I call a federated model. Like the split between a national government and state governments, we make sure the basic framework you described is the high-water mark for everybody, but then we also tailor things to suit different verticals and different jurisdictions as needed. That keeps our responsiveness fast enough to work with the market and with the fact that the cybersecurity landscape is dynamic and always changing.

Many of the things you do in big manufacturing and the supply chain are planned years in advance. That’s not something you can really do well with cybersecurity. You have to be able to react and respond very quickly, but at the same time, you have to be very proactive, so there’s a lot of balancing going on. We’re in that position, too: We’re not just part of the supply chain pushing things upstream; as a big, giant parent company of many subsidiaries, we’re also consuming a lot of things ourselves. We’re pulling things in from downstream, so we have to think about it both ways. Also, many of the products manufactured within the supply chain incorporate commercial off-the-shelf elements into the product itself. If we’re not very good at both parts of that, we’re going to miss something.

Turner: One of the things we have homed in on is how we handle software security and software supply chain risks from a software development lifecycle standpoint. If you look across the lifecycle, there are certain phases where a lot of rigor is typically applied and certain phases where little or no rigor is applied. What we have found is that the place people pay the most attention to is the source code they produce. Most people are running static code analyzers and software composition analysis (SCA) tools.

They’re doing great things on the software they produce, but there’s this implicit trust in their third-party dependencies. A lot of times, they don’t understand the third-party code as well as they do their own, so it’s more challenging for them to understand where there are problems. And product teams are under an enormous amount of pressure to get patches out the door very quickly when vulnerabilities occur. I find it very rare that suppliers are actually validating upstream patches. When they update their dependencies, they don’t really know what goes into them, but they know there’s a vulnerability and they need to get it taken care of. Sometimes the updates can be more dangerous than not fixing the vulnerability in the first place.
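One lightweight way to start validating an upstream patch, rather than trusting it blindly, is to diff the component inventory before and after the update and review anything that changed. The sketch below is illustrative only: the file names are hypothetical, and it assumes each release ships a simple JSON list of name/version components.

```python
import json

def load_components(path):
    """Load a JSON inventory of {"name": ..., "version": ...} components."""
    with open(path) as f:
        return {c["name"]: c["version"] for c in json.load(f)}

def diff_inventories(old_path, new_path):
    """Report components added, removed or re-versioned by an update."""
    old, new = load_components(old_path), load_components(new_path)
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(n for n in set(old) & set(new) if old[n] != new[n])
    return added, removed, changed

if __name__ == "__main__":
    # Hypothetical file names: inventories exported before and after the patch.
    added, removed, changed = diff_inventories("release-1.0.json", "release-1.1.json")
    print("Review before accepting the update:")
    print("  added:", added)
    print("  removed:", removed)   # a deleted library can signal abandonware
    print("  changed:", changed)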

We’re seeing some of this with the abandonware-style attacks happening now around the Russia-Ukraine conflict. Developers are making a statement about the software libraries they support by destroying or deleting the code, and when downstream consumers update the library, it completely breaks their application. Now they’ve gone from “I have a vulnerability that might be exploited” to “I have a product that does not work at all. I’ve created an operational failure because I’m not validating that stuff.” The other area where I see people put a lot of rigor is the packaging and distribution side of things. I think there’s an overreliance on hashing and code signing as a measure of trust and authenticity in software. The reality is that most supply chain attacks are happening upstream.

Typically, attackers are not compromising the download site and uploading malicious software. It’s an attack against the components, or the development environment, or the developer machines that are writing code, or even the compilers. These attacks are happening upstream. By the time you get to the hashing and code signing stage, you’re at what I call a last-mile control. It’s happening after the attack has already occurred, so now we’re signing malicious software, [and] we’re signing and hashing things that have already been compromised. How are the downstream consumers supposed to validate this stuff? All they have is the hash or the code signature to go on. Software consumers are left with very few mechanisms at their disposal to understand whether what they’re using is OK or not.
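Hash checking is easy to do, and that is precisely its limit: It only proves that the artifact you downloaded matches the one the publisher released, not that the release itself is clean. A minimal sketch in Python (the file name and expected digest are placeholders):

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder values: the installer you downloaded and the digest the vendor published.
EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"
if sha256_of("vendor-installer.bin") == EXPECTED:
    print("Digest matches the published release.")
    # Caveat: if the build environment was compromised upstream, the vendor
    # published a digest of already-malicious software, and this check passes.
else:
    print("Digest mismatch: do not install.")
```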

Deskurakis: I think that story is evolving. If we go back to what we were doing to secure software, systems and products 10 or 15 years ago, it looks like the Stone Age. We’re evolving, and it’s a continuous effort on a path to maturity as we learn more and build the methods to understand these systems. The way you attack a complex system is through the vulnerable elements within it. As an attacker, how am I going to go after this system and shut it down or ransom it? I want to know where the soft spots are. As the consumer of those upstream elements, it’s just as important to know them. This is an area where we’re evolving.

It’s important that you understand the composition of the system. Part of that composition story is something we refer to as an SBOM. What I’ll do is shift this to [Tony], because the way we do it in manufacturing is starting to become a risk management story. We’re trying to boil the ocean: identify all the risks, then manage them. But when you have a complicated system built out of a million things, it’s important that you have an inventory, so when something goes wrong, you know where it is, which parts of your system have it [and] which of the products in your building ecosystem have an element that’s vulnerable.

ICSP: Tony, can you explain a little bit about software bill of materials, or SBOMs?

Turner: A software bill of materials, or SBOM, is an inventory, an ingredients list. I frequently describe this [as] similar to a box of cereal. When you buy a box of cereal at the grocery store, you have an ingredients list. It tells you all the chemicals, wheat and sugar that go into that box of cereal. So if you’re concerned about your personal health, it’s easy enough for you to read that ingredients list and determine if anything that’s going into that is not something you want to put in your body. Is it safe for you?

For the longest time, software consumers especially have not really understood what went into the software they buy. It’s always been kind of a black box. An SBOM just describes all the ingredients in a piece of software. There’s no source code or proprietary intellectual property that typically goes into a software bill of materials — just the ingredients list and some supporting metadata, whether that’s hashes, component names and versions, dependency relationships, time stamps, CPE values or PURLs (package URLs). These are software identifier references that can be used to do things like look up a component in the National Vulnerability Database and determine whether the components being delivered to you in that software might be vulnerable.
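As a rough illustration of that metadata, here is a pared-down, CycloneDX-flavored SBOM entry built in Python. The field names follow the CycloneDX format, but the document is trimmed for illustration and is not a complete, spec-conformant SBOM; the hash content is a placeholder.

```python
import json

# A pared-down, CycloneDX-flavored SBOM: an ingredients list plus metadata,
# with no source code or proprietary IP.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {
            "type": "library",
            "name": "log4j-core",
            "version": "2.14.1",
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
            "cpe": "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*",
            "hashes": [{"alg": "SHA-256", "content": "<digest of the jar>"}],
        }
    ],
}

# The purl and cpe identifiers are what let a consumer look the component up
# in sources like the National Vulnerability Database.
print(json.dumps(sbom, indent=2))
```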

Deskurakis: This is really important, especially from the manufacturer perspective. We have hundreds of major products and thousands of components that those products are built out of. Leadership came to my team and said, “Quick, tell us where we are vulnerable to this Log4j problem, and what do we need to do next?” If you don’t have effective SBOMs as part of your offerings, it’s going to take you months to figure that out, because you’re going to have to go piecemeal and do composition analysis one product at a time. If you’re a smaller company with one product, it’s a simple task. But when you have a multitude of products, putting together the big picture, an inventory of where your problems are so you can devise a quick fix, becomes problematic without something like an SBOM.
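This is exactly the query that SBOMs make cheap. Below is a minimal sketch, assuming one exported CycloneDX JSON SBOM per product in a single directory; the directory layout and component name are illustrative, and real triage would still compare each hit against the published advisories.

```python
import json
import pathlib

COMPONENT = "log4j-core"  # the component leadership is asking about

def products_with_component(sbom_dir, component_name):
    """Scan a directory of CycloneDX JSON SBOMs for a named component."""
    hits = []
    for path in pathlib.Path(sbom_dir).glob("*.json"):
        sbom = json.loads(path.read_text())
        for comp in sbom.get("components", []):
            if comp.get("name") == component_name:
                hits.append((path.stem, comp.get("version")))
    return hits

if __name__ == "__main__":
    # Hypothetical layout: one exported SBOM per product in ./sboms/
    for product, version in products_with_component("sboms", COMPONENT):
        print(f"{product}: ships {COMPONENT} {version} -- check against advisories")
```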

SBOM, to me, is the enabler for real risk management. You can’t manage risk unless you have an inventory, and without one you can’t build an effective mitigation plan across the purview of all your operations. Not only do you need to understand the high-level components in your systems, but, as you said, it’s a recipe: If I’ve got a bad meal and I don’t understand all the parts of my recipe, all the things I’ve included in the meal, I’m not going to be able to identify where it went wrong. So it’s elemental to use it for risk management, and SBOM is one of our methods. It’s an important one because, looking at our downstream community, when we’re ingesting things into the company, we’re trying to look at what they’re made of.

Not every supplier we’ve had historically does a good job of inventorying what their offering is made of. Part of that is due to a lack of maturity in their processes. Sometimes I’m debating with suppliers [because] they want to tell us that it’s proprietary and that kind of thing, which is why I think one of your key points earlier was transparency. You have to be able to offer transparency upstream in order to actually get things right, because this is a community endeavor.

Turner: Log4j is such a great example of this because of the multitude of products affected by it. In some instances, the techniques we used as an industry to identify Log4j exposure, scanners looking for files that indicated Log4j, were not effective enough, because the Log4j library can be compiled within other code. Just looking for evidence of a file in a file system is not sufficient. We have to know all the components that go into all of this software. From a manufacturing standpoint, that can be very challenging. If you look at Ripple20 and the other vulnerabilities that have hit manufacturers in the last few years, many manufacturers spent hundreds of thousands, or even millions, of dollars just answering the question: Are my products vulnerable? Now bring this problem downstream to the software consumer.
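That is why the community scanners that held up best looked inside archives rather than just at file names. Here is a rough sketch of the idea in Python, recursing into nested JARs to look for the class at the heart of Log4Shell; the root directory is illustrative, and a real scanner would handle more archive formats and bundling tricks.

```python
import io
import pathlib
import zipfile

# The class removed/patched in fixed Log4j releases; its presence inside an
# archive is a common indicator used by Log4Shell scanners.
TARGET = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def scan_archive(data, label, findings):
    """Look for the Log4Shell-relevant class, recursing into nested archives."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        for name in zf.namelist():
            if name == TARGET:
                findings.append(label)
            elif name.endswith((".jar", ".war", ".ear")):
                scan_archive(zf.read(name), f"{label} -> {name}", findings)

def scan_tree(root):
    findings = []
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix in (".jar", ".war", ".ear"):
            try:
                scan_archive(path.read_bytes(), str(path), findings)
            except zipfile.BadZipFile:
                pass  # not a readable archive
    return findings

if __name__ == "__main__":
    for hit in scan_tree("/opt/apps"):  # illustrative root directory
        print("Embedded Log4j found:", hit)
```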

Downstream consumers are buying software from thousands of product manufacturers. Addressing this manually becomes completely unscalable without some sort of automation and without visibility and transparency across the entire software ecosystem to understand where they have a problem. With Log4j, we still see folks trying to understand whether they have an exposure. Early on, shortly after the news landed, a very large agency sent us over a million CPEs, or products in an inventory file, and asked, “How many of my products are vulnerable to Log4j?” Without a massive number of software bills of materials, it becomes very challenging to answer that question.

The other thing we have to remember is that this is not a once-and-done kind of thing. There are two dimensions of complexity here that require continuous monitoring. First, we need an SBOM for every new piece of software that is issued. On average, we see about five releases a year (some more, some less) for most software providers and most products. If I’m dealing with 100 products, that’s now 500 SBOMs a year I need to monitor. Second, just because I assessed an SBOM when the software was released doesn’t mean a new issue won’t crop up tomorrow or the next day, so I need to continuously reanalyze. The way I think of this is new insights on old data: We need to continuously look for new insights in the information we already have at our disposal. Even just getting visibility into the problem in the first place is extremely challenging.
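“New insights on old data” is something you can automate. As one illustration, a stored component inventory can be re-queried against a public vulnerability feed on a schedule. The sketch below uses the query endpoint of OSV.dev, a real, free vulnerability database; the component list is illustrative, and it assumes the third-party requests library is installed.

```python
import requests  # third party: pip install requests

OSV_URL = "https://api.osv.dev/v1/query"

# Illustrative inventory: purls recorded from previously ingested SBOMs.
COMPONENTS = [
    "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
    "pkg:pypi/requests@2.25.0",
]

def known_vulns(purl):
    """Ask OSV.dev for vulnerabilities affecting a package URL (with version)."""
    resp = requests.post(OSV_URL, json={"package": {"purl": purl}}, timeout=30)
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

if __name__ == "__main__":
    # Run this on a schedule: same old SBOM data, freshly published advisories.
    for purl in COMPONENTS:
        ids = known_vulns(purl)
        if ids:
            print(f"{purl}: {', '.join(ids)}")
```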
