Can engineering afford a world of false positives?

As a young kid, I would ask my dad, “Dad, what do engineers do?” The response was always, “They solve problems.” Then I’d ask, “What do designers do?” He’d say, “They create problems.” I do not hold an engineering degree, but over a 26-year military career and 10 years as a government contractor, I certainly tried to solve a lot of problems. As a cybersecurity analyst and threat researcher without a computer science or engineering degree, I can safely say I have been exposed to many problems in cybersecurity applications, products, solutions and incident detection and response processes. So it piqued my curiosity: what must it be like for engineering (and manufacturing) firms that rely on cybersecurity solutions to protect their company’s information technology (IT) and data, especially in a world rife with false positives?

Security research

In July 2021, we surveyed 450 people in IT at the decision-making level. Some rated both their cybersecurity staff and their cybersecurity software as “very effective.” More than half (61%) of this group were “very concerned” about the threat of cyberattacks harming their organization. When asked whether staff or tools were more critical for effective defense against cyberattacks at their organization, 28% favored the experience and expertise of their cybersecurity team, while 19% favored their cybersecurity software.

What surprised us, although we had heard of it before, was that 80% reported that security analysts at their organization spend time resolving false positive alerts from their current security system. Among these respondents, nearly half (47%) said it is standard practice for their IT security analysts to ignore 50% or more of their security system alerts. Who engineers a solution that generates alerts, only for analysts to ignore half of them?

Introducing false positives

Today’s fast-paced cybersecurity technology developments have led to automated tools for everything from vulnerability scanning and threat detection to intrusion prevention and ad/malware blocking. Tracking hundreds or thousands of anomalies across large networks has always been challenging for IT organizations, and the daily management of numerous alerts can be stressful, a phenomenon experts call alert fatigue. Organizations have consistently struggled to improve security alert management to enhance efficiency and productivity. According to industry research, around 17,000 malware alerts are generated in a typical organization every week, but only 3,230 are genuine, and of those only 680 are actually investigated.
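To put those figures in perspective, here is a quick back-of-the-envelope calculation, a sketch using only the numbers cited above:

```python
# Rough arithmetic on the weekly alert figures cited above.
weekly_alerts = 17_000   # malware alerts generated per week
genuine = 3_230          # alerts that turn out to be genuine
investigated = 680       # genuine alerts that are actually investigated

print(f"False positives: {1 - genuine / weekly_alerts:.0%} of all alerts")  # ~81%
print(f"Investigated:    {investigated / genuine:.0%} of genuine alerts")   # ~21%
```

Roughly four out of five alerts are noise, and only about a fifth of the real ones ever get a human’s attention.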

Most organizations rely on threat monitoring and detection solutions that regularly comb through network and user activity data, searching for irregularities that indicate malicious activity. When a system detects an incident, it generates an alert that a human must verify as a threat. These solutions frequently generate false positive alerts, and when that happens too often, overwhelmed analysts end up ignoring many of them.

“False positive” is the ubiquitous term for a security incident or file mislabeled as indicating a threat in your network. Sometimes alerts are generated on everything that seems suspicious to avoid missing any malicious indicators.

What are false positives?

A false positive is a false alarm telling the security team that a security event has been detected when none has occurred. Put simply, false positives are like a house alarm that goes off to indicate forced entry when, in fact, no such incident occurred. Growing numbers of false positive security incidents breed analyst inefficiency.

Managing false positives is tedious and tiring and can leave security teams puzzled. False positives can lead security operations on a wild goose chase and cause alert fatigue, which sets in when analysts are exposed to an overwhelming number of alerts; triage becomes monotonous, and the team loses confidence in its security solution. This has become a major issue for security operations center (SOC) teams globally. According to Forrester’s 2020 report, the average SOC team receives around 11,000 alerts per day, most of which require manual investigation.

What causes false positives?

Various factors contribute to the generation of false positive alerts. We have listed a few that trigger these alerts; a brief sketch after this list illustrates each approach:

  • Signature-based analysis: Relies on matching signatures of files and code with known signatures of malicious files and code maintained in a database. The solution will compare the new file to the database, and if it finds a match, it will generate an alert.
  • Heuristic analysis: Rather than matching signatures, heuristic analysis uses rules and algorithms to scan for commands that may be malicious, sometimes allowing the code to execute in a controlled environment to observe its behavior. It can detect novel threats before they cause any damage, but this flexibility can also trigger false positive alerts.
  • Anomaly-based analysis: In anomaly-based detection systems, alerts depend on a model of network normal behavior. These systems generate alerts when they detect deviations from that model of normal behavior. They tend to set off many false positives, as they often cannot distinguish between attacks and harmless anomalous behavior.
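To make those differences concrete, here is a minimal, illustrative sketch of each approach in Python. The hash database, suspicious traits and thresholds are all invented for illustration; no real product works from logic this simple:

```python
import hashlib
import statistics

# Signature-based: compare a file's hash against a database of known-bad hashes.
KNOWN_BAD_HASHES = {"<sha256-of-known-malware>"}  # hypothetical signature database

def signature_alert(file_bytes: bytes) -> bool:
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES  # only as good as the database

# Heuristic: rule-based scoring of traits that may indicate malicious code.
def heuristic_alert(code: str) -> bool:
    suspicious_traits = ["eval(", "base64", "powershell -enc"]
    score = sum(trait in code.lower() for trait in suspicious_traits)
    return score >= 2  # legitimate scripts can also score two out of three

# Anomaly-based: flag deviations from a baseline model of "normal" behavior.
def anomaly_alert(bytes_per_min: float, baseline: list[float]) -> bool:
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(bytes_per_min - mean) > 3 * stdev  # harmless spikes also trip this
```

Each `return` line is a place where a benign file, script or traffic spike can cross the threshold and become a false positive.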

Effects of false positives

But wait … we need all three of these analytical methods to have an effective cybersecurity posture, right? While that is true, each method also carries its own inherent defects. Signature-based alerts are only as good as the database of known malicious signatures. Heuristics often generate false positive alerts for legitimate processes and code that looked like a duck and walked like a duck but didn’t smell like a duck: a two-out-of-three score suggesting the code might be bad. And the trouble with anomalies is first establishing what “normal” really is. All of these take incredible amounts of time to investigate and troubleshoot.

False positives may not seem like an obstacle, but they are extremely corrosive to security procedures: healthy objects like cookies, files and websites get flagged as malicious by a security tool when they are not. A rule that produces false positives can create numerous alerts that security teams cannot simply ignore, and analyzing them takes away time needed to identify actual threats. Large volumes of false positive alerts cause alert fatigue, where analysts end up ignoring legitimate warnings, as our survey indicated.

This can drive analysts to disable or suppress the rules that generate false positives, leaving organizations exposed to the very threats those rules were put in place to prevent. Because false positives directly degrade the efficiency of security teams, organizations need to understand how frequently their deployed security products produce them.

Reducing false positives

Organizations are struggling to find the best way to manage alert overload. Many are hiring more security analysts or simply disabling security features to cope with alert volume. Neither is the solution the industry is seeking.

You could reduce the number of rules in your security information and event management (SIEM) platform, especially rules written explicitly for a specific network device or system you do not have. If a system or device is gone from your network but its rule remains, that rule can create false positives. You might also avoid a SIEM’s default rules, as in some products they are error-prone or mislabeled. If a rule still triggers false positives, keep iterating: divide it into multiple, more specific sub-rules, and continuously test until it stops generating false positives, as the sketch below illustrates.
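As a hedged illustration of that advice (the field names, accounts and thresholds below are hypothetical, not any particular SIEM’s syntax), a broad rule can be split into a more specific one with acceptable exclusions:

```python
# Hypothetical alert record: {"event": ..., "user": ..., "src_host": ..., "count": ...}

# Broad rule: any burst of failed logins triggers an alert.
def broad_rule(event: dict) -> bool:
    return event["event"] == "failed_login" and event["count"] > 5

# Refined sub-rule: exclude known-noisy sources and tighten by context.
SERVICE_ACCOUNTS = {"svc_backup", "svc_monitor"}  # known to retry; whitelisted
INTERNAL_SCANNERS = {"10.0.5.17"}                 # vulnerability scanner host

def refined_rule(event: dict) -> bool:
    if event["user"] in SERVICE_ACCOUNTS:
        return False  # acceptable exclusion; no alert needed
    if event["src_host"] in INTERNAL_SCANNERS:
        return False  # scanner traffic is expected
    threshold = 3 if event.get("privileged") else 10
    return event["event"] == "failed_login" and event["count"] > threshold
```

The refined version fires on the same genuinely suspicious events while staying quiet on the two sources known to produce noise.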

Prioritizing alerts is one of the best techniques a SOC can use to reduce time wasted investigating false positives. Alerts with the highest reliability that are linked to high-level threats should be assigned the top priority. Security teams can then analyze threats from the highest priority down to the lowest, ensuring important alerts are handled first. Lastly, tuning, whitelisting and filtering, along with acceptable exclusions that do not require alerts, can help a rule function more accurately.
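Here is a minimal sketch of such a triage queue, assuming each alert carries a severity and a reliability score (both scales invented here for illustration):

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    severity: int     # 1 (low) .. 5 (critical) threat level if real
    reliability: int  # 1 (noisy rule) .. 5 (rarely a false positive)

def priority(alert: Alert) -> int:
    # High-reliability alerts tied to high-level threats float to the top.
    return alert.severity * alert.reliability

queue = sorted(
    [
        Alert("default rule: odd cookie", severity=1, reliability=1),
        Alert("ransomware-like behavior", severity=5, reliability=4),
        Alert("burst of failed logins", severity=3, reliability=3),
    ],
    key=priority,
    reverse=True,
)
for alert in queue:
    print(f"{priority(alert):>2}  {alert.name}")
```

Multiplying the two scores is just one simple choice; the point is that analysts work the queue from the top, so the noisy, low-stakes alerts no longer set the pace.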

Engineering our way out of the false positive

Let’s give credit where credit is due and acknowledge that many cybersecurity solution designers have done miraculous work creating solutions that autonomously detect, prevent and kill cyber threats before they do damage to our networks, IT and data. Some are better than others. Some integrate with other security, vulnerability management and network performance tools to provide enhanced security.

The engineers, coders and developers who make all this security possible are now looking at integrating artificial intelligence and machine learning into their solutions while also thinking about how to implement zero-trust frameworks in their environments. But just as in physics, for every action there is a reaction. As these hot-topic, futuristic capabilities evolve, I look forward to the industry engineering solutions that solve the error and false positive problems in current and future security tools.

Intrusion is a CFE Media content partner.
