From the C-suite and board of directors down to those on the front lines of cyber defense, there is a constant struggle to strengthen cyber hygiene and minimize risk, and it has become vital for organizations to validate cybersecurity effectiveness.
Organizations make significant investments in security infrastructure, hire and train teams and put processes in place to protect critical assets. A Mandiant research report shows that without evidence of security performance, those organizations are operating on assumptions that don't match reality, leaving them exposed to significant risk.
The best way for an organization to combat this disconnect is to validate the effectiveness of its security program through ongoing, automated assessment, optimization and rationalization, the report said. That will enable organizations to minimize cyber risk across the entire enterprise by protecting critical assets, along with brand reputation and economic value.
There is no doubt that measuring the effectiveness of security controls, and justifying the investment in them, has become a key performance metric for enterprises, because boards of directors and chief executives are expected to provide verifiable proof that business assets are protected from the fallout of a potential breach.
However, as organizations begin to address cyber risk as a business problem, they also continue to manage security as an information technology (IT) function, according to the Mandiant report, entitled "Deep Dive into Cyber Reality." This dynamic exposes the misalignment between IT, which owns the infrastructure, and the security team, which owns the cybersecurity controls and processes that protect the business. Researchers found this disconnect increases the need for security teams to generate reliable evidence of effectiveness.
Security leaders report they need to be able to confidently answer important questions, such as:
- How effective are my security controls?
- How quickly can I assess the relevance of threat intelligence or my exposure to a likely attack?
- How well do I stop data leakage and protect data integrity?
- How can I simplify and standardize my security stack?
- What evidence can I provide with key security metrics for my executives?
Right now, the survey found, organizations are performing below their predicted levels of effectiveness.
Data show companies face a discrepancy between their expected capabilities and the measured results. On average, they detect only 26% of attacks and prevent only 33% of them, which leaves a clear opportunity to optimize their investments.
In addition, it is alarming that alerts are generated for only 9% of attacks.
Altogether, this has a negative impact on incident response because security information and event management (SIEM) systems and other technologies responsible for triggering alerts cannot deliver a high enough level of fidelity to prioritize and address security concerns.
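To make these percentages concrete, the scorecard a validation platform produces can be thought of as simple rates over a set of simulated attack behaviors. The following is a minimal, hypothetical sketch (the class, field names and sample data are illustrative assumptions, not taken from the Mandiant report) of how detection, prevention and SIEM-alert rates might be computed from test results:

```python
# Hypothetical sketch of scoring control effectiveness from security
# validation test results. All names and data are illustrative only.
from dataclasses import dataclass

@dataclass
class AttackResult:
    technique: str      # e.g. "lateral movement"
    detected: bool      # did any control log the behavior?
    prevented: bool     # did any control block the behavior?
    siem_alerted: bool  # did a corresponding SIEM alert fire?

def effectiveness_rates(results):
    """Return (detection, prevention, alert) rates as percentages."""
    n = len(results)
    if n == 0:
        return (0.0, 0.0, 0.0)
    return (
        100 * sum(r.detected for r in results) / n,
        100 * sum(r.prevented for r in results) / n,
        100 * sum(r.siem_alerted for r in results) / n,
    )

# Example: three simulated behaviors, only one of which reaches the SIEM.
results = [
    AttackResult("data exfiltration", True, False, siem_alerted=True),
    AttackResult("lateral movement", True, True, siem_alerted=False),
    AttackResult("command and control", False, False, siem_alerted=False),
]
detect_pct, prevent_pct, alert_pct = effectiveness_rates(results)
```

The gap between `detect_pct` and `alert_pct` in such a scorecard is exactly the alerting shortfall the report describes: behaviors a control observed but that never surfaced in the SIEM.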
When testing through security validation, several attacker tactics and techniques are associated with the challenges most commonly found in enterprise environments.
Security tools are often configured to address these challenges but, in practice, may be poorly optimized. The most common reasons for poor optimization include:
- Deployed under default “out-of-the-box” configurations
- Lack of resources to tune and tweak post-deployment
- Security events not making it to the SIEM
- Inability to force controls testing
- Unexpected changes or drift in the underlying infrastructure
When security executives were asked, "How do you believe your controls are performing in each focus area?" many found, after executing an initial iteration of testing, that their production environments performed well below expectations against these challenges:
- Infiltrations and ransomware
- Policy evasion
- Malicious file transfer
- Command and control
- Data exfiltration
- Lateral movement
One case in point is the command and control area, where 97% of the tested command and control behaviors executed did not generate a corresponding alert in the SIEM.
In one case history, a critical infrastructure customer in the energy sector leveraged security validation to rationalize significant security investments and identify areas for divestiture. The team's testing identified areas of overlap in capabilities, inefficiencies in product expectations and gaps in overall security posture. The findings provided evidence to support cost reductions in endpoint technologies, correct alerting gaps to the SIEM and deliver improved executive reporting through a third-party analytics platform.
Common causes for these issues included outdated or missing site classification, a lack of SSL inspection and security events not making it to the SIEM.
This content originally appeared on ISSSource.com. ISSSource is a CFE Media content partner.