Enhancing Security Detection: Revolutionizing Risk-Based Alerting


What is Risk-Based Alerting?

Risk-based alerting (RBA) is a strategy that uses data analysis and prioritization to issue alerts or notifications when potential risks reach certain predefined levels. The severity and potential impact of each alert are assessed: an alert indicating a potential data breach or critical system compromise is assigned a higher risk level than one relating to an isolated, low-impact incident. Once the detected events have been assigned risk scores and amplification factors, the system can prioritize alerts based on the associated risk levels. Higher-risk, higher-priority events are escalated to security analysts for immediate action or to automated response mechanisms.
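The scoring and escalation flow described above can be sketched in a few lines of Python. This is a minimal illustration only: the event fields, score scale and escalation threshold are invented for the example and do not reflect any particular product's model.

```python
# Minimal sketch of risk-based alert prioritization (illustrative only;
# event fields, scores and the threshold are hypothetical).
from dataclasses import dataclass


@dataclass
class Event:
    description: str
    base_risk: float            # assessed severity of the event type (0-10)
    amplification: float = 1.0  # contextual factor, e.g. a critical asset

    @property
    def risk_score(self) -> float:
        return self.base_risk * self.amplification


def prioritize(events, escalate_threshold=7.0):
    """Rank events by risk and split them into escalations vs. backlog."""
    ranked = sorted(events, key=lambda e: e.risk_score, reverse=True)
    escalate = [e for e in ranked if e.risk_score >= escalate_threshold]
    backlog = [e for e in ranked if e.risk_score < escalate_threshold]
    return escalate, backlog


events = [
    Event("possible data breach", base_risk=8.0, amplification=1.2),
    Event("isolated low-impact anomaly", base_risk=2.0),
    Event("critical system compromise", base_risk=9.0),
]
escalate, backlog = prioritize(events)
```

With these example values, the breach (score 9.6) and the compromise (9.0) are escalated, while the isolated anomaly (2.0) stays in the backlog.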

This approach is considered more efficient and effective than generating alerts for every possible event and helps companies to manage and respond to risks in a more targeted way. Over time, risk-based alerting should evolve and improve based on an organization's experiences and adaptations. This adaptability should ensure that the security strategy remains effective in the face of the changing threat landscape. Finally, it is about money too: by targeting high-risk areas and threats, companies can use their security budget more efficiently and ensure that investments are made where they are needed most.

Network Detection and Response (NDR) enables organizations to use continuous monitoring, machine learning and contextual intelligence to deliver advanced threat scores, potentially weighted by a company's own risk assumptions. Alerts can then be prioritized and categorized according to the perceived level of risk, improving threat identification and response.

How RBA reduces alert volumes

Going Above and Beyond as a New Standard

NDR solutions continuously monitor network traffic, endpoints and other data sources to identify potentially suspicious or malicious activity. They collect and aggregate data from various sources, such as network devices, servers, applications and endpoints. This data includes network logs (NetFlow, IPFIX, firewall logs) as well as other communication logs, events and alerts or connections generated by the system or triggered by internal servers.

The collected data is normalized and enriched by the NDR to ensure that it is in a consistent format and carries as much context as possible. Enrichment includes adding metadata, asset details, user information and the possible impact of the event, e.g., source and destination. This enables more comprehensive monitoring of possible anomalies and richer contextual information about network traffic and user behavior.
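A normalization-and-enrichment step like the one described above could look roughly like this. The field names, lookup tables and record schema here are hypothetical placeholders, not ExeonTrace's actual data model.

```python
# Illustrative sketch of log normalization and enrichment.
# Field names and the asset/user lookup tables are invented for the example.
ASSET_DB = {"10.0.0.5": {"asset": "db-server-01", "criticality": "high"}}
USER_DB = {"10.0.0.5": "svc_database"}


def enrich(raw_flow: dict) -> dict:
    """Normalize a raw flow record and add asset/user context."""
    record = {
        # Different sources name the same fields differently; normalize them.
        "src": raw_flow.get("srcaddr") or raw_flow.get("src_ip"),
        "dst": raw_flow.get("dstaddr") or raw_flow.get("dst_ip"),
        "bytes": int(raw_flow.get("bytes", 0)),
    }
    ctx = ASSET_DB.get(record["dst"], {})
    record["dst_asset"] = ctx.get("asset", "unknown")
    record["dst_criticality"] = ctx.get("criticality", "unknown")
    record["dst_user"] = USER_DB.get(record["dst"], "unknown")
    return record


enriched = enrich({"srcaddr": "192.168.1.10", "dstaddr": "10.0.0.5", "bytes": "4096"})
```

The enriched record now tells an analyst not just that traffic went to 10.0.0.5, but that the destination is a high-criticality database server and which service account is associated with it.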

The real-time behavioral analysis of network traffic and user actions by NDR, combined with this comprehensive context, improves the accuracy of individual risk assessments and enables rapid identification of and response to even advanced incidents such as APTs (advanced persistent threats).

How ExeonTrace, a leading ML-based NDR, analyzes metadata to provide network visibility, anomaly detection and incident response

How Well Does Risk-Based Alerting Work With NDR?

The added value of a Network Detection and Response (NDR) solution for risk-based alerting, besides its continuous monitoring capabilities, is the implementation of machine learning: it facilitates risk-based alerting by using advanced analytics, contextual information, threat data and behavioral analysis to assess the potential risk associated with events detected in different network areas. NDR solutions can weight triggered events differently and respond to incidents based on the risk assessment, e.g., by isolating vulnerable endpoints or blocking malicious traffic.

The Swiss NDR solution ExeonTrace has dynamic analysis and machine learning capabilities. It uses behavior-based anomaly detection techniques to identify threats based on deviations from normal network and user behavior.

The machine learning algorithms in ExeonTrace detect patterns and anomalies in network traffic and can alert, with or without pre-assigned risk values, based on deviations from established patterns. Unusual activities with high anomaly values in critical network areas can be flagged as high-risk events. While SIEMs mostly generate events, NDRs distinguish clearly between an event and an alarm. An event, as typically observed by a SIEM, is a change to the normal behavior of a system or a person, for example when router ACLs are updated or a firewall policy is pushed. An alert, by contrast, is the notification that a particular event has occurred which has been identified and put into context as abnormal and potentially suspicious, whether by the SOC's definitions, threat patterns or ML algorithms, and it prompts action from the security team, for example when ongoing malicious communication between endpoints is detected. The generated events are always evaluated against a threat score, and when a certain threshold is exceeded, an alarm is triggered.
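The event-versus-alarm distinction can be sketched as follows: every observation becomes a scored event, but only scores above a threshold become alarms. The scores, the pattern bonus and the threshold below are invented for illustration and are not ExeonTrace's actual scoring logic.

```python
# Sketch of the event-vs-alarm distinction: every event gets a threat
# score, but only scores above a threshold raise an alarm.
# All numeric values here are hypothetical.
ALARM_THRESHOLD = 0.8


def score_event(anomaly_value: float, matches_threat_pattern: bool) -> float:
    """Combine anomaly strength with known-threat context into one score."""
    score = anomaly_value
    if matches_threat_pattern:
        score = min(1.0, score + 0.3)  # known threat patterns raise the score
    return score


def triage(events):
    """Return only the event names whose score crosses the alarm threshold."""
    alarms = []
    for name, anomaly, pattern in events:
        if score_event(anomaly, pattern) >= ALARM_THRESHOLD:
            alarms.append(name)
    return alarms


observed = [
    ("router ACL update", 0.2, False),          # routine change: event only
    ("endpoint C2-like beaconing", 0.6, True),  # abnormal + known pattern
]
alarms = triage(observed)
```

The routine ACL update stays a low-scoring event, while the beaconing crosses the threshold and becomes an alarm that demands the security team's attention.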

To improve risk-based alerting, ExeonTrace includes the concept of "risk boosting": an additional boosting factor for different networks. In the configuration, SOC analysts can specify networks or IPs that they want to weight more heavily. This enhances risk-based alerting and allows them to prioritize alerts by potential threat level while still monitoring individual anomalies regardless of asset class. Filtering out lower-priority alerts and false positives reduces alert fatigue: boost factors default to 1, and setting a factor below 1 for less critical networks results in fewer alerts from them. A boost factor greater than 1 results in more alerts than before and should be used for critical networks in particular.
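The boosting mechanism can be illustrated with a per-network multiplier applied before the alarm threshold. The network ranges, factors and threshold below are made-up examples, not ExeonTrace's configuration format.

```python
# Sketch of per-network "risk boosting": a factor scales the threat score
# before the alarm threshold is applied. Factors default to 1; values < 1
# de-emphasize a network, values > 1 emphasize it. All values are invented.
import ipaddress

BOOST_FACTORS = {
    "10.10.0.0/16": 2.0,    # critical server network: boost
    "192.168.0.0/16": 0.5,  # lab network: de-emphasize
}
ALARM_THRESHOLD = 0.8


def boosted_score(ip: str, raw_score: float) -> float:
    """Scale a raw threat score by the boost factor of the source network."""
    addr = ipaddress.ip_address(ip)
    factor = 1.0  # default: score unchanged
    for net, f in BOOST_FACTORS.items():
        if addr in ipaddress.ip_network(net):
            factor = f
            break
    return raw_score * factor


# The same raw score leads to different outcomes depending on the network:
critical = boosted_score("10.10.3.7", 0.5)  # boosted to 1.0 -> alarm
lab = boosted_score("192.168.1.4", 0.5)     # reduced to 0.25 -> no alarm
```

An anomaly of identical raw severity thus triggers an alarm when it occurs in the boosted server network, but remains a quiet event in the de-emphasized lab network.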


Better Visibility and Faster Qualification for your RBA: NDR vs. SIEM

Risk-based alerting in a Security Information and Event Management system (SIEM) is possible but is not without challenges. The sheer number of events can lead to both false negatives and false positives, potentially missing real threats or overwhelming security teams with low-priority alerts.

As NDR is primarily network-centric, it is well suited for detecting and mitigating network-related threats, whereas broader tools such as SIEM may not achieve the same depth of network visibility.

Risk-based alerting with an NDR and machine learning can reduce the number of false positives and irrelevant alerts, allowing security teams to focus on real threats. This means less time is wasted investigating non-issues: unlike with a SIEM, alert fatigue and wasted resources can be minimized, making NDR a critical component of a proactive and effective cybersecurity strategy. With the help of machine learning in NDR, the response to high-risk alerts can be automated, whereas SIEM systems often require manual intervention first. Machine learning algorithms in the NDR can also analyze historical data to predict potential future threats.

The system can adapt to new and evolving threats by continuously learning from incoming data and improving its RBA risk assessment capabilities over time.

By focusing on high-risk alerts, NDR systems also facilitate an organized and efficient incident response process: decision-making is streamlined, and the response effort remains commensurate with the risk. Even automated incident response is possible, such as isolating vulnerable endpoints or blocking malicious traffic. That said, risk-based alerting has its caveats: alert prioritization can be subjective, accurate contextual information is essential, insider threats may be overlooked, and data quality issues or legacy systems can create blind spots. Regulatory compliance might not align with risk-based alerting, which affects organizations subject to specific regulations.

Make it Easy But Efficient: Fewer False Positives, More Detection

In summary, while risk-based alerting enhances security, it must address issues like false alerts, complexity, evolving threats, resource allocation, insider threats, and data quality. NDR solutions are better suited for risk-based alerting because they focus on real-time network visibility, behavioral analysis and false alarm reduction. While SIEM systems have their strengths in log management and historical analysis, NDR is an important component in terms of the evolving threat landscape and the need for early risk assessment.

For more in-depth information on how this works concretely for organizations and specific use cases, download the whitepaper below or speak to one of our security experts.

Remove False Positives, Raise the Bar

A Security Detection Whitepaper

Klaus Nemelka

Product Marketing Manager