
3CX VoIP Call Detail Records In Graylog

Even with the rise of high-speed networks and sophisticated monitoring tools, VoIP Call Detail Records (CDRs) remain an essential resource for troubleshooting and optimizing bandwidth usage. These records provide a granular view of call quality, latency, jitter, and packet loss—critical factors that directly impact voice performance. While real-time monitoring solutions can detect immediate issues, CDRs offer historical insights that help IT teams pinpoint recurring problems, track trends, and ensure networks are properly provisioned. Whether diagnosing call degradation, planning capacity upgrades, or investigating security anomalies, CDRs are still one of the most reliable tools for keeping VoIP systems running smoothly.

In this blog, we cover the 3CX VoIP PBX and the call detail records it sends to Graylog.

Configuring 3CX for CDR Logging

To do this on the 3CX call server, configure the 3CX CDR service as a client (active socket) pointing to an IP address and port of your choosing. 3CX then ships each call record in a comma-delimited format containing the fields you see in its field list.

3CX Logging Configuration

The field list contains a lot of fields. You can remove or add the ones you want, but keep the order the same, because the order is crucial when you start parsing the data.

Field Definitions

If you go to the 3CX website under the CDR records section, you’ll find definitions for all of the different fields, which will help you understand what the data contains.

Creating a 3CX CDR Input in Graylog

In Graylog, create a 3CX CDR input, which is simply a plain-text (raw) TCP input listening on port 3000.

Grok Pattern for Parsing

Here is a grok pattern called 3CX_CDR. This pattern follows the order of the fields that appear inside the PBX system. Note that the pattern is tied to the image below for the order of the fields; modifying the fields in 3CX will require changes to this pattern.

%{NUMBER:history_id},(?<call_id>[^,]*),%{TIME:duration},%{TIMESTAMP_ISO8601:time_start},%{TIMESTAMP_ISO8601:time_answered},%{TIMESTAMP_ISO8601:time_end},%{WORD:reason_terminated},(?<from_no>[^,]*),(?<to_no>[^,]*),(?<from_dn>[^,]*),(?<to_dn>[^,]*),(?<dial_no>[^,]*),(?<reason_changed>[^,]*),(?<final_number>[^,]*),(?<final_dn>[^,]*),(?<bill_code>[^,]*),(?<bill_rate>[^,]*),(?<bill_cost>[^,]*),(?<bill_name>[^,]*),(?<chain>[^,]*),(?<from_type>[^,]*),(?<to_type>[^,]*),(?<final_type>[^,]*),(?<from_dispname>[^,]*),(?<to_dispname>[^,]*),(?<final_dispname>[^,]*),(?<missed_queue_calls>[^,]*)

Fields available in order within the PBX System based on this grok pattern: 3CX Call Data Fields

The Parsing Rule:

rule "Parse 3CX CDR GROK"
When
   true
       //Route 3CX CDR to Stream old:
then
    let grokp = grok(
        pattern:"%{3CX_CDR}",
        value:to_string($message.message),
        only_named_captures: true
        );
        
    set_fields(grokp);
    set_field("grok_parse",true);
end

It’s important that you don’t reorder these fields unless you also go into Graylog and reorder your grok pattern accordingly. Inside the rule, I’ve referenced the pattern so that when the data comes in, it automatically parses out the records.
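Before pointing 3CX at Graylog, it can help to replay a captured CDR line at the input and confirm the rule fires. The following Python sketch is one way to do that; it assumes the plain-text TCP input from earlier listens on localhost port 3000 and that you have saved a real CDR line to a file named cdr_sample.txt (both values are placeholders, not part of the 3CX or Graylog configuration).

import socket
import sys

GRAYLOG_HOST = "127.0.0.1"   # placeholder: wherever your Graylog input lives
GRAYLOG_PORT = 3000          # the plain-text TCP input created above

def send_cdr_line(line: str) -> None:
    """Send one raw, comma-delimited CDR line to the Graylog TCP input."""
    with socket.create_connection((GRAYLOG_HOST, GRAYLOG_PORT), timeout=5) as sock:
        # The raw/plaintext TCP input treats a newline as the end of a message
        sock.sendall(line.strip().encode("utf-8") + b"\n")

if __name__ == "__main__":
    # cdr_sample.txt is a placeholder file holding one or more captured CDR lines
    path = sys.argv[1] if len(sys.argv) > 1 else "cdr_sample.txt"
    with open(path, "r", encoding="utf-8") as handle:
        for cdr_line in handle:
            if cdr_line.strip():
                send_cdr_line(cdr_line)

After sending a line, search the messages on that input and check that grok_parse is true and the individual CDR fields (from_no, to_no, time_end, and so on) show up as expected.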

Additional Parsing of the Timestamp

rule "Parse - 3cx - End Call TimeStamp Breakout"
When
    $message.grok_parse == true
then
    let grokp = grok(
        pattern:"%{TIMESTAMP_ISO8601}",
        value:to_string($message.time_end),
        only_named_captures: false
        );
        
    set_fields(fields:grokp,prefix:"TimeEnd_");
    set_field("grok_parse_timeend_timestamp",true);
    remove_field("TimeEnd_TIMESTAMP_ISO8601");
    remove_field("TimeEnd_MINUTE");
    remove_field("TimeEnd_SECOND");
end

Graylog for Telecom

VoIP Call Detail Records (CDRs) may not be the flashiest tool in a network administrator’s arsenal, but they remain one of the most reliable. From diagnosing call quality issues to optimizing bandwidth on your network and uncovering security threats, CDRs provide the historical insights needed to keep VoIP systems running smoothly. While real-time monitoring has its place, a solid understanding of CDR data ensures that recurring problems don’t go unnoticed and that networks are properly scaled for future demand. In short, if you’re not leveraging CDRs in your VoIP troubleshooting process, you’re missing a critical piece of the puzzle. Try Graylog to get those VoIP logs, and watch this video!

See the next blog on the 3CX attack detected by Graylog, “Detecting the 3CX Supply Chain Attack with Graylog and Sigma Rules.”

About Graylog  
At Graylog, our vision is a secure digital world where organizations of all sizes can effectively guard against cyber threats. We’re committed to turning this vision into reality by providing Threat Detection & Response that sets the standard for excellence. Our cloud-native architecture delivers SIEM, API Security, and Enterprise Log Management solutions that are not just efficient and effective—whether hosted by us, on-premises, or in your cloud—but also deliver a fantastic Analyst Experience at the lowest total cost of ownership. We aim to equip security analysts with the best tools for the job, empowering every organization to stand resilient in the ever-evolving cybersecurity landscape.

About Version 2 Digital

Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across various areas including cyber security, cloud, data protection, end points, infrastructures, system monitoring, storage, networking, business productivity and communication products.

Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, different vertical industries, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

What To Know About Parsing JSON

If you grew up in the 80s and 90s, you probably remember your most beloved Trapper Keeper. The colorful binder contained all the folders, dividers, and lined paper to keep your middle school and high school self as organized as possible. Parsing JSON, a lightweight data format, is the modern, IT environment version of that colorful – perhaps even Lisa Frank themed – childhood favorite.

 

Parsing JSON involves transforming structured information into a format that can be used within various programming languages. This process can range from making JSON human-readable to extracting specific data points for processing. When you know how to parse JSON, you can improve data management, application performance, and security with structured data that allows for aggregation, correlation, and analysis.

What is JSON?

JSON, or JavaScript Object Notation, is a widely-used, human-readable, and machine-readable data exchange format. JSON structures data using text, representing it through key-value pairs, arrays, and nested elements, enabling data transfers between servers and web applications that use Application Programming Interfaces (APIs).

 

JSON has become a data-serialization standard that many programming languages support, streamlining programmers’ ability to integrate and manipulate the data. Since JSON makes it easy to represent complex objects using a clear structure while maintaining readability, it is useful for maintaining clarity across nested and intricate data models.

 

Some of JSON’s key attributes include:

  • Requires minimal memory and processing power
  • Easy to read
  • Supports key-value pairs and arrays
  • Works with various programming languages
  • Offers standard format for data serialization and transmission

 

How to make JSON readable?

Making JSON data more readable enables you to understand and debug complex objects. Some ways to make JSON more readable include:

  • Pretty-Print JSON: Pretty-printing formats the input string with indentation and line breaks to make hierarchical structures and the relationships between object values clearer (see the sketch after this list).
  • Delete Unnecessary Line Breaks: Removing redundant line breaks while converting JSON into a single-line string literal optimizes storage and ensures consistent string representation.
  • Use Tools and IDEs: Tools and extensions in development environments that auto-format JSON data can offer an isolated view to better visualize complex JSON structures.
  • Reviver Function in JavaScript: Passing a reviver function to the parse() method lets you modify object values during conversion and shape the data according to specific needs.
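To illustrate the pretty-printing point above, here is a small Python sketch that turns a compact JSON string into an indented, human-readable form and back into a single-line string; the profile data is made up for the example.

import json

# A compact, single-line JSON string (made-up example data)
compact = '{"name": "Jane Doe", "skills": ["JavaScript", "Python"]}'

parsed = json.loads(compact)

# Pretty-print: indentation and line breaks make the structure easier to scan
print(json.dumps(parsed, indent=2))

# Back to a compact single-line string, e.g. for storage or transmission
print(json.dumps(parsed, separators=(",", ":")))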

 

What does it mean to parse JSON?

JSON data is typically read as a string, so parsing JSON is the process of converting that string into an object the programming language can work with. For example, in JSON, a person’s profile might look like this:

{ "name": "Jane Doe", "age": 30, "isDeveloper": true, "skills": ["JavaScript", "Python", "HTML", "CSS"], "projects": [ { "name": "Weather App", "completed": true }, { "name": "E-commerce Website", "completed": false } ] }

When you parse this JSON data in JavaScript, it might look like this:

Name: Jane Doe
Age: 30
Is Developer: true
Skills: JavaScript, Python, HTML, CSS
Project 1: Weather App, Completed: true
Project 2: E-commerce Website, Completed: false

 

Even though the information is the same, it’s easier to read because all of the machine-oriented formatting has been removed.
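The same result can be produced in Python with the standard json module. The sketch below parses the profile shown above and prints the same fields; the variable name profile_json exists only for the example.

import json

profile_json = '''
{ "name": "Jane Doe", "age": 30, "isDeveloper": true,
  "skills": ["JavaScript", "Python", "HTML", "CSS"],
  "projects": [ { "name": "Weather App", "completed": true },
                { "name": "E-commerce Website", "completed": false } ] }
'''

# json.loads() turns the JSON string into a plain Python dict
profile = json.loads(profile_json)

print("Name:", profile["name"])
print("Age:", profile["age"])
print("Is Developer:", profile["isDeveloper"])
print("Skills:", ", ".join(profile["skills"]))
for i, project in enumerate(profile["projects"], start=1):
    print(f"Project {i}: {project['name']}, Completed: {project['completed']}")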

Partial JSON parsing

Partial JSON parsing is especially advantageous in environments like Python, where not all fields in the data may be available or necessary. With this flexible input handling, you can give fields default values so that missing data is handled without causing errors.

 

For example, if you only want to know the developer’s name, skills, and completed projects, partial JSON parsing allows you to extract the information you want and focus on specific fields.
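A rough Python sketch of that idea: parse the full document but keep only the fields of interest, falling back to defaults when a key is missing. The field names simply reuse the earlier profile example.

import json

def extract_summary(raw_json: str) -> dict:
    """Keep only the fields we care about, with defaults for anything missing."""
    data = json.loads(raw_json)
    return {
        "name": data.get("name", "unknown"),
        "skills": data.get("skills", []),
        # A missing "projects" key simply becomes an empty list instead of an error
        "completed_projects": [
            p["name"] for p in data.get("projects", []) if p.get("completed")
        ],
    }

print(extract_summary('{"name": "Jane Doe", "projects": [{"name": "Weather App", "completed": true}]}'))
# -> {'name': 'Jane Doe', 'skills': [], 'completed_projects': ['Weather App']}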

 

Why is JSON parsing important?

Parsing JSON transforms the JSON data so that you can handle complex objects and structured data. When you parse JSON, you can serialize and deserialize data to improve data interchange, like for web applications.

 

JSON parsing enables:

  • Data Interchange: Allows for easy serialization and deserialization of data across various systems.
  • Dynamic Parsing: Streamlines integration for web-based applications because JSON is a subset of JavaScript syntax.
  • Security: Reduces injection attack risks by ensuring data conforms to the expected format.
  • Customization: Transforms raw data into structured, usable objects that can be programmatically manipulated, filtered, and modified according to specific needs.

 

How to parse a JSON file

Parsing a JSON file involves transforming JSON data from a textual format into a structured format that can be manipulated within a programming environment. Modern programming languages provide built-in methods or libraries for parsing JSON data so you can easily integrate and manipulate data effectively. Once parsed, JSON data can be represented as objects or arrays, allowing operations like sorting or mapping.
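As a minimal sketch of those operations in Python, the snippet below loads a JSON file and then sorts and maps the resulting list; the file name projects.json and its structure (a list of objects with name and completed keys) are assumptions made only for this example.

import json

# projects.json is assumed to contain something like:
# [{"name": "Weather App", "completed": true}, {"name": "E-commerce Website", "completed": false}]
with open("projects.json", "r", encoding="utf-8") as handle:
    projects = json.load(handle)   # file object -> Python list of dicts

# Sorting: alphabetical by project name
projects_sorted = sorted(projects, key=lambda p: p["name"])

# Mapping: reduce each object to a short status string
statuses = [
    f"{p['name']}: {'done' if p['completed'] else 'in progress'}"
    for p in projects_sorted
]
print("\n".join(statuses))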

 

Parsing JSON in JavaScript

Most people use the JSON.parse() method to convert string-form JSON data into JavaScript objects, since it can handle both simple and complex objects. Additionally, you may choose to pass a reviver function to manage custom data conversions.

 

Parsing JSON in PHP

PHP provides the json_decode function so you can translate JSON strings into arrays or objects. Additionally, PHP provides functions that validate the JSON syntax to prevent exceptions that could interrupt execution.

 

Parsing JSON in Python

Parsing JSON in Python typically means converting JSON strings into Python dictionaries with the json module. This module provides essential functions like loads() for strings and load() for file objects, which are helpful for managing JSON-formatted API data.
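To make the loads()/load() distinction concrete, here is a short sketch; the response_text string and the config.json file stand in for an API response body and a local file, and are not tied to any particular API.

import json

# loads(): parse JSON that is already in memory as a string (e.g. an API response body)
response_text = '{"status": "ok", "items": [1, 2, 3]}'
try:
    response = json.loads(response_text)
except json.JSONDecodeError as err:
    raise SystemExit(f"Invalid JSON in response: {err}")
print(response["status"], len(response["items"]))

# load(): parse JSON straight from an open file object
with open("config.json", "r", encoding="utf-8") as handle:
    config = json.load(handle)
print(type(config))   # usually dict or list, depending on the file's top-level value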

 

Parsing JSON in Java

Developers typically use one of the following libraries to parse JSON in Java:

  • Jackson: efficient for handling large files and comes with an extensive feature set
  • Gson: minimal configuration and setup but slower for large datasets
  • json: built-in package providing a set of classes and methods

 

JSON Logging: Best Practices

Log files often have complex, unstructured text-based formatting. When you convert them to JSON, you can store and search your logs more easily. Over time, JSON has become a standard log format because it creates structured data that allows you to extract the fields that matter and normalize them against the other logs your environment generates. Additionally, as an application’s log data evolves, JSON’s flexibility makes it easier to add or remove fields. And since many programming languages either include structured JSON logging in their standard libraries or offer third-party libraries for it, adopting JSON logging is usually straightforward.

Log from the Start

Making sure that your application generates logs from the very beginning is critical. Logs enable you to debug the application and detect security vulnerabilities. By emitting JSON logs from the start, you make testing easier and build security monitoring into the application.

Configure Dependencies

If your dependencies can also generate JSON logs, consider configuring them to do so, because the structured format makes parsing and analyzing their logs easier.

Format the Schema

Since your JSON logs should be readable and parseable, you want to keep them as compact and streamlined as possible. Some best practices include:

  • Focusing on objects that need to be read
  • Flattening structures by concatenating keys with a separator
  • Using a uniform data type in each field
  • Parsing exception stack traces into attribute hierarchies

Incorporate Context

JSON enables you to include information about what you’re logging for insight into an event’s immediate context. Context that helps correlate issues across your IT environment includes:

  • User identifiers
  • Session identifiers
  • Error messages
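One rough Python sketch of structured JSON logging with that kind of context: the snippet below plugs a small JSON formatter into the standard logging module. The user_id and session_id values are placeholders, and a real application might use an existing library such as python-json-logger instead of hand-rolling the formatter.

import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object."""
    def format(self, record):
        entry = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Pick up context attached via the `extra` argument, if present
        for key in ("user_id", "session_id", "error"):
            if hasattr(record, key):
                entry[key] = getattr(record, key)
        return json.dumps(entry)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# The user and session identifiers travel with the event as structured fields
logger.info("payment declined", extra={"user_id": "u-123", "session_id": "s-456"})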

 

Graylog: Correlating and Analyzing Logs for Operations and Security

 

With Graylog’s JSON parsing functions, you can parse out useful information, like destination address, response bytes, and other data that helps monitor security incidents or answer IT questions. After extracting the data you want, you can use the Graylog Extended Log Format (GELF) to normalize and structure all log data. Graylog’s purpose-built solution provides lightning-fast search capabilities and flexible integrations that allow your team to collaborate more efficiently.

Graylog Operations provides a cost-efficient solution for IT ops so that organizations can implement robust infrastructure monitoring while staying within budget. With our solution, IT ops can analyze historical data regularly to identify potential slowdowns or system failures while creating alerts that help anticipate issues.

With Graylog’s security analytics and anomaly detection capabilities, you get the cybersecurity platform you need without the complexity that makes your team’s job harder. With our powerful, lightning-fast features and intuitive user interface, you can lower your labor costs while reducing alert fatigue and getting the answers you need – quickly.

 


Monitoring for PCI DSS 4.0 Compliance

Any company that processes payments knows the pain of an audit under the Payment Card Industry Data Security Standard (PCI DSS). Although the original PCI DSS had gone through various updates, the Payment Card Industry Security Standards Council (PCI SSC) took feedback from the global payments industry to address evolving security needs. The March 2022 release of PCI DSS 4.0 incorporated changes that intend to promote security as an iterative process while ensuring continued flexibility so that organizations could achieve security objectives based on their needs.

 

To give companies time to address new requirements, audits will begin incorporating the majority of the new changes beginning March 31, 2025. However, some issues will be included in audits beginning immediately.

 

Why did the Payment Card Industry Security Standards Council (PCI SSC) update the standard?

At a high level, PCI DSS 4.0 responds to changes in IT infrastructures arising from digital transformation and Software-as-a-Service (SaaS) applications. According to PCI SSC’s press release, changes will enhance validation methods and procedures.

 

When considering PCI DSS 4.0 scope, organizations need to implement controls around the following types of account data:

  • Cardholder Data: Primary Account Number (PAN), Cardholder Name, Expiration Date, Service Code
  • Sensitive Authentication Data (SAD): Full track data (magnetic stripe or chip equivalent), card verification code, Personal Identification Numbers (PINs)/PIN blocks.

 

To get a sense of how the PCI SSC shifted focus when drafting PCI DSS 4.0, you can take a look at how the organization renamed some of the Requirements:

 

 

PCI Categories, with the PCI 3.2.1 wording and the PCI 4.0 wording side by side:

Build and Maintain a Secure Network and Systems
  • PCI 3.2.1:
    1. Install and maintain a firewall configuration to protect cardholder data
    2. Do not use vendor-supplied defaults for system passwords and other security parameters
  • PCI 4.0:
    1. Install and maintain network security controls
    2. Apply secure configurations to all system components

Protect Cardholder Data (updated to Protect Account Data in 4.0)
  • PCI 3.2.1:
    3. Protect stored cardholder data
    4. Encrypt transmission of cardholder data across open, public networks
  • PCI 4.0:
    3. Protect stored account data
    4. Protect cardholder data with strong cryptography during transmission over open, public networks

Maintain a Vulnerability Management Program
  • PCI 3.2.1:
    5. Protect all systems against malware and regularly update anti-virus software or programs
    6. Develop and maintain secure systems and applications
  • PCI 4.0:
    5. Protect all systems and networks from malicious software
    6. Develop and maintain secure systems and software

Implement Strong Access Control Measures
  • PCI 3.2.1:
    7. Restrict access to cardholder data by business need to know
    8. Identify and authenticate access to system components
    9. Restrict physical access to cardholder data
  • PCI 4.0:
    7. Restrict access to system components and cardholder data by business need to know
    8. Identify users and authenticate access to system components
    9. Restrict physical access to cardholder data

Regularly Monitor and Test Networks
  • PCI 3.2.1:
    10. Track and monitor all access to network resources and cardholder data
    11. Regularly test security systems and processes
  • PCI 4.0:
    10. Log and monitor all access to system components and cardholder data
    11. Test security of systems and networks regularly

Maintain an Information Security Policy
  • PCI 3.2.1:
    12. Maintain a policy that addresses information security for all personnel
  • PCI 4.0:
    12. Support information security with organizational policies and programs

 

While PCI SSC expanded the requirements to address larger security and privacy issues, many of them remain fundamentally the same as before. According to the Summary of Changes, most updates fall into one of the following categories:

  • Evolving requirement: changes that align with emerging threats and technologies or changes in the industry
  • Clarification or guidance: updated wording, explanation, definition, additional guidance, and/or instruction to improve people’s understanding
  • Structure or format: content reorganization, like combining, separating, or renumbering requirements

 

For organizations that have previously met PCI DSS compliance objectives, those changes place little additional burden.

 

However, PCI DSS 4.0 does include changes to Requirements that organizations should consider.

 

What new Requirements are immediately in effect for all entities?

While additions are effective beginning March 31, 2025, three primary issues affect current PCI audits.

 

Holistically, PCI DSS now includes the following sub-requirement across Requirements 2 through 11:

Roles and responsibilities for performing activities in each Requirement are documented, assigned, and understood.

 

Additionally, under Requirement 12, all entities should be:

  • Performing a targeted risk analysis for each PCI DSS requirement according to the documented, customized approach
  • Documenting and confirming PCI DSS scope every 12 months

 

What updates are effective March 31, 2025 for all entities?

As the effective date for all requirements draws closer, organizations should consider the major changes that impact their business, security, and privacy operations.

 

Requirement 3

PCI DSS 4.0 incorporates the following new requirements:

  • Minimizing the SAD stored prior to completion and retaining it according to data retention and disposal policies, procedures and processes
  • Encrypting all SAD stored electronically
  • Implementing technical controls to prevent copying/relocating PAN when using remote-access technologies unless requiring explicit authorization
  • Rendering PAN unreadable with keyed cryptographic hashes unless requiring explicit authorization
  • Implementing disk-level or partition-level encryption to make PAN unreadable

 

Requirement 4

PCI DSS 4.0 incorporates the following new requirements:

  • Confirming that certificates safeguarding PAN during transmission across open, public networks are valid, not expired or revoked
  • Maintaining an inventory of trusted keys and certificates

 

Requirement 5

PCI DSS 4.0 incorporates the following new requirements:

  • Performing a targeted risk analysis to determine how often the organization evaluates whether system components pose a malware risk
  • Performing targeted risk analysis to determine how often to scan for malware
  • Performing anti-malware scans when using removable electronic media
  • Implementing phishing attack detection and protection mechanisms

 

Requirement 6

PCI DSS 4.0 incorporates the following new requirements:

  • Maintaining an inventory of bespoke and custom software for vulnerability and patch management purposes
  • Deploying automated technologies for public-facing web applications to continuously detect and prevent web-based attacks
  • Managing payment page scripts loaded and executed in consumers’ browsers

 

Requirement 7

PCI DSS 4.0 incorporates the following new requirements:

  • Reviewing all user accounts and related access privileges
  • Assigning and managing all application and system accounts and related access privileges
  • Reviewing all application and system accounts and their access privileges

 

Requirement 8

PCI DSS 4.0 incorporates the following new requirements:

  • Implementing a minimum complexity level for passwords used as an authentication factor
  • Implementing multi-factor authentication (MFA) for all CDE access
  • Ensuring MFA is implemented appropriately
  • Managing interactive login for system or application accounts
  • Using passwords/passphrases for application and system accounts
  • Protecting passwords/passphrases for application and system accounts against misuse

 

Requirement 9

PCI DSS 4.0 incorporates the following new requirements:

  • Performing targeted risk analysis to determine how often POI devices should be inspected

 

Requirement 10

PCI DSS 4.0 incorporates the following new requirements:

  • Automating the review of audit logs
  • Performing a targeted risk analysis to determine how often to review system and component logs
  • Detecting, receiving alerts for, and addressing critical security control system failures
  • Promptly responding to critical security control system failures

 

Requirement 11

PCI DSS 4.0 incorporates the following new requirements:

  • Managing vulnerabilities not ranked as high-risk or critical
  • Performing internal vulnerability scans using authenticated scanning
  • Deploying a change-and-tamper-detection mechanism for payment pages

 

Requirement 12

PCI DSS 4.0 incorporates the following new requirements:

  • Documenting the targeted risk analysis that identifies how often to perform it so it supports each PCI DSS Requirement
  • Documenting and reviewing cryptographic cipher suites and protocols
  • Reviewing hardware and software
  • Reviewing security awareness program at least once every 12 months and updating as necessary
  • Including in training threats to the CDE, like phishing and related attacks and social engineering
  • Including acceptable technology use in training
  • Performing targeted risk analysis to determine how often to provide training
  • Including in incident response plan the alerts from change-and-tamper detection mechanism for payment pages
  • Implementing incident response procedures and initiating them upon PAN detection

 

What updates are applicable to service providers only?

In some cases, new Requirements apply only to issuers and companies supporting those issuing services and storing sensitive authentication data. Only one of these took effect immediately, an update to Requirement 12:

  • TPSPs support customers’ requests for PCI DSS compliance status and information about the requirements for which they are responsible

 

Effective March 31, 2025

Service providers should be aware of the following updates:

 

  • Requirement 3:
    • Encrypting SAD
    • Documenting the cryptographic architecture that prevents people from using cryptographic keys in production and test environments
  • Requirement 8
    • Requiring customers to change passwords at least every 90 days or dynamically assessing security posture when not using additional authentication factors
  • Requirement 11
    • Multi-tenant service providers supporting customers for external penetration testing
    • Detecting, receiving alerts for, preventing, and addressing covert malware communication channels using intrusion detection and/or intrusion prevention techniques
  • Requirement 12
    • Documenting and confirming PCI DSS scope every 6 months or upon significant changes
    • Documenting, reviewing, and communicating to executive management the impact that significant organizational changes have on PCI DSS scope

 

Graylog Security and API Security: Monitoring, Detection, and Incident Response for PCI DSS 4.0

 

Graylog Security provides the SIEM capabilities organizations need to implement Threat Detection and Incident Response (TDIR) activities and compliance reporting. Graylog Security’s security analytics and anomaly detection functionalities enable you to aggregate, normalize, correlate, and analyze activities across a complex environment, giving you visibility into and high-fidelity alerts for critical security monitoring and compliance issues.

 

By incorporating Graylog API Security into your PCI DSS monitoring and incident response planning, you enhance your security and compliance program by mitigating risks and detecting incidents associated with Application Programming Interfaces (APIs). With Graylog’s end-to-end API threat monitoring, detection, and response solution, you can augment the outside-in monitoring from Web Application Firewalls (WAF) and API gateways with API discovery, request and response capture, automated risk assessment, and actionable remediation activities.

 

 


How I used Graylog to Fix my Internet Connection

In today’s digital age, the internet has become an integral part of our daily lives. From working remotely to streaming movies, we rely on the internet for almost everything. However, slow internet speeds can be frustrating and can significantly affect our productivity and entertainment. Despite advancements in technology, many people continue to face challenges with their internet speeds, hindering their ability to fully utilize the benefits of the internet. In this blog, we will explore how Dan McDowell, Professional Services Engineer, decided to take matters into his own hands and gather data over time to present to his ISP.

Speedtest-Overview

 

Over the course of a few months, I noticed slower and slower internet connectivity. Complaints from neighbors (we are all on the same ISP) led me to take some action. A few phone calls with “mixed” results were not good enough for me, so I knew what I needed: metrics!

Why Metrics?

Showing data is, without a doubt, one of the most powerful ways to prove a statement. How often do you hear one of the following when you call in for support:

  • Did you unplug it and plug it back in?
  • It’s probably an issue with your router
  • Oh, wireless must be to blame
  • Test it directly connected to your computer!
  • Nothing is wrong on our end, must be yours…

In my scenario, I was able to prove without a doubt that this wasn’t a “me” problem. Using data I gathered by running this script every 30 minutes over a few weeks’ time, I was able to prove:

  • This wasn’t an issue with my router
    • There was consistent connectivity slowness at the same times every single day of the week, and outside of those times my connectivity was near the offered maximums.
  • Something was wrong on their end
    • Clearly, they were not spec’d to handle the increase in traffic when people stop working and start streaming
    • I used their OWN speed test server for all my testing. It was only one hop away.
    • This was all the proof I needed:
  • End Result?
    • I sent in a few screenshots of my dashboards, highlighting the clear spikes during peak usage periods. I received a phone call from the ISP not even 10 minutes later. They replaced our local OLT and increased the pipe to their co-lo.
      What a massive increase in average performance!

Ookla Speedtest has a CLI tool?!

Yup. This can be configured to use the same speedtest server (my local ISP runs one) for each run, meaning results are valid and repeatable. Best of all, it can output JSON, which I can convert to GELF with ease! In short, I set up a cron job to run my speed test script every 30 minutes on my Graylog server and output the results, converting the JSON message into GELF, which NetCat sends to my GELF input.

PORT 8080 must be open outbound!
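If you want to see the moving parts without the shell tooling, the same flow (speedtest JSON converted to GELF and pushed over TCP) can be sketched in Python. This is only an illustration under a few assumptions: the Ookla CLI is installed and its license has already been accepted interactively, and the GELF TCP input listens on localhost port 12201. The author's actual shell script, linked in the prerequisites below, is what drives the dashboards.

import json
import socket
import subprocess

GELF_HOST = "127.0.0.1"   # placeholder: the host running your GELF TCP input
GELF_PORT = 12201

# Run the Ookla CLI and capture its JSON output (run `speedtest` once by hand first
# so the license prompt does not block non-interactive runs)
result = subprocess.run(["speedtest", "--format=json"], capture_output=True, text=True, check=True)
speed = json.loads(result.stdout)

def flatten(obj, prefix=""):
    """Flatten nested JSON into GELF additional fields, e.g. download.bandwidth -> _download_bandwidth."""
    fields = {}
    for key, value in obj.items():
        name = f"{prefix}_{key}" if prefix else key
        if isinstance(value, dict):
            fields.update(flatten(value, name))
        elif isinstance(value, (int, float, str)):
            fields["_" + name] = value
        else:
            fields["_" + name] = json.dumps(value)   # lists and nulls become strings
    return fields

gelf = {"version": "1.1", "host": socket.gethostname(), "short_message": "speedtest result"}
gelf.update(flatten(speed))

# GELF over TCP: one JSON document terminated by a null byte
with socket.create_connection((GELF_HOST, GELF_PORT), timeout=10) as sock:
    sock.sendall(json.dumps(gelf).encode("utf-8") + b"\x00")

The shell script in this post does the same job with gron and ncat; the Python version is just another way to see how the JSON-to-GELF conversion works.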

How can I even?

Prerequisites

1. Install netcat, speedtest and gron.

Debian/Ubuntu

curl -s https://packagecloud.io/install/repositories/ookla/speedtest-cli/script.deb.sh | sudo bash
sudo apt install speedtest gron ncat

RHEL/CentOS/Rocky/Alpine

wget https://download-ib01.fedoraproject.org/pub/fedora/linux/releases/37/Everything/x86_64/os/Packages/g/gron-0.7.1-4.fc37.x86_64.rpm
sudo dnf install gron-0.7.1-4.fc37.x86_64.rpm

curl -s https://packagecloud.io/install/repositories/ookla/speedtest-cli/script.rpm.sh | sudo bash
sudo dnf install speedtest netcat

2. You also need a functional Graylog instance with a GELF input running.

3. Grab my speedtest script and Graylog content pack (contains a dashboard, route rule and a stream):

wget https://raw.githubusercontent.com/graylog-labs/graylog-playground/main/Speed%20Test/speedtest.sh

4. Move the script to a common location and make it executable:

mkdir /scripts
mv speedtest.sh /scripts/
chmod +x /scripts/speedtest.sh

Getting Started

  1. Login to your Graylog instance
  2. Navigate to System → Content Packs
  3. Click upload.
  4. Browse to the downloaded location of the Graylog content pack and upload it to your instance
  5. Install the content pack
  6. This will install a Stream, pipeline, pipeline rule (routing to stream) and dashboard
  7. Test out the script!
    1. ssh / console to your linux system hosting Graylog/docker
    2. Manually execute the script:
      /scripts/speedtest.sh localhost 12201
      Script Details: <path to script> <ip/dns/hostname> <port>
  8. Check out the data in your Graylog
    1. Navigate to Streams → Speed Tests
    2. Useful data appears!
    3. Navigate to Dashboards → ISP Speed Test and check out the data!
  9. Manually execute the script as much as you like. More data will appear the more you run it.

Automate the Script!

This is how I got the data to convince my ISP that something was actually wrong. Set up a cron job that runs every 30 minutes, and within a few days you should see some time-related changes.

  1. ssh or console to your linux system hosting the script / Graylog
  2. Create a crontab entry to run the script every 30 minutes
    1. Create the crontab (this will be for the currently logged in user OR root if sudo su was used):

crontab -e

    2. Set the script to run every 30 minutes (change as you like):

*/30 * * * * /scripts/speedtest.sh localhost 12201

  3. That’s it! As long as the user the crontab was made for has permissions, the script will run every 30 minutes and the data will go to Graylog. The dashboard will continue to populate for you automatically.

Bonus Concept – Monitor your Site’s WAN Connection(s)

This same script could be used to monitor WAN connections at different sites. Without any extra fields, we could use the interface_externalIp or source fields provided by the speedtest CLI/sending host to filter by site location, add a pipeline rule that adds a field based on a lookup table, or change the script slightly to add a single field to the speedtest GELF message so the site is identified in the original message. Use my dashboard as the basis for a new dashboard with per-site tabs and a summary page! The possibilities are endless.

Most of all, go have fun!


Why API Discovery Is Critical to Security

For Star Trek fans, space may be the final frontier, but in security, discovering Application Programming Interfaces (APIs) could be the technology equivalent. In the iconic episode “The Trouble with Tribbles,” the legendary starship Enterprise discovers a space station that becomes overwhelmed by little fluffy, purring, rapidly reproducing creatures called “tribbles.” In a modern IT department, APIs can be viewed as digital tribbles overwhelming security teams.

 

As organizations build out their application ecosystems, the number of APIs integrated into their IT environments continues to expand. Organizations and security teams can become overwhelmed by the sheer number of these software “tribbles,” as undiscovered and unmanaged APIs create security blindspots.

 

API discovery is a critical component of any security program because every unknown API expands the organization’s attack surface.

 

What is API discovery?

API discovery is a manual or automated process that identifies, documents, and catalogs an organization’s APIs so that security teams can monitor application-to-application data transfers. To manage all of the APIs the organization has integrated into its ecosystem, organizations need a comprehensive inventory that includes:

  • Internal APIs: interfaces between a company’s backend information and application functionality
  • External APIs: interfaces exposed over the internet to non-organizational stakeholders, like external developers, third-party vendors, and customers

 

API discovery enables organizations to identify and manage the following:

  • Shadow (“Rogue”) APIs: unchecked or unsupervised APIs
  • Deprecated (“Zombie”) APIs: unused yet operational APIs without the necessary security updates

 

What risks do undocumented and unmanaged APIs pose?

Threat actors can exploit vulnerabilities in these shadow and deprecated APIs, especially when the development and security teams have no way to monitor and secure them.

 

Unmanaged APIs can expose sensitive data, including information about:

  • Software interface: the two endpoints sharing data
  • Technical specifications: the way the endpoints share data
  • Function calls: verbs (GET, DELETE) and nouns (Data, Access) that indicate business logic

 

Why is API discovery important?

Discovering all your organization’s APIs enhances security by incorporating them into:

  • Risk assessments: enabling API vulnerability identification, prioritization, and remediation
  • Compliance: mitigating risks arising from accidental sensitive data exposures that lead to compliance violations, fines, and penalties
  • Vendor risk management: visibility into third-party security practices by understanding the services, applications, and environments that they can impact
  • Incident response: faster detection, investigation, and response times by understanding potential entry points, impacted services, and data leak paths
  • Policy enforcement: ensuring all internal and external APIs follow the company’s security policies and best practices
  • Training and awareness: providing appropriate educational resources for developers and IT staff

 

Beyond the security use case, API discovery provides these additional benefits:

  • Faster integrations by understanding available endpoints, methods, and data formats
  • Microservice architecture management by tracking services, health status, and interdependencies
  • Enhanced product innovation and value by understanding API capabilities and limitations
  • Increased revenue by understanding API usage

 

Using automation for API discovery

While developers can manually discover APIs, the process is expensive, inefficient, and risky. Manual API discovery processes are limited because they are:

  • Time-consuming: With the average organization integrating over 9,000 known APIs, manual processes for identifying unknown or unmanaged APIs can be overwhelming, even in a smaller environment.
  • Error-prone: Discovering all APIs, including undocumented ones and those embedded in code, can lead to incomplete discovery, outdated information, or incorrect documentation.
  • Resource-intensive: Manual discovery processes require manual inventory maintenance.

 

Automated tools make API discovery more comprehensive while reducing overall costs. Automated API discovery tools provide the following benefits:

  • Efficiency: Scanners can quickly identify APIs, enabling developers to focus on more important work.
  • Accurate, comprehensive inventory: API discovery tools can identify embedded and undocumented APIs, enhancing security and documentation.
  • Cost savings: Automation takes less time to scan for updated information, reducing maintenance costs.

 

 

What to look for in an API discovery tool

While different automated tools can help you discover the APIs across your environment, you should know the capabilities that you need and what to look for.

Continuous API Discovery

Developers can deliver new builds multiple times a day, continuously changing the API landscape and risk profile. For an accurate inventory and comprehensive visibility, you should look for a solution that:

  • Scans all API traffic at runtime
  • Categorizes API calls
  • Sorts incoming traffic into domain buckets

For example, when discovering APIs by domain, the solution includes cases where:

  • Domains are missing
  • Public or Private IP addresses are used

With the ability to identify shadow and deprecated APIs, the solution should give you a way to add domains to the:

  • Monitoring list so you can start tracking them in the system
  • Prohibited list so that the domain is never used

 

 

Vulnerability Identification

An API discovery solution that analyzes all traffic can also identify potential security vulnerabilities. When choosing a solution, you should consider whether it:

  • Captures unfiltered API request and response details
  • Enhances those details with runtime analysis
  • Creates an accessible datastore for attack detection
  • Identifies common threats and API failures aligned to OWASP and MITRE guidance
  • Provides automatic remediation tips with actionable solutions that enable teams to optimize critical metrics like Mean Time to Response (MTTR)

Risk Assessment and Scoring

Every identified API and vulnerability increases the organization’s risk. To appropriately mitigate risk arising from previously unidentified and unmanaged APIs, the solution should provide automated risk assessment and scoring. With visibility into the type of API and the high-risk areas that should be prioritized, Security and DevOps teams can focus on the most risky APIs first.

 

Graylog API Security: Continuous, Real-Time API Discovery

Graylog API Security is continuous API security, scanning all API traffic at runtime for active attacks and threats. Mapped to security and quality rules, Graylog API Security captures complete request and response details, creating a readily accessible datastore for attack detection, fast triage, and threat intelligence. With visibility inside the perimeter, organizations can detect attack traffic from valid users before it reaches their applications.

Graylog API Security captures details to immediately identify valid traffic from malicious actions, adding active API intelligence to your security stack. Think of it as a “security analyst in-a-box,” automating API security by detecting and alerting on zero-day attacks and threats. Our pre-configured signatures identify common threats and API failures and integrate with communication tools like Slack, Teams, Gchat, JIRA or via webhooks.

