
Get Ready for Kafka 4: Changes and Upgrade Considerations

Apache Kafka 4, the much-anticipated next major release of the popular event streaming platform, is almost here. In this blog, find out what’s changing in 4.0 and how to plan your next Kafka upgrade.

 

Apache Kafka Project Update

With four minor releases (3.6 through 3.9), several patches, and a major release on the horizon, 2024 has arguably been the most eventful year in the history of the Apache Kafka project. The biggest development, of course, is the upcoming release of Kafka 4, which we discuss in more depth later in this blog. First, let’s review the 3.x releases from this year that contained significant updates related to some of the key changes coming in 4.0.

Most of the 3.x updates were made with the upcoming removal of ZooKeeper in 4.0 in mind. ZooKeeper has been replaced by Kafka Raft (KRaft) mode, and an official ZooKeeper-to-KRaft migration process was introduced in 3.6 and designated production-ready in 3.7. Prior to 3.6, the only way to move to a KRaft-based Kafka cluster was a complete “lift and shift,” which entailed installing a new KRaft-based cluster and then manually moving topics, producers, and consumers.

JBOD (Just a Bunch of Disks) support for KRaft clusters was also added in 3.7, and some existing features got enhancements as well, such as improved client metrics and observability as defined in KIP-714 and early access to the next-gen consumer rebalancing protocol defined in KIP-848. Java 11 was also marked for deprecation in 3.7 and will no longer be supported in 4.0.

In 3.8 and 3.9, the Log4j appender was deprecated (and also targeted for removal in 4.0) and KIP-848 was promoted to preview status. There were also several improvements to KRaft migration and to the quorum protocol implemented in KRaft. Support for dynamic KRaft quorums (detailed in KIP-853) makes adding or removing controller nodes without downtime a much simpler process. With these improvements, Kafka 3.9 has effectively become the “bridge release” to 4.0.

 

Kafka 4 Release Date

According to the Kafka 4.0 release plan, feature freeze concluded on December 11th, 2024, and code freeze is planned for January 15th, 2025. This means Kafka 4 will likely arrive in the final days of January or early February, as the code freeze is typically followed by a stabilization period lasting at least two weeks.

 

What’s Changing in Kafka 4

Based on the later 3.x releases described above, we know that the biggest changes in Kafka 4 are removals: all noteworthy, though some more monumental than others.

 

Kafka Raft Mode (KRaft) Replaces ZooKeeper

The most notable change in Kafka 4 is that you can no longer run Kafka with ZooKeeper; KRaft becomes the sole implementation for cluster management. While KRaft mode was marked as production-ready for new clusters in 3.3, a few key pieces were needed before ZooKeeper deprecation and removal could be implemented. With the introduction and refinement of the migration process and JBOD support, the Kafka development community feels that 4.0 is finally ready for the total removal of ZooKeeper.

 

MirrorMaker 1 Removed

While not as large an architectural shift as the ZooKeeper removal, MirrorMaker 1 support is also going away in 4.0. Given that most organizations dropped MirrorMaker 1 for MirrorMaker 2 quite some time ago, we expect this change to be less impactful to the Kafka ecosystem, but it is notable nonetheless.

 

Kafka Components Logging Moving to Log4j2

With Log4j marked for deprecation in 3.8, 4.0 will also mark the complete transition from Log4j to Log4j2. After the Log4Shell vulnerability was disclosed in late 2021, an industry-wide effort to move to Log4j2 was put into motion. For this reason, most organizations have already moved off of Log4j, so while still a noteworthy change, it should not be all that impactful (and if you are still using Log4j, your systems are most likely already pwned at this point!).

 

Want More Kafka Insights?


Download the Decision Maker’s Guide to Apache Kafka for tips on partition strategy, using Kafka with Spark, security best practices, and more.

Read Guide

 

Kafka 4 Migration and Upgrade Considerations

There are several considerations to take into account when planning your KRaft migration. First, if this is your first foray into KRaft, don’t plan on retiring your entire ZooKeeper infrastructure anytime soon. Best practices dictate that organizations should run dedicated controller nodes for production clusters, so your production infrastructure will most likely not change. For dev and integration/testing environments, running in mixed mode is fine, so you might see some infrastructure reclamation in those environments.

Another major consideration is the upgrade path you will need to take. Since ZooKeeper is gone in 4.0, there will be no migration functionality in 4.0 itself. So, for organizations still running ZooKeeper on a Kafka version prior to 3.7, an interim upgrade to 3.9 is required. In fact, given the migration improvements introduced in 3.9, I’d recommend this interim step even for installations on 3.7 or later. The upgrade path would look something like:

3.x => 3.9 => ZooKeeper-to-KRaft migration => 4.0

Also of note: Kafka 3.5 and later use a version of ZooKeeper that is not wire-compatible with the ZooKeeper used by version 2.4 and older. As such, older Kafka clusters will require a couple of additional interim steps: upgrade to Kafka 3.4 first, then upgrade ZooKeeper to 3.8. That migration path might look something like this:

2.3 => 3.4 => ZooKeeper 3.8 => 3.9 => ZooKeeper-to-KRaft migration => 4.0

This should be an edge case, since versions prior to 2.4 should mostly be retired at this point.

 

What to Expect in Future Kafka 4.x Releases

If past precedent is any indication of future plans, I believe we will see continued improvements to containerization support and metrics collection, as well as refinements to the KRaft migration process. With regard to consumer performance, the general availability of KIP-848 will also bring significant changes. Moving the complexity of the rebalancing protocol out of the clients and into the Group Coordinator, with a more modern event-loop process, enables a more incremental approach to rebalancing in which group-wide synchronization is no longer required for every coordination event.
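For teams that want to experiment ahead of general availability, the KIP-848 group protocol is opt-in on the client side. Below is a minimal, hypothetical sketch using the confluent-kafka Python client; it assumes a client build (librdkafka) and brokers that both support the new protocol, and the broker address, group ID, and topic are placeholders.

```python
# Hypothetical sketch: opting a consumer into the KIP-848 rebalance protocol.
# Assumes a confluent-kafka/librdkafka build and a Kafka cluster that support it.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker1:9092",   # placeholder broker address
    "group.id": "orders-processor",        # placeholder consumer group
    "group.protocol": "consumer",          # KIP-848 protocol; "classic" is the legacy default
    "auto.offset.reset": "earliest",
})

consumer.subscribe(["orders"])             # placeholder topic
try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        print(f"{msg.topic()}[{msg.partition()}] offset {msg.offset()}")
finally:
    consumer.close()
```

The consume loop itself is unchanged; only the group membership and rebalancing mechanics differ, which is why treating this as a test-cluster experiment first is the safer path.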

Regardless, the future of Kafka looks pretty bright, with these enhancements likely to make the already popular event-streaming platform even better and more efficient.

 

SLA-Backed Technical Support for Kafka

OpenLogic can optimize your Kafka deployments and make sure your implementation is upgrade-ready. Talk to an Enterprise Architect today to get started.

Kafka Support

 

About Perforce
The best run DevOps teams in the world choose Perforce. Perforce products are purpose-built to develop, build and maintain high-stakes applications. Companies can finally manage complexity, achieve speed without compromise, improve security and compliance, and run their DevOps toolchains with full integrity. With a global footprint spanning more than 80 countries and including over 75% of the Fortune 100, Perforce is trusted by the world’s leading brands to deliver solutions to even the toughest challenges. Accelerate technology delivery, with no shortcuts.

About Version 2 Digital

Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across various areas including cyber security, cloud, data protection, end points, infrastructures, system monitoring, storage, networking, business productivity and communication products.

Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, different vertical industries, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

Gartner’s Market Guide for ITSM Just Dropped, Here’s What IT Buyers Need to Know

Gartner recently published its latest Market Guide for IT Service Management Platforms, and as someone who used to co-author the Magic Quadrant for ITSM when I was an analyst at Gartner… I have some thoughts.  

Market Guides aren’t about ranking vendors. They are about helping IT buyers understand market trends and make informed decisions. And one of the biggest takeaways from this year’s guide? Most organizations are overspending on ITSM tools and failing to get ROI. 

This isn’t a new problem, but it is getting worse. IT teams are buying advanced features with the hope of using them, but most organizations don’t have the maturity to implement them effectively. So, they end up stuck with bloated, expensive tools that don’t deliver the value they expected. 

I want to break down what is happening in the ITSM market and, more importantly, how buyers can avoid this trap. 

The ITSM Market Is Fat, and Most Organizations Are Overspending 

Gartner’s research confirms what I have seen for years. Eight in 10 IT buyers overspend on their ITSM tools. 

Why? Because they buy based on a wish list of features rather than the real needs and capabilities of their IT team. 

It is easy to get sold on automation, AI-powered service management, advanced change management, asset discovery, dependency mapping, and all kinds of sophisticated features (by the way, EasyVista offers many of these; we just don’t force you to bundle them if you’re not ready to implement them yet). But here is the reality: Most IT teams still process tickets manually through email and phone. 

The appetite for advanced capabilities is strong, but IT maturity often isn’t there yet. So, companies overbuy, under-implement, and waste budget. 

The Reality of ITSM Vendors 

When it comes to ITSM tools, most organizations end up choosing between: 

  1. The “Big Three” (ServiceNow, Ivanti, BMC). These platforms are built for the most mature, complex IT environments, the top 10% of organizations. While more than half of IT teams are using one of these tools, the reality is that most will never fully utilize them. 
  2. More modern but lightweight options (Freshworks, Atlassian). These tools are great for simple ticketing and come at a lower price point. But many organizations outgrow them quickly, realizing they need more advanced capabilities sooner than expected. 

Alright, this is going to sound a little salesy for two seconds, but I promise I’ll move on. 

The fact of the matter is that I came to EasyVista because I genuinely believe a lot more IT organizations should be checking out EasyVista’s ITSM tools. And my opinion on that matches Gartner’s data on organizational maturity and how most IT teams are misaligned with what they are buying. EasyVista offers the best bang for the buck, providing enterprise-grade ITSM with plenty of room to grow without forcing you to overpay for features you won’t use yet. 

Ok, now back to what I’m actually here for, which is to help you make a better buying decision, whether you buy from EasyVista or not. 

How to Choose the Right ITSM Tool Without Wasting Budget 

If you are evaluating ITSM solutions, here is my specific recommendation for the buying process: 

Engage with vendors for a demo, but come prepared. 

  • Don’t just sit through a sales pitch. Come with a clear list of your organization’s current critical IT objectives and a realistic assessment of what your team can actually implement within the next two to three years. 
  • Ask how each vendor’s tool aligns with those objectives and what their typical customers actually implement within the first year. 

Prioritize what you need today while leaving room to grow. 

  • The goal isn’t to buy every feature you hope to use someday. It is to buy a solution that matches where you are today, with the ability to scale into additional features when you are ready. 
  • Look for a vendor that allows you to add functionality over time rather than forcing you into an all-inclusive package upfront. 

Be extremely thoughtful about pricing and contracts. 

  • This is where the big vendors tend to trap buyers. Their sales teams will push the whole enchilada, locking you into large, expensive contracts for functionality you aren’t ready to implement. And once you realize you’re paying for more than you can actually implement, it is not a very good-tasting enchilada. 
  • Make sure the vendor you are interested in is willing to sell you what you actually need, not just what they want to bundle. 

At the end of the day, overbuying is the fastest way to waste budget, and 80% of IT buyers fall into this trap. The smartest approach is to buy based on reality, not aspiration, and ensure your ITSM tool fits your current needs with the flexibility to grow into additional capabilities over time. 

ITSM Is Hard, but Smarter Buying Makes It Easier 

ITSM isn’t easy, and no tool is going to magically fix that. 

I obviously think you should check out EasyVista, but what’s most important is that you get the right mix of features and pricing for your organization. Too many IT teams end up stuck with tools they can’t implement or outgrow too fast. The key is to be thoughtful about what you actually need, what you can realistically implement, and how flexible your vendor is with pricing and scaling options. 

If there is one thing I have learned from years of analyzing this market, it’s that success in ITSM doesn’t come from buying the most expensive or most feature-rich tool. It comes from buying the right tool for where you are today, with a clear path to grow into the features you actually need in the future. 

That’s how you avoid wasting budget and actually get ROI from your ITSM investment. 

About EasyVista  
EasyVista is a leading IT software provider delivering comprehensive IT solutions, including service management, remote support, IT monitoring, and self-healing technologies. We empower companies to embrace a customer-focused, proactive, and predictive approach to IT service, support, and operations. EasyVista is dedicated to understanding and exceeding customer expectations, ensuring seamless and superior IT experiences. Today, EasyVista supports over 3,000 companies worldwide in accelerating digital transformation, enhancing employee productivity, reducing operating costs, and boosting satisfaction for both employees and customers across various industries, including financial services, healthcare, education, and manufacturing.


Hadoop Monitoring: Tools, Metrics, and Best Practices

Hadoop monitoring is crucial for maintaining the health, performance, and reliability of Big Data ecosystems. In this blog, find out how Hadoop cluster monitoring works, common issues, key metrics, and observability and monitoring tools that can be leveraged in Hadoop implementations.  

 

Why Is Hadoop Monitoring Important?

In Hadoop, robust monitoring can provide real-time visibility into cluster health, as well as identify potential bottlenecks or failures before they impact day-to-day operations. Hadoop monitoring also enables teams to track key metrics such as execution times, CPU, memory, and storage usage, enabling them to make informed decisions about cluster capacity planning. This level of insight is particularly valuable in complex, distributed environments where manual oversight alone is insufficient to manage the various Hadoop components and services.

How Hadoop Cluster Monitoring Works

Hadoop cluster monitoring relies on collecting and analyzing metrics data from various sources, including HDFS (NameNodes and DataNodes), YARN, Oozie, MapReduce, and ZooKeeper. These components generate a large amount of performance data, such as resource utilization, storage capacity, job status, and node health. Monitoring tools collect information from those components to provide an overview of the cluster’s health and performance. By streaming this data to dashboards, users can gauge the overall state of the Hadoop environment, address bottlenecks, and take steps to optimize performance and prevent downtime.
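As a concrete illustration of the data these components expose, Hadoop daemons publish their metrics as JMX beans over a simple HTTP endpoint. The sketch below pulls a few HDFS capacity metrics from the NameNode’s /jmx servlet; it assumes Hadoop 3.x defaults (NameNode web UI on port 9870), and the hostname is a placeholder.

```python
# Minimal sketch: sampling NameNode metrics from the HDFS /jmx endpoint.
# Assumes Hadoop 3.x defaults (NameNode web UI on port 9870) and a reachable host.
import requests

NAMENODE = "http://namenode.example.com:9870"  # placeholder host


def get_jmx_beans(query):
    """Fetch the JMX beans matching `query` from the NameNode's /jmx servlet."""
    resp = requests.get(f"{NAMENODE}/jmx", params={"qry": query}, timeout=10)
    resp.raise_for_status()
    return resp.json()["beans"]


fs = get_jmx_beans("Hadoop:service=NameNode,name=FSNamesystem")[0]
print("Capacity used (bytes):     ", fs["CapacityUsed"])
print("Capacity remaining (bytes):", fs["CapacityRemaining"])
print("Under-replicated blocks:   ", fs["UnderReplicatedBlocks"])
print("Missing blocks:            ", fs["MissingBlocks"])
```

Monitoring tools such as Ambari, Prometheus exporters, and Ganglia collect essentially the same data, but at scale and on a schedule, before aggregating it into dashboards.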

 

Benefits of Proactive Hadoop Monitoring

Proactive Hadoop monitoring offers a variety of benefits. Organizations can detect potential issues sooner (such as node failures, over- or under-provisioned nodes, and delayed data processing) before they cascade into larger problems that could cause production outages. This helps minimize downtime, improving both the reliability and availability of data services. It also helps in analyzing workloads and identifying patterns in resource usage, enabling better allocation and scaling of resources.

Furthermore, it assists in performance optimization by monitoring metrics like CPU, memory, disk I/O, and network usage. Proactive Hadoop monitoring also bolsters security, reducing the risk of data breaches or unauthorized access, which leads to more stable, efficient, and secure clusters.

 

Challenges and Common Issues with Hadoop Monitoring

  • The complexity and scale of Hadoop ecosystems can make it difficult to gain an overall view of cluster health and performance across all nodes and components.
  • The distributed nature of Hadoop, where issues in one part of the cluster can have cascading effects on other components, makes troubleshooting tricky.
  • The sheer volume of metrics data generated by Hadoop components can result in alert fatigue, making it difficult to distinguish between critical issues and normal performance fluctuations.
  • The pace at which updates occur in Hadoop can sometimes result in gaps in monitoring coverage.
  • Installing, setting up, and maintaining monitoring tools like Apache Ambari and Ganglia requires expertise not all teams possess.
  • Correlating resource constraints across different components—such as associating a spike in resource usage on HDFS to a specific YARN job—can make root-cause analysis time-consuming and inefficient, potentially delaying troubleshooting and impacting cluster performance.

Overcoming these obstacles requires a combination of hardened monitoring tools, well-established processes, and continuous updates to monitoring strategies to keep pace with the evolving Hadoop landscape.

 

Protect Your Data With Hadoop Support and Services

OpenLogic offers both SLA-backed technical support for Hadoop and a service bundle that includes migration from Cloudera (or your current data platform) to an open source Hadoop stack fully administered and monitored by OpenLogic experts.

Explore Hadoop Solutions

Key Metrics for Hadoop Monitoring

Hadoop monitoring relies on tracking a set of critical metrics that provide insight into cluster health, performance, and resource utilization. These metrics span the various components of the Hadoop ecosystem. Below is a breakdown of the key metrics for each of the major components.

 

HDFS

For HDFS, the most critical metrics concern storage and data integrity. HDFS storage utilization monitoring involves tracking used space, free space, and total capacity, as reported by the NameNode, at both the cluster and node level. This information helps with capacity planning and ensures efficient resource usage across the cluster.

Data integrity monitoring in HDFS can be achieved by regularly performing file system checks and by calculating and storing checksums for each data block in separate hidden files within the HDFS namespace. The CRC32 (Cyclic Redundancy Check) checksum algorithm is used for its efficiency and low overhead. DataNodes continuously validate integrity by computing and storing checksums when they receive new data blocks, then verifying stored data against these checksums to check for corruption.

Additionally, HDFS maintains a replication factor for each data block, storing multiple copies across different DataNodes. This redundancy allows the system to recover from corrupted blocks by reading uncorrupted replicas. Executing HDFS commands such as fsck can help identify and address inconsistencies in the file system; if a discrepancy is found, an exception is raised, alerting the system to potential data corruption.
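A simple way to operationalize these file system checks is to run the standard hdfs fsck command on a schedule and alert when the namespace is no longer reported as healthy. The sketch below is illustrative only; it assumes the hdfs CLI is on the PATH and that the user running it has permission to execute fsck.

```python
# Sketch: periodic HDFS health check using the standard `hdfs fsck` command.
# Assumes the `hdfs` CLI is on PATH and sufficient permissions to run fsck.
import subprocess


def hdfs_is_healthy(path="/"):
    """Run `hdfs fsck` on `path` and report whether HDFS marks it HEALTHY."""
    result = subprocess.run(
        ["hdfs", "fsck", path],
        capture_output=True,
        text=True,
        check=False,
    )
    report = result.stdout
    healthy = "is HEALTHY" in report
    return healthy, report


healthy, report = hdfs_is_healthy("/")
if not healthy:
    # Surface the tail of the report, which summarizes corrupt/missing blocks.
    print("HDFS reported problems:\n", report[-2000:])
```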

 

MapReduce

Monitoring MapReduce tasks involves tracking various metrics and logs throughout the execution of MapReduce jobs to identify bottlenecks, optimize resource allocation, and resolve issues. Task completion times, input/output records processed, CPU and memory usage, and disk I/O patterns for both map and reduce tasks should be monitored.

Hadoop’s built-in tools, like the JobTracker web interface or the ResourceManager web UIs (in YARN), can be leveraged to track those metrics. These interfaces provide real-time information on job progress, task statuses, and resource utilization. Additionally, analyzing job history logs can offer valuable insights into past performance trends and help identify recurring issues.

The shuffle and sort phases between map and reduce tasks should also be monitored for workload optimization opportunities. These phases often represent significant bottlenecks, especially in jobs with large amounts of intermediate data. Metrics such as shuffle bytes, spilled records, and merge times can provide insights for optimizations, such as adjusting compression strategies.

Troubleshooting MapReduce jobs involves analyzing task-specific logs. Hadoop generates detailed logs for each task attempt, which can be critical for diagnosing issues like out-of-memory errors, data skew problems, or application-specific bugs. Setting up centralized log aggregation and analysis tools can speed issue resolution.
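For example, past job outcomes can be pulled programmatically from the JobHistory Server’s REST API and fed into whatever log or metrics pipeline you already run. The sketch below assumes Hadoop defaults (JobHistory web UI on port 19888); the hostname is a placeholder.

```python
# Sketch: listing recently failed MapReduce jobs from the JobHistory Server REST API.
# Assumes the JobHistory Server web UI on its default port (19888).
import requests

HISTORY_SERVER = "http://historyserver.example.com:19888"  # placeholder host

resp = requests.get(
    f"{HISTORY_SERVER}/ws/v1/history/mapreduce/jobs",
    params={"state": "FAILED", "limit": 20},
    timeout=10,
)
resp.raise_for_status()

jobs = (resp.json().get("jobs") or {}).get("job", [])
for job in jobs:
    # Each entry also includes timing fields useful for spotting slow or stuck jobs.
    print(job["id"], job["name"], job["state"])
```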

 

YARN

YARN serves as the resource management layer in Hadoop. YARN metrics provide critical data on resource allocation, execution times, and utilization across the cluster, as well as available and allocated memory, CPU cores, and container statistics.

In YARN, the ResourceManager provides critical insights into cluster-wide resource utilization. Monitoring metrics like total available resources, allocated resources, and pending resource requests provides a comprehensive view of cluster capacity and demand.

The CapacityScheduler, or FairScheduler, determines how resources are distributed among applications and queues. Tracking queue-level metrics, including used capacity, pending resources, and currently running applications, helps identify skews in resource allocation.

The ApplicationMaster tracks the number of containers requested and allocated, as well as the resources (CPU, memory, and custom resources) assigned to each container, which is critical for optimal performance. Job workload behavior can be monitored by analyzing metrics such as job progress, task completion rates, and resource utilization efficiency. YARN’s web UI and REST API provide access to these metrics, allowing for real-time monitoring and historical analysis.

NodeManager tracks CPU, memory, and disk usage per node to help identify overloaded or underutilized machines, enabling better load balancing and capacity planning. Additionally, monitoring container execution statistics, including launch times, execution durations, and failure rates, can provide insights into performance issues or resource constraints on specific nodes.

Additionally, YARN monitoring strategies might include analyzing resource allocation over time to identify trends, peak usage periods, and potential areas for optimization. They could also include reviewing job queuing times, resource wait times, and the impact of different scheduling policies on overall cluster performance.
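Most of these YARN metrics are available without any agents via the ResourceManager’s REST API, which makes ad hoc checks and lightweight collectors easy to build. The sketch below assumes the ResourceManager web UI on its default port (8088); the hostname is a placeholder.

```python
# Sketch: sampling cluster-wide YARN metrics and running applications
# from the ResourceManager REST API (default web UI port 8088).
import requests

RM = "http://resourcemanager.example.com:8088"  # placeholder host

metrics = requests.get(f"{RM}/ws/v1/cluster/metrics", timeout=10).json()["clusterMetrics"]
print("Apps pending:         ", metrics["appsPending"])
print("Available memory (MB):", metrics["availableMB"])
print("Containers allocated: ", metrics["containersAllocated"])

# Per-application view: currently running apps and their allocated memory.
apps = requests.get(
    f"{RM}/ws/v1/cluster/apps", params={"states": "RUNNING"}, timeout=10
).json().get("apps") or {}
for app in apps.get("app", []):
    print(app["id"], app["name"], app.get("allocatedMB"), "MB")
```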

 

ZooKeeper

ZooKeeper metrics are essential for monitoring the coordination and synchronization services, including latency, throughput, and connection status. Additionally, system-level metrics, such as CPU and memory usage, disk I/O, and network throughput, are critical for analyzing the overall health of the Hadoop infrastructure.
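ZooKeeper exposes these health figures through its “four letter word” admin commands, which most monitoring tools poll under the hood. The sketch below sends the mntr command directly over a socket; it assumes mntr is whitelisted on the server (4lw.commands.whitelist) and uses the default client port, with a placeholder hostname.

```python
# Sketch: polling a ZooKeeper server with the `mntr` four-letter command.
# Assumes `mntr` is whitelisted (4lw.commands.whitelist) and the default client port.
import socket


def zk_mntr(host="zookeeper.example.com", port=2181):
    """Return ZooKeeper's `mntr` output as a dict of metric name -> value."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(b"mntr")
        data = b""
        while chunk := sock.recv(4096):
            data += chunk
    metrics = {}
    for line in data.decode().splitlines():
        key, _, value = line.partition("\t")
        if key:
            metrics[key] = value
    return metrics


stats = zk_mntr()
print("Average latency:     ", stats.get("zk_avg_latency"))
print("Outstanding requests:", stats.get("zk_outstanding_requests"))
print("Server state:        ", stats.get("zk_server_state"))
```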

 

JVM

JVM (Java Virtual Machine) metrics are essential for understanding the performance of Hadoop workloads, including garbage collection frequency and duration, heap memory usage, and thread counts. These metrics can be helpful when it comes to identifying memory leaks and fine-tuning memory settings for optimal performance.

 

HBase

HBase metrics, such as region server load, read/write request latencies, and compaction queue sizes, are vital for maintaining optimal performance.

Spark

Spark metrics, including executor memory usage, shuffle read/write sizes, and job execution times, are critical for clusters leveraging Spark for in-memory processing.

 

Other Metrics

Network-related metrics, such as packet loss rates, network utilization, and TCP retransmission counts, are crucial for identifying network bottlenecks. Additionally, monitoring user and group quota usage helps manage the allocation of shared cluster resources. Monitoring security-related metrics like HDFS permission changes and audit logs is critical for maintaining the security of the Hadoop cluster.

Hadoop Monitoring Tools

Let’s look at three of the most popular Hadoop monitoring tools.

 

Apache Ambari

Apache Ambari is a widely used open source tool for provisioning, managing, and monitoring Hadoop clusters. It provides an intuitive web interface to monitor cluster health, manage services, and configure alerts. Ambari also includes the Ambari Metrics System for collecting metrics and the Ambari Alert Framework for system notifications, making it a useful tool for managing Hadoop environments.

 

Prometheus

Prometheus is another open source monitoring system that can be effectively leveraged to monitor Hadoop clusters. It features a powerful query language (PromQL) and a flexible data model for metrics collection.

Prometheus can scrape metrics from various Hadoop components, offering customizable dashboards and alerting capabilities that help maintain cluster performance and reliability. It also includes Alertmanager for configuring and managing alerts, and it has service discovery mechanisms for automatically finding and monitoring new targets. Prometheus uses a multi-dimensional data model that organizes metrics with key-value pairs called labels, which provide powerful filtering and grouping capabilities.
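Once Hadoop metrics are being scraped (for example via JMX exporters on the NameNode, DataNodes, and ResourceManager), they can be queried programmatically through the Prometheus HTTP API as well as from dashboards. A minimal sketch, with placeholder server URL and job label:

```python
# Sketch: running a PromQL query against the Prometheus HTTP API.
# The Prometheus URL and the "hadoop-namenode" job label are placeholders.
import requests

PROMETHEUS = "http://prometheus.example.com:9090"

resp = requests.get(
    f"{PROMETHEUS}/api/v1/query",
    params={"query": 'up{job="hadoop-namenode"}'},
    timeout=10,
)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    # "up" is 1 when the scrape target is healthy, 0 when it is down.
    print(series["metric"].get("instance"), "=>", series["value"][1])
```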

 

Ganglia

Ganglia is another open source monitoring tool often used with Hadoop clusters. It provides real-time metrics visualization at the node, host, and cluster level, allowing administrators to track the performance of individual nodes and the overall health of the cluster.

Monitoring vs. Observability in Hadoop

The difference between monitoring and observability is that monitoring involves collecting and analyzing metrics from the Hadoop clusters, while observability builds an understanding of cluster behavior, providing insight into unanticipated issues and their root causes. At a basic level, monitoring can be understood as the “what” whereas observability is the “why.”

Monitoring consists of analyzing predetermined sets of data from various systems and tracking metrics using dashboards and alerts. Monitoring tools detect issues and generate alerts when metrics exceed specified thresholds.

Observability, on the other hand, is more holistic, inferring the state of a system from the data it emits. It enables you to anticipate system behavior, making troubleshooting easier.

Best Practices for Hadoop Monitoring and Observability

  1. Implement Comprehensive Real-Time Monitoring: Establish a monitoring system that provides real-time visibility into the health and performance of the Hadoop clusters. Track key metrics across HDFS, MapReduce, YARN, and ZooKeeper components via tools like Ambari, Prometheus, or Ganglia.
  2. Set Up Automated Alerts and Thresholds: Configure automated alerts based on predefined thresholds for critical metrics. This enables faster responses to potential problems before they escalate. Alerts should be tied to things like resource utilization, CPU and memory usage, data integrity, and system health. Leverage tools like Prometheus Alertmanager to manage and distribute alerts (a simple threshold check is sketched after this list).
  3. Implement Centralized Logging and Analysis: Set up a logging system to collect logs from all Hadoop components and related services. This will make troubleshooting and root cause analysis much easier. You can use tools like ELK stack (Elasticsearch, Logstash, Kibana) to collect, index, and analyze logs from across the cluster for faster resolution.
  4. Adopt a Multi-Layered Monitoring Approach: Implement monitoring across different stacks of the Hadoop ecosystem, including infrastructure (hardware, network), platform (HDFS, YARN), and application layers (MapReduce). This provides visibility into all components of the Hadoop environment.
  5. Implement End-to-End Tracing: Set up an end-to-end tracing system across the Hadoop ecosystem to track requests and transactions as they flow through various components.
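To make the alerting idea concrete, here is an illustrative threshold check that samples HDFS capacity from the NameNode and posts a notification when usage crosses a limit. The webhook URL, hostname, and threshold are placeholders; in practice, a tool like Prometheus Alertmanager would handle routing, deduplication, and silencing.

```python
# Illustrative threshold alert: sample HDFS capacity from the NameNode /jmx
# endpoint and post a notification when usage exceeds a limit.
# The NameNode host, webhook URL, and threshold are placeholders.
import requests

NAMENODE = "http://namenode.example.com:9870"
WEBHOOK = "https://hooks.example.com/hadoop-alerts"
CAPACITY_THRESHOLD = 0.85  # alert when HDFS is more than 85% full

beans = requests.get(
    f"{NAMENODE}/jmx",
    params={"qry": "Hadoop:service=NameNode,name=FSNamesystem"},
    timeout=10,
).json()["beans"][0]

used_ratio = beans["CapacityUsed"] / beans["CapacityTotal"]
if used_ratio > CAPACITY_THRESHOLD:
    requests.post(
        WEBHOOK,
        json={"text": f"HDFS capacity at {used_ratio:.0%}, above {CAPACITY_THRESHOLD:.0%}"},
        timeout=10,
    )
```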

Final Thoughts

For enterprises that depend on Hadoop clusters to process and store massive amounts of data, monitoring is an essential part of preventing downtime, optimizing resource utilization, and ensuring data integrity. If you need assistance with Hadoop monitoring or are interested in alternatives to Cloudera for your Big Data stack management, talk to an OpenLogic expert to learn about our enterprise Hadoop support and services.


Open Source in Finance: Top Technologies and Trends

Editor’s Note: This article was originally published on the Fintech Open Source Foundation (FINOS) blog and is reprinted here with permission.

Financial organizations increasingly rely on open source software as a foundational component of their mission-critical infrastructure. In this blog, we explore the top open source trends and technologies used within the FinTech space from our last State of Open Source Report — with insights on the unique pain points these companies experience when working with OSS. 

About the State of Open Source Survey

OpenLogic by Perforce conducts an annual survey of open source users, specifically focused on open source usage within IT infrastructure. In 2024, we teamed up with the Open Source Initiative for the third year in a row, and brought on a new partner: the Eclipse Foundation, who helped us expand our reach and get more responses than ever before.

For those looking for the non-segmented results from the entire survey population (not just respondents working in the financial sector), you can find them published in our 2024 State of Open Source Report here.

Demographics and Firmographics

For the purposes of this blog, we segmented the results to focus on the Banking, Insurance, and Financial Services verticals. This segment, comprising 250 responses, represented 12.22% of our overall survey population. Before we dive into some of the key results of the survey, let’s look at demographic and firmographic datapoints that will help us to frame the results.

Among respondents representing the Banking, Insurance, and Financial Services verticals, most of their companies were headquartered in North America (32% of responses), with Africa, Asia, and Europe as the next most popular locations at 18.8%, 17.6% and 16%, respectively.

The top 3 roles for respondents were System Administrators (32%), Developers/Engineers (18.8%), and Managers/Directors (16.4%). Within this segment, we also saw strong large-enterprise representation, with 38.4% of respondents stating they work at companies with over 5,000 employees.

Open Source Adoption

Our survey data painted a clear picture, with a combined 85.4% of respondents from these industries increasing their use of open source software. 59.4% said they’re increasing their use of open source significantly. This rate of open source adoption within a heavily regulated set of verticals shows how many companies are confidently deploying open source for their mission-critical applications.

Looking more granularly at areas of open source investment, we saw 37.3% from this segment investing in analytics, 30.8% investing in cloud and container technologies, and 30.3% investing in databases and data technologies.

When asked for the reasons for adopting open source technology, our respondents identified improving development velocity (53.51%), accessing innovation (35.14%), and the overall stability (28.11%) of these technologies as the top drivers. Cost reduction and modernization rounded out the top 5, at 24.86% and 21.08% of responses within the segment, respectively.


Top Challenges When Using Open Source Software

When we asked teams to share the biggest issues they face as they work with open source software, some key themes emerged. Companies within this segment identified maintaining security policies and compliance (56%), keeping up with updates and patches (49.09%), and not enough personnel (49.05%) as the most challenging.

Later in the survey, we asked specifically about how organizations are addressing open source software skill shortages within their organizations. The top tactics selected by our respondents were hiring experienced professionals (48.18%), hiring external consultants/contractors (44.53%), and providing internal or external training (40.88%).

Infrastructure scalability and performance issues (67.98%) and lack of a clear community release support process (59.75%) represented the least challenging areas for respondents within this segment.

Top Open Source Technologies

The State of Open Source Report has sections dedicated to technology categories (i.e. programming languages, databases) to assess which projects have gained adopters and are going strong vs. those that may be declining in popularity. As a reminder, the following results are specific to the Banking, Insurance, and Financial Services verticals.

When looking at Linux distributions, the top five selections were:

  • Ubuntu (33.75%)
  • Amazon Linux (21.88%)
  • Oracle Linux (20.00%)
  • Alpine Linux (16.88%)
  • CentOS (15.62%)


Get Expert Enterprise Linux Support

OpenLogic supports top community and commercial Linux distributions including AlmaLinux, Rocky Linux, Oracle Linux, Debian and Ubuntu. We also offer long-term support for CentOS.

Explore Enterprise Linux Support

Looking at cloud-native technologies, the top five selections were:

  • Docker (32.50%)
  • Kubernetes (26.25%)
  • Prometheus (18.13%)
  • OpenStack (15.63%)
  • Cloud Foundry (13.12%)


For open source frameworks, we did notice a surprising share (26.62%) of respondents reporting usage of Angular.js (which has been end of life since 2021).

For those who indicated using Angular.js, we asked a follow-up question regarding how they plan on addressing new vulnerabilities. 30.77% expressed that they won’t patch the CVEs, 26.92% noted that they have a vendor that provides patches, and 19.23% said that they will look for a long-term support vendor to help when it comes time.

In terms of open source data technology usage, we saw MySQL (31.08%) and PostgreSQL (30.41%) at the top of the list, with MongoDB (23.65%), Redis (20.27%), and Elasticsearch (18.24%) rounding out the top 5.

In the full report, we also look at the top programming languages/runtimes, infrastructure automation and configuration technologies, DevOps tools, and more. You can access the full report here.

Open Source Maturity and Stewardship

At the end of the survey, we asked respondents to share information about the overall open source maturity of their organizations. 55.88% noted that they perform security scans to identify vulnerabilities within their open source packages, 41.91% noted that they have established open source compliance or security policies, and 34.56% have experts for the different open source technologies they use.

Another marker for organizational open source maturity is the sponsorship of nonprofit open source projects. The most supported organizations among Banking, Insurance, and Financial Services verticals were the Apache Software Foundation (27.94%), the Open Source Initiative (22.06%), and the Eclipse Foundation (19.85%). It’s also worth noting that 19.85% of respondents didn’t know of any official sponsorship of these projects within their organization. Overall, 89.41% noted that they sponsored at least one open source nonprofit organization.

Banking on Open Source: Finding Success With OSS in the Finance Sector

 

In this on-demand webinar, hear about how banks, Fintech, and financial services providers can meet security and compliance requirements while deploying open source software.

Final Thoughts

In this blog, we looked at segmented data from our 2024 State of Open Source Report specific to the Banking, Insurance, and Financial Services verticals. Considering these industries are heavily regulated, with most required to meet compliance requirements with their IT infrastructure, it was encouraging to see over 85% increasing their usage of open source software.

Not surprisingly, maintaining security policies and compliance was a top challenge for this segment. Given the current pace of open source adoption within this space, we expect this to continue to be a pain point. It’s up to organizations to manage the complexity that comes with juggling so many open source packages, and ultimately ensure that they have the technical expertise on hand to support that software — especially when it’s used in mission-critical IT infrastructure. 


Harbor Registry Overview: Using Harbor for Container Image Management

Learn about Harbor and the benefits of using it for container image management in cloud-native environments like Kubernetes. In this blog, our expert describes key features and ideal use cases, and discusses the pros and cons of two Harbor alternatives. 

 

What Is Harbor?

Harbor is an open source registry for securely storing and managing container images in cloud-native environments.

Originating as an internal project at VMware, Harbor entered the open source scene in 2016. Its focus was clear: storing and securing container images in a cloud-native environment. In its ideal configuration, Harbor is typically deployed to a Kubernetes cluster, where it gives container images from all sources a single home.

Providing a unified storage space proved invaluable when it came to managing images. As Harbor is capable of pulling from other registries as well as accepting user submissions, teams could route all images through their Harbor deployment, ensuring consistent policies would be applied. Vulnerability scanning, access control, signature verification — all of it could now be configured and controlled in one place.

Owing to its ease of use and substantial benefits, Harbor took off in popularity. Harbor joined the Cloud Native Computing Foundation (CNCF) in 2018 and reached “Graduated” status in 2020. Since then, Harbor has continued to grow and remains a staple in Kubernetes environments.

Harbor Registry Key Features

Harbor comes with a host of features tuned to address common challenges in containerized environments. Instead of jumping straight into a list of everything Harbor can do (which might be overwhelming), let’s start with some of the core concepts, and build an understanding of Harbor’s feature set piece by piece.

Interface

Once deployed, Harbor exposes a web interface for exploring and interacting with its artifacts and configuration. In addition, an API is exposed so that common tools, like the Docker client, can push and pull images directly to and from the registry.

Users

The interface, and much of the registry’s functionality, is gated by permissions granted to users. In the simplest cases, users can be created by Harbor itself and managed internally. However, this doesn’t scale particularly well, so Harbor also provides integration with other popular services such as OIDC providers, Active Directory, and LDAP.

Projects

Artifacts within Harbor are owned by a project. This grouping allows settings and permissions to be tuned for sets of artifacts as opposed to a purely global level. From there, users can be granted a role in a project, such as Guest (read-only), Developer (read-write), or Project Admin (read-write-configure).
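Projects are also the natural unit to work with when scripting against Harbor’s REST API. The sketch below lists each project and its repositories; it assumes basic authentication against Harbor’s v2 API, and the host and credentials are placeholders.

```python
# Sketch: listing Harbor projects and their repositories via the v2 REST API.
# The Harbor host and credentials are placeholders; basic auth is assumed.
import requests

HARBOR = "https://harbor.example.com/api/v2.0"
AUTH = ("harbor-user", "harbor-password")  # placeholder credentials

projects = requests.get(f"{HARBOR}/projects", auth=AUTH, timeout=10)
projects.raise_for_status()

for project in projects.json():
    name = project["name"]
    repos = requests.get(
        f"{HARBOR}/projects/{name}/repositories", auth=AUTH, timeout=10
    ).json()
    print(f"{name}: {len(repos)} repositories")
```

The same API exposes project membership as well, so role assignments like the ones described above can typically be audited or automated in the same way.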

Security

Aside from access control, Harbor includes several other critical security features. By utilizing popular image scanners, such as Trivy, images can be automatically scanned for known vulnerabilities. The results of these scans can be leveraged to prevent pulling of artifacts with unaddressed security issues.

On top of scanning, Harbor also includes support for signature verification. After using a tool like Notary or Cosign to sign an artifact, Harbor is capable of verifying each signature and rejecting artifacts which fail the verification process.

Additional Features

With the core functionality out of the way, we can now take a brief look at some of the other features of Harbor.

  • Storage of OCI Artifacts – In addition to container images, Harbor can store OCI artifacts such as Helm charts.
  • High-Availability – As Harbor deploys to Kubernetes, it follows the common pattern of providing a high-availability configuration, ensuring maximum uptime.
  • Registry Replication – While users can manually push and pull from the registry, images may also be automatically replicated to and from external registries. This is highly configurable, allowing for control over how and when artifacts are replicated.
  • Observability – Harbor natively supports a standard suite of observability features, including logging, metrics, and tracing.
  • SBOM – As well as scanning artifacts, Harbor can generate a Software Bill of Materials (SBOM), which acts as a list of all found dependencies within an image.

Harbor Registry Installation

Harbor provides two paths for installation:

  1. The first is to use their own installer, which deploys Harbor locally using Docker. This is a great option to try Harbor out or for small teams which will be leveraging Harbor in a limited fashion.
  2. The second path is to deploy to Kubernetes. This is accomplished via Helm and enables high-availability configurations. The Kubernetes deployment is the recommended approach for most teams.

To get started with either of these paths, we recommend following the official documentation for the most up-to-date instructions.

Need Help With Harbor?

OpenLogic now offers Gold-level, SLA-backed support for Harbor. Talk to an expert today to learn more or request a quote for Harbor technical support.

Talk to an Expert

Using Harbor for Container Image Management

With a high-level understanding of Harbor established, we can dig a bit further into when Harbor is worth considering. Typically, as organizations grow and their usage of containers increases, hosting your own registry becomes a stronger choice. While the operational cost of Harbor is low, any new piece of infrastructure must be maintained. As such, if your organization or team makes light use of containers, it may be better to look at cloud-based providers first.

Let’s take a look at three scenarios in which Harbor could be leveraged.

Private Registry

Let’s suppose your team is building and consuming their own container images. While these images shouldn’t contain any sensitive information, they may hold proprietary software or similar materials that need to remain safe. This, understandably, makes externally hosted options less desirable.

By deploying Harbor locally as a private registry, images can be kept on-site, greatly reducing the potential for accidental leaks. Furthermore, corporate security policies are enforced on all images, ensuring scanning and signing take place without ever leaving the network.

Proxy Registry

Now let’s consider a case in which a team makes heavy use of public images. This is a fairly common setup and typically not an issue. However, depending on how these images are being consumed, the team may find themselves running into rate limiting and bandwidth issues.

In this case, by using Harbor to mirror an external registry, each image only needs to be pulled by Harbor once, greatly reducing the load on the external service. As an added benefit, Harbor will remain available even when the external registry is not.

Air-Gapped Registry

Finally, let’s consider a critical system which relies on both public and internal container images to function. For security reasons, this environment is air-gapped, preventing access to public registries.

Here, a self-hosted image registry is the only viable option, making Harbor a smart choice. Images can be manually marshalled in and assigned different security policies by grouping based on source. On top of that, Harbor provides a mechanism for manually updating the security vulnerability database in its scanner, enabling up-to-date scans without a connection to the internet.


Harbor Alternatives

Many options exist within the container registry space. As Harbor is a CNCF graduated project, it is typically the recommended choice for organizations looking to host their container images on-site. Instead of direct comparisons, let’s take a look at two alternatives with some significant tradeoffs.

Sonatype Nexus

Nexus is an artifact registry in a much broader sense than Harbor. While it does support acting as a container image registry, its strength lies primarily in the range of artifacts it can hold. This includes artifacts for Docker, Go, Maven, Python, Yum, and more. The advantage here is clear: If container images are a smaller component of your broader technical needs, a general-purpose repository can provide quite a bit of value.

However, these features come with a drawback: Container images are supported, but many of the security features are not. At the time of this writing, Nexus does not support signing or vulnerability scanning on container images.

Artifactory

Similar to Nexus, Artifactory supports a much wider range of artifact types. However, unlike Nexus, it does not sacrifice container image security features. Instead, its drawback is a common one: cost. Artifactory is an offering from JFrog, and while it has a wide range of features, it requires a paid license for full functionality. A side effect of this is that Artifactory tends to pull in other offerings from JFrog as well.

When considering Artifactory, it’s important to evaluate the surrounding ecosystem and community. While we recommend open source solutions for their flexibility and community support, options like Artifactory may fit particular use cases better.

Harbor Container Registry FAQs

In this section, we’ll answer some of the most common questions about Harbor.

What Is the Difference Between Docker Hub vs. Harbor?

Docker Hub is a popular cloud-based registry. It provides many of the features available in Harbor but cannot be hosted on-site. Additionally, some functionality is gated behind paid tiers of membership. By comparison, Harbor’s self-hosted nature is ideal for teams needing on-site security and control over their registry.

Is Harbor Free?

Yes. Harbor is both free and open source under the Apache License 2.0.

Can I Use Harbor With Kubernetes?

Yes. Harbor is built from the ground up to support Kubernetes, including high-availability configurations.

Where Can I Get Harbor Support?

The Harbor community is active and you can connect with other Harbor users on X and/or attend biweekly community meetings on Zoom to get updates and submit feedback. There are also distribution lists for both Harbor users and developers to join.

However, there is no guarantee someone in the community will have expertise or knowledge that relates to your particular use case, or that you’ll be able to get help quickly. This is why some teams opt for SLA-backed support provided by vendors like OpenLogic. The advantage of this is having an exact timeline for resolution and the ability to talk directly, 24-7, with an Enterprise Architect.

Final Thoughts

As covered in this blog, Harbor is an excellent choice for container image management, particularly in instances where you want to host your registry on-site, mirror an external registry, or have an air-gapped environment. As a project, Harbor is well-maintained and benefits from a robust community of contributors. While it is free to deploy, all infrastructure software requires some degree of maintenance, so it’s always a good idea to consider the “soft cost” in terms of your team’s time to decide whether it makes sense to get support from a third party like OpenLogic. 

 
