
 


Running Kafka Without ZooKeeper in KRaft Mode

ZooKeeper will be completely gone in the next major Apache Kafka release (Kafka 4), and replaced by Kafka Raft, or KRaft mode. Many developers are excited about this change, but it will impact teams currently running Kafka with ZooKeeper who need to determine an upgrade path once ZooKeeper is no longer supported.

In this blog, our expert explains what KRaft mode is and how Raft implementations differ from ZooKeeper-based deployments, what to consider when planning your KRaft migration, and how your environment will look different when you’re running Kafka without ZooKeeper.

Note: This blog was originally published in 2022 and was updated and revised in 2025 to reflect the latest developments.

 

What Is Kafka Raft (KRaft) Mode?

Kafka Raft (which loosely stands for Reliable, Replicated, Redundant, And Fault Tolerant), or KRaft, is Kafka’s implementation of the Raft consensus algorithm.

Created as an alternative to the Paxos family of algorithms, the Raft consensus protocol is meant to be simpler and easier to understand than Paxos. Both Paxos and Raft operate in a similar manner under normal, stable operating conditions, and both protocols accomplish the following:

  • The leader writes the operation to its log and requests that follower servers do the same
  • The operation is marked as “committed” once a majority of servers acknowledge it

This results in a consensus-based change to the state machine, or in this specific case, the Kafka cluster.

The main difference between Raft and Paxos, however, is what happens when operations are not normal and a new leader must be elected. Both algorithms guarantee that the new leader’s log will contain the most up-to-date commits, but how they accomplish this process differs.

In Paxos, the leader election includes not only the proposal and subsequent vote, but also any log entries the candidate is missing. Followers in Paxos implementations can vote for any candidate, and once the candidate is elected as leader, the new leader uses this data to bring its log up to date.

In Raft, on the other hand, followers will only vote for a candidate if the candidate’s log is at least as up to date as the follower’s log. This means only the most up-to-date candidate will be elected as leader. Ultimately, both protocols are remarkably similar in their approach to solving the consensus problem. However, because Raft makes some base assumptions about the data, namely the order of commits in the log, elections in Raft are more efficient.

What does this mean with regard to Kafka? From the protocol side of things, not much. ZooKeeper utilizes its own purpose-built consensus protocol called ZAB (ZooKeeper Atomic Broadcast) that is much more focused on total ordering of commits to the change log. This shared focus on commit order makes Raft consensus fit quite well into the Kafka ecosystem.

That said, changes from an infrastructure perspective will be quite noticeable. With each broker having the KRaft logic incorporated into the base code, ZooKeeper nodes will no longer be part of the Kafka infrastructure. Note that this doesn’t necessarily mean fewer servers in the production environment — more on that later.

 

Why Is Kafka Raft Replacing ZooKeeper?

To understand why the Kafka community leadership decided to make this move away from ZooKeeper, we can look directly at KIP-500 for their reasoning. In short, this move was meant to reduce complexity and handle cluster metadata in a more robust fashion. Removing the requirement for ZooKeeper means there is no longer a need to deploy two distinctly different distributed systems. ZooKeeper has different deployment patterns, management tools, and configuration syntax when compared to Kafka. Unifying the functionality into a single system will reduce configuration errors and overall operational complexity.

In addition to simpler operations, treating the metadata as its own event stream means that a single number, an offset, can be used to describe a cluster member’s position and be quickly brought up to date. This in effect applies the same principles used for producers and consumers to the Kafka cluster members themselves.

Get the Decision Maker’s Guide to Apache Kafka

This white paper has everything you need to master Kafka’s complexity, from partition strategies and security best practices to tips for running Kafka on K8s.

Download Guide

 

KRaft vs. ZooKeeper

In a ZooKeeper-based Kafka deployment, the cluster consists of several broker nodes and a quorum of ZooKeeper nodes. In this environment, each change to the cluster metadata is treated as an isolated event, with no relationship to previous or future events. When state changes are pushed out to the cluster from the cluster controller, a.k.a. the broker in charge of tracking/electing partition leadership, there is potential for some brokers to not receive all updates, or for stale updates to create race conditions as we’ve seen in some larger Kafka installations. Ultimately, these failure points have the potential to leave brokers in divergent states.

While not entirely accurate, as all broker nodes can (and do) talk to ZooKeeper, the image below is a basic example of what that looks like:


In contrast, the metadata in KRaft is stored within Kafka itself, and ZooKeeper is effectively replaced by a quorum of Kafka controllers. The controller nodes comprise a Raft quorum that elects the active controller, which manages the metadata partition. This log contains everything that used to be found in ZooKeeper: topics, partitions, ISRs, configuration data, and so on are all located in this metadata partition.

Using the Raft algorithm, controller nodes elect the leader without the use of an external system like ZooKeeper. The leader, or active controller, is the partition leader for the metadata partition and handles all RPCs from the brokers.
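
If you want to see this quorum in action, the kafka-metadata-quorum tool that ships with Kafka (3.3 and later) can report the active controller and voter set. A quick sketch, assuming a broker listening on localhost:9092 and commands run from the Kafka installation directory:

# Show the quorum status: the active controller (leader), epoch, and voter set
$ bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status

# Show per-node replication of the metadata log, including lag and fetch offsets
$ bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --replication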

Learn more about Kafka partitions >>

The diagram below is a logical representation of the new cluster environment implementation using KRaft:


Note that in the diagram above there is no longer a double-sided arrow. This denotes another major difference between the two environments: instead of the controller pushing updates to the brokers, the brokers fetch metadata from the active controller via a MetadataFetch API. In similar fashion to a regular fetch request, each broker tracks the offset of the latest update it fetched, requests only newer updates from the active controller, and persists that metadata to disk for faster startup times.

In most cases, the broker will only need to request the deltas of the metadata log. However, in cases where no data exists on a broker or a broker is too far out of sync, a full metadata set can be shipped. A broker will periodically request metadata and this request will act as a heartbeat as well.

Previously, when a broker entered or exited a cluster, this was tracked in ZooKeeper; now the broker status is registered directly with the active controller. In a post-ZooKeeper world, cluster membership and metadata updates are tightly coupled: failure to receive metadata updates will result in eviction from the cluster.

ZooKeeper Deprecation and Removal

KRaft has been considered “production ready” since Kafka 3.3 and ZooKeeper was officially deprecated in Kafka 3.5. It will be removed completely in Kafka 4 and higher.

 

Running Kafka Without ZooKeeper

As organizations plan their migrations to KRaft environments, there are quite a few things to consider. In a KRaft-based cluster, Kafka nodes can be run in one of three different modes, known as process.roles: the role can be set to broker, controller, or combined. In a production cluster, it is recommended that the combined role be avoided — in other words, brokers and controllers should have dedicated JVM resources. So, as mentioned previously, doing away with ZooKeeper doesn’t necessarily mean doing away with the compute resources in production. Using the combined role in development or staging environments is perfectly acceptable.
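
To make that concrete, here is a minimal sketch of a server.properties for a dedicated controller node. The node IDs, hostnames, ports, and paths are illustrative assumptions, not recommendations:

# Dedicated KRaft controller (illustrative values)
process.roles=controller
node.id=1
controller.quorum.voters=1@controller1:9093,2@controller2:9093,3@controller3:9093
listeners=CONTROLLER://controller1:9093
controller.listener.names=CONTROLLER
log.dirs=/var/lib/kafka/kraft-metadata

A dedicated broker would instead set process.roles=broker and point at the same quorum via controller.quorum.voters, while a combined node would use process.roles=broker,controller.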

Since we originally published this blog, several upgrades and changes to the KRaft implementation have been completed and released. The list of caveats previously mentioned has largely been addressed, including:

  • Configuring SCRAM users via the administrative API: With the completion and implementation of KIP-900 in Kafka 3.5.0, the kafka-storage tool can be used to configure SCRAM credentials for inter-broker communication (see the sketch after this list).
  • Supporting JBOD configurations with multiple storage directories: JBOD support in KRaft was introduced in 3.7 as an early-access feature. With the completion of KIP-858 and its implementation in 3.8, it is no longer early access.
  • Modifying certain dynamic configurations on the standalone KRaft controller: In early releases of KRaft, some configuration items could not be updated dynamically, but as far as we are aware, these have mostly been fixed. The “missing features” verbiage should be removed with 4.0 (see conversation here).
  • Delegation tokens: KIP-900 also paved the way for “delegation token” support. With the completion of KAFKA-15219 in 3.6, delegation tokens are now supported in KRaft mode.
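
As an example of the first item above, here is roughly how a SCRAM credential can be seeded with the kafka-storage tool when formatting a node. The cluster ID, user name, and password are placeholders:

# Generate a cluster ID for the new cluster
$ bin/kafka-storage.sh random-uuid

# Format the storage directories and seed a SCRAM credential in one step
$ bin/kafka-storage.sh format -t <cluster-id> -c config/kraft/server.properties --add-scram 'SCRAM-SHA-512=[name=admin,password=changeit]'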

 

KRaft Migration

Although a fully-fledged and supported upgrade path has been implemented and can be used to move clusters from Zookeeper mode to KRaft mode, in-place upgrades generally should be avoided. At OpenLogic, we typically recommend lift-and-shift style, blue-green deployments instead. However, given the complexity of some Kafka clusters, having an official migration path is very much a welcome tool in the tool belt.

While detailing the KRaft migration process would require an entire series of blog posts, you can find an overview of the process in the Kafka documentation here. Of particular interest, though, is the requirement to upgrade to Kafka 3.9.0. The metadata version cannot be upgraded during the migration, so inter.broker.protocol.version must be at 3.9 prior to the migration. So, even if your organization isn’t planning on migrating to KRaft anytime soon, it still makes sense to plan your upgrade to 3.9 sooner rather than later.
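
While the full procedure lives in the official documentation, the migration is driven largely by configuration. A rough sketch of the kinds of settings involved, based on the bridge-release migration flow; hostnames, ports, and node IDs here are illustrative assumptions:

# On the new KRaft controller nodes (provisioned in migration mode)
process.roles=controller
node.id=1
controller.quorum.voters=1@controller1:9093,2@controller2:9093,3@controller3:9093
zookeeper.metadata.migration.enable=true
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181

# On the existing ZooKeeper-mode brokers (still running 3.9)
inter.broker.protocol.version=3.9
zookeeper.metadata.migration.enable=true
controller.quorum.voters=1@controller1:9093,2@controller2:9093,3@controller3:9093
controller.listener.names=CONTROLLER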

 

KRaft Mode FAQ

What benefits would my organization see, if any, from migrating to KRaft?

The most basic benefit for any organization is, of course, being able to stay up to date on your software versions. With ZooKeeper removal on the horizon, staying updated in ZK mode will eventually be impossible. Also, organizations will see a decrease in cluster complexity, as Kafka will handle metadata natively instead of utilizing third-party software.

Lastly, organizations operating at the upper levels of cluster size will see a considerable increase in reliability and service continuity in KRaft mode. While ZooKeeper is a reliable coordination service for a myriad of open source projects, whenever our customers with extremely large clusters (i.e. 30/40+ brokers with thousands of partitions) are encountering severe issues, it often winds up being a ZooKeeper issue.

 

If we migrate from ZooKeeper to KRaft, can we decommission our dedicated ZK hardware?

Most likely, no, at least not in production. Production KRaft controllers should be deployed in dedicated controller mode, so they will need dedicated compute just like ZooKeeper in production does. However, non-production clusters can run in mixed mode.

 

We have “N” number of ZooKeeper nodes; how many KRaft controller nodes should we use?

At a minimum, organizations should deploy three controller nodes in production. The system requirements for ZooKeeper and controller nodes are quite similar, though, so deploying the same number of controller nodes as you had ZooKeeper nodes is a reasonable place to start. Ultimately, a thorough load and performance testing protocol should be followed to validate this.

 

If we are running Kafka via Strimzi Kubernetes operator, can we start using KRaft?

Yes! However, be aware that as of version 0.45.0, there are some limitations around controllers. Currently, Strimzi continues to use static controller quorums, which means dynamic controller quorums (and the features that depend on them, such as changing the quorum without downtime) are not yet supported. More information can be found in the Strimzi documentation here.

 

Final Thoughts

For greenfield implementations, using KRaft should be a no-brainer, but for mature Kafka environments, migrating will be a complete rip and replace for your cluster, with all the complications that could follow. Creating a detailed migration plan, with blue/green deployment strategies, is crucial in such cases. And if your team is lacking in Kafka expertise, seeking out external support to guide your migration would also be a good idea.

This Blog Was Written By One of Our Kafka Experts.

OpenLogic Kafka experts can provide 24/7 technical support, consultations, migration/upgrade assistance, or even train your team.

Explore Kafka Solutions


Planning a CentOS to Rocky Linux Migration

On June 30, 2024, the ten-year run of CentOS 7 came to an end, but more significantly, the date marked all versions of CentOS Linux reaching end of life. Now, more than six months later, many organizations are still planning their migrations to other community-supported Enterprise Linux distributions. They are exploring CentOS alternatives in the hopes of avoiding the squeeze from Red Hat and having to pay for Extended Life Cycle Support on top of RHEL licenses for each host that needs to be kept secure (which is usually all of them).

One popular migration path is CentOS to Rocky Linux. In other blogs, we’ve looked at why many have chosen Rocky Linux, but here, we’ll focus on the primary considerations for organizations that haven’t made the move yet, highlight potential risks, and explain how to evaluate the readiness of a given enterprise architecture to move from one major Enterprise Linux version (6, 7, and 8) to the latest version, Enterprise Linux 9. We’ll also provide a step-by-step walkthrough of a CentOS 7 to Rocky Linux 9 migration.

 

Step 1: Evaluate Whether Your Applications Are Ready for Rocky Linux Migration

How were you made aware that your team has out-of-compliance, End-of-Life CentOS hosts in its architecture in the first place? Perhaps your IT department sent you an email containing a list of these hosts as the result of a scan. Or maybe you examined a list of AMIs in your Amazon Web Services (AWS) EC2 infrastructure and came up with a list of affected hosts. Whatever way it happened, you have the list of hosts that need an upgrade, which means you have essentially completed the first step for migration: creating an inventory of hosts that need to make the move to Rocky Linux.

Ideally, you have a list of applications that are running on each of these hosts, and know the purpose of each. For example, “This one is a MariaDB server,” or “This runs Oracle 12c.” If you don’t, now is a good time to start building out a spreadsheet listing the CentOS hosts that need to be upgraded that includes the workload each is responsible for. Then create another inventory, for each host, of what software is installed. Find the owners of each machine, so that you can further examine what software is installed, perhaps in unconventional ways, that you might be missing.
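
A simple way to seed that inventory is to dump the package and service lists from each host. A sketch, with the output file names as arbitrary choices:

# Per-host package inventory: name, version-release, and vendor
$ rpm -qa --queryformat '%{NAME},%{VERSION}-%{RELEASE},%{VENDOR}\n' | sort > $(hostname)-packages.csv

# Services enabled to run on this host
$ systemctl list-unit-files --type=service --state=enabled > $(hostname)-services.txt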

Without knowing what software is on which machines, you won’t know the side effects of completing a major Enterprise Linux version upgrade. The main side effects of upgrading are a new version of the kernel, new versions of software in the yum/dnf repositories, and new versions of glibc and libstdc++. And these side effects can have some major unintended consequences.

Step 2: Decide Between Migrating in Place vs. Building a New System

The major consequences listed above are particularly important when deciding whether to do an in-place upgrade or to build a new system. You can either migrate a system to Rocky Linux in place, or build a new system and migrate applications and data over. Each option has pros and cons, but let’s examine the three side effects of upgrading: a new kernel, new versions of dnf-sourced applications, and new API and ABI versions.

To be clear, it’s not just the ABI contracts of glibc and libstdc++ that can stymie your upgrade plans. All bets are off regarding API and ABI compatibility between all libraries and packages across major versions of EL. Another risk of in-place upgrades is that the system may end up with a combination of packages from different EL versions: a few libraries from EL7 and an app from EL8 running on an EL9 box. Hybrid-version systems are incredibly difficult to troubleshoot, or even rebuild if you’re not restoring from a full backup.

When you examine your inventory of hosts that need to be migrated off of CentOS, you need to determine whether they’re bare metal. Physical hosts are much more likely to have custom kernel modules or drivers built from source against the current kernel, and that source is often proprietary. Perhaps the host is connected to a tape library, has a PCI-E card like a graphics card, or has another peripheral that’s connected to an industrial application from a third-party manufacturer. For this reason, a hardware inventory is incredibly important: what peripherals or non-standard hardware are installed in the host?
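
A few standard commands can seed that hardware inventory. The find command below is a rough heuristic for spotting out-of-tree kernel modules; the directories it checks are conventional locations, not a guarantee:

# List PCI devices (graphics cards, HBAs, and other add-in hardware)
$ lspci

# List currently loaded kernel modules
$ lsmod

# Look for out-of-tree modules in the conventional locations
$ find /lib/modules/$(uname -r)/extra /lib/modules/$(uname -r)/weak-updates -name '*.ko*' 2>/dev/null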

Most open source drivers and kernel modules are going to seamlessly work from one major EL release to another, but add-ons that aren’t shipped with the server are more likely to need a driver to be recompiled against the new kernel in Rocky Linux. Make sure you can both obtain the source code for the driver and that it compiles against the new kernel. Otherwise you might successfully upgrade to Rocky Linux, but be stuck with an application that can’t reach a critical peripheral.

If you are on physical hardware as described above, migrating in place has the advantage that you don’t need more systems. This may be the easiest (and least expensive) route. You do have to make sure that connections are stable, and the machine will be available the whole time, because if the upgrade script is interrupted, the system may be left in an unrecoverable state. This would probably require a rescue disk and some manual work to get to a point where it is usable again. If you can’t take a system out of production to rebuild, then running the migration may be the best option, as the migration can be planned in a standard maintenance window.

However, if you are on a virtualized infrastructure, or have spare hardware, it would be safer to build a new system as you want it, then migrate the data and applications over and swap out the old system with the new. But even then, a software inventory is especially important due to the upgraded dnf-sourced software and upgraded glibc/libstdc++ libraries.

Step 3: Use a Software Inventory to Mitigate Risk

If your organization produces software with C++, it’s possible that you’re producing applications “dynamically linked” against system libraries. If you’re targeting CentOS 7, and you upgrade to Rocky Linux 9, the libraries you linked against are going to be upgraded, and the application you wrote might suddenly crash at runtime, even if it starts successfully after the upgrade.

This is because certain standard library symbols might remain the same, allowing the application to start, but others might have been deprecated or changed, causing the application to crash when they’re no longer available. Because these compatibility checks are completed at compile time rather than at run time, the mismatch only surfaces as a runtime error.
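
One way to catch this class of problem early is to inspect a binary’s dynamic dependencies on a test host running the target OS. The binary path below is a hypothetical example:

# Flag shared libraries the binary can no longer resolve
$ ldd /opt/yourapp/bin/yourapp | grep 'not found'

# List the glibc symbol versions the binary was built against
$ objdump -T /opt/yourapp/bin/yourapp | grep -o 'GLIBC_[0-9.]*' | sort -Vu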

Even if your organization doesn’t maintain their own C++ applications, you might be installing applications from a third-party vendor that link against CentOS 7 libraries. If this vendor uses an external yum or dnf-based repository, there’s a good chance that the upgrade to Rocky might fail due to unresolved dependency issues. If the application is installed by downloading a .tar.gz, .sh, or .run file, and binaries are installed onto the host from that installer, there’s a strong possibility that the application might suffer similar crashes or incompatibilities from unexpected versions of C++ libraries, Python bindings, Lua versions, and the list goes on.

All of the above illustrates why it’s so important to have a software inventory of some kind. It could be as simple as a spreadsheet or a Software Bill of Materials (SBOM). Once you have that inventory, you can plan ahead and contact the vendors of third-party software, making sure they have an EL8 or EL9 version of their software that can be upgraded to once your host becomes a Rocky Linux host.
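
To find that third-party software, it can help to look at where installed packages came from. These are rough heuristics rather than a complete audit:

# List enabled repositories, including any third-party vendor repos
$ yum repolist enabled

# Rough pass at spotting packages that did not come from CentOS itself
$ rpm -qa --queryformat '%{NAME} %{VENDOR}\n' | grep -vi centos | sort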

As for the dnf-sourced packages, and considering the previously mentioned issues with version numbers changing for supporting libraries, moving from CentOS 7 to Rocky Linux 9 can include some major upgrades. For example, the upgrade from MariaDB 5.5, which was released in April 2012, to MariaDB 10.5.27, which shipped in May 2023. As you can imagine, there needs to be an end-to-end plan for this upgrade, and all of the hard-to-predict “ripples” it may cause.

What happens when you run into applications that can’t run on Rocky 8 or Rocky 9? One option is containerizing old workloads: running them in an isolated container on an up-to-date Rocky 9 host reduces the attack surface and improves security.
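
As a sketch of what that might look like with Podman on a Rocky 9 host, using an illustrative image and paths (a real deployment would also need a plan for patching the frozen container contents):

# Run a legacy workload in a CentOS 7 container on a Rocky 9 host (illustrative image and paths)
$ podman run -d --name legacy-app -v /opt/legacy:/opt/legacy:Z docker.io/library/centos:7 /opt/legacy/bin/start.sh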

Migration Services and Technical Support for Rocky Linux

Get assistance from Enterprise Architects who are directly involved with the Rocky Linux project.

Talk to a Rocky Linux Expert 

Step 4: Determine Your CentOS to Rocky Linux Migration Path

There are a few considerations teams will need to make when migrating from CentOS to Rocky Linux. Depending on the CentOS version(s) being used, the upgrade path may require migrating to intermediary versions before arriving at the intended version. (E.g., CentOS 6 to CentOS 7, CentOS 7 to CentOS 8, CentOS 8 to Rocky Linux 8, then Rocky Linux 8 to Rocky Linux 9).

Taking this step-wise approach is also useful for mitigating the risks described above. Catching incompatibilities early in the upgrade process will be critical for informing leadership of how long the project is actually going to take.

Migrating CentOS 6 to Rocky Linux

Unfortunately, there is no direct migration path from CentOS 6 to Rocky Linux. Rocky Linux starts at 8, and as described above, there are too many changes between CentOS 6 and 7, let alone 8, to migrate in one jump. Once you’ve upgraded to CentOS 7, you can either upgrade to CentOS 8 and then migrate to Rocky Linux 8, or migrate from CentOS 7 to Rocky Linux 8 (as in the walkthrough below); from Rocky Linux 8 you can then upgrade to Rocky Linux 9.

For CentOS 6 users there are two ways to do this.

  1. Upgrade from CentOS 6, to 7, to 8, then migrate to Rocky Linux 8, then upgrade to Rocky Linux 9. This process can take a considerable amount of time, and can run into some additional hiccups due to the package changes between major versions.
  2. Build a new machine and migrate your data over. The best case scenario with this approach is that all third party software needed in the new machine has a new version, and all your data can be upgraded safely. The worst case is that you’ll have software with no new version and will need to find alternatives or find a way to run the software anyway. This depends on the libraries being used in your CentOS 6 install. Luckily, containerization makes it easy to run older versions of software on newer systems, or even on completely different distributions entirely.

Migrating CentOS 7 to Rocky Linux

The migration path for CentOS 7 to Rocky Linux is similar to CentOS 6. However, migrating from CentOS 7 to Rocky Linux is a bit easier because CentOS 7 already uses systemd for service management, while CentOS 6 still uses legacy SysV init scripts.

There are a few other changes to keep an eye out for when moving from CentOS 7 to Rocky Linux, but, aside from the service management difference, the considerations are nearly identical to CentOS 6 migration.

Video: CentOS 7 Migration Recommendations

 

Migrating CentOS 8 to Rocky Linux

Migrating from CentOS 8 to Rocky Linux 8 is relatively painless, and avoids all of the risks described above. Since it is nearly identical, there are only a few changes, which makes this the least risky and easiest to validate step in the process. The repos are swapped out for Rocky Linux repos, and a few packages are replaced (mostly branding packages, for example, the package that provides all of the logos for CentOS).

Real-World Example: Upgrading CentOS 7 to Rocky Linux 9

In this section, we will walk through a CentOS 7 to Rocky Linux 9 migration, showing all the steps involved and potential trouble spots.

1. Install the Latest Version of the ELevate Repository from the AlmaLinux Project

$ yum install -y http://repo.almalinux.org/elevate/elevate-release-latest-el$(rpm --eval %rhel).noarch.rpm

Loaded plugins: auto-update-debuginfo, fastestmirror

elevate-release-latest-el7.noarch.rpm                    | 6.6 kB     00:00

Examining /var/tmp/yum-root-nXSITp/elevate-release-latest-el7.noarch.rpm: elevate-release-1.0-2.el7.noarch

Marking /var/tmp/yum-root-nXSITp/elevate-release-latest-el7.noarch.rpm to be installed

Resolving Dependencies

--> Running transaction check

---> Package elevate-release.noarch 0:1.0-2.el7 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

================================================================================

 Package          Arch    Version     Repository                           Size

================================================================================

Installing:

 elevate-release  noarch  1.0-2.el7   /elevate-release-latest-el7.noarch  3.4 k

Transaction Summary

================================================================================

Install  1 Package

Total size: 3.4 k

Installed size: 3.4 k

Downloading packages:

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

  Installing : elevate-release-1.0-2.el7.noarch                             1/1

  Verifying  : elevate-release-1.0-2.el7.noarch                             1/1

Installed:

  elevate-release.noarch 0:1.0-2.el7

Complete!

2. Install the LEAPP Package

Specifically, we are going to install the leapp-data-rocky package, which will help us move to Rocky Linux, as opposed to AlmaLinux:

$ yum install -y leapp-upgrade leapp-data-rocky

Loaded plugins: auto-update-debuginfo, fastestmirror

Determining fastest mirrors

Resolving Dependencies

--> Running transaction check

---> Package leapp-data-rocky.noarch 0:0.5-1.el7.20241127 will be installed

---> Package leapp-upgrade-el7toel8.noarch 1:0.21.0-4.el7.elevate.4 will be installed

--> Processing Dependency: leapp-repository-dependencies = 10 for package: 1:leapp-upgrade-el7toel8-0.21.0-4.el7.elevate.4.noarch

--> Processing Dependency: leapp-framework >= 6.0 for package: 1:leapp-upgrade-el7toel8-0.21.0-4.el7.elevate.4.noarch

--> Processing Dependency: leapp for package: 1:leapp-upgrade-el7toel8-0.21.0-4.el7.elevate.4.noarch

--> Processing Dependency: python2-leapp for package: 1:leapp-upgrade-el7toel8-0.21.0-4.el7.elevate.4.noarch

--> Running transaction check

---> Package leapp.noarch 0:0.18.0-2.el7 will be installed

---> Package leapp-upgrade-el7toel8-deps.noarch 1:0.21.0-4.el7.elevate.4 will be installed

--> Processing Dependency: dnf >= 4 for package: 1:leapp-upgrade-el7toel8-deps-0.21.0-4.el7.elevate.4.noarch

--> Processing Dependency: pciutils for package: 1:leapp-upgrade-el7toel8-deps-0.21.0-4.el7.elevate.4.noarch

---> Package python2-leapp.noarch 0:0.18.0-2.el7 will be installed

--> Processing Dependency: leapp-framework-dependencies = 6 for package: python2-leapp-0.18.0-2.el7.noarch

--> Running transaction check

---> Package dnf.noarch 0:4.0.9.2-2.el7_9 will be installed

--> Processing Dependency: python2-dnf = 4.0.9.2-2.el7_9 for package: dnf-4.0.9.2-2.el7_9.noarch

---> Package leapp-deps.noarch 0:0.18.0-2.el7 will be installed

--> Processing Dependency: PyYAML for package: leapp-deps-0.18.0-2.el7.noarch

---> Package pciutils.x86_64 0:3.5.1-3.el7 will be installed

--> Running transaction check

---> Package PyYAML.x86_64 0:3.10-11.el7 will be installed

--> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-11.el7.x86_64

---> Package python2-dnf.noarch 0:4.0.9.2-2.el7_9 will be installed

--> Processing Dependency: dnf-data = 4.0.9.2-2.el7_9 for package: python2-dnf-4.0.9.2-2.el7_9.noarch

--> Processing Dependency: python2-libdnf >= 0.22.5 for package: python2-dnf-4.0.9.2-2.el7_9.noarch

--> Processing Dependency: python2-libcomps >= 0.1.8 for package: python2-dnf-4.0.9.2-2.el7_9.noarch

--> Processing Dependency: python2-hawkey >= 0.22.5 for package: python2-dnf-4.0.9.2-2.el7_9.noarch

--> Processing Dependency: libmodulemd >= 1.4.0 for package: python2-dnf-4.0.9.2-2.el7_9.noarch

--> Processing Dependency: python2-libdnf for package: python2-dnf-4.0.9.2-2.el7_9.noarch

--> Processing Dependency: deltarpm for package: python2-dnf-4.0.9.2-2.el7_9.noarch

--> Running transaction check

---> Package deltarpm.x86_64 0:3.6-3.el7 will be installed

---> Package dnf-data.noarch 0:4.0.9.2-2.el7_9 will be installed

--> Processing Dependency: libreport-filesystem for package: dnf-data-4.0.9.2-2.el7_9.noarch

---> Package libmodulemd.x86_64 0:1.6.3-1.el7 will be installed

---> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed

---> Package python2-hawkey.x86_64 0:0.22.5-2.el7_9 will be installed

--> Processing Dependency: libdnf(x86-64) = 0.22.5-2.el7_9 for package: python2-hawkey-0.22.5-2.el7_9.x86_64

--> Processing Dependency: libsolvext.so.0(SOLV_1.0)(64bit) for package: python2-hawkey-0.22.5-2.el7_9.x86_64

--> Processing Dependency: libsolv.so.0(SOLV_1.0)(64bit) for package: python2-hawkey-0.22.5-2.el7_9.x86_64

--> Processing Dependency: libsolvext.so.0()(64bit) for package: python2-hawkey-0.22.5-2.el7_9.x86_64

--> Processing Dependency: libsolv.so.0()(64bit) for package: python2-hawkey-0.22.5-2.el7_9.x86_64

--> Processing Dependency: librepo.so.0()(64bit) for package: python2-hawkey-0.22.5-2.el7_9.x86_64

--> Processing Dependency: libdnf.so.2()(64bit) for package: python2-hawkey-0.22.5-2.el7_9.x86_64

---> Package python2-libcomps.x86_64 0:0.1.8-14.el7 will be installed

--> Processing Dependency: libcomps(x86-64) = 0.1.8-14.el7 for package: python2-libcomps-0.1.8-14.el7.x86_64

--> Processing Dependency: libcomps.so.0.1.6()(64bit) for package: python2-libcomps-0.1.8-14.el7.x86_64

---> Package python2-libdnf.x86_64 0:0.22.5-2.el7_9 will be installed

--> Running transaction check

---> Package libcomps.x86_64 0:0.1.8-14.el7 will be installed

---> Package libdnf.x86_64 0:0.22.5-2.el7_9 will be installed

---> Package librepo.x86_64 0:1.8.1-8.el7_9 will be installed

---> Package libreport-filesystem.x86_64 0:2.1.11-53.el7.centos will be installed

---> Package libsolv.x86_64 0:0.6.34-4.el7 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

================================================================================

 Package                Arch   Version                  Repository         Size

================================================================================

Installing:

 leapp-data-rocky       noarch 0.5-1.el7.20241127       elevate           323 k

 leapp-upgrade-el7toel8 noarch 1:0.21.0-4.el7.elevate.4 elevate           1.2 M

Installing for dependencies:

 PyYAML                 x86_64 3.10-11.el7              C7.9.2009-base    153 k

 deltarpm               x86_64 3.6-3.el7                C7.9.2009-base     82 k

 dnf                    noarch 4.0.9.2-2.el7_9          C7.9.2009-extras  357 k

 dnf-data               noarch 4.0.9.2-2.el7_9          C7.9.2009-extras   51 k

 leapp                  noarch 0.18.0-2.el7             elevate            31 k

 leapp-deps             noarch 0.18.0-2.el7             elevate            13 k

 leapp-upgrade-el7toel8-deps

                        noarch 1:0.21.0-4.el7.elevate.4 elevate            41 k

 libcomps               x86_64 0.1.8-14.el7             C7.9.2009-extras   75 k

 libdnf                 x86_64 0.22.5-2.el7_9           C7.9.2009-extras  535 k

 libmodulemd            x86_64 1.6.3-1.el7              C7.9.2009-extras  141 k

 librepo                x86_64 1.8.1-8.el7_9            C7.9.2009-updates  82 k

 libreport-filesystem   x86_64 2.1.11-53.el7.centos     C7.9.2009-base     41 k

 libsolv                x86_64 0.6.34-4.el7             C7.9.2009-base    329 k

 libyaml                x86_64 0.1.4-11.el7_0           C7.9.2009-base     55 k

 pciutils               x86_64 3.5.1-3.el7              C7.9.2009-base     93 k

 python2-dnf            noarch 4.0.9.2-2.el7_9          C7.9.2009-extras  414 k

 python2-hawkey         x86_64 0.22.5-2.el7_9           C7.9.2009-extras   71 k

 python2-leapp          noarch 0.18.0-2.el7             elevate           195 k

 python2-libcomps       x86_64 0.1.8-14.el7             C7.9.2009-extras   47 k

 python2-libdnf         x86_64 0.22.5-2.el7_9           C7.9.2009-extras  611 k

Transaction Summary

================================================================================

Install  2 Packages (+20 Dependent packages)

Total download size: 4.8 M

Installed size: 33 M

Downloading packages:

(1/22): deltarpm-3.6-3.el7.x86_64.rpm                      |  82 kB   00:00

warning: /var/cache/yum/x86_64/7/elevate/packages/leapp-0.18.0-2.el7.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID 81b961a5: NOKEY

Public key for leapp-0.18.0-2.el7.noarch.rpm is not installed

(2/22): leapp-0.18.0-2.el7.noarch.rpm                      |  31 kB   00:00

(3/22): PyYAML-3.10-11.el7.x86_64.rpm                      | 153 kB   00:00

(4/22): leapp-deps-0.18.0-2.el7.noarch.rpm                 |  13 kB   00:00

(5/22): dnf-data-4.0.9.2-2.el7_9.noarch.rpm                |  51 kB   00:00

(6/22): dnf-4.0.9.2-2.el7_9.noarch.rpm                     | 357 kB   00:00

(7/22): leapp-upgrade-el7toel8-0.21.0-4.el7.elevate.4.noar | 1.2 MB   00:00

(8/22): leapp-upgrade-el7toel8-deps-0.21.0-4.el7.elevate.4 |  41 kB   00:00

(9/22): libcomps-0.1.8-14.el7.x86_64.rpm                   |  75 kB   00:00

(10/22): leapp-data-rocky-0.5-1.el7.20241127.noarch.rpm    | 323 kB   00:00

(11/22): libdnf-0.22.5-2.el7_9.x86_64.rpm                  | 535 kB   00:00

(12/22): libmodulemd-1.6.3-1.el7.x86_64.rpm                | 141 kB   00:00

(13/22): libreport-filesystem-2.1.11-53.el7.centos.x86_64. |  41 kB   00:00

(14/22): librepo-1.8.1-8.el7_9.x86_64.rpm                  |  82 kB   00:00

(15/22): libyaml-0.1.4-11.el7_0.x86_64.rpm                 |  55 kB   00:00

(16/22): pciutils-3.5.1-3.el7.x86_64.rpm                   |  93 kB   00:00

(17/22): python2-leapp-0.18.0-2.el7.noarch.rpm             | 195 kB   00:00

(18/22): libsolv-0.6.34-4.el7.x86_64.rpm                   | 329 kB   00:00

(19/22): python2-hawkey-0.22.5-2.el7_9.x86_64.rpm          |  71 kB   00:00

(20/22): python2-libcomps-0.1.8-14.el7.x86_64.rpm          |  47 kB   00:00

(21/22): python2-dnf-4.0.9.2-2.el7_9.noarch.rpm            | 414 kB   00:00

(22/22): python2-libdnf-0.22.5-2.el7_9.x86_64.rpm          | 611 kB   00:00

--------------------------------------------------------------------------------

Total                                              2.4 MB/s | 4.8 MB  00:01

Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ELevate

Importing GPG key 0x81B961A5:

 Userid     : "ELevate <packager@almalinux.org>"

 Fingerprint: 74e7 f249 ee69 8a4d acfb 48c8 4297 85e1 81b9 61a5

 Package    : elevate-release-1.0-2.el7.noarch (@/elevate-release-latest-el7.noarch)

 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-ELevate

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

  Installing : libsolv-0.6.34-4.el7.x86_64                                 1/22

  Installing : librepo-1.8.1-8.el7_9.x86_64                                2/22

  Installing : libyaml-0.1.4-11.el7_0.x86_64                               3/22

  Installing : libmodulemd-1.6.3-1.el7.x86_64                              4/22

  Installing : libdnf-0.22.5-2.el7_9.x86_64                                5/22

  Installing : python2-libdnf-0.22.5-2.el7_9.x86_64                        6/22

  Installing : python2-hawkey-0.22.5-2.el7_9.x86_64                        7/22

  Installing : PyYAML-3.10-11.el7.x86_64                                   8/22

  Installing : leapp-deps-0.18.0-2.el7.noarch                              9/22

  Installing : python2-leapp-0.18.0-2.el7.noarch                          10/22

  Installing : pciutils-3.5.1-3.el7.x86_64                                11/22

  Installing : deltarpm-3.6-3.el7.x86_64                                  12/22

  Installing : libreport-filesystem-2.1.11-53.el7.centos.x86_64           13/22

  Installing : dnf-data-4.0.9.2-2.el7_9.noarch                            14/22

  Installing : libcomps-0.1.8-14.el7.x86_64                               15/22

  Installing : python2-libcomps-0.1.8-14.el7.x86_64                       16/22

  Installing : python2-dnf-4.0.9.2-2.el7_9.noarch                         17/22

  Installing : dnf-4.0.9.2-2.el7_9.noarch                                 18/22

  Installing : 1:leapp-upgrade-el7toel8-deps-0.21.0-4.el7.elevate.4.noa   19/22

  Installing : leapp-0.18.0-2.el7.noarch                                  20/22

  Installing : 1:leapp-upgrade-el7toel8-0.21.0-4.el7.elevate.4.noarch     21/22

  Installing : leapp-data-rocky-0.5-1.el7.20241127.noarch                 22/22

  Verifying  : dnf-4.0.9.2-2.el7_9.noarch                                  1/22

  Verifying  : leapp-0.18.0-2.el7.noarch                                   2/22

  Verifying  : libdnf-0.22.5-2.el7_9.x86_64                                3/22

  Verifying  : librepo-1.8.1-8.el7_9.x86_64                                4/22

  Verifying  : libmodulemd-1.6.3-1.el7.x86_64                              5/22

  Verifying  : dnf-data-4.0.9.2-2.el7_9.noarch                             6/22

  Verifying  : leapp-data-rocky-0.5-1.el7.20241127.noarch                  7/22

  Verifying  : libcomps-0.1.8-14.el7.x86_64                                8/22

  Verifying  : libreport-filesystem-2.1.11-53.el7.centos.x86_64            9/22

  Verifying  : python2-hawkey-0.22.5-2.el7_9.x86_64                       10/22

  Verifying  : deltarpm-3.6-3.el7.x86_64                                  11/22

  Verifying  : python2-dnf-4.0.9.2-2.el7_9.noarch                         12/22

  Verifying  : leapp-deps-0.18.0-2.el7.noarch                             13/22

  Verifying  : python2-libdnf-0.22.5-2.el7_9.x86_64                       14/22

  Verifying  : libyaml-0.1.4-11.el7_0.x86_64                              15/22

  Verifying  : python2-libcomps-0.1.8-14.el7.x86_64                       16/22

  Verifying  : 1:leapp-upgrade-el7toel8-0.21.0-4.el7.elevate.4.noarch     17/22

  Verifying  : 1:leapp-upgrade-el7toel8-deps-0.21.0-4.el7.elevate.4.noa   18/22

  Verifying  : libsolv-0.6.34-4.el7.x86_64                                19/22

  Verifying  : python2-leapp-0.18.0-2.el7.noarch                          20/22

  Verifying  : PyYAML-3.10-11.el7.x86_64                                  21/22

  Verifying  : pciutils-3.5.1-3.el7.x86_64                                22/22

Installed:

  leapp-data-rocky.noarch 0:0.5-1.el7.20241127

  leapp-upgrade-el7toel8.noarch 1:0.21.0-4.el7.elevate.4

Dependency Installed:

  PyYAML.x86_64 0:3.10-11.el7

  deltarpm.x86_64 0:3.6-3.el7

  dnf.noarch 0:4.0.9.2-2.el7_9

  dnf-data.noarch 0:4.0.9.2-2.el7_9

  leapp.noarch 0:0.18.0-2.el7

  leapp-deps.noarch 0:0.18.0-2.el7

  leapp-upgrade-el7toel8-deps.noarch 1:0.21.0-4.el7.elevate.4

  libcomps.x86_64 0:0.1.8-14.el7

  libdnf.x86_64 0:0.22.5-2.el7_9

  libmodulemd.x86_64 0:1.6.3-1.el7

  librepo.x86_64 0:1.8.1-8.el7_9

  libreport-filesystem.x86_64 0:2.1.11-53.el7.centos

  libsolv.x86_64 0:0.6.34-4.el7

  libyaml.x86_64 0:0.1.4-11.el7_0

  pciutils.x86_64 0:3.5.1-3.el7

  python2-dnf.noarch 0:4.0.9.2-2.el7_9

  python2-hawkey.x86_64 0:0.22.5-2.el7_9

  python2-leapp.noarch 0:0.18.0-2.el7

  python2-libcomps.x86_64 0:0.1.8-14.el7

  python2-libdnf.x86_64 0:0.22.5-2.el7_9

Complete!

3. Run the Pre-Upgrade Checks

This stage is typically where sticky situations will show up. Even on a simple system like the one we’re using for this example, the pre-upgrade is very likely to have critical errors.

It is worth noting that this step does not change the system, so no side effects should be expected from running this command.

$ leapp preupgrade

============================================================

                      REPORT OVERVIEW

============================================================

Upgrade has been inhibited due to the following problems:

    1. Leapp detected loaded kernel drivers which have been removed in RHEL 8. Upgrade cannot proceed.

    2. Missing required answers in the answer file

HIGH and MEDIUM severity reports:

    1. GRUB2 core will be automatically updated during the upgrade

    2. Difference in Python versions and support in RHEL 8

    3. Packages not signed by Red Hat found on the system

    4. Detected custom leapp actors or files.

    5. Detected customized configuration for dynamic linker.

    6. ipa-server package is installed but no IdM is configured

    7. chrony using default configuration

Reports summary:

    Errors:                      0

    Inhibitors:                  2

    HIGH severity reports:       5

    MEDIUM severity reports:     2

    LOW severity reports:        4

    INFO severity reports:       2

Before continuing, review the full report below for details about discovered problems and possible remediation instructions:

    A report has been generated at /var/log/leapp/leapp-report.txt

    A report has been generated at /var/log/leapp/leapp-report.json

============================================================

                   END OF REPORT OVERVIEW

============================================================

The leapp-report.txt that is generated from the pre-upgrade command contains some canned resolutions to the errors it generated. Let’s try some of those answers!

# Confirm removal of the pam_pkcs11 module, as the report suggests

$ leapp answer --section remove_pam_pkcs11_module_check.confirm=True

# Unload the kernel drivers that were flagged as removed in RHEL 8

$ rmmod pata_acpi floppy

There are plenty of other messages that were generated and put in the report, but only the ones we used above were necessary for the upgrade to move forward. If you have third-party repositories, there’s a possibility that upgraded versions of system dependencies might be present. You’ll have to manually remove or downgrade those packages to resolve these version inconsistencies.

In order for the upgrade to proceed, you can’t have any errors that will inhibit the upgrade. This will appear in the “Reports Summary” under “Inhibitors.” You may address some of the warnings that appear by examining the report and testing the suggestions they provide to see if you can resolve the warnings. Keep in mind only the inhibitors are required to proceed, though.
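
The text report marks blocking findings as inhibitors, so one quick way to zero in on what must be fixed (in our experience; exact wording can vary by version) is to search the report for that keyword:

$ grep -B3 -A10 'inhibitor' /var/log/leapp/leapp-report.txt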

4. Run the Upgrade Process, Reboot, and Test

$ leapp upgrade

Once this command completes, it’s time to reboot into the new kernel. You will need to manually intervene and select the new boot option in GRUB: ELevate-Upgrade-Initramfs

Now that you’ve booted into the new environment, you should see that you’re running on Rocky 8. Run through your standard QA tests to make sure the services the host is providing all work as expected. Continuing the upgrade to Rocky 9 isn’t likely to fix services that are broken at this stage, so conduct a thorough check before continuing the upgrade.
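
A quick sanity check before the deeper QA pass:

# Confirm the OS release and kernel after the reboot
$ cat /etc/os-release

$ uname -r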

5. Upgrade to Rocky 9

The steps for continuing the upgrade to Rocky 9 are similar to the steps we took to upgrade to Rocky 8, starting with the ELevate repo:

$ yum install -y http://repo.almalinux.org/elevate/elevate-release-latest-el$(rpm --eval %rhel).noarch.rpm

Next, we can remove package exclusions that were added from the previous LEAPP upgrade:

$ yum config-manager --save --setopt exclude=''

You might run into a scenario like we did where we had to remove LEAPP along with its dependencies, because leapp-upgrade-el7toel8 was still installed and the install failed because it didn’t match the expected version. Then you can install the following:

$ yum install -y leapp-upgrade leapp-data-rocky

And then run the LEAPP preupgrade again:
$ leapp preupgrade

The output of the report will be similar to the previous one. After examining the report, we had to run these steps:

$ sed -i "s/^AllowZoneDrifting=.*/AllowZoneDrifting=no/" /etc/firewalld/firewalld.conf

$ leapp answer --section check_vdo.confirm=True

Note: If root login is allowed, the preupgrade report will flag it as a blocker. We resolved this by tweaking our sshd_config so the check would pass:

$ sed -i 's/^PermitRootLogin yes$/PermitRootLogin yes # test/' /etc/ssh/sshd_config

Here was our last report before we ran the upgrade again:

$ leapp preupgrade

============================================================

                      REPORT OVERVIEW

============================================================

HIGH and MEDIUM severity reports:

    1. Packages not signed by Red Hat found on the system

    2. Detected custom leapp actors or files.

    3. Leapp detected loaded kernel drivers which are no longer maintained in RHEL 9.

    4. Remote root logins globally allowed using password

    5. GRUB2 core will be automatically updated during the upgrade

Reports summary:

    Errors:                      0

    Inhibitors:                  0

    HIGH severity reports:       5

    MEDIUM severity reports:     0

    LOW severity reports:        2

    INFO severity reports:       3

Before continuing, review the full report below for details about discovered problems and possible remediation instructions:

    A report has been generated at /var/log/leapp/leapp-report.txt

    A report has been generated at /var/log/leapp/leapp-report.json

============================================================

                   END OF REPORT OVERVIEW

============================================================

Let’s give it a shot!

$ leapp upgrade

Again, even on a simple system, we can get blocking errors:

Following errors occurred and the upgrade cannot continue:

    1. Actor: dnf_package_download

       Message: DNF execution failed with non zero exit code.

Looking at /var/log/leapp/leapp-report.txt, there are a number of warnings, including a conflict between rocky-logos 86.3-1.el8 and rocky-logos-90.15-2.el9:

file /usr/share/redhat-logos from install of rocky-logos-90.15-2.el9.x86_64 conflicts with file from package rocky-logos-86.3-1.el8.x86_64

To resolve this, we removed rocky-logos (which also removed rocky-backgrounds) and re-ran the LEAPP upgrade. Our next step was to reboot, just as we did during the CentOS 7 to Rocky 8 upgrade, select the GRUB entry ELevate-Upgrade-Initramfs again, and watch it go!

Upon rebooting, it was time to remove the excludes again:

$ yum config-manager --save --setopt exclude=''

Then we could remove orphaned packages, which cleans up the system and makes it more secure:

$ rpm -qa | grep -E 'el8[.-]' | xargs rpm -e
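
Before running a destructive one-liner like this, it’s worth printing the match list on its own (dropping the xargs stage) to confirm the pattern doesn’t catch anything you still need:

$ rpm -qa | grep -E 'el8[.-]'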

The same goes for the LEAPP packages, since they’re not needed anymore:

$ dnf remove $(rpm -qa|grep leapp)

Done! Now it’s time to run our E2E and integration tests. After thorough testing of this host, it will be ready to re-enter production as an upgraded system.

Final Thoughts

Organizations with a long list of hosts in their host inventory, or hosts with especially complex software inventories, may need assistance sorting out all of the complexities associated with a CentOS to Rocky Linux migration. Or they might need more time than they initially allotted for the project. Partnering with OpenLogic for CentOS long-term support or migration services can ease the burden considerably. Our Professional Services team can help you plan your migration or provide hands-on-keyboard support throughout the process. And after you have successfully migrated, our Enterprise Architects can offer Rocky Linux support up to 24/7.


How to Find the Best Linux Distro for Your Organization

“What’s the best Linux distro?”

A better question to ask: “Which Linux distro can meet my business’s needs now and as we scale?”

Now that CentOS Linux has reached end of life, the playing field has widened, with several viable alternatives. This blog gives an overview of the post-CentOS EOL Linux landscape, comparing the most popular Enterprise Linux distributions and highlighting key differentiators. As you read, keep in mind your own team’s bandwidth and expertise with managing Linux infrastructure as you’re evaluating factors like cost, stability, and security.

As those in the process of migrating off CentOS know all too well, longevity is important, too. Confidence in the project’s direction, the strength of the community, governance model (i.e. how much control a for-profit corporation has) — these are all considerations that could (and should) influence which open source Linux distro is the right fit for your organization.

 

 

Types of Open Source Linux Distributions

Linux distros are a combination of the open source Linux kernel and a suite of supporting software that facilitates the development and operation of applications. Open source communities make decisions about which packages to include based on the use cases they want to prioritize. A Linux distribution designed for desktop, for example, might include tools like media players and UI customizability features. Enterprise Linux distros, on the other hand, focus more on security, stability, and speed to optimize performance for mission-critical applications.

There are a few different ways to categorize open source Linux distros. You can bucket them according to who manages the project (a community or a commercial entity), the release model (rolling or fixed), or the upstream source (e.g. Fedora, Debian).

 

Community vs. Commercial

The difference comes down to who maintains the project: community-backed Linux distributions are free to use and are supported by a community of individual contributors. These volunteers dedicate time and expertise to maintain the project and commit to releasing security updates, bug fixes, and new versions.

Commercial Enterprise Linux distributions are sold by software vendors who build their product from open source components and packages and require a paid subscription. The distro itself is functionally identical to the community version, but users have access to technical support, and often some proprietary enterprise features/tooling.

 

Rolling Release vs. Fixed Release

Rolling release means that updates and new features are continuously and incrementally released instead of bundled into versions that are released on a fixed schedule. Frequent updates to the Linux kernel, libraries, utilities, or any package are released as soon as they are ready without waiting for a defined release date. Typically, rolling release Linux distros do not require users to perform large-scale version upgrades because of this “steady drip” of updates. Issues, bugs, and vulnerabilities can be identified and resolved more rapidly compared to fixed, or regular, release distros.

Rolling release distros appeal to those who prioritize having the latest software and features over stability. However, they require users to stay proactive in system maintenance and be prepared to address issues that arise due to the constant stream of updates. Rolling release models can often lead to conflicts between different software versions as no testing is done to validate that different software interoperates correctly; sometimes new features in a new package release can also lead to subtle behavior differences that cause application breakage. As such, many organizations prefer fixed release models for their business critical applications.

 

Upstream Source

There are distros derived from Fedora, RHEL (which itself comes from Fedora), Debian, SUSE, and more. Each ecosystem has strengths, and preference here might come down to what your team is accustomed to and other considerations (for instance, if you are already an Oracle customer, Oracle Linux might make more sense than if you are not).

Now let’s take a closer look at some of the distributions themselves, grouping them by their upstream source and starting with Fedora.

Note: Asterisks denote that the distribution is currently supported by OpenLogic. 

Fedora and RHEL-Based Linux Distros

Fedora*

Fedora is a popular, community-backed Linux distro known for its emphasis on new features and technologies, and open source collaboration. It aims to provide a platform for both desktop and server users, offering the latest software while maintaining a balance between innovation and stability. Fedora users appreciate staying on the forefront of technology, contributing to open source projects, and experimenting with the latest software innovations. Fedora typically releases two new versions a year, one in the spring and one in the fall.

 

CentOS Stream*

CentOS Stream is often described as the "rolling preview" on which RHEL releases are based. It is the bridge between Fedora and RHEL, using the same source code Red Hat uses to produce the next version of RHEL. The current version is CentOS Stream 10, which precedes RHEL 10 (and downstream RHEL rebuilds like Rocky Linux and AlmaLinux).

Picking CentOS Stream comes down to your preferences for your overall Linux ecosystem. Everything that you expect inside a RHEL/CentOS ecosystem, such as package manager and virtualization options, will still be available to you in Stream, and you’ll receive bug fixes and security patches on a faster schedule than on CentOS Linux. If you’re on the fence about the rolling release route and not sure your organization is ready, this CentOS Stream migration checklist is a good resource.

 

Red Hat Enterprise Linux (RHEL)

RHEL is a well-established commercial Enterprise Linux distro known for its stability, long-term support, and comprehensive ecosystem. It offers editions tailored for different workloads and environments, such as servers, cloud, and container deployments. RHEL is built from snapshots of CentOS Stream: all software versions are frozen at the snapshot, and only security and bug fixes are applied going forward from that release. This is what gives RHEL its stability and security.

Red Hat, now owned by IBM, provides support for RHEL customers, but the license cost and annual fees may be prohibitively expensive for some organizations. As with any commercial software, there is a greater risk of vendor lock-in as well.

 

CentOS Linux (Discontinued)*

Much to the community's surprise (and dismay), CentOS 8 was prematurely sunsetted in 2021, just two years after its release, and CentOS 7 reached end of life in 2024. Red Hat, which controlled the project, announced the end of CentOS Linux as part of its decision to focus on CentOS Stream. This led to the creation of new distros derived from the RHEL source code, most notably Rocky Linux and AlmaLinux, to replace CentOS Linux.

Migrating and decommissioning environments can take months (or even years), so CentOS long-term support is one option for businesses that need more time to evaluate other distros and transition their EOL CentOS deployments.

 

Rocky Linux*

Rocky Linux is a community-supported Linux distro created by one of the founders of CentOS and one of the most popular CentOS alternatives. Promising bug-for-bug compatibility with RHEL, Rocky Linux aims to provide a stable, reliable, and compatible platform for organizations and users who were previously relying on CentOS for their server infrastructure.

Related Blog >> Comparing Rocky Linux vs. RHEL

 

AlmaLinux*

Like Rocky Linux, AlmaLinux is a community-backed, open source Linux distro launched in response to the CentOS Linux project being discontinued. AlmaLinux is binary-compatible with RHEL, meaning that applications will run on AlmaLinux as seamlessly as in RHEL.

 

Oracle Linux*

Oracle Linux is packaged and distributed by Oracle, and is another binary-compatible rebuild of RHEL's RPMs. It is tested and optimized to work well with Oracle's other software offerings, making it a suitable choice for running Oracle databases and other application workloads. Some worry that Oracle might eventually start charging for Oracle Linux (as it did with Oracle JDK in 2019), but as of now it is free to use, and SLA-backed commercial support can be purchased at a price point similar to RHEL.

Get the Decision Maker’s Guide to Enterprise Linux

In this complete guide to the Enterprise Linux landscape, our experts present insights and analysis on 20 of the top Enterprise Linux distributions — with a full comparison matrix and battlecards.

Download for Free

Debian-Based Linux Distributions

 

Debian Linux*

Debian is known for its commitment to open source principles, stability, and extensive package management system. It serves as the foundation for various other Linux distros such as Ubuntu and Linux Mint. Debian is widely used in both desktop and server environments. It is a popular choice for users seeking a reliable and customizable Linux distro for a wide range of applications and use cases, including embedded systems.

 

Debian Testing

Debian also has a testing branch, similar to a beta version, which sits between Debian's unstable and stable branches. The testing branch is intended for users who want a balance between access to newer software and a relatively stable system. Debian Testing gets new features and fixes before the stable Debian release, so there may be issues to troubleshoot in exchange for early access to features, many of which eventually make their way into the stable release.

 

Ubuntu Community Edition*

Often referred to as simply "Ubuntu," this distro is widely used thanks to its user-friendly experience, robust software ecosystem, and active community support. It is a solid choice for desktop, server, and enterprise use. Like Debian, Ubuntu uses the apt ecosystem for package management, and many AI-related packages are included in the distro.

 

Ubuntu Pro

Ubuntu Pro is the commercialized version of Ubuntu known for its ease of use, regular updates, and compatibility with cloud environments. There are versions optimized for different environments, such as Ubuntu Desktop, Ubuntu Server, Ubuntu for IoT, and Ubuntu Cloud. Ubuntu attracts front-end developers with easy-to-use features and a slew of programming resources, including AI libraries.

 

Linux Mint

Linux Mint strives to provide a stable, user-friendly experience for both Linux newcomers and experienced users. It is based on Ubuntu and Debian, building upon their foundations while adding additional features and design elements. Linux Mint emphasizes convenience and provides a traditional desktop experience with extensive customization options. It was also designed to help Windows users transition seamlessly to a Linux OS.

SUSE Distributions

OpenSUSE Leap*

OpenSUSE Leap is a community-driven distro that combines the stability of a fixed release model with the availability of up-to-date software packages. It provides a reliable and user-friendly operating system for both desktop and server environments. OpenSUSE is generally considered stable enough for production use, and those familiar with the SLES, SUSE, and Slackware ecosystem will feel at home. OpenSUSE Leap focuses on deployment simplicity, a user-friendly toolchain, and cloud-readiness.

 

OpenSUSE Tumbleweed*

Tumbleweed is the OpenSUSE community's rolling release distro. As with CentOS Stream, bug fixes and security patches arrive earlier than in OpenSUSE Leap, the regular release distro, but some features may not be quite ready for primetime. Tumbleweed supports a wide range of desktop environments, software libraries, and tools.

 

SUSE Linux Enterprise Server (SLES)

SLES is the commercial counterpart to the OpenSUSE Linux distros and is backed by SUSE, a German multinational enterprise. It is an enterprise-focused distribution with a strong emphasis on reliability, scalability, and high-performance computing. It offers features like systemd, Btrfs, and container support, making it suitable for a variety of server and virtual environments.

Other Open Source Linux Distributions

 

Arch Linux

Arch Linux is a rolling, lightweight Linux distro that is highly customizable and emphasizes simplicity, minimalism, and a DIY approach. It is a better fit for experienced Linux users who want to build a tailored and efficient OS environment according to their specific needs. Its rolling release model provides continuous updates to the latest software packages and features without the need for major version upgrades. Arch Linux is popular among developers and Linux enthusiasts (aka “power users”) who enjoy experimenting with and fine-tuning their Linux system.

 

Alpine Linux

Alpine Linux is a security-oriented, lightweight Linux distro designed for resource efficiency and containerization. Known for its small footprint, speed, and hardened defaults, Alpine Linux is particularly well-suited to scenarios where fast boot times, low memory usage, and strong security are critical, such as containers, IoT devices, and embedded systems.

 

Amazon Linux

Amazon Linux is AWS's Linux distro intended for use in Amazon Elastic Compute Cloud (EC2) environments. It is offered as pre-configured Amazon Machine Images (AMIs) ready to use in AWS. Originally built from RHEL, the distro is now derived from CentOS Stream, and the source code is publicly available and distributed under open source licenses.

Final Thoughts

Hopefully it is clear by now that choosing the best Linux distro for your organization will take some time and research. The key to succeeding with your next open source Linux distro is considering what each offering can help your business achieve and where you might find friction in implementation. Make sure you think about intended use cases, the skills required, and the learning curve. Tooling (such as package management) is important to evaluate, along with ecosystem, compatibility, and vendor lock-in risk.

One way to avoid vendor lock-in but still get the security and support you need is to partner with a third party like OpenLogic. Our Enterprise Linux support is guaranteed by SLAs and every ticket is handled by an Enterprise Architect with at least 15 years of Linux experience. We also offer migration services – from consulting to executing the migration itself.

Editor’s Note: This blog was originally published in January 2021. It was updated in February 2025 to reflect changes in the open source Enterprise Linux landscape.

Looking For Migration Services or Support?

OpenLogic offers CentOS migration services and technical support, backed by SLAs, for AlmaLinux, Rocky Linux, CentOS Stream, Ubuntu, Debian, Oracle Linux, and more. Talk to an expert today to get started.

Talk to an Expert  See Datasheet

 


Open Source Big Data Infrastructure: Key Technologies for Data Storage, Mining, and Visualization

Big Data infrastructure refers to the systems (hardware, software, network components) and processes that enable the collection, management, and analysis of massive datasets. Companies that handle large volumes of data constantly coming in from multiple sources often rely on open source Big Data frameworks (e.g., Hadoop, Spark), databases (e.g., Cassandra), and stream processing platforms (e.g., Kafka) as the foundation of their Big Data infrastructure.

In this blog, we’ll explore some of the most commonly used technologies and methods for data storage, processing, mining, and visualization in an open source Big Data stack. 

Data Storage and Processing

The primary purpose of Big Data storage is to reliably store vast amounts of data for future analysis and use. A scalable architecture that allows businesses to collect, manage, and analyze immense sets of data in real time is essential.

 

Big Data storage solutions are designed to address the speed, volume, and complexity of large datasets. Examples include data lakes, warehouses, and pipelines, all of which can exist in the cloud, on-premises, or in an off-site physical location (referred to as colocation storage).

Data Lakes

Data lakes are centralized repositories that store, process, and secure data in its native format without size limitations. They enable different forms of smart analytics, such as machine learning and visualizations.

Data Warehouses

Data warehouses aggregate datasets from different sources into a single store for robust analysis, data mining, AI, and more. Unlike a data lake, data warehouses typically follow a three-tier architecture: a database server at the bottom for storage, an analytics (OLAP) engine in the middle, and client-facing reporting and visualization tools on top.

Data Pipelines

Data pipelines gather raw data from one or more sources, potentially merge and transform it in some way, and then transport it to another location, such as lakes or warehouses.

Related Technologies

No matter where data is stored, at the heart of any Big Data stack is the processing framework. One prominent open source example is Apache Hadoop, which allows for the distributed processing of large datasets across clusters of computers. Hadoop has been around for a long time but remains popular, especially for non-cloud-based solutions. It can be seamlessly coupled with other open source data technologies like Hive or HBase for a more comprehensive implementation that meets business requirements.
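
To make the processing model concrete, here is a minimal word-count sketch using PySpark (Spark's Python API), following the same split/map/reduce pattern that Hadoop MapReduce popularized. The HDFS path is hypothetical, and a working Spark installation is assumed:

```python
# Minimal PySpark word-count sketch (hypothetical HDFS path; requires pyspark).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").getOrCreate()

# Read text files from a (hypothetical) HDFS location into an RDD of lines.
lines = spark.sparkContext.textFile("hdfs:///data/logs/*.txt")

counts = (
    lines.flatMap(lambda line: line.split())   # split each line into words
         .map(lambda word: (word, 1))          # emit (word, 1) pairs
         .reduceByKey(lambda a, b: a + b)      # sum the counts per word
)

for word, count in counts.take(10):
    print(word, count)

spark.stop()
```

The same job could run on a laptop or be distributed across a cluster; the framework, not the application code, handles partitioning and shuffling the data.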

Data Mining

Data mining is defined as the process of filtering, sorting, and classifying data from large datasets to reveal patterns and relationships, which helps enterprises identify and solve complex business problems through data analysis. 

Machine learning (ML), artificial intelligence (AI), and statistical analysis are the crucial data mining elements that are necessary to scrutinize, sort, and prepare data for deeper analysis. Top ML algorithms and AI tools have enabled the easy mining of massive datasets, including customer data, transactional records, and even log files picked up from sensors, actuators, IoT devices, mobile apps, and servers.

 

Every data science application demands a different data mining approach. Pattern recognition and anomaly detection are two of the most well-known, and both employ a combination of techniques to mine data. Let's look at some of the fundamental data mining techniques commonly used across industry verticals.

 

Association Rule

Association rules are if-then statements that establish correlations and relationships between two or more data items. The correlations are evaluated using support and confidence metrics: support measures how frequently the items appear together in the dataset, and confidence measures how often the if-then statement holds true.

For example, while tracking customers' online purchasing behavior, you might observe that a customer who buys a coffee pack generally also buys cookies. In that case, the association rule establishes the relationship between the two items (coffee and cookies) and forecasts a future purchase whenever the customer adds a coffee pack to their shopping cart.
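
As a rough illustration of the support and confidence metrics, here is a small, self-contained Python sketch over an invented set of transactions (no mining library required):

```python
# Toy support/confidence calculation for the rule {coffee} -> {cookies}.
transactions = [
    {"coffee", "cookies"},
    {"coffee", "cookies", "milk"},
    {"coffee"},
    {"bread", "milk"},
    {"coffee", "cookies"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

antecedent, consequent = {"coffee"}, {"cookies"}
rule_support = support(antecedent | consequent)   # P(coffee and cookies)
confidence = rule_support / support(antecedent)   # P(cookies | coffee)

print(f"support={rule_support:.2f}, confidence={confidence:.2f}")
# support=0.60, confidence=0.75
```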

 

Classification

The classification data mining technique sorts the items within a dataset into different categories. For example, vehicles can be grouped into categories such as sedan, hatchback, petrol, diesel, and electric based on attributes like the vehicle's shape, wheel type, or number of seats. When a new vehicle arrives, it can be assigned to a class based on its identified attributes. The same classification strategy can be applied to categorize customers by factors like age, address, purchase history, and social group.
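
Below is a minimal classification sketch using scikit-learn's DecisionTreeClassifier; the vehicle features and labels are invented for illustration:

```python
# Toy classification sketch with scikit-learn (invented vehicle data).
from sklearn.tree import DecisionTreeClassifier

# Features: [number of seats, number of doors]; labels are body styles.
X = [[5, 4], [5, 5], [2, 2], [7, 5], [4, 3]]
y = ["sedan", "hatchback", "coupe", "minivan", "hatchback"]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# Classify a newly arrived vehicle based on its attributes.
print(clf.predict([[5, 4]]))  # e.g. ['sedan']
```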

 

Clustering

Clustering data mining techniques group data elements into clusters that share common characteristics, based on one or more identified attributes. Well-known clustering techniques include k-means clustering, hierarchical clustering, and Gaussian mixture models.
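
A minimal k-means sketch with scikit-learn, using invented two-dimensional points, might look like this:

```python
# k-means clustering sketch with scikit-learn (invented 2-D points).
from sklearn.cluster import KMeans

points = [[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9], [9.0, 0.2], [8.8, 0.1]]

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)           # cluster assignment for each point
print(kmeans.cluster_centers_)  # the three learned centroids
```

Note that k (the number of clusters) is chosen up front; in practice it is tuned by inspecting how well the resulting clusters separate the data.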

 

Regression

Regression is a statistical modeling technique that uses previous observations to predict new data values. In other words, it determines relationships between data elements by fitting predicted values to a set of defined variables. Because its output is a numeric value rather than a category, a regression model is sometimes called a "continuous value classifier."
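
As a small illustration, here is a linear regression sketch with scikit-learn; the observations (machine hours versus energy consumed) are invented:

```python
# Linear regression sketch with scikit-learn (invented observations).
from sklearn.linear_model import LinearRegression

# Past observations: hours of machine use -> energy consumed (kWh).
X = [[1], [2], [3], [4], [5]]
y = [2.1, 4.0, 6.2, 7.9, 10.1]

model = LinearRegression().fit(X, y)
print(model.predict([[6]]))  # predicted value for an unseen input, ~12 kWh
```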

 

Sequence & Path Analysis

Sequential data can also be mined to determine patterns wherein specific events or data values lead to other events in the future. This technique is applied to long-term data, as sequential analysis is key to identifying trends and regular occurrences of certain events. For example, when a customer buys a grocery item, a sequential pattern can be used to suggest another item for the basket based on the customer's purchase history.
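
A naive way to exploit sequential purchase data is to count which item most often follows another and use that as a suggestion, as in this pure-Python sketch with invented purchase sequences:

```python
# Naive sequential-pattern sketch: count which item most often follows
# another in past purchase sequences, then suggest the next item.
from collections import Counter, defaultdict

purchase_sequences = [
    ["bread", "butter", "jam"],
    ["bread", "butter"],
    ["coffee", "cookies"],
    ["bread", "jam"],
]

follows = defaultdict(Counter)
for seq in purchase_sequences:
    for current_item, next_item in zip(seq, seq[1:]):
        follows[current_item][next_item] += 1

def suggest(item):
    """Return the item that most frequently followed `item`, if any."""
    return follows[item].most_common(1)[0][0] if follows[item] else None

print(suggest("bread"))  # 'butter' (followed bread twice vs. jam once)
```

Production systems use far more sophisticated sequence-mining algorithms, but the core idea, learning which events tend to follow which, is the same.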

 

Neural Networks

Neural networks are algorithms that loosely mimic the structure of the human brain, passing data through layers of interconnected nodes to accomplish a desired goal or task. They are used in many pattern recognition applications, typically those involving deep learning techniques, and are a product of advanced machine learning research.
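
For a taste of the idea, here is a tiny feed-forward network sketch using scikit-learn's MLPClassifier on invented, linearly separable data; real deep learning applications would use dedicated frameworks and far more data:

```python
# Tiny neural network sketch with scikit-learn (invented data).
from sklearn.neural_network import MLPClassifier

X = [[0.0, 0.0], [0.2, 0.1], [0.9, 1.0], [1.0, 0.8]]
y = [0, 0, 1, 1]

# One hidden layer of 8 nodes; lbfgs suits very small datasets.
net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=0).fit(X, y)
print(net.predict([[0.1, 0.0], [0.9, 0.9]]))  # expected: [0 1]
```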

 

Prediction

The prediction data mining technique is typically used to predict the occurrence of an event, such as a machinery failure or fault in an industrial component, a fraudulent transaction, or company profits crossing a certain threshold. Prediction techniques can help analyze trends, establish correlations, and perform pattern matching when combined with other mining methods. Using such techniques, data miners can analyze past instances to forecast future events.

 

Related Technologies

When it comes to data mining tasks, open source technologies like Spark (a processing engine), YARN (a resource manager), and Oozie (a workflow scheduler) provide flexible and powerful MapReduce-style and batch processing capabilities.

Data Visualization

Data visualization is the graphical representation of information and data. With visual elements like charts, graphs, and maps, data visualization tools provide an accessible way to see and understand trends, outliers, and patterns in data.

As more companies increasingly depend on their Big Data to make operational and business-critical decisions, visualization has become a key tool to make sense of the trillions of rows of data generated every day.

 

Data visualization helps tell stories by curating data into a medium that is easier to understand. A good visualization removes the noise from data and highlights the useful information, like trends and outliers.
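
As a toy example of highlighting signal over noise, the following matplotlib sketch plots an invented daily metric and calls out a single outlier:

```python
# Minimal matplotlib sketch: plot a noisy trend and highlight an outlier.
import matplotlib.pyplot as plt

days = list(range(10))
values = [10, 11, 12, 12, 13, 30, 14, 15, 15, 16]  # day 5 is an outlier

plt.plot(days, values, marker="o", label="daily metric")
plt.scatter([5], [30], color="red", zorder=3, label="outlier")
plt.xlabel("Day")
plt.ylabel("Value")
plt.title("Trend with a highlighted outlier")
plt.legend()
plt.savefig("trend.png")  # or plt.show() in an interactive session
```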

 

However, it’s not as simple as just dressing up a graph to make it look better or slapping on the “info” part of an infographic. Effective data visualization is a delicate balancing act between form and function. The plainest graph could be too boring to catch any notice, or it could make a powerful point; likewise, the most stunning visualization could utterly fail at conveying the right message or it could speak volumes. The data and the visuals need to work together, and there’s an art to combining great analysis with great storytelling.

 

Related Technologies

One open source tool that responds well to these needs is Grafana, which covers all of the basic visualization elements. With a tool like Grafana, a business can effectively monitor its Big Data implementation and let data visualizations drive informed decisions, enhance system performance, and streamline troubleshooting.

 

 

Final Thoughts

While we’ve covered some of the fundamentals of Big Data infrastructure here, it should go without saying that there is much more to this topic than can be covered in a single blog post. It’s also worth noting that implementing and maintaining Big Data infrastructure requires a high level of technical expertise. These technologies are among the most complex, which is why companies that lack the in-house capabilities often turn to third parties for commercial support and/or Big Data platform administration. Investing in a Big Data platform can deliver big rewards, but only if it’s backed by a solid Big Data strategy and managed by individuals who have the necessary skills and experience.

Unlock the Power of Your Big Data

If you need to modernize your Big Data infrastructure or have questions about administering or supporting technologies like Hadoop, our Enterprise Architects can help.

Talk to a Big Data expert

