
Open Source in Finance: Top Technologies and Trends

Editor’s Note: This article was originally published on the Fintech Open Source Foundation (FINOS) blog and is reprinted here with permission.

Financial organizations increasingly rely on open source software as a foundational component of their mission-critical infrastructure. In this blog, we explore the top open source trends and technologies used within the FinTech space from our last State of Open Source Report — with insights on the unique pain points these companies experience when working with OSS. 

About the State of Open Source Survey

OpenLogic by Perforce conducts an annual survey of open source users, specifically focused on open source usage within IT infrastructure. In 2024, we teamed up with the Open Source Initiative for the third year in a row, and brought on a new partner: the Eclipse Foundation, who helped us expand our reach and get more responses than ever before.

If you're looking for the non-segmented results from the entire survey population (not just respondents working in the financial sector), you can find them published in our 2024 State of Open Source Report.

Demographics and Firmographics

For the purposes of this blog, we segmented the results to focus on the Banking, Insurance, and Financial Services verticals. This segment, comprising 250 responses, represented 12.22% of our overall survey population. Before we dive into some of the key results of the survey, let’s look at demographic and firmographic datapoints that will help us to frame the results.

Among respondents representing the Banking, Insurance, and Financial Services verticals, most of their companies were headquartered in North America (32% of responses), with Africa, Asia, and Europe as the next most popular locations at 18.8%, 17.6% and 16%, respectively.

The top 3 roles for respondents were System Administrators (32%), Developers / Engineers (18.8%), and Managers / Directors (16.4%). Within this segment, we also saw strong large enterprise representation, with 38.4% of respondents stating they work at companies with over 5,000 employees.

Open Source Adoption

Our survey data painted a clear picture, with a combined 85.4% of respondents from these industries increasing their use of open source software. 59.4% said they’re increasing their use of open source significantly. This rate of open source adoption within a heavily regulated set of verticals shows how many companies are confidently deploying open source for their mission-critical applications.

Looking more granularly at areas of open source investment, we saw 37.3% from this segment investing in analytics, 30.8% investing in cloud and container technologies, and 30.3% investing in databases and data technologies.

When asked for the reasons for adopting open source technology, our respondents identified improving development velocity (53.51%), accessing innovation (35.14%), and the overall stability (28.11%) of these technologies as the top drivers. Cost reduction and modernization rounded out the top 5, at 24.86% and 21.08% of responses within the segment, respectively.


Top Challenges When Using Open Source Software

When we asked teams to share the biggest issues they face as they work with open source software, some key themes emerged. Companies within this segment identified maintaining security policies and compliance (56%), keeping up with updates and patches (49.09%), and not enough personnel (49.05%) as the most challenging.

Later in the survey, we asked specifically about how organizations are addressing open source software skill shortages within their organizations. The top tactics selected by our respondents were hiring experienced professionals (48.18%), hiring external consultants/contractors (44.53%), and providing internal or external training (40.88%).

At the other end of the spectrum, infrastructure scalability and performance issues and lack of a clear community release support process were the least challenging areas for respondents within this segment, rated as not challenging by 67.98% and 59.75% of respondents, respectively.

Top Open Source Technologies

The State of Open Source Report has sections dedicated to technology categories (e.g., programming languages, databases) to assess which projects have gained adopters and are going strong vs. those that may be declining in popularity. As a reminder, the following results are specific to the Banking, Insurance, and Financial Services verticals.

When looking at Linux distributions, the top five selections were:

  • Ubuntu (33.75%)
  • Amazon Linux (21.88%)
  • Oracle Linux (20.00%)
  • Alpine Linux (16.88%)
  • CentOS (15.62%)


Get Expert Enterprise Linux Support

OpenLogic supports top community and commercial Linux distributions including AlmaLinux, Rocky Linux, Oracle Linux, Debian, and Ubuntu. We also offer long-term support for CentOS.

Explore Enterprise Linux Support

Looking at cloud-native technologies, the top five selections were:

  • Docker (32.50%)
  • Kubernetes (26.25%)
  • Prometheus (18.13%)
  • OpenStack (15.63%)
  • Cloud Foundry (13.12%)


For open source frameworks, we did notice a surprising number (26.62%) of respondents reporting usage of AngularJS (which has been end of life since 2021).

For those who indicated using AngularJS, we asked a follow-up question regarding how they plan to address new vulnerabilities. 30.77% expressed that they won't patch the CVEs, 26.92% noted that they have a vendor that provides patches, and 19.23% said that they will look for a long-term support vendor to help when the time comes.

In terms of open source data technology usage, we saw MySQL (31.08%) and PostgreSQL (30.41%) at the top of the list, with MongoDB (23.65%), Redis (20.27%), and Elasticsearch (18.24%) rounding out the top 5.

In the full report, we also look at the top programming languages/runtimes, infrastructure automation and configuration technologies, DevOps tools, and more. You can access the full report here.

Open Source Maturity and Stewardship

At the end of the survey, we asked respondents to share information about the overall open source maturity of their organizations. 55.88% noted that they perform security scans to identify vulnerabilities within their open source packages, 41.91% noted that they have established open source compliance or security policies, and 34.56% have experts for the different open source technologies they use.

Another marker for organizational open source maturity is the sponsorship of nonprofit open source projects. The most supported organizations among Banking, Insurance, and Financial Services verticals were the Apache Software Foundation (27.94%), the Open Source Initiative (22.06%), and the Eclipse Foundation (19.85%). It’s also worth noting that 19.85% of respondents didn’t know of any official sponsorship of these projects within their organization. Overall, 89.41% noted that they sponsored at least one open source nonprofit organization.

Banking on Open Source: Finding Success With OSS in the Finance Sector

 

In this on-demand webinar, hear about how banks, Fintech, and financial services providers can meet security and compliance requirements while deploying open source software.

Final Thoughts

In this blog, we looked at segmented data from our 2024 State of Open Source Report specific to the Banking, Insurance, and Financial Services verticals. Considering these industries are heavily regulated, with most needing to meet compliance standards across their IT infrastructure, it was encouraging to see over 85% increasing their usage of open source software.

Not surprisingly, maintaining security policies and compliance was a top challenge for this segment. Given the current pace of open source adoption within this space, we expect this to continue to be a pain point. It’s up to organizations to manage the complexity that comes with juggling so many open source packages, and ultimately ensure that they have the technical expertise on hand to support that software — especially when it’s used in mission-critical IT infrastructure. 

About Perforce
The best run DevOps teams in the world choose Perforce. Perforce products are purpose-built to develop, build and maintain high-stakes applications. Companies can finally manage complexity, achieve speed without compromise, improve security and compliance, and run their DevOps toolchains with full integrity. With a global footprint spanning more than 80 countries and including over 75% of the Fortune 100, Perforce is trusted by the world’s leading brands to deliver solutions to even the toughest challenges. Accelerate technology delivery, with no shortcuts.

About Version 2 Digital

Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across various areas including cyber security, cloud, data protection, end points, infrastructures, system monitoring, storage, networking, business productivity and communication products.

Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, different vertical industries, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

Harbor Registry Overview: Using Harbor for Container Image Management

Learn about Harbor and the benefits of using it for container image management in cloud-native environments like Kubernetes. In this blog, our expert describes key features and ideal use cases, and discusses the pros and cons of two Harbor alternatives. 

 

What Is Harbor?

Harbor is an open source registry for securely storing and managing container images in cloud-native environments.

Originating as an internal project at VMware, Harbor entered the open source scene in 2016. Its focus was clear: storing and securing container images in a cloud-native environment. In its ideal configuration, Harbor is deployed to a Kubernetes cluster, where it gives container images from all sources a single home.

Providing a unified storage space proved invaluable when it came to managing images. As Harbor is capable of pulling from other registries as well as accepting user submissions, teams could route all images through their Harbor deployment, ensuring consistent policies would be applied. Vulnerability scanning, access control, signature verification — all of it could now be configured and controlled in one place.

Owing to its ease of use and substantial benefits, Harbor took off in popularity. By 2018, Harbor had joined the Cloud Native Computing Foundation (CNCF) and reached “Graduated” status by 2020. Since then, Harbor has continued to grow and remains a staple in Kubernetes environments.

Harbor Registry Key Features

Harbor comes with a host of features tuned to address common challenges in containerized environments. Instead of jumping straight into a list of everything Harbor can do (which might be overwhelming), let’s start with some of the core concepts, and build an understanding of Harbor’s feature set piece by piece.

Interface

Once deployed, Harbor exposes a web interface for exploring and interacting with its artifacts and configuration. It also exposes an API, so that common tools, like the Docker client, can push and pull images directly to the registry.

Users

The interface, and much of the registry's functionality, is locked behind permissions granted to users. In the simplest cases, users can be created and managed by Harbor itself. However, this doesn't scale particularly well, so Harbor also integrates with external identity providers via OIDC, Active Directory, and LDAP.

Projects

Artifacts within Harbor are owned by a project. This grouping allows settings and permissions to be tuned for sets of artifacts as opposed to a purely global level. From there, users can be granted a role in a project, such as Guest (read-only), Developer (read-write), or Project Admin (read-write-configure).

Security

Aside from access control, Harbor includes several other critical security features. By utilizing popular image scanners, such as Trivy, images can be automatically scanned for known vulnerabilities. The results of these scans can be leveraged to prevent pulling of artifacts with unaddressed security issues.

On top of scanning, Harbor also includes support for signature verification. After using a tool like Notary or Cosign to sign an artifact, Harbor is capable of verifying each signature and rejecting artifacts which fail the verification process.
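As a sketch of that signing workflow using Cosign (the image reference, registry hostname, and key file names below are illustrative, not taken from the Harbor docs):

```shell
# Generate a Cosign keypair (produces cosign.key / cosign.pub)
cosign generate-key-pair

# Sign an image that has been pushed to the registry
# (registry.example.com/myproject/app:1.0 is a hypothetical reference)
cosign sign --key cosign.key registry.example.com/myproject/app:1.0

# Verify the signature manually; Harbor performs an equivalent check
# when a project is configured to reject unsigned artifacts
cosign verify --key cosign.pub registry.example.com/myproject/app:1.0
```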

Additional Features

With the core functionality out of the way, we can now take a brief look at some of the other features of Harbor.

  • Storage of OCI Artifacts – In addition to container images, Harbor can store OCI artifacts such as Helm charts.
  • High-Availability – As Harbor deploys to Kubernetes, it follows the common pattern of providing a high-availability configuration, ensuring maximum uptime.
  • Registry Replication – While users can manually push and pull from the registry, images may also be automatically replicated to and from external registries. This is highly configurable, allowing for control over how and when artifacts are replicated.
  • Observability – Harbor natively supports a standard suite of observability features, including logging, metrics, and tracing.
  • SBOM – As well as scanning artifacts, Harbor can generate a Software Bill of Materials (SBOM), which acts as a list of all found dependencies within an image.

Harbor Registry Installation

Harbor provides two paths for installation:

  1. The first is to use their own installer, which deploys Harbor locally using Docker. This is a great option to try Harbor out or for small teams which will be leveraging Harbor in a limited fashion.
  2. The second path is to deploy to Kubernetes. This is accomplished via Helm and enables high-availability configurations. The Kubernetes deployment is the recommended approach for most teams.

To get started with either of these paths, we recommend following the official documentation for the most up-to-date instructions.
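As a minimal sketch, a Kubernetes deployment via the official Helm chart might look like the following (the hostname, release name, and namespace are placeholders; consult the chart's values for TLS, storage, and high-availability settings):

```shell
# Add the official Harbor Helm repository
helm repo add harbor https://helm.goharbor.io
helm repo update

# Install Harbor into its own namespace; harbor.example.com is a placeholder
helm install my-harbor harbor/harbor \
  --namespace harbor --create-namespace \
  --set expose.ingress.hosts.core=harbor.example.com \
  --set externalURL=https://harbor.example.com
```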

Need Help With Harbor?

OpenLogic now offers Gold-level, SLA-backed support for Harbor. Talk to an expert today to learn more or request a quote for Harbor technical support.

Talk to an Expert

Using Harbor for Container Image Management

With a high-level understanding of Harbor established, we can dig a bit further into when Harbor is worth considering. Typically, as an organization grows and its usage of containers increases, hosting its own registry becomes a stronger choice. While the operational cost of Harbor is low, any new piece of infrastructure must be maintained. As such, if your organization or team makes light use of containers, it may be better to look at cloud-based providers first.

Let’s take a look at three scenarios in which Harbor could be leveraged.

Private Registry

Let's suppose your team is building and consuming its own container images. While these images shouldn't contain any sensitive information, they may hold proprietary software or similar materials that need to stay protected. This, understandably, makes externally hosted options less desirable.

By deploying Harbor locally as a private registry, images can be kept on-site, greatly reducing the potential for accidental leaks. Furthermore, corporate security policies are enforced on all images, ensuring scanning and signing take place without ever leaving the network.
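In practice, routing an internally built image into such a private registry is a standard tag-and-push (the hostname and project name below are hypothetical):

```shell
# Authenticate against the private Harbor instance
docker login harbor.example.com

# Retag a locally built image into a Harbor project, then push it
docker tag app:1.0 harbor.example.com/internal/app:1.0
docker push harbor.example.com/internal/app:1.0
```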

Proxy Registry

Now let’s consider a case in which a team makes heavy use of public images. This is a fairly common setup and typically not an issue. However, depending on how these images are being consumed, the team may find themselves running into rate limiting and bandwidth issues.

In this case, by using Harbor to mirror an external registry, each image only needs to be pulled by Harbor once, greatly reducing the load on the external service. As an added benefit, Harbor will remain available even when the external registry is not.
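Assuming a Harbor project named dockerhub-proxy has been configured as a proxy cache for Docker Hub (both names are illustrative), consumers simply pull through Harbor instead of the upstream registry:

```shell
# First pull populates Harbor's cache; subsequent pulls are served locally
docker pull harbor.example.com/dockerhub-proxy/library/nginx:latest
```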

Air-Gapped Registry

Finally, let’s consider a critical system which relies on both public and internal container images to function. For security reasons, this environment is air-gapped, preventing access to public registries.

Here, a self-hosted image registry is the only viable option, making Harbor a smart choice. Images can be manually marshalled in and assigned different security policies by grouping based on source. On top of that, Harbor provides a mechanism for manually updating the security vulnerability database in its scanner, enabling up-to-date scans without a connection to the internet.


Harbor Alternatives

Many options exist within the container registry space. As Harbor is a CNCF graduated project, it is typically the recommended choice for organizations looking to host their container images on-site. Instead of direct comparisons, let’s take a look at two alternatives with some significant tradeoffs.

Sonatype Nexus

Nexus is an artifact registry in a much broader sense than Harbor. While it does support acting as a container image registry, its strength lies primarily in the range of artifacts it can hold. This includes artifacts for Docker, Go, Maven, Python, Yum, and more. The advantage here is clear: If container images are a smaller component of your broader technical needs, a general-purpose repository can provide quite a bit of value.

However, these features come with a drawback: Container images are supported, but many of the security features are not. At the time of this writing, Nexus does not support signing or vulnerability scanning on container images.

Artifactory

Similar to Nexus, Artifactory supports a much wider range of artifact types. However, unlike Nexus, it does not sacrifice container image security features. Instead, its drawback is a common one: cost. Artifactory is an offering from JFrog, and while it has a wide range of features, it requires a paid license for full functionality. A side effect of this is that Artifactory tends to pull in other JFrog offerings as well.

When considering Artifactory, it’s important to evaluate the surrounding ecosystem and community. While we recommend open source solutions for their flexibility and community support, options like Artifactory may fit particular use cases better.

Harbor Container Registry FAQs

In this section, we’ll answer some of the most common questions about Harbor.

What Is the Difference Between Docker Hub vs. Harbor?

Docker Hub is a popular cloud-based registry. It provides many of the features available in Harbor but cannot be hosted on-site. Additionally, some functionality is gated behind paid tiers of membership. By comparison, Harbor’s self-hosted nature is ideal for teams needing on-site security and control over their registry.

Is Harbor Free?

Yes. Harbor is both free and open source under the Apache License 2.0.

Can I Use Harbor With Kubernetes?

Yes. Harbor is built from the ground up to support Kubernetes, including high-availability configurations.

Where Can I Get Harbor Support?

The Harbor community is active: you can connect with other Harbor users on X and/or attend biweekly community meetings on Zoom to get updates and submit feedback. There are also mailing lists for both Harbor users and developers to join.

However, there is no guarantee someone in the community will have expertise or knowledge that relates to your particular use case, or that you'll be able to get help quickly. This is why some teams opt for SLA-backed support provided by vendors like OpenLogic. The advantage of this is having an exact timeline for resolution and the ability to talk directly, 24/7, with an Enterprise Architect.

Final Thoughts

As covered in this blog, Harbor is an excellent choice for container image management, particularly in instances where you want to host your registry on-site, mirror an external registry, or have an air-gapped environment. As a project, Harbor is well-maintained and benefits from a robust community of contributors. While it is free to deploy, all infrastructure software requires some degree of maintenance, so it’s always a good idea to consider the “soft cost” in terms of your team’s time to decide whether it makes sense to get support from a third party like OpenLogic. 

 


NGINX vs. HAProxy: Comparing Features and Use Cases

NGINX and HAProxy share much in common at a high level: Both are open source technologies used to manage web traffic. However, the more specific the use case and the higher the volume of data, the more significant their minor differences become. This is when weighing the benefits and drawbacks of NGINX vs. HAProxy can be worthwhile.

In this blog, our expert highlights the key differences between NGINX and HAProxy and explains how to determine which is more suitable for your website or application.

Note: While both NGINX and HAProxy have commercial versions (NGINX Plus and HAProxy Enterprise), this blog is focused on the FOSS versions. 

NGINX vs. HAProxy: Overview

The main difference between NGINX and HAProxy is that, while both are effective as load balancers and reverse proxies, NGINX is a web server with a broader range of capabilities, making it more versatile. HAProxy is ideal for complex load balancing scenarios where high throughput and low latency are needed to manage a high volume of web traffic.

The key technical differences between NGINX and HAProxy come into play in two areas: the efficiency of the worker processes and load balancing health checks of upstream endpoints. The latter is particularly limited in NGINX (less so in NGINX Plus, which has a number of premium features left out of the free OSS version).

 

What Is NGINX?

NGINX is an HTTP web server, reverse proxy for TCP/UDP and web traffic, and mail proxy server. It’s characterized by its lightweight footprint, and efficient and modular design.

What Is HAProxy?

HAProxy is a layer 4 TCP proxy and an HTTP gateway/reverse proxy that can handle HTTP/1.1, HTTP/2, and HTTP/3 requests/responses on either end, including a combination of protocols. Due to its queue design and features, HAProxy can terminate TLS and normalize HTTP and TCP traffic.

While there are many use cases where HAProxy shines, it is not capable of per-packet load balancing or serving static web content, nor is it a good fit as a dedicated, large-scale caching proxy.

NGINX vs. HAProxy: Key Similarities and Differences

When it comes to reverse proxying and load balancing, there are more similarities than differences between NGINX and HAProxy. However, we’ll explore a few areas where the two technologies differ and when/why it matters.

Architecture

NGINX and HAProxy both utilize event-driven architecture, though HAProxy has a multi-threaded, single-process design while NGINX uses dedicated worker processes.

Configuration

NGINX uses a hierarchical block structure for configuration. The main NGINX configuration file is typically nginx.conf with additional configuration loaded in a separate file (for example, the TLS configuration). The directives in the configuration blocks are structured in key-value pairs and encapsulated in curly brace blocks.

The main contexts are http, server, and location. Contexts inherit from their parent context, and directives have priorities. When building more complex ‘location’ and ‘match’ logic, directive order and priority are often overlooked.

Here are some best practices for location blocks in NGINX:

  • Use exact matches for static pages that you know won’t change.
  • Utilize regular expressions for dynamic URI matching but be aware of the order of precedence.
  • Prefix matches (^~) can be used for performance benefits if you do not need regular expression matches.
  • Root-level (/) location should be your fallback option.
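Put together, those best practices might look like the following sketch of a server block (the paths and the app_backend upstream are illustrative and assumed to be defined elsewhere):

```nginx
server {
    listen 80;

    # Exact match: fastest lookup, good for a static page that won't change
    location = /health {
        return 200 "OK";
    }

    # ^~ prefix match: wins over regex locations for this subtree
    location ^~ /static/ {
        root /var/www/app;
    }

    # Regex match: evaluated in order of appearance in the file
    location ~* \.(png|jpg|gif)$ {
        expires 30d;
    }

    # Root-level fallback for everything else
    location / {
        proxy_pass http://app_backend;
    }
}
```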

The most common issues when configuring location blocks in NGINX include:

  • Regular expressions evaluated out of order can lead to unexpected results.
  • Overusing regular expressions can degrade performance.
  • Prefix directives without the ^~ modifier may be overridden by regular expressions.

Get more NGINX setup and configuration tips >>

Now let’s compare to HAProxy, which uses a flat section-based configuration. The configuration file for HAProxy is commonly haproxy.cfg. The main sections are global, defaults, frontend, backend, and listen.
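A minimal haproxy.cfg illustrating those sections might look like this (addresses, timeouts, and server names are placeholder values):

```haproxy
global
    maxconn 4096

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind :80
    default_backend app_servers

backend app_servers
    balance roundrobin
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```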

Some common issues to be aware of regarding HAProxy configuration:

  • Not using graceful reload to avoid connection interruptions.
  • Lack of observability implementation for the golden signals of the HAProxy Frontend and Backend systems (Latency, Service Saturation, Errors, and Traffic Volume).

Key difference: HAProxy configuration tends to be more specific to load balancing and proxying, while NGINX configuration can cover a broad range of web server functionalities that HAProxy lacks.

Performance

When evaluating the performance of NGINX vs. HAProxy, the differences are fairly nuanced and comparable only on a use-case-by-use-case basis. Generally speaking, both are considered high-performance in terms of delivering content to clients and users.

There are some features of HAProxy that can be useful in scenarios where NGINX has no equivalent. For example, HAProxy's multi-threaded, single-process design allows its threads to share resources. This is advantageous when many different clients access similar endpoints that share resources or web services.

Scalability

Again, both NGINX and HAProxy are highly scalable. One drawback of NGINX is that each request can only be served by a single worker. This is not optimal use of CPU and network resources. Because of this request-process pinning effect, requests that do CPU-heavy or blocking IO tasks can slow down other requests.

Security

HAProxy offers fine-grained Access Control List (ACL) configurations via a flexible ACL language. NGINX, on the other hand, uses IF statements for routing.
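As a brief sketch of HAProxy's ACL language (the ACL names, backends, and addresses are illustrative):

```haproxy
frontend www
    bind :80
    # Named conditions, evaluated per request
    acl is_api      path_beg /api
    acl is_internal src 10.0.0.0/8

    # Combine ACLs to deny external access to the API...
    http-request deny if is_api !is_internal
    # ...and route matching traffic to a dedicated backend
    use_backend api_servers if is_api
    default_backend web_servers
```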

For observability, NGINX relies on logging, and an OpenTelemetry module can be added during build time, whereas HAProxy offers a native API and statistics on demand.

Learn more about web server security >>

Support

Both NGINX and HAProxy have very large user bases and communities, and both are actively developed, gaining new features (e.g. QUIC, HTTP/3) and regular security patches. Additionally, both have active GitHub projects with discussion forums where users can submit questions and share feedback on features.

For teams that need immediate, expert-level remediation beyond what OSS communities provide, OpenLogic offers SLA-backed technical support up to 24/7/365 for both NGINX and HAProxy.

Use Cases: NGINX vs. HAProxy

On a qualitative basis, NGINX is the go-to option for fast and simple builds. This is also why NGINX is so popular as an ingress controller in Kubernetes and edge deployments.

While HAProxy covers many of the same proxying and load balancing use cases as NGINX, it is more feature-rich as a reverse proxy. For example, you could use HAProxy as a layer 4 database frontend for a MySQL cluster/replication architecture, for multiple monolithic web applications or services, as a DNS cache, or for initial Denial of Service protection via queueing. SREs will also appreciate HAProxy's detailed real-time metrics and monitoring capabilities.
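For instance, a layer 4 MySQL frontend could be sketched as a listen section like the following (the addresses and the health-check user are hypothetical; mysql-check requires a dedicated MySQL user):

```haproxy
listen mysql
    bind :3306
    mode tcp
    # Lightweight protocol-level health check against each replica
    option mysql-check user haproxy_check
    balance leastconn
    server db1 10.0.0.21:3306 check
    server db2 10.0.0.22:3306 check backup
```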

Using NGINX and HAProxy Together

In large, data-intensive distributed architectures, there are some use cases where the upsides of combining the strengths of NGINX and HAProxy are appealing. However, there are also some drawbacks worth considering.

Use cases

  • High-traffic websites and microservices requiring both content delivery and load balancing
  • Applications with mixed static and dynamic content, especially beyond web type content

Upsides

  • Complementary strengths: NGINX excels at content caching and serving static content, while HAProxy is optimized for load balancing.
  • Enhanced security: NGINX can act as a reverse proxy, adding an extra layer of security before requests reach HAProxy.
  • Flexibility: This setup allows for more complex architectures and fine-tuned control over traffic flow.

Drawbacks

  • Increased complexity: Managing two separate systems can be more challenging.
  • Potential bottlenecks: If not configured properly, the additional layer can introduce latency.
  • Higher resource usage: Running both services requires more server resources.
  • Configuration challenges: Ensuring both systems work harmoniously together can be tricky.

Final Thoughts

Hopefully it is now clear that comparing NGINX vs. HAProxy is a worthwhile exercise. There are use cases that favor each, as well as situations when deploying them together can be an effective strategy. Most agree that the learning curve for NGINX is less steep, with easier setup and configuration, so for simpler applications delivering static content where speed is prioritized over complexity, NGINX works. However, for applications that require real-time responsiveness and high availability, and teams that want more advanced customization for traffic routing and better observability, HAProxy is probably a better fit. 


Perforce Acquires Delphix

We are delighted to announce our acquisition of Delphix, a best-in-class leader in Enterprise Data Management solutions. I want to share with you why I am personally excited about this major milestone in our company’s continued DevOps evolution and the benefits this acquisition provides to our customers. Data is at the heart of how enterprises operate today and essential for successful software development, but accessing and managing that data is extremely challenging. Many teams do not have rapid access to solid, high-quality test data. Imagine assembling something the size of a relational database, collecting and piecing together all of its data to make it testable; this is both labor-intensive and very difficult to achieve. All that changes with Delphix. This truly outstanding platform provides on-demand, easy access to data very quickly and in a safe way. Delphix protects and masks customer data, giving teams the right data, securely and quickly, so they can focus on creating great software.

More Stand-Out Advantages

Another unique ability of the Delphix platform is how it virtualizes data and ultimately reduces storage footprints, which is good news for sustainability and operational costs. Furthermore, it works across a wide variety of our customers’ environments, from mainframes to Oracle databases, ERP applications, multi-cloud, and containerized environments. The acquisition of Delphix is a reflection of what our customers tell us they need and how we respond. I cannot think of another solution better aligned with what we are trying to achieve: helping our customers innovate at speed and automate their developer environments. We aim to solve DevOps’ biggest challenges without stifling innovation, and Delphix is an excellent example of how we can do that. Moreover, our two organizations are extremely complementary — from our technology, teams, and shared dedication to delivering exceptional customer support. Like us, Delphix has a global presence, and we serve many of the same esteemed customers, including some of the world’s largest and most successful organizations.

Immediate Customer Benefits

Our customers can immediately reap the benefits of this acquisition. They gain access to enhanced capabilities within our already robust testing portfolio, complemented by Delphix’s expertise and the addition of skilled teams worldwide. This is just the beginning. We are committed to exploring how Delphix can further augment our comprehensive portfolio, aiming to become the preferred partner for all enterprise DevOps needs. Delphix represents a critical step forward, among many more to come. Stay tuned for what comes next. If you want to learn more about Delphix, please head over to the Delphix website: https://www.delphix.com/.


How Does UL 4600 Keep Autonomous Trucking Systems Safe?

The third edition of UL 4600 was released in 2023 to add specific requirements for the use case of autonomous trucking and to address changing industry trends.

Here, we explain what’s happening in the autonomous trucking industry, why UL 4600 is important for autonomous trucking specifically, and how static analysis helps to overcome challenges for this evolving technology.


While some just dream of fleets of autonomous trucks efficiently delivering goods across the country, others are already at work to ensure they can do so safely and resiliently. With the update to ANSI/UL 4600, the Standards for Safety for the Evaluation of Autonomous Products, Edition 3, in play, embedded software teams now have better safety guidance just as self-driving technology is ramping up to make shipping faster, more cost-effective, and more efficient in the face of driver shortages and rising transportation costs.

Autonomous trucking software is on the cusp of widespread adoption. To prepare for the inevitable transition from closed-course testing to widespread deployment, OEMs need to understand the current state of trucking automation and how to effectively implement software safety practices.

What Is Autonomous Trucking?

Autonomous or driverless trucks operate with minimal to no human input. Instead, autonomous trucks or autonomous tractor-trailers rely on sensors — usually combinations of cameras, LiDAR, and radar — to feed environmental data into algorithms and actuators that control the vehicle.

The Society of Automotive Engineers (SAE) has defined levels of driving automation that are adopted across the industry, ranging from Level 0 (completely manual) to Level 5 (completely autonomous). While the lower levels, built on advanced driver assistance systems (ADAS), are already in play for autonomous trucks, software development teams will need to ensure functional safety at the higher autonomy levels, Level 4 (High Automation) and Level 5 (Full Automation), for the freight industry to fully benefit from autonomous trucking.
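As a rough illustration (the enum and the helper function below are invented for this sketch, not taken from the SAE J3016 standard), the levels can be modeled as a simple lookup, with the key operational distinction being whether a human driver is still required:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE driving automation levels (names paraphrased)."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def requires_human_driver(level: SAELevel) -> bool:
    # At Levels 0-2 a human must supervise at all times, and at Level 3
    # a human must be ready to take over on request; only Levels 4 and 5
    # permit driverless operation.
    return level < SAELevel.HIGH_AUTOMATION
```

Under this sketch, only Level 4 and Level 5 trucks could operate with no driver in the cab, which is why those levels are the target for freight.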

Autonomous Trucking Development Today

Trucking is the dominant mode of inland freight transportation in many countries, with several trucking companies competing to be the first to operationalize autonomous driving. Driverless technologies offer significant potential value for fleet operators: They can reduce operating costs, overcome driver shortages, and improve efficiency.

For example, driverless semi trucks (or lorries) can employ truck platooning more effectively than human drivers, where vehicles follow each other at the same speed to improve fuel economy and reduce their impact on traffic.

While there have been a few setbacks in recent years (for example, TuSimple, Navistar, and UPS shut down their “Driver-out” self-driving truck program in 2023, and Waymo, Embark, and Locomation are no longer actively developing autonomous trucks), there are many more new entrants working toward wide-scale deployment:

Traditional OEMs are nearing full operationalization, with trucks already being tested in North America. In partnership with Aurora, Volvo revealed its first production-ready autonomous truck in May, and in October 2024 Daimler Truck reported that its Freightliner Cascadia semi-trucks were meeting closed-course acceptance tests.

These announcements indicate that the automotive industry is driving autonomous trucking plans forward — so software development teams in automotive manufacturing will need to get familiar with the challenges, solutions, best practices, and compliance with UL 4600 to ensure they are prepared for these exciting advancements.

Trucking Automation Software Challenges

There are five main challenges for teams developing autonomous trucking software:

It’s hard to reuse autonomous software built for cars.

The development and testing of self-driving cars focus on short, low-speed routes with stop-and-go traffic. In contrast, autonomous trucks will operate on long-haul highway routes at higher speeds and will encounter less traffic, more variable terrain, and transitions between urban and rural roads.

More critically, a Class 8 driverless semi-truck has a gross vehicle weight eight times that of the average passenger vehicle, before loading its cargo. This means autonomous driving software has to account for a larger turning radius, longer stopping distances, and the presence of a trailer that can weigh as much as 14,000 pounds unloaded.

Software has to handle many use cases.

Trucking automation software must accommodate different vehicle classes, cargo conditions, and route types. Some fleets deliver consumer packaged goods within cities, while most autonomous trucks are best suited to long-haul routes; still others transport hazardous, refrigerated, or liquid materials across international borders. The software needs to account for all of these cases.

There is also the scalability factor: To build and test multiple branches of software to handle various scenarios at scale, developers should design systems to accommodate a wide range of input and control conditions before deploying them into real-world trucking environments.
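To make the scale of that problem concrete, here is a minimal sketch (the category names are invented; real programs would draw them from their operational design domain definition) of enumerating the scenario space a test plan would need to cover:

```python
from itertools import product

# Hypothetical scenario dimensions for an autonomous trucking test plan.
vehicle_classes = ["class_6", "class_7", "class_8"]
cargo_types = ["dry_goods", "refrigerated", "hazardous", "liquid"]
route_types = ["urban_delivery", "long_haul_highway", "cross_border"]

# Every combination is a distinct configuration to build and validate.
scenarios = [
    {"vehicle": v, "cargo": c, "route": r}
    for v, c, r in product(vehicle_classes, cargo_types, route_types)
]

print(len(scenarios))  # 3 * 4 * 3 = 36 combinations
```

Even these three small dimensions multiply out to 36 configurations before weather, traffic, or sensor conditions are added, which is why systems have to be designed for a wide input range up front rather than branched per scenario after the fact.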

Verification and validation require long highway driving.

Once acceptance tests are performed on closed-loop circuits, autonomous trucks must conduct road tests on highways, ideally for hundreds of miles across the country. That way, developers can ensure their autonomous trucks can cover the distances, runtime, and road conditions necessary for the long haul. The U.S. Department of Energy reports that the average semi-truck travels over 62,000 miles annually.

Security must be a top priority.

Similar to their automotive counterparts, driverless trucks must address the following security concerns:

  • Protect connected infrastructure and endpoints such that there is a common baseline of trust between nodes.
  • Track and adapt to vulnerabilities that will continue to grow as malicious actors realize new opportunities to attack and destabilize critical trucking networks.
  • Secure the manufacturing supply chain with vendors who may not have had to deal with software security before.
  • Comply with cybersecurity regulations and best practices, such as ISO/SAE 21434.

Safety compliance must be accounted for.

Safety will be the key differentiator between manufacturers that make it to market and those stuck in verification and validation activities. Fully unmanned trucks operating at highway speeds present real concerns, including difficulty in handling unexpected situations and decision-making capabilities, in addition to more typical concerns about undefined behaviors and software malfunctions.

The challenges of developing safe trucking automation systems lie in their components. Everything from sensors to decision-making algorithms to vehicle motion control must be scrutinized. Given this complexity, manufacturers will find themselves relying on automated tools, like Perforce Static Analysis, to help with UL 4600 compliance.

Understanding UL 4600: Safety Principles and Processes for Autonomous Trucking

UL 4600 is the first safety standard designed specifically for autonomous and connected vehicles. Unlike a traditional UL safety standard, UL 4600 takes a “safety case” approach grounded in real-world applications in a specific environment, and UL 4600 Edition 3 adds trucking-specific examples alongside its coverage of autonomous trucks. The standard helps developers build a safety case for various aspects of system development and maintenance:

“It offers a framework that leads designers of autonomous systems through the required thought process to ensure all possible complications have been considered. What are the safety questions that need to be considered in design? How do you think beyond design and for the lifecycle of the vehicle? Can quality and consistency be assured across manufacturers?”

Dr. David Steel, Executive Director of UL Standards & Engagement, ULSE, Inc.

The UL 4600 Edition 3 standard requires developers to follow a three-step approach for assessing and validating driverless truck safety:

  1. Make a measurable safety claim, where developers state how the autonomous truck should operate.
  2. Make an argument that proves the claim is true by describing the perception technologies and the systems that are triggered by them.
  3. Provide evidence that the system will perform as expected by providing simulation results, road test outcomes, and other proof that the autonomous truck will perform as stated.

The end result is a safety case arguing that an exceptionally robust combination of analysis, simulation, closed course testing, and public road testing have been performed — with evidence given — to ensure an appropriate level of system safety.
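As an informal illustration of that claim/argument/evidence pattern (the structure and field names here are invented for this sketch, not taken from UL 4600), a single safety claim can be modeled as a small data structure:

```python
from dataclasses import dataclass, field

@dataclass
class SafetyClaim:
    """One claim in a safety case: a measurable statement, the argument
    for why it holds, and the evidence backing that argument."""
    statement: str
    argument: str
    evidence: list = field(default_factory=list)  # simulations, road tests, etc.

    def is_supported(self) -> bool:
        # A claim without both an argument and evidence is not yet
        # part of a valid safety case.
        return bool(self.argument) and len(self.evidence) > 0

claim = SafetyClaim(
    statement="The truck comes to a controlled stop if LiDAR input is lost",
    argument="Redundant radar and camera pipelines trigger a minimal-risk maneuver",
)
claim.evidence.append("simulation: 10,000 sensor-dropout runs, 0 failures")
print(claim.is_supported())  # True once evidence is attached
```

The full safety case is then, in effect, a collection of such claims, each of which must be supported before the system can be argued safe.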

How Static Analysis Helps Achieve Autonomous Truck Safety

UL 4600 includes a specific requirement for coding standard compliance, and its expectations for the development process are similar to those of IEC 61508 and ISO 26262, so developers should apply static analysis and produce the results of source code analysis as evidence. Static code analyzers, like Perforce Helix QAC and Klocwork, support these goals by ensuring comprehensive code coverage and generating the supporting evidence that compliance requires.

Amid the pressure of getting driverless trucks to market, these tools enable developers to focus on feature development rather than compliance activities.
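As a highly simplified illustration of what a coding-standard check does (real analyzers like Helix QAC and Klocwork parse and analyze the code semantically; this toy example merely pattern-matches, and the rule shown is a generic embedded-coding-standard style prohibition, not a specific UL 4600 requirement):

```python
import re

# Toy rule: flag calls to memory-unsafe C functions that embedded
# coding standards commonly prohibit.
BANNED_CALLS = re.compile(r"\b(strcpy|sprintf|gets)\s*\(")

def check_source(source: str) -> list:
    """Return (line_number, line_text) pairs violating the toy rule."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if BANNED_CALLS.search(line):
            violations.append((lineno, line.strip()))
    return violations

code = """\
#include <string.h>
void copy_id(char *dst, const char *src) {
    strcpy(dst, src);  /* unbounded copy */
}
"""
print(check_source(code))  # flags the strcpy call on line 3
```

A production analyzer applies hundreds of such rules with real semantic analysis, and its reports become part of the evidence trail a UL 4600 safety case asks for.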

