Surely nobody would write a web service in C++

A while back, one of my colleagues was hanging out in an online developer forum and some people there were starting up a good old-fashioned language war (the type of exchange where one person familiar with language A will announce its superiority over language B, with which the other person isn’t familiar – not really a productive use of time when you think about it, but a popular pastime nonetheless).

During this debate, one developer confidently proclaimed that ‘surely nobody would ever write a web service in C++,’ to which my colleague responded, ‘well, that’s exactly what we did here at Keepit.’ This prompted some questions, and this piece is an attempt to explain why we made that choice and how it has been working out for us, given that this code base started life about 10 years ago.

To put things into perspective, it will be necessary to start with some minimal background information about this service we set out to build.

What is Keepit?

Keepit is a backup service in the cloud. We will store backup copies of your (cloud) data so that if—or when—your primary data is compromised for one reason or another (but most likely because of ransomware or account takeover via phishing), then you will still have complete daily copies of your data going back as many years as you want.

Years of data. This should make you think.

Several years ago, Microsoft claimed to have 350 million seats on their 365 platform, which is one of the cloud platforms that we protect. Let’s say we get just 10% of that market (we should get much more because we are by far the best solution out there, but let’s be conservative for the sake of argument): that means we need to store all the data for 35 million people (and that’s just one of these platforms – we protect several other platforms as well).

It doesn’t end there: being a backup service, we copy all your changes and we hold your old data. That means when you clean up your primary storage and delete old documents, we keep the copy. Many customers want a year or three of retention, but we have customers who pay for 100 years of retention.

One hundred years. That means our great grandchildren will be fixing the bugs we put in our code today. This should make you think too.

Knowing the very high-level goals of our service, let’s talk about requirements for such a thing.

Core system: storage

We knew from the get-go that we would be implementing a storage solution which would need to store a very large amount of data (as everything is moving to the cloud, let’s say a few percent of all the world’s data) for a very long period of time (say a hundred years).

Now, everyone in the storage business will talk about SSDs, NVMe, and other high-performance data storage technologies. None of this is relevant for large-scale, affordable storage, however. Spinning disks are the name of the game and probably will be for at least another decade.

SSDs are getting there on density, sure, but they are still nowhere near viable from a cost perspective. This means we will be writing all data to rotating magnetic media. When you write data to magnetic media, over the years the media will demagnetize. That means if we store a backup on a hard drive today, we probably can’t read it back just ten years from now.

That means we need to regularly move all this data from system to system to keep the data ‘fresh.’ Speaking of performance, large-capacity hard drives today rotate at 7,200 rpm, exactly the same speed as back in 1992. Access time is dominated by rotational latency, which means this is one aspect of computers that has been almost at a standstill for 30 years while everything else has become faster in every way. We knew we had to deal with this.

I should probably note here that yes, we are talking about running our software on actual physical computers – no public cloud for us. If you want to go big, don’t do what the big players say you should do, do what the big players do. If public cloud were so great, Microsoft wouldn’t have built their own to run 365 – they would have run it on AWS, which was well established long before Microsoft thought about building 365. This doesn’t mean you can’t prototype on public cloud, of course.

To solve our core storage need, we designed a filesystem—basically, an object storage system optimized for storing very large-scale backup data. Clearly, we expect the implementation of this storage system to have a significant lifespan.

We may want to create a better implementation one day, when hardware has evolved far beyond what we can imagine today, but it is worth pointing out that the storage systems we use today are very similar in architecture to those of 30 years ago – and, I would assume, to those of 30 years from now. Clearly, the core code that manages all of your data is not something you want to rewrite every few weeks.

So, to implement this system, we went out looking for which new experimental languages had been invented in the six months leading up to implementation start. No wait, we didn’t.

What we need from a language

There are really two types of languages:

1: Systems programming languages – those that have practically no runtime, where you can look at the code and have a high degree of confidence in understanding exactly what that leads to on your processor, the type of language you would write an operating system kernel in. This would be languages like C, C++, and who knows – maybe Rust or something else.

2: The higher-level languages, which often have significant runtimes. The good ones offer benefits that you cannot get in a language without a significant runtime. This would be a language like Common Lisp, though people more commonly talk about C# and Java – I will argue they only do so because nobody taught them Lisp.

And then you have the other languages that fit various niche use cases: Python, Haskell, JavaScript, and so forth. I don’t mean to talk them down, but they are not reasonable languages for software development of the type we are talking about here. And since what we’re talking about here isn’t actually so special, you could take my argument to mean that they are just not very reasonable languages for software development outside of niche uses – and that would be a fair interpretation of my opinion.

So, to be a little more concrete, what it is that we really need from a language is:

1: It must support the efficient implementation of algorithms and data structures; meaning we must have tight control over memory when we need it, our language must support the actual hardware data types like 64-bit integers on modern processors, etc. So this rules out Python (not compiled), Ruby (not compiled), and JavaScript (JIT-compiled, but without native integer types or real arrays).

2: When we write code today, the tool chain in 20 years’ time must still support our code with little or no changes. Because we simply can’t rewrite our code every few years. We will get nowhere if that’s what we do. That’s why large, important software systems today are still often written in C – because they started out life in the 80s or 90s and they are still the most significant operating system kernels or database management systems that exist to this day.

Steady evolution is the recipe, not rewrite from scratch every three years. This basically rules out any language that hasn’t been standardized and widely used for at least 10 years before we start the project. Meaning, since we started in 2012, that rules out any language that came out after 2002, so Go, Rust, and many other languages would have been out of the picture. C and C++ would work though.

3: We run on Linux. If you do anything significant with computers on a network, you probably run on Linux, too. We don’t want a language that is ‘ported’ to Linux as a curiosity – like C#. We need a language that is native on Linux with a significant and mature toolchain that is certain to receive significant investment for decades to come. Again, that’s really C and C++.

4: You need to design for failure. Everything from writing to a disk, to allocating the smallest piece of memory, can and will eventually fail. Relying on the developer to check error codes or return values at every single call to a nontrivial function (and too many trivial functions too) is rough. Yes, it can be done and there are impressive examples of this.

I am humbled by software such as the Postgres database or the Linux kernel: very reliable software written in C, which requires exactly this kind of tedious checking. C++, in my experience, with RAII and exceptions, offers a much safer alternative. It is not free, of course – it avoids one set of problems and introduces another. In my experience, however, it is less difficult to write reliable software using RAII and exceptions than to rely on developers never missing a single potential error return and always performing correct recovery and cleanup. For this reason, I prefer C++ over C, and over both Rust and Go.
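To make the contrast concrete, here is a minimal sketch – my own illustration, not Keepit code – of what RAII buys you over manual error-code checking:

```cpp
#include <cstdio>
#include <stdexcept>
#include <string>

// A minimal RAII wrapper around a C FILE*. The destructor closes the
// handle on every exit path, including when an exception unwinds the stack.
class File {
public:
    File(const std::string& path, const char* mode)
        : f_(std::fopen(path.c_str(), mode)) {
        if (!f_) throw std::runtime_error("cannot open " + path);
    }
    ~File() { if (f_) std::fclose(f_); }
    File(const File&) = delete;             // no accidental double-close
    File& operator=(const File&) = delete;

    void write(const std::string& s) {
        if (std::fwrite(s.data(), 1, s.size(), f_) != s.size())
            throw std::runtime_error("short write");
    }
private:
    std::FILE* f_;
};

// With RAII there is no cleanup code to forget: if write() throws,
// ~File() still runs and the handle is released.
bool save(const std::string& path, const std::string& data) {
    try {
        File f(path, "w");
        f.write(data);
        return true;
    } catch (const std::exception&) {
        return false;
    }
}
```

The C equivalent needs an explicit `fclose` on every error path; forget one, and you leak a handle. Here the compiler inserts the cleanup for you.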

5: Obviously the language must offer sufficiently powerful functionality to make the implementation of a larger application bearable and maybe even enjoyable. In reality, however, if your language has functions, you can accomplish a lot; Fortran got functions in 1958 and since then most languages have had them.

Yes, generic programming is nice in C++. A truly programmable language like Common Lisp would be preferable, of course. Any other modern programming language will surely have some feature that was added because it seemed nice and helps justify the language’s existence.

But in reality, the hard part is getting your data structures right. Getting your algorithms right. Knowing what you’re trying to build and then building exactly that, nothing more and nothing less.

If we are honest, most languages would work. However, C++ is a nice compromise: it has some generic programming, the STL is incredibly useful, and it offers basic OO concepts, RAII, and structured error handling.

If we look at these criteria, there really aren’t many candidate languages to choose from, even if we compromise a bit here and there. So the question isn’t ‘why’ would we write a web service in C++ – the question is ‘why wouldn’t we.’ Realistically, what else would you use, given the scope of what we’re solving here?

Versatility

Performance matters. Don’t let anyone tell you otherwise. Anyone who says that ‘memory is cheap’ and uses that as an excuse should not be building your large-scale storage systems (or application servers or anything else that does interesting work on large amounts of data).

Donald Knuth said, ‘Premature optimization is the root of all evil’ and I absolutely believe that. However, ‘no optimization and elastic scaling is the root of all public cloud revenue’ is probably also true. Don’t go to extremes – don’t put yourself in a situation where you cannot, at the appropriate time, optimize your solution to be frugal with its resource use. When your solution is ‘elastically scaling’ for you in some public cloud on a credit card subscription, it is very hard to go back and fix your unit economics. Chances are you never will.

The typical configuration for a storage server at Keepit is 168 × 18 TB hard drives attached to a single-socket 32-core 3.4 GHz 64-bit processor with 1 TiB of RAM. It’s really important to note that we use only one TiB of RAM for three PiB of raw disk: a 3000:1 ratio. It is not uncommon to see general-purpose storage systems recommend a 30:1 ratio of disk to RAM (which would require us to run with 100 TiB of RAM, at which point memory most certainly isn’t cheap anymore). Through the magic of our storage software, this gives us about 2 PiB of customer-usable storage in only 11U of rack space. This means we can provide a total of 8 PiB of usable storage in a single 44U rack of systems, consuming less than 10 kW of power. This matters.

If you run a business, you want to be able to make a profit. Your customers will want you to make a profit, especially if they bet on you having their data 100 years from now. If you want to grow your business with investments, your investors will think this matters. At Keepit, we have amazing unit economics – we got the largest series A investment round for an IT company in the history of Denmark – and part of the reason for that was our unit economics. Basically, our storage technology, and not least the implementation of it, enabled this.

The choice of C++ has allowed us to reliably implement a CPU- and memory-efficient storage system that uses the available hardware resources to their fullest extent. This ranges from careful layout of data structures in memory to an efficient HTTP stack that exposes the functionality and moves more than a GiB of data per second per server over a friendly RESTful HTTP API. C++ enables and supports every layer of this software, and that is quite a feat.

Let me briefly digress with another note on versatility. I have this personal hobby project where I am developing a lab power supply for my basement lab (because every basement needs a lab). In order to adjust current and voltage limits, I want to use rotary encoders rather than potentiometers.

A rotary encoder is basically an axle that activates two small switches in a specific sequence, and by looking at the sequence you can detect whether the user is turning the axle in one direction or the other. The encoder signal gets fed to a 1 MHz 8-bit processor with 1 KiB of RAM and 8 KiB of flash for my code.

To implement the code that detects the turning of these encoders, it makes sense to use a textbook, object-oriented approach. Create a class for an encoder. Define a couple of methods for reading the switches and for reading out the final turn data. Declare a bit of local state. Beautifully encapsulated in pure OO style. The main logic can then instantiate the two encoders and call the methods on these objects. I am implementing the software for this project in C++ as well – try to think about that for a moment: the same language that allows us to efficiently and fully utilize a 32-core 3.4 GHz 64-bit processor with 1 TiB of RAM and 3 PiB of raw disk works ‘just as well’ on a 1-core 1 MHz 8-bit processor with 1 KiB of RAM and 8 KiB of flash storage – and the code looks basically the same.
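For illustration, a decoder class in that spirit might look something like this. The names and the table-driven approach are my own sketch of the standard quadrature technique, not the actual project code:

```cpp
#include <cstdint>

// Sketch of an encapsulated quadrature-encoder decoder. The two switches
// produce a 2-bit Gray code; comparing the previous and current state
// tells us the direction of rotation.
class RotaryEncoder {
public:
    // Feed the current state of the two switches; returns -1, 0 or +1
    // for one step in either direction (0 on bounce or invalid skips).
    int sample(bool a, bool b) {
        uint8_t curr = static_cast<uint8_t>((a ? 2 : 0) | (b ? 1 : 0));
        int step = kTable[(prev_ << 2) | curr];
        prev_ = curr;
        position_ += step;
        return step;
    }
    int position() const { return position_; }

private:
    // Indexed by (previous_state << 2) | current_state; valid Gray-code
    // transitions map to +1/-1, everything else (no move, skips) to 0.
    static constexpr int8_t kTable[16] = {
         0, +1, -1,  0,
        -1,  0,  0, +1,
        +1,  0,  0, -1,
         0, -1, +1,  0 };
    uint8_t prev_ = 0;
    int position_ = 0;
};
```

The same class compiles unchanged for an 8-bit AVR or a 64-bit server – which is exactly the point being made above.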

There are not many languages that can stretch this wide without showing the slightest sign of strain. This is truly something to behold.

The rest of the stack

The storage service exposes a simple RESTful API over HTTP using an HTTP stack we implemented from scratch in C++. Instantiating a web server in C++ is a single line of code – processing requests is as trivial as one could wish for.

I’ve heard plenty of arguments that doing HTTP or XML or other ‘web’ technology work would be simpler in Java or C# or other newer languages, but really, if you write your code well, why would this be difficult? Why would you spend more than a line of code to instantiate a web server? Why would parsing an XML document be difficult?
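To illustrate the ergonomics – and this is a hypothetical sketch, not our actual HTTP stack, which of course involves sockets, threads, and TLS underneath – the request-dispatch side of a web service can be as small as:

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>

// Hypothetical sketch: a tiny request router where registering and
// dispatching a handler is a one-liner each. Illustrative names only.
struct Request  { std::string method, path, body; };
struct Response { int status = 200; std::string body; };

class WebService {
public:
    using Handler = std::function<Response(const Request&)>;

    void handle(const std::string& method, const std::string& path, Handler h) {
        routes_[method + " " + path] = std::move(h);
    }
    Response dispatch(const Request& req) const {
        auto it = routes_.find(req.method + " " + req.path);
        if (it == routes_.end()) return {404, "not found"};
        return it->second(req);
    }
private:
    std::map<std::string, Handler> routes_;
};
```

Registering a handler then really is one line: `svc.handle("GET", "/status", [](const Request&) { return Response{200, "ok"}; });`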

For XML, we implemented a validating parser using C++ metaprogramming; I have to be honest and say this was not fun all the way through and I couldn’t sit down and write another today without reading up on this significantly first. C++ metaprogramming is nothing like a proper macro system – but it can absolutely solve a lot of problems, including giving us an RNC-like schema syntax for declaring a validating XML parser and generating efficient code for exactly that parser.

This also means when we parse an XML document and we declare that one of the elements is an integer, then either it parses an integer successfully or it throws. If we declare a string, we get the string properly decoded so that we always work on the native data – we cannot ever forget to validate a value and we cannot ever forget to escape or un-escape data. By creating a proper XML parser using the language well, we have not only made our life simpler, we have also made it safer.
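As an illustration of the ‘validate or throw’ principle – not our actual parser, which generates this from a schema declaration – a typed value accessor might look like this:

```cpp
#include <charconv>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <system_error>

// Illustrative sketch: once an element is declared as an integer, calling
// code can only ever see a validated int64_t, never a raw string.
template <typename T> T parse_value(const std::string& text);

template <> int64_t parse_value<int64_t>(const std::string& text) {
    int64_t value = 0;
    auto [ptr, ec] = std::from_chars(text.data(), text.data() + text.size(), value);
    // Reject both conversion errors and trailing junk like "42x".
    if (ec != std::errc{} || ptr != text.data() + text.size())
        throw std::runtime_error("not a valid integer: " + text);
    return value;
}
```

Either you get a proper integer back, or an exception propagates – there is no code path on which an unvalidated value reaches the application logic.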

The entire software ecosystem at Keepit may revolve around our storage systems, but we have several other supporting systems that use our shared components for the HTTP and XML stack.

One other notable C++ system is our search engine. Like so many other companies, we found ourselves needing a search engine to help provide an amazing experience for end users browsing their datasets. And like so many others, we fired up a cluster of Elasticsearch servers and went to work.

Very quickly we were hit by the basic fact that Elastic is great at queries and not very good at updates – and we have many more updates than queries. We simply couldn’t get it to scale the way we’re used to. What to do?

While struggling with Elastic, we started the ‘Plan-B’ project to create a simple search engine from scratch – this engine has been our only search engine for years now and to this day, the process is still called ‘bsearch.’

Our search engine offers Google-like matching, so you can find your documents even if you misspell them. It is a piece of technology we are actively developing, both to improve matching across languages and to allow more efficient processing of much larger datasets, which will open up other uses in the future.

Of our backend code base, about 81% of our code is C++. Another 16% is Common Lisp. The remaining 3% is Java.

We use Common Lisp in two major areas: For ‘general purpose’ business functions such as messaging, resource accounting, billing, statistical data processing, etc. And we use it for backup dataset processing. These are two very different uses.

The first is a more classical application of the language where performance is maybe less of a concern but where the unparalleled power of the language allows for beautiful implementations of otherwise tedious programs.

The second use is a less traditional use case where enormous datasets are processed and where the majority of the memory is actually allocated and managed outside of the garbage collector – it is truly a high-performance Lisp system where we benefit from the power of the language to do interesting and efficient extractions of certain key data from the hundreds of petabytes of customer data that pass through our systems.

Many people don’t know Common Lisp and may propose that ‘surely nobody would write a web service in Common Lisp.’ Well, as with all other languages, you need to understand the language to offer useful criticism; and the really groundbreaking feature of Common Lisp is its macro system. It is what makes Common Lisp the most powerful language in existence, by a large margin.

This is nothing like C pre-processor macros; the Common Lisp macro system allows you to use the full power of the language to generate code for the compiler. Effectively, this means the language is fully programmable. This is not something that is simple to understand since there is no meaningful way to do this using C-like language syntax, which is also why the Lisp dialects have a syntax that is fundamentally different from other languages.

In other words, if you do not understand the Lisp syntax, you are not equipped to comprehend what the macro system allows. This is not simple to wrap your head around, but, for example, I can mention that Common Lisp was the first general purpose programming language to get Object Orientation added to it, and this was done not with a change to the language and the compiler, but with a library that contained some macros. Imagine that.

Fortran allows you to implicitly declare the type of a variable by the first letter of its name – just for fun, I implemented that with a macro in Common Lisp. If I wanted to do that in C or C++ or any other language, I would need to extend the compiler.

The idea of using the first character of a variable’s name to implicitly declare its type is of course ridiculous, but there are many little syntactical shortcuts or constructs that could help you in daily life – things you may wish were present in your language of choice, and can only hope the language’s steering committee will one day add to the standard.

With Common Lisp, this is everyday stuff – no need to wait. If you want a new type of control structure or declaration mechanism, just go ahead and build it. The power of this cannot be overstated. C++ metaprogramming (and Go generics and everything else) pales in comparison, useful as it is.

Lessons learned

First of all, it really sucks to have multiple languages; you can’t expect everyone to be an expert in all of them, so by having more than one language, you decimate the effective size of your team. However, we picked Common Lisp to replace a sprawling forest of little scripts written in more languages than I could shake a stick at – meaning we are fortunate to have only two languages on our backend.

C++ and Common Lisp are so different that they complement each other well. Yes, we could have done everything in C++, but there are problems we solve in Common Lisp which would have been much less enjoyable to solve in C++. Now on the downside, we have two HTTP stacks, two XML stacks, two database libraries, two connection pools, and so on and so forth. There is no simple perfect solution here; the compromise we have arrived at is indeed working out very well for us.

We’ve been told many times that recruiting for C++ is hard because recruiting for ‘web technologies’ is so much simpler. Well guess what, finding good JavaScript developers is just as hard as finding good C++ developers in my experience. With Common Lisp it’s different again: it’s harder to find people, but the percentage of the candidates that are actually qualified is higher, so all in all, it’s actually fine. Recruitment is difficult across languages, period.

The best you can do is go to a conference, talk about your tech, and hope that some developers show up at your booth to talk about employment.

Old grumpy man’s advice for youngsters considering a career in software engineering

First of all, seriously consider a computer science education. There exist amazingly qualified people who do not have this and some of them work for us, but in my experience most really good developers have this. It certainly helps to get a foundation of mathematics, logic, and basic computer science. Knowing why things work will make learning new things infinitely simpler.

Learn multiple, properly different programming languages and write actual code in them. You need to experience (by failing) how functions are useful as abstractions and how terrible it is to work with ill-designed abstractions. You need to fail and spend serious time failing.

Make sure one of those languages is a compiled language with little or no runtime: C, C++, Rust, or even Fortran for that matter (not sure Fortran has much long-term perspective left in it though – it’s probably time to say goodbye). Now challenge yourself to write the most efficient implementation of some simple problem – maybe a matrix multiplication for example.

Disassemble the code and look at it. At least get some understanding of the processor instructions and why they are generated from the code you wrote. Learn how cache lines matter. Time your code and find out why your solution isn’t faster than it is. Then make it faster until you can prove to yourself that your instructions pipeline as much as they can, your cache misses are minimal, you don’t wait on register write delays and so on and so forth.
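As a concrete starting point for that exercise, here is the cache-friendly i-k-j loop order for a row-major matrix multiplication – a standard technique, and a good baseline to disassemble and time against the textbook i-j-k order:

```cpp
#include <cstddef>
#include <vector>

// Same arithmetic as the textbook i-j-k triple loop, but the i-k-j order
// walks both C and B sequentially in memory (row-major), so the inner
// loop streams through cache lines instead of striding across columns.
using Matrix = std::vector<double>;  // row-major, n x n

Matrix multiply_ikj(const Matrix& a, const Matrix& b, std::size_t n) {
    Matrix c(n * n, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < n; ++k) {
            double aik = a[i * n + k];               // loaded once per row of B
            for (std::size_t j = 0; j < n; ++j)
                c[i * n + j] += aik * b[k * n + j];  // sequential access
        }
    return c;
}
```

Time this against the i-j-k variant for a large n, then look at the disassembly and the cache-miss counters to see why the difference is what it is.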

Also, make sure that one of those languages is Common Lisp. It should be a criminal offence for a university not to teach Common Lisp in its computer science curriculum. Read ‘Structure and Interpretation of Computer Programs’ (SICP) too. Even if you never use Lisp again, knowing it will make you a better developer in any other language.

And finally, as much as I dislike JavaScript, you should learn that, too. The most beautiful backend code will too easily be ignored if you cannot beautifully present its results – and today this means doing something with JavaScript.

Aside from my previous criticisms, you can make working with JavaScript more bearable, for example by creating your own framework rather than relying on the constantly changing, megabyte-sized atrocities that common web projects rely on. However, this is probably a topic for a future discussion.

About Version 2
Version 2 is one of the most dynamic IT companies in Asia. The company develops and distributes IT products for Internet and IP-based networks, including communication systems, Internet software, security, network, and media products. Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

About Keepit
At Keepit, we believe in a digital future where all software is delivered as a service. Keepit’s mission is to protect data in the cloud. Keepit is a software company specializing in cloud-to-cloud data backup and recovery. Drawing on 20+ years of experience in building best-in-class data protection and hosting services, Keepit is pioneering the way to secure and protect cloud data at scale.

ICS / OT Security News Update | SCADAfence – June 20

Our research team compiled the latest updates on newly announced CVEs, recent ransomware attacks and IoT security news. They also offer analysis of the potential impacts and their expert recommendations:

ICS

Siemens DoS Vulnerability (CVE-2022-24040)

A vulnerability affecting Siemens’ PXC4.E16 building automation controllers can be exploited to conduct a DoS attack (CVE-2022-24040).

Attack Parameters: The web application fails to enforce an upper bound to the cost factor of the PBKDF2 derived key during the creation or update of an account.

Impact: An attacker could make the device unavailable for days by attempting a login.

Recommendations: Siemens released a patch for this vulnerability.

SCADAfence Coverage: The SCADAfence Platform detects new connections, connections to and from external devices, connection to and from the Internet, and unauthorized connections to OT assets.

Open Automation Software Platform Vulnerabilities

Multiple vulnerabilities were found affecting the Open Automation Software (OAS) platform, leading to device access, denial of service, and remote code execution. The OAS platform is a widely used data connectivity solution that unites industrial devices (PLCs, OPCs, Modbus), SCADA systems, IoT devices, network points, custom applications, custom APIs, and databases under a holistic system.

Targets: OAS is used by Michelin, Volvo, Intel, JBT AeroTech, the U.S. Navy, Dart Oil and Gas, General Dynamics, AES Wind Generation, and several other high-profile industrial entities.

Attack Parameters: The most critical of these vulnerabilities, CVE-2022-26833, can be exploited by sending a series of HTTP requests. Most of the other vulnerabilities can be exploited using a variety of specific network requests.

Impact: Successful exploitation of these vulnerabilities may lead to DoS and RCE.

Recommendations: While patches are still unavailable for these vulnerabilities, they can be mitigated by disconnecting the OAS platform from the Internet and from Internet-facing devices.

SCADAfence Coverage: The SCADAfence Platform detects DoS attempts, such as HTTP flooding attempts. 

IT

Microsoft Office MSDT Vulnerability (CVE-2022-30190)

A new zero-day vulnerability, dubbed “Follina”, allows attackers to execute malicious PowerShell commands using Microsoft Office programs (CVE-2022-30190).
This is a new attack vector leveraging Microsoft Office programs as it works without elevated privileges, bypasses Windows Defender detection, and does not need macro code to be enabled to execute binaries or scripts.


Targets: Threat actors, such as Chinese APT groups, used this vulnerability to target organizations in Russia and in Tibet, and government entities in Europe and in the U.S.

Attack Parameters: The vulnerability leverages malicious Word documents that execute PowerShell commands via the Microsoft Support Diagnostic Tool (MSDT). It is triggered when an Office application, such as Word, calls MSDT using the ms-msdt URL protocol.

Impact: Attackers can exploit this vulnerability to remotely execute arbitrary code with the privileges of the calling app to install programs, view, change, or delete data, or create new Windows accounts as allowed by the user’s rights.

Recommendations:

    1. Microsoft has released a patch for this vulnerability. 
    2. Microsoft recommended that affected users disable the MSDT URL.
    3. An unofficial patch has been released, adding sanitation of the user-provided path without rendering the Windows diagnostic wizard inoperable.

SCADAfence Coverage: The SCADAfence Platform detects new connections, connections to and from external devices, connection to and from the Internet, and unauthorized connections.

Confluence Server and Data Center RCE Vulnerability (CVE-2022-26134)

A vulnerability affecting Confluence Server and Data Center was disclosed, which allows unauthenticated attackers to gain remote code execution on unpatched servers (CVE-2022-26134).


Attack Parameters: This vulnerability can be exploited without needing credentials or user interaction, by sending a specially crafted web request to the Confluence system.


Impact: Threat actors were observed exploiting this vulnerability to install BEHINDER, a web shell that allows threat actors to execute commands on the compromised server remotely and has built-in support for interaction with Meterpreter and Cobalt Strike.

A PoC exploit for this vulnerability has been published.

Recommendations: Atlassian released patches for this vulnerability.

SCADAfence Coverage: The SCADAfence Platform detects exploitation of this vulnerability, as well as the use of Meterpreter and Cobalt Strike. 

Ransomware

Foxconn Ransomware Attack by LockBit
Electronics manufacturer Foxconn has confirmed that one of its Mexico-based production plants was impacted by a ransomware attack. While the company did not provide information about the responsible group, the LockBit gang claimed the attack.

Attack Parameters:

  1. Initial Access – LockBit operators often gain access via compromised servers, RDP accounts, spam email or by brute forcing insecure RDP or VPN credentials.
  2. Execution – LockBit is executed via command line or created scheduled tasks.
  3. Credential Access – LockBit was observed using Mimikatz to gather credentials.
  4. Lateral Movement – LockBit can self-propagate using SMB. PsExec and Cobalt Strike were used to move laterally within the network.

Impact: According to Foxconn, the impact on its overall operations will be minimal, and the recovery will unfold according to a pre-determined plan.

Recommendations: The following are additional best-practice recommendations:

  1. Make sure secure offline backups of critical systems are available and up-to-date.
  2. Apply the latest security patches on the assets in the network.
  3. Use unique passwords and multi-factor authentication on authentication paths to OT assets.
  4. Encrypt sensitive data when possible.
  5. Educate staff about the risks and methods of ransomware attacks and how to avoid infection.

SCADAfence Coverage: The SCADAfence Platform detects the creation of scheduled tasks, as well as the use of Mimikatz, PsExec, and Cobalt Strike.

RDP and SMB connections can be tracked with User Activity Analyzer.
The SCADAfence Platform (SFP) detects suspicious behavior, including LockBit’s, based on IP reputation, hash reputation, and domain reputation.

For more information on keeping your ICS/OT systems protected from threats, or to see the SCADAfence platform in action, request a demo now.

About Version 2
Version 2 is one of the most dynamic IT companies in Asia. The company develops and distributes IT products for Internet and IP-based networks, including communication systems, Internet software, security, network, and media products. Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

About SCADAfence
SCADAfence helps companies with large-scale operational technology (OT) networks embrace the benefits of industrial IoT by reducing cyber risks and mitigating operational threats. Our non-intrusive platform provides full coverage of large-scale networks, offering best-in-class detection accuracy, asset discovery and user experience. The platform seamlessly integrates OT security within existing security operations, bridging the IT/OT convergence gap. SCADAfence secures OT networks in manufacturing, building management and critical infrastructure industries. We deliver security and visibility for some of world’s most complex OT networks, including Europe’s largest manufacturing facility. With SCADAfence, companies can operate securely, reliably and efficiently as they go through the digital transformation journey.

Pandora FMS boosts its cloud solution. Safer, easier and faster IT monitoring

For its MaaS solution (Monitoring as a Service), Pandora FMS partners with OVHcloud to ensure data sovereignty, European technological independence and top-quality service, with datacenters in Europe, America and Asia.

Learn all about Monitoring as a Service (MaaS)

Pandora FMS, the renowned Spanish technology company, has launched its SaaS solution on the market: MaaS, Monitoring as a Service, a flexible, easy-to-understand subscription model that covers the monitoring needs of companies of all sizes.

The MaaS solution offers easy integration with business processes, continuous security and 24/7 availability, so it can be accessed anytime and anywhere.

It is a crucial tool for safeguarding companies facing the growing demands of the cloud and the difficulty of finding personnel to manage ever-increasing technological complexity.

This is particularly relevant given that heterogeneous tools and the lack of AI in multi-cloud environments delay innovation, forcing IT teams to devote almost half of their time to maintenance tasks.

In collaboration with OVHcloud

Since its foundation in 1999, OVHcloud has had a single objective: to offer its customers a complete range of innovative cloud and bare-metal products with a distinctly European character.

Since then, the group has provided world-class infrastructure to more than 1.6 million customers worldwide and has built an ecosystem of partners who add value by supporting the digital transformation of companies.

Pandora FMS has joined OVHcloud as an Advance Partner, combining its extensive experience in IT monitoring software with OVHcloud’s sovereign infrastructure, and now offers its ready-to-use solutions in the Marketplace of the European cloud leader.

Currently, OVHcloud’s powerful high-end dedicated servers help Pandora FMS deliver its MaaS service to customers to very high standards, including 24/7 operation.

Thanks to the smooth operation and reliability of these servers, Pandora FMS can also meet the SLAs of the services hosted on them.

“The ease of provisioning, the transparency and the wide range of options that OVHcloud now offers, not to mention the possibility of having servers in Europe or America, are key to deploying our solutions,” says Sancho Lerena, CEO and founder of Pandora FMS.
“In addition, it is a pleasure to be part of the OVHcloud ecosystem and to contribute jointly to the promotion of innovative, interoperable and trusted solutions, and one more way to showcase the full range of possibilities of our MaaS solution,” he concludes.

Pandora FMS

Pandora FMS, as many of you know, is a total monitoring solution that lets you observe and measure all kinds of technologies regardless of where they are located: cloud, SaaS, virtualization or on-premise. It is a flexible solution that unifies data visualization for full observability of the entire organization.

With more than 50,000 installations in 53 countries, its customers include companies such as:

  • Salvensen
  • Prosegur
  • Repsol
  • CLH
  • Euskaltel
  • Adif 
  • Santalucía
  • Cofares 
  • AON
  • El Pozo 
  • The EMT 

As well as international companies such as:

  • Rakuten
  • Nazareth University in New York 
  • Ottawa’s main hospital 

Also, public administrations such as:

  • La Junta de Castilla-La Mancha 
  • Community of Madrid
  • La Diputación de Barcelona 
  • And numerous municipalities in France, Portugal and Spain

About PandoraFMS
Pandora FMS is a flexible monitoring system, capable of monitoring devices, infrastructures, applications, services and business processes.
Of course, one of the things that Pandora FMS can control is the hard disks of your computers.

ESET Leader in G2 summer report for ESET PROTECT Advanced

Bratislava, June 23rd, 2022 – ESET, a global leader in digital security, has been awarded Leader status in G2’s report for summer 2022. The ESET PROTECT Advanced solution has consistently been highly ranked by G2 users and achieved substantial satisfaction and market presence scores in their Grid® Reports, which represent the democratic voice of real software users, offering benchmarks for product comparison and market trend analysis.

In the summer edition of the G2 Grid® Reports, ESET was a Leader in several of the reports, including for Antivirus Software, Endpoint Management Software, Endpoint Protection Suites, and Mobile Data Security Software. And in terms of the Mid-Market reports aimed at companies with up to 1,000 employees, ESET was a Leader in Mid-Market Endpoint Protection Suites, Mid-Market Antivirus Software, and Mid-Market Endpoint Management Software.

“We are delighted to be ranked as Leaders in the G2 reports. Our objective is to provide the digital security that keeps an organization’s systems working smoothly and securely, and being recognized for our efforts is always an honor,” comments Michal Jankech, VP for the SMB and MSP segment at ESET. “No modern business, large or small, can survive without an effective response in the face of an IT breach. We believe that by employing ESET’s strong prevention, detection and response technologies, delivered in the form of our modular ESET PROTECT platform, businesses can benefit from the most densely multilayered and effective protection in the industry.”

For more than 30 years, ESET has continued to invest heavily in multiple layers of proprietary technology that prevent breaches of its customers’ endpoints and systems, by both known and never-before-seen threats. The ESET PROTECT platform has been designed with ESET’s customers in mind, with the main objective of assisting IT admins in better managing the security risks in their environments.

As a privately owned, tech-focused company, ESET has always taken a science-based, security-first approach, with early adoption of machine learning and cloud computing power to develop its global threat intelligence systems. The company has continuously been named a top player and a leader in the industry for its business solutions.

About ESET
For 30 years, ESET® has been developing industry-leading IT security software and services for businesses and consumers worldwide. With solutions ranging from endpoint security to encryption and two-factor authentication, ESET’s high-performing, easy-to-use products give individuals and businesses the peace of mind to enjoy the full potential of their technology. ESET unobtrusively protects and monitors 24/7, updating defenses in real time to keep users safe and businesses running without interruption. Evolving threats require an evolving IT security company. Backed by R&D facilities worldwide, ESET became the first IT security company to earn 100 Virus Bulletin VB100 awards, identifying every single “in-the-wild” malware without interruption since 2003.

The Power of Role-Based Access Control in Network Security

Determining the right network access control (NAC) security policy for your organization isn’t an easy task.

It’s often a balancing act between keeping your network secure and ensuring employees can access the systems they need to do their jobs.

Role-based access control (or RBAC) can be a good way of ensuring your network is protected. If you’ve been considering implementing RBAC in your organization but aren’t entirely sure of the benefits, this article will answer your questions.

What is role-based access control?

Role-based access control is a way of restricting access based on a user’s role within an organization. This means that users aren’t assigned permissions directly but are instead given roles that govern their levels of access. Depending on their job and responsibilities, a user may have one or more roles.

Let’s say, for example, you have a staff database on your network, which contains all your employees’ contact details and contractual information.

Everyone in the organization may have access to edit their own personal details. Managers may have access to edit their team’s information, but no one else’s. Your HR team may have full access to the database to view and edit everyone’s data.

RBAC works on the Principle of Least Privilege (PoLP). This means users have the minimal level of access needed to carry out their job.
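
The staff-database example above can be sketched as a simple role-to-permission mapping. The role and permission names below are hypothetical, and this is only a minimal illustration of the RBAC and PoLP idea, not a production access-control system:

```python
# Minimal RBAC sketch: users hold roles, and roles map to permission sets.
ROLE_PERMISSIONS = {
    "employee": {"edit_own_details"},
    "manager":  {"edit_own_details", "edit_team_details"},
    "hr":       {"edit_own_details", "view_all_records", "edit_all_records"},
}

# Users are never granted permissions directly, only roles.
user_roles = {
    "alice": {"employee"},
    "bob":   {"employee", "manager"},
    "carol": {"hr"},
}

def permissions_for(user: str) -> set:
    """Union of the permissions from every role the user holds (PoLP: nothing more)."""
    return set().union(*(ROLE_PERMISSIONS[r] for r in user_roles.get(user, set())))

def can(user: str, permission: str) -> bool:
    """Access check: allowed only if some role grants the permission."""
    return permission in permissions_for(user)

print(can("bob", "edit_team_details"))   # True: the manager role grants this
print(can("alice", "edit_all_records"))  # False: a plain employee is denied
```

Because users never hold permissions directly, auditing a user's access reduces to listing their roles and looking up each role's permission set.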

RBAC isn’t the only access control method available. There are other options you can consider, like attribute-based access control (ABAC), policy-based access control (PBAC) and access control lists (ACL). However, role-based access control is one of the most effective ways of not only keeping networks secure but improving organizational efficiency.

A study by NIST has shown that role-based access control addresses most of the needs of government and commercial organizations.

Why is role-based access control so important when it comes to network security?

Networks are more susceptible to security breaches than ever before. People working from home and the introduction of BYOD policies mean more endpoints that can be compromised.

In fact, according to IBM, it’s estimated that data breaches in 2021 cost businesses an average of $4.24 million.

With this in mind, it’s essential to ensure networks stay safe. Here’s how role-based access control can provide security for businesses large and small.

I. It makes it easy to ensure networks are secure

Setting up permissions for networks is relatively straightforward. However, as people start, leave, and move around organizations, permissions can become less efficient. Users may end up with access to systems they no longer need.

RBAC means IT departments can effectively manage what access people have with a click of a button.

Let’s go back to the example of the staff database above and say that a new staff member has joined the HR team. Rather than setting access at a user level, you can add them into the ‘HR’ role so they can have full access to the system.

A few years later, let’s say the staff member moves into the sales team, meaning they no longer need full access to the staff database. Rather than changing every single point of access they have, it’s just a case of adding them into the ‘sales’ role instead.
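
The joiner-mover scenario above can be expressed as a single role swap rather than many per-permission edits. A minimal sketch, again with hypothetical role and permission names:

```python
# Moving a user between roles updates every effective permission at once.
ROLE_PERMISSIONS = {
    "hr":    {"view_all_records", "edit_all_records"},
    "sales": {"view_own_accounts", "edit_own_accounts"},
}

# New HR hire: one role assignment grants full HR access.
user_roles = {"dana": {"hr"}}

def permissions_for(user: str) -> set:
    """Effective permissions are derived from roles at check time."""
    return set().union(*(ROLE_PERMISSIONS[r] for r in user_roles.get(user, set())))

# Years later, Dana moves to sales: swap the role, not each individual permission.
user_roles["dana"] = {"sales"}

print("edit_all_records" in permissions_for("dana"))   # False: HR access revoked
print("view_own_accounts" in permissions_for("dana"))  # True: sales access granted
```

Since permissions are computed from roles at check time, the old HR access disappears the moment the role is removed; there is no stale per-user grant left behind to clean up.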

II. It reduces the attack surface

It’s estimated that one in four data breaches result from human error. With RBAC, if a member of staff causes an accidental (or intentional) data breach, there will be less impact.

Let’s say someone is a victim of a phishing attempt, and a hacker obtains their login details. The hacker will only be able to access the information that the member of staff has through the roles they have been allocated.

This means even if a data breach occurs, most of your information will still be safe.

III. It reduces the risk of ‘insider threats’

Disgruntled employees can often try and settle the score by leaking confidential data or deleting important information. Earlier this year, an IT technician in the UK was jailed for 21 months for wiping data from the school he was formerly employed at after being fired.

As role-based access control gives just enough access to ensure staff can carry out their jobs, it minimizes the risk of users causing intentional harm to your networks.

Similarly, if you work with any third parties, you can use RBAC to assign them pre-defined roles and limit what they can view or edit. Once you stop working with them, you can quickly remove their permissions.

IV. It can quickly scale and adapt

As RBAC deals with overarching roles rather than individual permissions, it can grow as an organization’s IT requirements do.

Let’s say you acquire a new application for your organization. Role-based access control makes it easy to create new permissions as well as set different levels of permissions quickly. As a result, you can ensure any new hardware or software stays secure and that the right people have access.

V. It can ensure you stay compliant

Some industries, like healthcare and financial services, are heavily regulated and have stringent compliance regulations in place. For example, the Health Insurance Portability and Accountability Act (HIPAA) states that only certain people should be allowed access to specific systems.

Role-based access controls can ensure that organizations in these industries do what is required of them, minimizing the risk of security breaches as well as fines for willful violations of the law.

How Portnox can help with your RBAC requirements

Role-based access control can be an extremely efficient way of ensuring network security and can be as top-level or granular as your organization demands. The key is developing a solid strategy before creating and assigning roles.

Which parts of your network need access control, which departments need permissions, and how will you assign people to the right roles?

If you need extra support keeping your network safe, Portnox is here to provide you with peace of mind. Our NAC security solutions come with role-based authentication and access policies to ensure the right people can access your network at the right time.

Contact our team today to find out more.

About Portnox
Portnox provides simple-to-deploy, operate and maintain network access control, security and visibility solutions. Portnox software can be deployed on-premises, as a cloud-delivered service, or in hybrid mode. It is agentless and vendor-agnostic, allowing organizations to maximize their existing network and cybersecurity investments. Hundreds of enterprises around the world rely on Portnox for network visibility, cybersecurity policy enforcement and regulatory compliance. The company has been recognized for its innovations by Info Security Products Guide, Cyber Security Excellence Awards, IoT Innovator Awards, Computing Security Awards, Best of Interop ITX and Cyber Defense Magazine. Portnox has offices in the U.S., Europe and Asia. For information visit http://www.portnox.com, and follow us on Twitter and LinkedIn.
