
Backup Strategy and the 3-2-1 Principle

Data loss comes in all sizes: small (individual files), medium (a SharePoint site), and large (ransomware and disaster recovery). No matter the size, data loss is never fun, and even the smallest event can leave you without your most critical data. That one spreadsheet or that one hard disk drive could hold what you and your business rely on most, and it isn't always something someone can "just create again" on a whim; data loss is indiscriminate in its impact. Every data loss event disrupts workflow, and every one is a risk and data protection concern, which makes proactive protection a business imperative. Backup and data management should therefore be at the forefront of all of our minds.

Why is that? Years ago, the prevailing assumption was that a cloud service would "take care of everything" once you signed up, backup included. But as awareness of the shared responsibility model for SaaS applications grows, and that model makes clear it is the user who is responsible for the data, the onus is on you to have a backup strategy in place. That's why the 3-2-1 backup rule, a principle established for on-premises infrastructure that requires multiple copies of backup data on different devices and in separate locations, remains relevant to today's cloud-based infrastructures: it still provides essential data protection guidelines.

Why Back Up Cloud SaaS Data, and Why Now?

Your data is critical to your business operations, and in many cases, maintaining control of and access to it is required by law. (Read more about how third-party security keeps companies in control of their data here.)

SaaS Shared Responsibility Model

Software-as-a-service providers publish documentation that clarifies which responsibilities they carry and which are retained by the customer. Microsoft, well known for its Microsoft 365 SaaS offering, delineates the boundaries of shared responsibility in the cloud. While Microsoft does provide some degree of data protection, many people are unaware of its limitations. The short of it is that Microsoft does not provide suitable backup and restore functionality to customers. Learn more about why your Microsoft 365 data is not backed up (and how to fix it) in our in-depth article here.
And it's not only Microsoft whose SaaS services come with shared responsibility. Google, which also covers backing up files to Google Drive, has what it refers to, almost ominously, as "shared fate" in its Google Cloud shared responsibilities documentation. Likewise, Amazon Web Services (AWS) has its own shared responsibility model. It's vital that customers know and understand the extent of their agreement.

Risks to Data Security

In the days of on-premises backup, the most credible risks were acts of Mother Nature, hardware failure, and, of course, software issues: problems anywhere in the stack, from firmware on RAID adapters to drivers, operating system filesystem implementations, and user applications, could cause data loss and force a restore, from the system level down to the file level. (That's one thing I don't miss about the '90s.) In the cloud-computing era, however, the risks have evolved as much as the ways we create, share, and store data, and things are far more complicated now. With the prevalence and penetration of ransomware and cybercrime, not to mention the broader access users are given to streamline collaboration and boost productivity, data, the lifeblood of a company, has in many ways never been more susceptible to loss, whether intentional (malicious actors, ransomware, etc.) or unintentional (human error, accidental deletion). Sometimes going back to basics is the right place to start when developing or hardening security.

3-2-1 Backup Method

The 3-2-1 principle comes from the days of on-premises data storage, yet it is still commonly referenced in the modern cloud-computing era. Even though it doesn't apply word for word to cloud data, this well-known and widely used principle can still guide security decision makers as they harden their infrastructure against today's data risks.
Roughly speaking, the 3-2-1 backup rule requires three copies of your data, across two types of storage media, with one copy stored off site.

What Is the Origin of the 3-2-1 Rule?

Backup and recovery solutions long predate cloud computing, but methodologies have shifted with the modernization of infrastructures, behaviors, and needs (among many other variables we won't get into here), resulting in some discrepancies between best-practice principles and their application to modern data infrastructures. This is also the case with the 3-2-1 backup rule, the biggest change being how, and above all where, data is created and stored. Formerly, production data was created on site and stored on on-premises hardware alongside one backup copy, with a third copy stored off premises, typically on tape. ComputerWeekly has a feature on whether the cloud has made 3-2-1 obsolete. In the cloud era, data is created in numerous places by remote workers in SaaS applications, often transferred around the globe, and stored "somewhere else" relative to a business's physical office. More than likely, the fullest answer you'll get to "where is your data stored?" is that it's in the cloud. But is that backup? And what is true backup in the cloud?

How Does the Rule Apply to Cloud Backup?

We often see iterations of this backup principle in fancy infographics that almost forget to translate the rules to current scenarios. With a few tweaks, however, there's plenty of relevant guidance here for building a successful, modern data security system.
Let’s look at the rules with a modern lens:

3 Copies of Your Data

The ‘3’ in the rule refers to the number of “copies of your data,” with one being the primary dataset in the production environment while the remaining two copies are backups. This is still applicable to modern data protection best practices.

2 Administrative Domains

As mentioned, the '2' can be understood as "two administrative domains": copies are managed independently of each other and stored in separate logical environments. You often see this written as "two types of media," a relic of the on-prem past when those media were disks and tapes. Now it's about having copies spread across two administrative domains so that no single data-loss event can, except in the most unlikely circumstances, impact all copies of the data. This separation is known as a logical gap. Without it, a cloud-wide compromise (such as a breach) or a data loss event in the cloud where your primary data lives would leave your data unavailable, or even permanently lost, with massive potential for business disruption and cost. One of the best-known examples is the Danish shipping giant Maersk and the infamous NotPetya cyberattack, dubbed "the most devastating cyberattack in history" in the full Wired story here. When working "in" the cloud, the building you sit in is of no real consequence to the data; what matters is the cloud you are working and storing data in. In many regards this step could envelop the step below, "1 Copy External," but for the sake of the principle it serves us to keep it a separate consideration.

1 Copy External

Formerly the "one off-site copy," this still applies for the same reasons it did in the past: You don't want to store all of your data in the exact same location, and, whether everyone is aware of it or not, the cloud lives in physical data centers. In the on-premises days, this meant literally keeping a copy of disks and/or tapes at a different location from your business in case someone, something, or some event destroyed the building. Let's call this the "in case of fire" step. In cloud computing, it means having a backup copy outside the cloud of the production environment and outside the administrative domain of the other backup. Remember, the cloud is "just" physical data centers, so the centers your data is stored in matter a great deal. What if the data center of the cloud you work in is the same data center where your backup data is stored? A data loss event at that center would put all of your data at risk. That's bad.
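To make the modern reading of the rule concrete, here is a minimal sketch in Python that checks an inventory of data copies against the three conditions just described: at least three copies, at least two administrative domains, and at least one copy outside the production provider. All field names ("provider," "is_primary") are hypothetical and for illustration only, not taken from any real product API.

    # Minimal 3-2-1 compliance check; field names are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class DataCopy:
        name: str
        provider: str      # administrative domain, e.g. "microsoft", "keepit"
        is_primary: bool   # True for the production dataset

    def check_3_2_1(copies: list[DataCopy]) -> list[str]:
        """Return a list of violations; an empty list means the rule holds."""
        violations = []
        primary = [c for c in copies if c.is_primary]
        domains = {c.provider for c in copies}

        if len(copies) < 3:
            violations.append(f"need 3 copies, found {len(copies)}")
        if len(domains) < 2:
            violations.append("all copies live in one administrative domain")
        if primary and all(c.provider == primary[0].provider
                           for c in copies if not c.is_primary):
            violations.append("no backup copy outside the production provider")
        return violations

    # Example: production data plus an in-service "backup" in the same cloud.
    plan = [
        DataCopy("OneDrive production", "microsoft", is_primary=True),
        DataCopy("OneDrive versioning", "microsoft", is_primary=False),
    ]
    print(check_3_2_1(plan))  # -> all three violations fire for this plan

Note that the example plan fails every check: two copies of data inside a single provider's cloud is exactly the setup the use case below walks through.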

Use Case: What would this look like in real life?

If, for example, you are working on a Microsoft Word document and save it to OneDrive with OneDrive Backup turned on, you're totally protected, because it says "backup," right? This is an example where the 3-2-1 principle still sheds light on modern data protection in the cloud. Applying the rule above, one can deduce that this isn't backup (and neither is a lot of what SaaS providers offer as "backup"), because true backup requires a logical infrastructure separate from the primary data. As the "in case of fire" step requires, you must keep one copy outside the administrative domain. By working in OneDrive and backing up OneDrive data to Microsoft's own cloud services, the data remains in a single administrative domain. What if something were to happen to Microsoft's servers? You'd lose access to your primary data and the "backed up" copies alike, since they all relied on the same cloud. Worse still, since the backup is configured by "you" (i.e., the admin), a compromise of your account can unconfigure it, too; a simple case of ransomware could automatically disable or work around such in-service protections, even triggering immediate deletion of backup data. Keepit, on the other hand, is a dedicated backup solution in a separate administrative domain (and therefore unlikely to be compromised at the same time by the same mechanism), and it also protects even the administrator from quickly or immediately deleting backup data. In this respect, Keepit offers some of the most desirable features of "the tape in an off-site vault" in a modern cloud service.
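To illustrate the "even the admin can't immediately delete backups" property, here is a minimal sketch of a retention-locked delete. Everything in it (the class, the hold-off window) is a hypothetical illustration of the general concept, not Keepit's actual implementation or API.

    # Sketch of a retention-locked delete: a delete request only starts a
    # clock, and purging is refused until a hold-off window elapses.
    # Hypothetical illustration only, not any vendor's real API.
    from datetime import datetime, timedelta

    RETENTION_HOLD = timedelta(days=7)  # assumed hold-off window

    class BackupSnapshot:
        def __init__(self, snapshot_id: str):
            self.snapshot_id = snapshot_id
            self.delete_requested_at = None  # type: datetime | None

        def request_delete(self) -> None:
            # Even an admin (or an attacker holding admin credentials) can
            # only start the clock, never remove the data outright.
            if self.delete_requested_at is None:
                self.delete_requested_at = datetime.utcnow()

        def purge(self) -> bool:
            """Physically remove data only after the hold-off has elapsed."""
            if self.delete_requested_at is None:
                return False
            if datetime.utcnow() - self.delete_requested_at < RETENTION_HOLD:
                return False  # refused: window still open, delete is undoable
            return True       # window elapsed; purge may proceed

    snap = BackupSnapshot("mailbox-2024-05-01")
    snap.request_delete()
    print(snap.purge())  # False: even admin-level compromise can't purge now

The design point is the delay itself: during the hold-off window, a legitimate admin can undo the delete, so a compromised account gains nothing by requesting it.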

Here's how to use the 3-2-1 backup rule to ensure you're covered: keep one backup copy in an independent cloud.

If you're interested in further reading, check out our e-Guide on SaaS data security for a thorough look at leading SaaS data security methodologies and how companies can raise the bar for their data protection in the cloud era. Convinced you need backup but want to know more about data protection and management for your particular SaaS application? Explore how Keepit offers cloud data backup coverage for the main SaaS applications here.

About Version 2
Version 2 is one of the most dynamic IT companies in Asia. The company develops and distributes IT products for Internet and IP-based networks, including communication systems, Internet software, security, network, and media products. Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

About Keepit
At Keepit, we believe in a digital future where all software is delivered as a service. Keepit's mission is to protect data in the cloud. Keepit is a software company specializing in cloud-to-cloud data backup and recovery. Drawing on more than 20 years of experience building best-in-class data protection and hosting services, Keepit is pioneering the way to secure and protect cloud data at scale.

Cyber Kill Chain

Intro

The kill chain is an important concept, and I want to give you a quick overview of what kill chains are, what threat modelling is, why we do these things, and why we need them. This understanding is crucial to building a stable and strong security posture.

One other thing to note is that these frameworks are generally made to complement one another. For example, the UKC (Unified Kill Chain) is designed to complement MITRE ATT&CK.

Cyber Kill Chain – what is it?

The term comes from a military concept describing the structure of an attack. Lockheed Martin (a security and aerospace company) established the Cyber Kill Chain in 2011, based on that military concept. The idea of the framework is to define the steps adversaries take when attacking your organization. In theory, to be successful, an adversary must pass through every phase of the Kill Chain.

Our goal here is to understand what the Kill Chain means from an attacker's perspective, so that we can put our defences in place and either pre-emptively mitigate attacks or disrupt them in progress.

Why do we need to understand the (Cyber) Kill Chain?

Understanding the Cyber Kill Chain can help you protect against a myriad of attacks, ransomware for example. It can also help you understand how APTs operate. With this understanding, as a SOC analyst or incident responder, you can work out an attacker's goals and objectives by mapping their activity to the Kill Chain. It can also be used to find gaps in your defences and remediate missing controls.

The attack phases within the Cyber Kill Chain are:

  • Reconnaissance
  • Weaponization
  • Delivery
  • Exploitation
  • Installation
  • Command & Control
  • Actions on Objectives (Exfiltration)

Reconnaissance

As we all know, reconnaissance means searching for and collecting information about the target system(s). In this phase, our adversaries do their planning. This is also where OSINT comes in, and it is usually the first step an adversary takes before going further down the chain. They will try to collect every possible piece of info on our organization: employees, emails, phone numbers, you name it.

This can be done, for example, through email harvesting: the process of collecting email addresses from online (public, paid, or free) services. These can then be used for a phishing campaign, or anything else.
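As a simple illustration of what harvesting looks like mechanically, the sketch below pulls email addresses out of already-collected public text (a scraped page, a document dump) with a regular expression. The input string is invented for the example.

    # Toy email harvester: extract unique addresses from a blob of public
    # text with a regular expression. The sample text is made up.
    import re

    EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

    def harvest_emails(text: str) -> set[str]:
        """Return the unique email addresses found in the text."""
        return set(EMAIL_RE.findall(text))

    page = """Contact our sales team at sales@example.com or reach
    the press office via press.office@example.com for inquiries."""
    print(sorted(harvest_emails(page)))
    # -> ['press.office@example.com', 'sales@example.com']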

Within the recon step, they might also collect the social media profiles of the org's employees, especially if a particular employee seems interesting or looks like an easier target. All this information goes into the mix, and none of it is new. First step: recon.

Weaponization

After the initial recon phase, our adversary goes on to create their weapons! Usually this entails some sort of malware/exploit combination bundled into a payload. Some adversaries will simply buy malware on dark web marketplaces, but more sophisticated attackers, as well as APTs, will usually write their own, which has the advantage of being more likely to evade your detection systems.

They can go about this in numerous ways. Some examples: creating a malicious MS Office document with bad macros or VBA scripts, or using Command & Control techniques so that the affected machine calls back to the C2 server for more malicious payloads. (Yikes!) Or they could plant a backdoor, some other type of malware, or anything else, really.

Delivery

This step entails the attacker choosing a way to deliver the payload/malware to their victim. There are many options here, but the most used by far is the good old phishing email.

With a phishing email sent after a successfully completed reconnaissance phase, the attacker can target a specific person (spearphishing) or a group of employees at your organization. The payload would be embedded within the email.

Other ways to distribute the payload include planting infected USB drives in public places, like parking lots, the streets, etc. Or they could use a so-called watering hole attack, which targets a specific group of people by luring them to an attacker-controlled website: the attacker compromises a site the group generally uses and redirects visitors from it to the malicious site, where the victims unintentionally download the malware/payload.

Exploitation

Before finally getting access to our systems, the attackers need to carry out an actual exploit. Supposing the previous steps worked and the user downloaded or somehow ran the malicious payload, the attacker is ready for the next steps. They can try to move laterally, get to your servers, escalate privileges; anything goes.

This step boils down to one of the following:

  • The victim opens the malicious file, thus triggering the exploitation
  • The adversary exploits our systems through a server, or some other way
  • They use a zero-day exploit

Whatever the vector, it comes down to them exploiting our systems and gaining access.

Installation

This step comes after exploitation, and it usually involves the adversary trying to keep some form of connection to our system. This can be achieved in many ways: they might install a backdoor on the compromised machine, modify our services, drop a web shell on the webserver, or do anything else that helps them achieve persistence. Persistence is the key to the Installation phase.

The persistent backdoor is what will let the attacker interact and access our systems that were already compromised.

In this phase, they may also try to cover their tracks from your blue team by making the malware look like a legitimate app or program.
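From the defender's side, one simple persistence check is to look for recently changed, script-like files in the web root, since a freshly dropped web shell is a new executable file in a directory where little should be changing. Below is a minimal sketch; the web-root path and extension list are assumptions for the example and should be tuned to your own environment.

    # Minimal web-shell hunt: flag script files in the web root modified
    # within the last N days. Path and extensions are assumed values.
    import time
    from pathlib import Path

    WEB_ROOT = Path("/var/www/html")         # assumed web root
    SUSPECT_EXT = {".php", ".jsp", ".aspx"}  # common web-shell extensions
    MAX_AGE_DAYS = 7

    def recently_changed_scripts(root: Path, max_age_days: int) -> list[Path]:
        """Return script files under root modified within the age window."""
        cutoff = time.time() - max_age_days * 86400
        return [p for p in root.rglob("*")
                if p.is_file()
                and p.suffix.lower() in SUSPECT_EXT
                and p.stat().st_mtime > cutoff]

    for path in recently_changed_scripts(WEB_ROOT, MAX_AGE_DAYS):
        print(f"review: {path}")  # candidates for manual inspection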

Command & Control

Command & Control, also known as C2, C2 beaconing, or C&C, is the penultimate phase, where the adversary uses the previously installed malware to control the victim's device remotely. This capability is usually built into the malware itself, with some sort of logic through which it calls back home to its control server.

As the infected device calls back to the C2 server, the adversary now has full control over the compromised device. Remotely!

The most used C2 channels these days are:

  • HTTP on port 80 and HTTPS on port 443 – HTTPS is interesting because the malicious traffic can hide within the encrypted stream and potentially evade firewalls
  • DNS – the infected host makes constant DNS queries calling out to its C2 server (a crude way to spot this is sketched below)

An interesting fact: in the past, adversaries used IRC to send C2 traffic (beaconing), but nowadays it has become obsolete, as this type of bad traffic is much more easily detected by modern security solutions.
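Since beaconing is periodic by nature, one crude way to spot a DNS C2 channel is to count how often each host queries each domain within a time window; an abnormally high, steady query count to a single domain stands out. Here is a minimal sketch over a parsed query log; the log records are invented for the example, and in practice you would parse them from your resolver or DNS-security logs.

    # Crude DNS-beacon spotter: count queries per (host, domain) pair in a
    # one-hour window and flag pairs above a threshold. Sample data is
    # invented; the threshold is an assumption to tune for your network.
    from collections import Counter

    QUERY_THRESHOLD = 100  # assumed: queries per hour that warrant a look

    def flag_beacons(queries: list[tuple[str, str]]) -> list[tuple[str, str]]:
        """queries: (source_host, queried_domain) pairs from one hour of logs."""
        counts = Counter(queries)
        return [pair for pair, n in counts.items() if n >= QUERY_THRESHOLD]

    log = [("10.0.0.7", "updates.example-c2.net")] * 120 \
        + [("10.0.0.8", "cdn.example.com")] * 12
    for host, domain in flag_beacons(log):
        print(f"possible beacon: {host} -> {domain}")

Real malware jitters its beacon interval to blur exactly this pattern, which is why simple counting is only a first-pass filter, not a verdict.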

Actions on Objectives (Exfiltration)

This is your exfiltration (or exfil) step, where the adversary tries to gather all the goodies they came for: harvesting user credentials, messing with backups and/or shadow copies, and corrupting, deleting, overwriting, or exfiltrating data. They can also escalate to domain admin for the keys to the kingdom, or move laterally through the organization. They could even hunt for vulnerabilities in your internal software, and much more.

This step depends on their specific goals and objectives, and it is where all the action happens, hence the name: actions on objectives.

Conclusion

I hope I've managed to give a brief overview of this incredibly important concept. I will cover it in more depth in the future, but for now I felt this was a good (traditional) start. I do hope to cover the Unified Kill Chain soon, though. So, stay tuned!

PS: You'll notice there are a couple of other frameworks and variations besides the Cyber Kill Chain, and I will try to explain the distinctions. Just remember that these are models and methodologies, and there's no silver bullet; they should be used in conjunction with other security controls.

Image source – https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html


#kill-chain #cyber #C2 #threat-modelling


About VRX
VRX is a consolidated vulnerability management platform that protects assets in real time. Its rich, integrated features efficiently pinpoint and remediate the largest risks to your cyber infrastructure. Resolve the most pressing threats with efficient automation features and precise contextual analysis.
