
Step-by-Step Guide to Backup OpenStack Using Storware

Learn how to safeguard your OpenStack environment with Storware. This step-by-step guide provides a comprehensive overview of backup processes, ensuring data integrity and disaster recovery. 

Prerequisites:

  • OpenStack environment setup and running.
  • Storware Backup and Recovery software installed and configured.
  • Administrative access to both OpenStack and Storware systems.
  • Backup storage configured in Storware.
 

Step 1: Configure Storware to Connect with OpenStack

1. Log in to the Storware Backup and Recovery Console:

  • Open a web browser and navigate to the Storware Backup and Recovery console URL.
  • Log in with administrative credentials.

2. Add OpenStack Environment:

  • Go to the Environments section.
  • Click on Add Environment.
  • Select OpenStack from the list of supported environments.

3. Enter OpenStack Credentials:

  • Provide the OpenStack API endpoint.
  • Enter the necessary credentials (username, password, tenant/project name).
  • Specify the domain name if using Keystone v3.

4. Test Connection:

  • After entering the details, click on Test Connection to ensure Storware can communicate with your OpenStack environment.
  • Once the connection is successful, save the configuration.
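If the connection test fails, it can help to verify the same Keystone v3 credentials outside Storware. The sketch below is a minimal check using the openstacksdk Python client; every value shown (endpoint, user, project, domain) is a placeholder for your own environment.

```python
# A quick, Storware-independent sanity check of the same Keystone v3
# credentials that will be entered in the Storware console.
# Requires: pip install openstacksdk. Every value below is a placeholder.
import openstack

conn = openstack.connect(
    auth_url="https://openstack.example.com:5000/v3",  # OpenStack API endpoint
    username="backup-admin",
    password="CHANGE_ME",
    project_name="backup-project",       # tenant/project name
    user_domain_name="Default",          # domain names required with Keystone v3
    project_domain_name="Default",
)

# Listing a few instances confirms both authentication and API reachability.
for server in conn.compute.servers(limit=5):
    print(server.name, server.status)
```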

Step 2: Define Backup Policies

1. Create Backup SLA:

  • Navigate to the SLA Policies section.
  • Click on Create SLA Policy.
  • Define the backup schedule (e.g., daily, weekly), the retention period, and any other relevant parameters (a rough sizing sketch follows this list).
  • Save the policy.
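When choosing the schedule and retention period, it can be useful to estimate how many restore points the policy will keep and roughly how much backup storage they will occupy. The sketch below is only an illustration; the backup size and incremental change rate are assumptions, not Storware defaults.

```python
# Rough, illustrative sizing for a candidate SLA policy: how many restore
# points it keeps and an estimate of the backup storage they occupy.
# The backup size and change rate below are assumptions, not Storware defaults.

def restore_points(backups_per_day: float, retention_days: int) -> int:
    """Number of backup copies kept at steady state."""
    return int(backups_per_day * retention_days)

def estimated_storage_tb(points: int, full_size_tb: float,
                         incremental_ratio: float) -> float:
    """Assumes one full backup plus (points - 1) incrementals."""
    return full_size_tb + (points - 1) * full_size_tb * incremental_ratio

points = restore_points(backups_per_day=1, retention_days=30)   # daily schedule, 30-day retention
print(points, "restore points kept")
print(round(estimated_storage_tb(points, full_size_tb=2.0,
                                 incremental_ratio=0.05), 1), "TB of backup storage (estimate)")
```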

2. Assign SLA Policy to OpenStack Instances:

  • Go to the Virtual Machines or Instances section under your OpenStack environment in Storware.
  • Select the instances you want to back up.
  • Assign the previously created SLA policy to these instances.

Step 3: Perform Backup

1. Initiate Manual Backup (Optional):

  • Although backups will be performed according to the SLA policy, you can initiate a manual backup.
  • Select the instance you want to back up.
  • Click on Backup Now.
  • Monitor the backup progress in the Job Monitor section.

2. Monitor Backup Jobs:

  • Check the status of backup jobs in the Job Monitor section.
  • Ensure that backups are completed successfully.
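If you prefer to track jobs outside the console, Storware also exposes a REST API. The sketch below shows the general idea of polling for running backup jobs; the base URL and endpoint paths are placeholders and must be replaced with the paths documented for your Storware release.

```python
# Polling running backup jobs through Storware's REST API instead of the
# console. The base URL and endpoint paths below are PLACEHOLDERS; replace
# them with the paths documented for your Storware release.
import time
import requests

BASE = "https://storware.example.com/api"          # placeholder base URL

session = requests.Session()
login = session.post(f"{BASE}/session/login",      # placeholder login endpoint
                     json={"login": "admin", "password": "CHANGE_ME"})
login.raise_for_status()

while True:
    resp = session.get(f"{BASE}/tasks", params={"state": "RUNNING"})  # placeholder endpoint
    resp.raise_for_status()
    running = resp.json()
    if not running:
        print("No running backup jobs.")
        break
    for job in running:
        print(job.get("name"), job.get("progress"))
    time.sleep(30)                                  # check again in 30 seconds
```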

Step 4: Recovery of OpenStack Instances

1. Identify the Backup to Restore:

  • Navigate to the Backup section.
  • Select the OpenStack environment.
  • Choose the instance you want to restore.
  • Browse through the available backup points.

2. Initiate Restore Process:

  • Select the backup point you wish to restore.
  • Click on Restore.
  • Choose the restore options (e.g., restore to the original instance or create a new instance).

3. Specify Restore Details:

  • If restoring to a new instance, provide the necessary details (e.g., instance name, flavor, network).
  • Confirm the restore operation.

4. Monitor Restore Jobs:

  • Go to the Job Monitor section to track the progress of the restore job.
  • Once the job completes, verify that the instance is restored correctly.
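As a final check, you can confirm from the OpenStack side that the restored instance exists and is running. The sketch below uses the openstacksdk client; the cloud profile name and instance name are placeholders.

```python
# Confirm from the OpenStack side that the restored instance is present and
# running. The cloud profile and instance name below are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")          # entry from clouds.yaml
server = conn.compute.find_server("restored-vm-name")
if server is None:
    raise SystemExit("Restored instance not found")

server = conn.compute.get_server(server.id)        # fetch full details
print(server.name, server.status)                  # expect status ACTIVE
for network, addresses in (server.addresses or {}).items():
    for addr in addresses:
        print(network, addr.get("addr"))           # attached IP addresses
```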

Step 5: Verify and Validate Backup and Restore

1. Verify Backups:

  • Periodically check the backups to ensure they are performed as per the defined schedule.
  • Conduct test restores to validate that backups are not corrupted and are usable.

2. Automate Monitoring:

  • Configure alerts and notifications in Storware to be informed of backup and restore job statuses.
  • Regularly review logs and reports for any anomalies or issues.

Step 6: Maintenance and Best Practices

1. Regular Updates:

  • Keep both OpenStack and Storware Backup and Recovery software updated to the latest versions to ensure compatibility and security.

2. Audit and Compliance:

  • Maintain logs of backup and restore activities for auditing purposes.
  • Ensure compliance with organizational data protection policies and regulatory requirements.

3. Disaster Recovery Planning:

  • Develop a comprehensive disaster recovery plan that includes detailed procedures for backup and restore.
  • Regularly test the disaster recovery plan to ensure readiness in case of an actual disaster.

By following these steps, you can effectively manage the backup and recovery of your OpenStack environment using Storware Backup and Recovery, ensuring data protection and minimizing downtime.

About Storware
Storware is a backup software producer with over 10 years of experience in the backup world. Storware Backup and Recovery is an enterprise-grade, agentless solution that caters to various data environments. It supports virtual machines, containers, storage providers, Microsoft 365, and applications running on-premises or in the cloud. Thanks to its small footprint, it integrates seamlessly with your existing IT infrastructure, storage, and enterprise backup providers.

About Version 2 Digital

Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across areas including cyber security, cloud, data protection, endpoints, infrastructure, system monitoring, storage, networking, business productivity, and communication products.

Through an extensive network of channels, points of sale, resellers, and partner companies, Version 2 offers quality products and services that are highly acclaimed in the market. Its customers cover a wide spectrum that includes Global 1000 enterprises, regional listed companies, various vertical industries, public utilities, government bodies, a vast number of successful SMEs, and consumers across Asian cities.

Autonomous Data Protection

Will robots take over data management? In recent years, backup and disaster recovery system vendors have introduced several significant innovations. But the best is yet to come. 

Modern data protection solutions, encompassing backup, disaster recovery, replication, and deduplication, are constantly evolving. Manufacturers have moved from a stage of manual configuration to automation. However, this is not the end of the road. There is increasing talk about the era of autonomous backup and even autonomous data management. Is this a near future reality, or just a fantasy?

Opinions on this matter are divided. Skeptics cite the example of autonomous cars. Although prototypes have appeared on the streets of San Francisco, the road to their widespread adoption seems to be a long way off. On the other hand, proponents point to robotic vacuum cleaners that are displacing traditional vacuum cleaners from homes. If humans can be eliminated from processes that require high precision, why not do the same in areas closely related to IT?

Automation and autonomy are very similar concepts, sometimes incorrectly used interchangeably. Nevertheless, there are some subtle differences between them. Automation means that the tasks performed are based on pre-defined parameters that must be updated as the situation changes. This is how elevators, office software, washing machines, robotic assembly lines, and most backup and DR systems work.

On the other hand, autonomous processes differ from automated ones in that they are constantly learning and adapting to the environment. In such cases, human intervention is not needed or is minimal. A great example is the aforementioned robotic vacuum cleaners or driverless cars.

The authors of the concept of autonomous data management assume that processes should take place invisibly, although under human control. Autonomy essentially combines automation with artificial intelligence (AI) and machine learning (ML), so that the data protection system intuitively adapts to the situation.

AI and ML technologies enable the automation of data management processes and minimize the need for human intervention and supervision. Proponents of such solutions argue that this increases operational efficiency, extends uptime, improves security, and raises the level of the services offered.

Clouds Force Change

If companies only stored data in on-premises environments, it would be possible to do without autonomous tools, but in the last two years, things have become much more complicated. Enterprises have moved some of their assets to the public cloud, which has contributed to the growing importance of hybrid and multi-cloud environments. It was supposed to be easier and cheaper, but the ongoing adoption of cloud services is causing sleepless nights for many IT managers.

The main problem lies in the excessive dispersion of data, which is located both in the local data center and in external service providers such as Amazon, Google, Microsoft, or smaller local providers. Managing, and especially protecting, digital assets scattered across various locations is a challenge. The situation is worsened by the relatively narrow range of vendors’ tools optimized for managing corporate data for hybrid and multi-cloud environments.

Some products support multiple clouds through centralized control, but they consume a great deal of expensive resources. Other solutions are efficient, but only within a single cloud environment; their main drawback is poor scalability across different providers’ clouds. In both cases, operating costs end up higher than desired.

Another problem is the excessive haste in implementing cloud technologies, leading to an increase in the number of point solutions. Cloud environment architects, application developers, and analysts implement independent data management solutions, which deepens the chaos and limits the possibilities of central management.

The data protection strategy in cloud environments also leaves much to be desired. Security specialists emphasize that in today’s world, the most effective way to stop attackers is through preventive measures. Unfortunately, most current technologies take a passive approach to resources stored in the cloud: in practice, they create backups and only restore them after an attack, which results in unplanned downtime.

Autonomous backup, by contrast, is meant to support operations across multiple clouds, eliminate functional silos, automate processes with minimal human intervention, and increase cyber resilience through active methods of detecting and preventing ransomware attacks.

It has long been known that people are the weakest link in the data protection system. This is particularly evident in environments that require fast and data-driven decision-making. It is also undeniable that people are prone to errors and slower than AI-based solutions, especially when it comes to mundane, repetitive tasks.

So will robots put IT department employees out to pasture in the near future? So far, no one is saying so out loud. According to the authors of the concept of autonomous data management, the best fit for a complex, hybrid and multi-cloud environment is autonomous operation. This means that data will optimize itself, repair itself, and move between different environments on its own. Self-optimization uses artificial intelligence and machine learning to adapt to the policies and services related to data protection and management. Self-healing is the ability to predict, identify, and correct service errors or performance issues.

Self-service, in turn, assigns appropriate protection policies and deploys and manages applications and services without human intervention. What does this mean in practice?

In the traditional model, a developer deploying a new application relies on manual processes, which lengthen the rollout. Autonomous data management eliminates these manual tasks while protecting the application throughout the process, without any additional actions on the part of the application developer or IT staff.

Autonomous Data Management – Is It Worth It?

The concept of autonomous data management looks very promising. Importantly, some backup and DR system vendors are announcing such solutions for the near future, not for years from now. Products are already on the market that use machine learning to detect, at an early stage, anomalies signaling an attempted attack on the backup system. Some companies also use partially AI-based solutions combined with DLP systems, which help classify and tag information so that the most important data is copied and protected first.
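To make the anomaly-detection idea concrete, the sketch below flags a backup run whose volume of changed data deviates sharply from the recent baseline, a pattern often associated with ransomware encrypting files in bulk. The window size and threshold are arbitrary assumptions, not any vendor’s actual model.

```python
# Illustrative anomaly check: flag a backup run whose volume of changed data
# deviates sharply from the recent baseline, a pattern often associated with
# ransomware encrypting files in bulk. The window and threshold are arbitrary
# assumptions, not any vendor's actual model.
from statistics import mean, stdev

def is_anomalous(changed_gb_history, latest_changed_gb,
                 window=14, z_threshold=3.0):
    recent = changed_gb_history[-window:]
    if len(recent) < 2:
        return False                      # not enough history to judge
    mu, sigma = mean(recent), stdev(recent)
    if sigma == 0:
        return latest_changed_gb != mu
    return abs(latest_changed_gb - mu) / sigma > z_threshold

history = [38, 41, 40, 39, 42, 40, 41, 39, 40, 43, 41, 40, 39, 42]
print(is_anomalous(history, 41))    # False: normal daily churn
print(is_anomalous(history, 310))   # True: sudden spike in changed data
```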

However, only the widespread adoption of systems that provide autonomous data management will allow us to answer the fundamental question – is it worth the effort?

Some data protection specialists warn against excessive optimism. In their opinion, the biggest obstacle to the adoption of autonomy in backup and DR processes may be gathering a sufficiently broad range of data to analyze the various scenarios. It is difficult to imagine solution vendors sharing such information with one another.

It is also difficult to count on enthusiasm from IT department employees, who may fear that the new products will deprive them of their jobs. It can also be safely assumed that the term “autonomy” will be overused by marketers, which on the one hand may encourage customer investment, but on the other hand risks disappointing users whose low ratings could deter potential customers. There may also be limitations related to computing power, as well as the cost of such solutions. Nevertheless, these initiatives are worth following closely, especially for large companies and institutions storing data across different environments.

Storware develops towards autonomy

While full autonomy might still be a distant goal, Storware’s focus on AI and automation is a significant step in that direction. These features have the potential to significantly improve efficiency, reduce human error, and enhance overall data protection.

In the near future, Storware will implement a number of improvements that will allow for:

  • Automation: The Backup Assistant and conversational layer aim to automate routine tasks and provide intelligent responses, reducing human intervention.
  • Intelligence: Storebrain’s ability to learn from collective data and provide optimal configurations demonstrates a move towards intelligent decision-making.
  • Proactive Protection: The integration of AI into Isolayer for threat prevention showcases a proactive approach to data management, essential for autonomous systems.

However, key to achieving full autonomy would be further development in areas like:

  • Self-healing capabilities: The system should be able to identify and resolve issues independently.
  • Predictive analytics: Accurate forecasting of system behavior and potential problems.
  • Continuous learning: The system should constantly improve its performance based on new data and insights.


Snapshots and Backups: A Nearly Perfect Duo

Snapshots and backups are both crucial for data protection. However, to maximize their benefits, it’s essential to understand their capabilities.

As data volumes and value continue to grow, data has become an invaluable asset for businesses, governments, consumers, and cyber-criminals alike. Cyber-criminals will stop at nothing to steal information or block legitimate users from accessing it. Fortunately, organizations have various tools and methods to protect their data, including backups and snapshots. While these methods share some similarities, they are often mistakenly seen as interchangeable. This article will delve into the fundamental differences between backups and snapshots and how they can complement each other.

The Indispensability of Backups

Until recently, it was common to say that people were either backing up their data or were planning to do so. However, this saying is no longer accurate. It’s increasingly difficult to find individuals or businesses that don’t perform backups. Backups are typically created on a regular schedule (e.g., nightly or multiple times a day) and can include all files on a server, emails, or databases. By archiving data in backups, users are protected against accidental data loss caused by errors, accidental deletions, or other failures. This is why backups are often referred to as “security copies.”

There are several types of backups. The simplest is a full backup, which creates a complete copy of the data to a destination storage device. Other methods include differential and incremental backups. A differential backup only backs up data that has been added or changed since the last full backup. An incremental backup, on the other hand, uses the previous backup as a reference point rather than the initial full backup.

A full backup is a complete copy of the data. If each backup is 10TB, for example, every run consumes an additional 10TB of storage, so creating a full backup every hour would consume 100TB in just 10 hours. For this reason, keeping many full copies is not common practice; full backups are usually combined with differential or incremental ones.
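A quick worked example of that storage math, assuming a 10 TB data set, hourly backups over 10 hours, and incrementals that each capture about 2% of the data (the change rate is an illustrative assumption):

```python
# Worked example of the storage math above: a 10 TB data set, hourly backups
# for 10 hours, incrementals each capturing about 2% of the data (an
# illustrative assumption).
full_tb = 10.0
runs = 10
change_rate = 0.02

all_fulls = runs * full_tb                                   # ten full copies
full_plus_incrementals = full_tb + (runs - 1) * full_tb * change_rate

print(f"Hourly full backups:       {all_fulls:.1f} TB")               # 100.0 TB
print(f"One full + 9 incrementals: {full_plus_incrementals:.1f} TB")  # 11.8 TB
```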

The Role of RPO

A challenge with backups is achieving a suitable Recovery Point Objective (RPO), which defines the maximum amount of data loss that can be tolerated, expressed as the maximum acceptable time between the last recoverable copy and a failure. (The maximum acceptable time to bring a system back to normal operation is a separate metric, the Recovery Time Objective, or RTO.) Businesses have varying requirements: some may be satisfied with a 24-hour RPO, while others strive for an RPO as close to zero as possible. For example, losing even a small amount of data in a manufacturing company can lead to production line downtime, lost product batches, and significant financial losses.

Some businesses determine their RPO based on the cost of storage compared to the cost of data recovery. These calculations help determine the frequency of backups. Another approach is to assess risk levels. In this case, a company evaluates which data can be lost without significantly impacting the quality and continuity of its business.

Backups are not optimal for creating closely spaced recovery points. Snapshots are much better suited for this purpose, which is why the two technologies should be used together. Snapshots are the preferred solution when strict, near-zero RPO requirements must be met, such as in 24/7 environments like internet service providers.

Snapshots for Specialized Tasks

A snapshot is a point-in-time capture of stored data. Its main advantage is its creation time, which is typically measured in minutes or even seconds. Snapshots are usually created every 30 or 60 minutes and have minimal impact on production processes. They allow for quick recovery to previous file versions at multiple points in time. For example, if a system is infected with a virus, files, folders, or entire volumes can be restored to a state before the attack.

However, snapshots are often a feature of NAS or SAN storage and are stored on that same storage. This means they occupy relatively expensive capacity, and if the storage fails, users lose access to their recent snapshot copies. While individual snapshots do not consume much space, their combined size can grow considerably, leading to additional processing costs during recovery. It is therefore good practice to limit the number of stored copies; experts recommend not keeping snapshots that are older than the most recent full backup.

Furthermore, migrating a snapshot from one physical location to another does not allow for a full environment restoration, which is possible with backups. Since a snapshot is not a complete copy of the data, it should not be treated as the sole backup and should be combined with backups. In summary, backups provide the ability to restore data across long retention periods, often quickly and with fine granularity, down to the file level.

Types of Snapshots

While snapshot creation processes vary by vendor, there are several common techniques and integration methods.

  • Copy-on-write: This method copies any blocks before they are overwritten with new information.
  • Redirect-on-write: Similar to copy-on-write, but it eliminates the need for a double write operation.
  • Continuous Data Protection (CDP): CDP snapshots are created in real-time, capturing every change.
  • Clone/mirror: This is an identical copy of an entire volume.
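To illustrate the first technique, here is a minimal, purely illustrative copy-on-write sketch over an in-memory "volume": before a block is overwritten, its original contents are preserved in the snapshot, so the snapshot grows only with blocks that actually change.

```python
# Minimal, purely illustrative copy-on-write snapshot over an in-memory
# "volume": before a block is overwritten, its original contents are copied
# into the snapshot, so the snapshot grows only with blocks that change.
class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)           # block number -> data
        self.snapshot = None

    def take_snapshot(self):
        self.snapshot = {}                   # empty: shares all current blocks

    def write(self, block_no, data):
        if self.snapshot is not None and block_no not in self.snapshot:
            # copy-on-write: preserve the old block before overwriting it
            self.snapshot[block_no] = self.blocks.get(block_no)
        self.blocks[block_no] = data

    def read_from_snapshot(self, block_no):
        if self.snapshot is not None and block_no in self.snapshot:
            return self.snapshot[block_no]   # preserved original block
        return self.blocks.get(block_no)     # unchanged block, still shared

vol = Volume({0: "boot", 1: "data-v1"})
vol.take_snapshot()
vol.write(1, "data-v2")
print(vol.read_from_snapshot(1))             # -> "data-v1", the pre-snapshot data
```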

Summary

Snapshots and backups have their strengths and weaknesses. Generally, backups are recommended for long-term protection, while snapshots are intended for short-term use and storage. Snapshots are typically useful for restoring the latest version of a server within the same infrastructure.

Both snapshots and file backups can be used together to achieve different levels of data protection, and combining them is in fact the recommended approach for most backup strategies.


Real-world recovery: SaaS data backup as the cornerstone of cyber resilience

In today’s digital landscape, where cloud-based services like Microsoft 365 dominate, cyber resilience has become a top priority for organizations. Businesses are increasingly relying on SaaS (software-as-a-service) platforms, assuming that all their data is secure. However, without the right backup strategy, they may find themselves vulnerable in the face of data loss or a cyberattack.

In this post, I want to share a real-world experience from a client that highlights the importance of SaaS data recovery and how it plays a crucial role in maintaining operational continuity.

A crisis unfolds

One of our partner’s clients, who remains anonymous due to confidentiality agreements, faced a daunting cyberattack that compromised their Microsoft Azure AD (Entra ID). Like many businesses, they believed their data was safe in the cloud, under Microsoft’s protection. What they didn’t realize, however, is that Microsoft’s services are not immune to data loss or breaches, and the responsibility for safeguarding data ultimately lies with the customer. This is part of what’s known as the shared responsibility model, where cloud providers handle infrastructure security, but data protection remains the customer’s responsibility. Read Microsoft’s shared responsibility in the cloud (source: Microsoft).

When the attack occurred, the client was caught off-guard. Administrative accounts were compromised, some user accounts were maliciously deleted, and there were attempts to exfiltrate sensitive data from SharePoint. The customer’s crisis committee immediately launched an investigation, but they quickly ran into a roadblock: Azure AD only retains logs for 30 days, making it impossible for them to perform an in-depth forensic analysis of what had transpired.

The flow of recovery

By using Keepit backup and recovery for Microsoft Entra ID (formerly Azure Active Directory), we were able to act quickly in response to the cyberattack in three broad steps:

  • Downloaded the last 12 months of Entra ID sign-in and audit logs for investigative analysis, using Keepit’s unlimited storage and retention capability.
  • Traced when and what changes happened on the compromised accounts, using Keepit for Entra ID’s metadata previewer feature.
  • Restored the affected user accounts along with their configurations and permissions, without needing to recreate accounts from scratch, using Keepit for Entra ID’s recovery features.

The power of backup

This is where Keepit became a game-changer. By leveraging Keepit’s robust backup capabilities, we were able to provide the customer with access to logs that spanned an entire year. This historical data was critical for the investigation, as it allowed the customer to trace the breach back to its origins, determine the extent of the damage, and understand when the attack had taken place.

But data recovery goes beyond simply accessing logs. The compromised user accounts needed to be restored, along with all their associated settings (metadata) — something that would have been a nightmare without the right backup solution. Keepit’s ability to restore not just user accounts but also their configurations, MFA settings, and group memberships ensured the client could recover quickly without having to start from scratch. If the client had relied on a standard backup solution, the process would have taken significantly longer, jeopardizing their recovery time objective (RTO).

The lesson: Backup is non-negotiable

This experience underscores a key lesson: Having a comprehensive SaaS data backup and recovery strategy is essential to cyber resilience. It’s not just about restoring files but about maintaining business continuity, even when the unexpected happens. The ability to quickly recover from a breach can mean the difference between a short disruption and a prolonged, business-threatening downtime.

For companies operating 100% in the cloud, like our client, backing up identity systems (such as Entra ID) is as crucial as backing up files. When administrative accounts are compromised, and there’s no backup, organizations face the risk of losing more than just data — they risk losing control over their entire cloud environment. Read more on why Microsoft Azure AD needs to be backed up in the cloud.

Cyber resilience starts with recovery

The ease and speed with which we were able to restore the client’s operations, thanks to Keepit, reaffirmed the central role that data recovery plays in cyber resilience. It’s not just about preparing for attacks but also about having the right tools in place to recover from them. This customer, through our guidance, has now included Keepit as a key component in their cyber resilience strategy. They understand that backup is no longer a nice-to-have; it’s a critical aspect of their business continuity planning.

In a world where the question isn’t if an attack will happen, but when, the ability to recover swiftly is a vital need. With Keepit, we were able to help our client turn what could have been a catastrophic breach into a manageable incident, all thanks to a well-implemented SaaS data recovery strategy.

 

This article is part one of a two-part series sharing a real-world customer story on cyber resilience. In part two, we look into how cyber insurance played a critical role in protecting a business from the financial impact of a ransomware attack. Read part two entitled “Real-world recovery: The role of cyber insurance in ransomware resilience.”

About Keepit
At Keepit, we believe in a digital future where all software is delivered as a service. Keepit’s mission is to protect data in the cloud. Keepit is a software company specializing in cloud-to-cloud data backup and recovery. Drawing on more than 20 years of experience in building best-in-class data protection and hosting services, Keepit is pioneering the way to secure and protect cloud data at scale.

