
Communications of the ACM

BLOG@CACM

5 Best Practices for Cloud-Based Backup and Recovery in 2023



Effective backup and recovery strategies are paramount as businesses increasingly rely on digital technologies to store and process data. In this article, I discuss five best practices for implementing a successful cloud-based backup and recovery strategy in 2023.

These practices help technical professionals ensure their organization's data is protected and can be quickly recovered after a breach or system failure, and that the organization is prepared to handle data loss and downtime scenarios.

Understanding Recovery Point Objectives (RPO)

The recovery point objective (RPO) measures how much data loss your organization can afford to suffer in a system failure. It's typically expressed in terms of time, such as "you can afford to lose up to one day's worth of data."

To determine an appropriate RPO, you must consider the importance of your data, the cost of data loss, and the resources required to recover lost data. A business that relies heavily on real-time data and has a high cost of data loss will likely have a lower RPO than a business with less critical data and a lower cost of data loss.

To calculate your RPO, consider the amount of data you generate and how often you need to back it up. Your RPO also helps determine whether your organization should invest only in backup systems or in system redundancy as well.
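As a rough illustration, the Python sketch below translates an acceptable loss per incident into an RPO and a backup interval. The cost figures and safety factor are hypothetical placeholders, not values from the article; it is a simplified model, not a universal formula.

```python
from datetime import timedelta

# Hypothetical inputs -- replace with figures from your own business impact analysis.
cost_per_hour_of_lost_data = 5_000.0     # estimated loss per hour of data lost ($)
acceptable_loss_per_incident = 20_000.0  # maximum loss the business will absorb ($)

# RPO: the longest window of data the business can afford to lose.
rpo = timedelta(hours=acceptable_loss_per_incident / cost_per_hour_of_lost_data)

# Back up at least as often as the RPO; a safety factor leaves headroom for slow or failed jobs.
safety_factor = 2
backup_interval = rpo / safety_factor

print(f"RPO: {rpo}, back up at least every {backup_interval}")
# A very small RPO (minutes rather than hours) usually means backups alone are not
# enough and the organization should also invest in redundant, replicated systems.
```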

Redundancy During Backup & Recovery

Redundancy is crucial in protecting your organization against disasters or data breaches. In the event of an incident such as fraud, phone hacking, or a natural disaster, redundant systems allow your organization to continue functioning.

For instance, a hacker may gain access to sensitive company data if an employee falls victim to a phishing attack and gives away login credentials. In this case, redundant data systems enable you to continue offering your services despite a data location being compromised. Similarly, if a natural disaster hits your office and your data is only stored on local devices, your whole operation will come to a halt. Having redundant data systems in various locations will help you recover your data and get your operations back up and running.

In terms of backup, redundancy refers to creating multiple copies of a data set in different locations or on different devices so that the data is protected against loss or damage.

There are three main types of redundancy: local redundancy, remote redundancy, and cloud redundancy. Local redundancy involves creating multiple copies of data on different devices within the same location. Remote redundancy involves creating copies of data on devices in different locations. Cloud redundancy involves creating copies of data in the cloud, which can be accessed from anywhere with an internet connection.

To recover using redundant copies, you need to identify the most up-to-date dataset and use it to restore data across all locations and devices. An automated backup workflow is highly beneficial here.
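As a minimal sketch of such an automated workflow, the Python example below copies a dataset to several redundant destinations and writes a timestamped manifest alongside each copy, so the most recent copy can be identified at restore time. The destination paths and file names are hypothetical placeholders.

```python
import hashlib
import json
import shutil
import time
from pathlib import Path

# Hypothetical locations: a local copy, a remote-site mount, and a cloud-synced folder.
DESTINATIONS = [Path("/backups/local"), Path("/mnt/remote-site"), Path("/mnt/cloud-sync")]

def backup(dataset: Path) -> None:
    """Copy the dataset to every redundant destination and record a manifest."""
    digest = hashlib.sha256(dataset.read_bytes()).hexdigest()
    manifest = {"source": str(dataset), "sha256": digest, "timestamp": time.time()}
    for dest in DESTINATIONS:
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(dataset, dest / dataset.name)
        (dest / f"{dataset.name}.manifest.json").write_text(json.dumps(manifest))

def newest_copy(name: str) -> Path:
    """At restore time, pick the destination holding the most recent manifest."""
    candidates = [(d, json.loads((d / f"{name}.manifest.json").read_text()))
                  for d in DESTINATIONS if (d / f"{name}.manifest.json").exists()]
    best, _ = max(candidates, key=lambda item: item[1]["timestamp"])
    return best / name

backup(Path("customers.db"))
print("Restore from:", newest_copy("customers.db"))
```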

It is important to note that backup and redundancy are different. Backup involves creating a copy of data for recovery in the event of data loss. In contrast, redundancy involves creating multiple copies of data to reduce downtime and ensure systems can keep running in real-time.

Consider Both Data Loss and Downtime

Data loss and network downtime can significantly impact a business's operations. Data loss can lead to lost revenue, decreased productivity, and a loss of customer trust. Network downtime can also disrupt operations and lead to lost revenue and increased expenses to restore systems and services.

To prevent sensitive data from being lost or stolen and to avoid network downtime, businesses can invest in redundant systems, use cloud-based backup and recovery solutions, and implement strong cybersecurity measures. They can also minimize downtime and data loss during migration by carefully planning and testing the migration process and using tools and strategies to minimize disruptions.

Some strategies for minimizing downtime and data loss during migration include using a phased approach, testing the migration process in a staging environment, and using migration tools that can automatically handle data synchronization and reconciliation. By using these strategies, businesses can ensure that their data is protected and that their operations are not disrupted during the migration process.
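One concrete piece of such a plan is verifying, after each migration phase, that the data in the staging or target environment matches the source before cutting over. The sketch below compares per-file checksums between two hypothetical directories; dedicated migration tools do this at far larger scale, but the principle is the same.

```python
import hashlib
from pathlib import Path

def checksums(root: Path) -> dict[str, str]:
    """Map each file's relative path to its SHA-256 digest."""
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()}

def verify_migration(source: Path, target: Path) -> list[str]:
    """Return the relative paths that are missing or differ in the target."""
    src, dst = checksums(source), checksums(target)
    return [path for path, digest in src.items() if dst.get(path) != digest]

# Hypothetical paths for a single migration phase.
mismatches = verify_migration(Path("/data/production/phase1"), Path("/data/staging/phase1"))
if mismatches:
    print("Do not cut over yet; re-sync:", mismatches)
else:
    print("Phase 1 verified; safe to proceed.")
```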

Use Data Classification to Protect Data and Aid Recovery

Data classification is the process of organizing data based on its importance or sensitivity. A common scheme uses five classification levels: public, internal, confidential, secret, and top secret.

  • Public data is information that can be shared freely with anyone, such as marketing materials or company policies.
  • Internal data is information intended for use within an organization, such as employee personal data or financial reports.
  • Confidential data is sensitive information that should only be shared with authorized individuals, such as social security information, client lists or trade secrets.
  • Secret data is highly sensitive information and should be protected from unauthorized access, such as classified government documents.
  • Top secret data is the most sensitive type of information and requires the highest level of protection, such as military secrets or nuclear codes.

Classifying data in this way helps organizations protect data by ensuring it is only accessed by authorized individuals and stored and transmitted securely. It also helps businesses comply with data protection regulations and industry standards.

In terms of data recovery, data classification can help businesses prioritize their data and determine which data needs to be recovered first in the event of a disaster or system failure. For example, a business that has classified some of its data as confidential or secret may prioritize recovering that data over public or internal data to minimize the impact on its operations and protect sensitive information.

Data classification can also help businesses identify which data needs to be backed up more frequently or stored on redundant systems to ensure that it can be recovered quickly and efficiently.
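As a minimal sketch of how classification can drive recovery priorities, the Python example below assigns a classification level to each dataset in a hypothetical inventory and orders the restore queue from most to least sensitive, in line with the prioritization described above.

```python
from enum import IntEnum

class Classification(IntEnum):
    # Higher value = more sensitive = restored (and backed up) first.
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    SECRET = 4
    TOP_SECRET = 5

# Hypothetical inventory of datasets and their classifications.
DATASETS = {
    "marketing-site-assets": Classification.PUBLIC,
    "employee-records": Classification.INTERNAL,
    "client-list": Classification.CONFIDENTIAL,
    "contract-archive": Classification.SECRET,
}

def recovery_order(datasets: dict[str, Classification]) -> list[str]:
    """Order datasets so the most sensitive data is recovered first."""
    return sorted(datasets, key=lambda name: datasets[name], reverse=True)

print(recovery_order(DATASETS))
# ['contract-archive', 'client-list', 'employee-records', 'marketing-site-assets']
```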

Consider Using a Recovery Cloud

A recovery cloud is a cloud-based data storage and recovery solution that provides businesses with a secure and scalable way to back up and recover data. There are several types of recovery clouds, including private recovery clouds, hybrid recovery clouds, and public recovery clouds.

  • Private recovery clouds are owned and managed by a business and can store and recover data within the organization.
  • Hybrid recovery clouds combine private and public clouds to allow businesses to store and recover data in different locations.
  • Public recovery clouds are owned and managed by a third party and can be used by multiple businesses to store and recover data.

Recovery clouds provide an additional line of protection by continuously backing up data, which can then be accessed and recovered when necessary. They offer several benefits, including offsite storage, scalability, and the ability to recover data quickly and efficiently.
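As a simple illustration of the continuous-backup idea, the sketch below periodically uploads changed files to an object-storage bucket using boto3 (the AWS SDK for Python), which is one way to use a public recovery cloud. The bucket name, directory, and interval are hypothetical; a production recovery cloud would add encryption, retention policies, and monitoring.

```python
import time
from pathlib import Path

import boto3  # AWS SDK for Python; any S3-compatible recovery cloud works similarly.

BUCKET = "example-recovery-bucket"   # hypothetical bucket name
WATCH_DIR = Path("/data/critical")   # hypothetical directory to protect
INTERVAL_SECONDS = 300               # how often to look for changes

s3 = boto3.client("s3")
last_seen: dict[str, float] = {}     # relative path -> last uploaded modification time

while True:
    for path in WATCH_DIR.rglob("*"):
        if not path.is_file():
            continue
        mtime = path.stat().st_mtime
        key = str(path.relative_to(WATCH_DIR))
        # Upload only files that are new or have changed since the last pass.
        if last_seen.get(key) != mtime:
            s3.upload_file(str(path), BUCKET, key)
            last_seen[key] = mtime
    time.sleep(INTERVAL_SECONDS)
```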

Conclusion

After considering data backup and recovery best practices, it is clear that cloud-based systems are essential for businesses looking to protect their data and minimize downtime in the event of a disaster or system failure.

The practices I've discussed here are crucial for businesses of all sizes and industries, as the importance of digital technologies continues to grow and data becomes an increasingly valuable asset.

 

Alex Tray is a system administrator and cybersecurity consultant. He is currently self-employed as a cybersecurity consultant and as a freelance writer for NAKIVO Backup and Replication.
