Digital Transformation projects now abound within enterprise IT. Depending on the vertical industry segment we see IoT, Customer 360, Industry 4.0 and Insurtech, to say nothing of the multi-cloud architectures cropping up across the IT landscape. As operations teams now ease these systems into production, a critical question looms: Can they be recovered from an unexpected failure with minimal disruption and data loss?
When new Digital Transformation systems are built and tested, it is common to see them added to an existing data protection and disaster recovery infrastructure. Why create another one when we already have the capability that’s been functioning for decades?
Recovery from disasters, however minor, remains one of the most basic requirements of enterprise IT. Data protection technology continues to evolve: from recoveries using backup tapes, to disk-based purpose-built backup appliances (PBBAs), and now to the latest advances using storage array-based software solutions, a trend called "self-protecting" storage. However, it is now incumbent on IT operations to not only keep up with but anticipate the recovery of these new systems as they become increasingly critical to the future of the organization. In developing a strategy that encompasses them, enterprise IT should consider:
- The source, owner, application and provenance of the data.
- Long-standing business and organizational requirements for disaster recovery and recovery from failures, including the most common one: human error.
- Security and compliance issues that these new systems may introduce to the IT environment.
Because enterprise IT rarely if ever builds data protection and recovery systems from scratch using open source software and commodity hardware, vendor selection will be a critical phase in the effort to modernize. Matching these applications with the self-protecting storage systems now available can allow IT to meet performance demands while delivering nearly immediate recoverability. New software that automates the management of data protection and recovery operations across different applications, storage systems, and target protection devices (including public and private clouds) is now part of an advanced data protection strategy. Additional capabilities, such as managing data retention, identifying unprotected data, reporting the real-time status of protection operations, and integrating with copy data management, further add value to the overall data protection environment.
Because there are multiple elements to an overall data protection and disaster recovery strategy, a vendor with a portfolio of integrated solutions often fares better in an evaluation than one offering a collection of independent "parts" that puts an additional integration and support burden on administrators. Vendors in this category include CommVault, Dell EMC, IBM and Veritas. But there are also solutions, such as Datos IO, that support multiple application sets running on the distributed architectures (Cassandra, HDFS, MongoDB) now common to Digital Transformation initiatives. Either way, the vendor's support infrastructure is critical and should be viable for the long term.
Getting the right protection and recovery strategy for Digital Transformation early on is essential. Administrators of NoSQL databases such as Cassandra and MongoDB, as well as distributed file systems like HDFS, commonly believe that maintaining three copies of data is protection enough, until a data corruption event occurs and all three copies are corrupted. If deemed valuable to business objectives, these systems will likely be long-lived in enterprise IT. Their processes and procedures will become ingrained and more difficult to change over time. Enterprise production-grade data protection and recoverability is therefore a must for Digital Transformation.
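The point about three-way replication is worth making concrete: replication faithfully propagates every write, including a corrupting one, so only an independent point-in-time copy permits rollback. A minimal Python sketch, purely illustrative (the dict-based replicas and snapshot stand in for real cluster state and are not any vendor's API):

```python
# Three replicas of the same record, mimicking replication factor 3.
replicas = [{"balance": 100} for _ in range(3)]

# An independent point-in-time copy, taken before the bad write.
snapshot = dict(replicas[0])

def write(value):
    # Every write fans out to all replicas -- good and bad alike.
    for r in replicas:
        r["balance"] = value

# A corrupting write (software bug or human error) hits all three copies.
write(-999)
assert all(r["balance"] == -999 for r in replicas)  # replication offers no rollback

# Only restoring from the snapshot recovers the pre-corruption state.
replicas = [dict(snapshot) for _ in range(3)]
assert all(r["balance"] == 100 for r in replicas)
```

Real systems make the same distinction: Cassandra's replication factor governs availability, while separate mechanisms such as snapshots or external backups provide recoverability.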