Resilient application architectures have evolved dramatically over the years. In the age of monolithic applications, with static application deployments in large datacenter setups, resiliency required depth and redundancy in individual deployments. It needed always-on scale to meet the maximum expected workload, along with redundant connectivity and power.
Within a monolithic application environment, individual components, such as servers, were expected to fail, so organizations built deployments with component-level redundancy: multiple database servers in a primary/secondary configuration, for example, or multiple application servers in an active/active configuration.
Over time, as resiliency demands grew, disaster recovery (DR) setups became more prevalent. These kept a full copy of the application infrastructure on standby, with data replicated regularly or continuously. In the event of a major failure, the IT team would flip the “big switch” to shift the workload to the DR deployment.
The emergence of cloud and IaaS has dramatically changed the way we think about application resiliency. Thin provisioning and auto-scaling for rapid deployment of new resources are now possible as conditions change and workloads shift. Spinning up secondary and tertiary DR environments is easy. There are now technologies that enable active/active setups, such as multi-master database replication systems and global load balancing technologies like those provided by modern DNS and traffic management services.
Today, we’re seeing a new shift in the way resilient applications are built, because of the emerging criticality of cloud services in application stacks. Cloud services include Software-as-a-Service (SaaS)-style technologies like cloud storage, Database-as-a-Service (DBaaS), Artificial Intelligence-as-a-Service (AIaaS), content delivery networks (CDNs) and Managed DNS networks. Increasingly, today’s applications are built in architectures where cloud services are critical path components.
What happens when a key cloud service in your stack fails? The February 2017 AWS S3 outage in the US-East-1 region provides a real-world example: it took down or degraded many major websites and applications that depended on S3 for cloud storage.
Just as redundancy was introduced in the days of monolithic application architectures, today’s cloud service-enabled applications demand redundancy at the cloud service layers of the stack:
- Primary/secondary cloud storage providers
- Multiple DNS networks
Cloud service redundancy is critical to building resilient architectures for today’s applications, where SaaS technologies and cloud services are critical components that nevertheless can and do fail.
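The primary/secondary storage pattern from the list above can be sketched in a few lines. This is a minimal, hypothetical illustration, not a production client: the `primary_put` and `secondary_put` callables stand in for real cloud SDK upload functions (for example, wrappers around two different providers' storage APIs), and the in-memory dictionaries used below are stand-ins for the providers themselves.

```python
def put_with_fallback(key, data, primary_put, secondary_put):
    """Write to the primary storage provider; if that fails for any
    reason, fall back to the secondary provider. Returns which
    provider served the write, so callers can log or alert on it."""
    try:
        primary_put(key, data)
        return "primary"
    except Exception:
        # In a real system you would narrow this to the SDK's
        # transport/service errors and record the failure.
        secondary_put(key, data)
        return "secondary"

# Usage with in-memory stand-ins for two storage providers:
primary_store = {}
secondary_store = {}

def broken_put(key, data):
    # Simulates an outage at the primary provider.
    raise ConnectionError("primary storage unavailable")

put_with_fallback("report.csv", b"...",
                  lambda k, d: primary_store.__setitem__(k, d),
                  lambda k, d: secondary_store.__setitem__(k, d))  # → "primary"
```

The same shape works for reads, with the added wrinkle that replication lag between providers may mean the secondary serves slightly stale data.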
Introducing cloud service redundancy starts with managing application workload: how do you direct work across multiple cloud service providers? DNS is one of the most powerful tools in the stack for managing workload. You can leverage the traffic management tools of modern DNS providers to weight traffic across cloud services, shift workload in response to real-time conditions and fail away from broken cloud service providers.
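The weighting-and-failover behavior described above can be sketched as a small routing function. This is an illustrative model of what a DNS traffic-management layer does when answering queries, not any particular provider's API; the endpoint names and weights are hypothetical.

```python
import random

def pick_endpoint(endpoints, healthy):
    """Choose an endpoint by weight among those currently healthy,
    mimicking how a DNS traffic-management layer weights answers
    across cloud services and fails away from broken providers.

    endpoints: dict of name -> positive weight
    healthy:   dict of name -> bool (e.g. fed by health checks)
    """
    candidates = [(name, w) for name, w in endpoints.items()
                  if healthy.get(name)]
    if not candidates:
        raise RuntimeError("no healthy endpoints to route to")
    # Standard weighted random selection over the healthy subset.
    total = sum(w for _, w in candidates)
    r = random.uniform(0, total)
    for name, w in candidates:
        r -= w
        if r <= 0:
            return name
    return candidates[-1][0]

# Hypothetical setup: 75% of traffic to provider A, 25% to provider B.
weights = {"storage-a": 3, "storage-b": 1}
health = {"storage-a": True, "storage-b": True}
```

When a health check marks `storage-a` unhealthy, every answer shifts to `storage-b` automatically, which is exactly the "fail away" behavior the traffic-management tooling provides.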
Of course, it’s also critical to introduce DNS redundancy to mitigate the impact of major service provider outages due to attacks or other issues. Some modern DNS providers can help you easily introduce DNS redundancy by deploying multiple DNS networks.
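At the zone level, running multiple DNS networks can be as simple as delegating your domain to nameservers operated by two independent providers. The fragment below is a hypothetical zone delegation; the provider hostnames are illustrative placeholders, and `example.com` / `.example` are reserved names.

```text
; Delegate example.com to nameservers at two independent DNS providers,
; so an outage at either provider leaves the zone resolvable.
example.com.    86400  IN  NS  ns1.dns-provider-a.example.
example.com.    86400  IN  NS  ns2.dns-provider-a.example.
example.com.    86400  IN  NS  ns1.dns-provider-b.example.
example.com.    86400  IN  NS  ns2.dns-provider-b.example.
```

Resolvers will spread queries across all listed nameservers and retry against the others when one set stops answering; the operational work is keeping zone data synchronized between the two providers.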
Thinking about how to improve the resiliency of your application architectures? Look at your cloud service providers and how to introduce redundancy at the cloud services layer, and talk with your DNS and traffic management provider about how to manage multi-cloud, multi-CDN and other redundant service setups.