
Tyler Keenan

Edge Computing Takes the Cloud Back to the Future

Cloud computing has been one of the driving forces behind many of the last decade’s biggest advances, allowing businesses of any size to tap into the kind of goliath computing power that was previously available only to a few of the biggest companies in the world. The near-instant availability of practically unlimited computing and storage resources has changed nearly every industry and reshaped our lives in both obvious and subtle ways.

But as often happens, the very technologies that cloud computing helped enable are now showing signs of outgrowing it. To power the next wave of technological change and growth, cloud computing will have to change and grow as well. One of the most promising advances is called “edge computing,” a cloud computing trend that moves some of the cloud’s resources and capabilities closer to the places they’ll actually be used.

The Cloud and Its Discontents

But wait, what’s wrong with the cloud? After all, storage and processing costs keep falling while capacity keeps growing. So what’s the problem, exactly? According to Werner Vogels, Amazon’s CTO, there are three problems that illustrate the need for edge computing:

The first is plain physics, which Vogels calls, appropriately enough, “the law of physics.” No matter how quickly the cloud can process a request, it still takes time for the request to reach the data center and for the response to get back to the client. Most of the time, this delay won’t matter much. But for some applications – self-driving cars, for example – delays of even a few dozen milliseconds can have huge consequences.

As Wired notes, it can take around 100 milliseconds for that data to travel from the car’s navigation system to a distant data center. That might be fine for a human driver who just needs to know about an upcoming turn, but an autonomous vehicle will need to respond far more quickly in an emergency. And the more internet-connected devices proliferate, the more pressure they will put on our cloud computing infrastructure.

The second problem has to do with volumes of data, what Vogels calls “the law of economics.” As organizations collect data in ever greater volumes and varieties, prioritizing it for processing becomes a huge challenge, because our ability to collect data currently outstrips our bandwidth to process it. Simply put, most of the data we collect isn’t that important, but some of it is extremely important. Using the public cloud to sort through it all consumes processing power that could be going toward other tasks.
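To make that concrete, here is a minimal sketch, in Python, of the kind of triage an edge device might perform: discarding routine readings locally and forwarding only the ones worth the cloud’s attention. The thresholds, field names, and the forward_to_cloud() stub are illustrative assumptions, not part of any particular product.

```python
import json
import random
import time

TEMP_ALERT_C = 85.0        # hypothetical alert threshold
VIBRATION_ALERT_G = 2.5    # hypothetical alert threshold

def read_sensor():
    # Stand-in for reading a real sensor attached to the device.
    return {
        "ts": time.time(),
        "temp_c": random.uniform(20.0, 100.0),
        "vibration_g": random.uniform(0.0, 4.0),
    }

def is_important(reading):
    # Keep only readings that cross an alert threshold.
    return (reading["temp_c"] > TEMP_ALERT_C
            or reading["vibration_g"] > VIBRATION_ALERT_G)

def forward_to_cloud(reading):
    # Placeholder for the real upload (HTTPS, MQTT, an IoT SDK, etc.).
    print("forwarding:", json.dumps(reading))

if __name__ == "__main__":
    for _ in range(20):
        reading = read_sensor()
        if is_important(reading):
            # Only a fraction of readings ever leave the edge.
            forward_to_cloud(reading)
        time.sleep(0.05)
```

Even a crude filter like this keeps the bulk of low-value telemetry off the network, reserving cloud capacity for the data that actually matters.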

The final problem has to do with regulatory regimes, which Vogels calls “the law of the land.” This applies mainly to sensitive data, like financial information and medical records, that may be subject to rules limiting how it can be stored or transmitted. As a result, hugely valuable insights can remain trapped inside data that can’t legally be processed in the public cloud.

Giving Cloud Computing an Edge

How does edge computing work, exactly? A central fact of the cloud computing paradigm is that processing power often lives far away from the people who need it. This lets infrastructure providers build their data centers wherever it’s most economical, which helps keep costs low for everyone. For most organizations it doesn’t matter: a New York-based company can store and process its data in South Carolina or Ohio without anyone noticing.

But for other applications, even a little latency can be too much. That’s why an increasing number of startups and established IaaS players are taking a different approach: they’re moving the computing, storage, and networking resources closer to where they’re needed.

For those who know the history of computing, this can seem like a step backward. In fact, in many ways it resembles the old client-server model of computing that existed before the growth of cloud computing centralized everything. But many, including at least one partner at Andreessen Horowitz, believe the profusion of mobile and IoT-connected devices will require a new distributed processing paradigm to achieve the speed and responsiveness we’ve come to expect from our devices.

Public vs. Private Edge

As with “the cloud,” “the edge” is a general description that encompasses a number of different technologies. What all edge computing systems have in common is that they position some computing, storage, and networking resources closer to where they’ll actually be used and away from the massive centralized data centers that form the backbone of the cloud.

Edge computing comes in public and private flavors. Public edge computing, more commonly called the “cloud edge” or “network edge,” extends the public cloud to bring it closer to where the demand is. In principle it’s not so different from a distribution network that uses both central and regional warehouses to move goods to consumers. After all, delivery times are just another name for latency.

In its most basic form, the cloud edge consists of a number of smaller data centers positioned near major cities or other sites that need extra computing power. Even relatively small increases in local computing power and proximity can yield considerable gains in speed. How much of a difference? The edge computing startup Packet claims its micro data centers have achieved a roughly tenfold improvement in response times relative to conventional cloud services, bringing them from around 100 milliseconds down to around 10.
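You can get a rough feel for those numbers yourself by timing a small request against a nearby endpoint versus a distant one. The sketch below uses only the Python standard library; the hostnames are placeholders rather than real edge or cloud services.

```python
import time
import urllib.request

# Hypothetical endpoints: swap in hosts you actually control or use.
ENDPOINTS = {
    "nearby-edge": "https://edge.example.com/health",
    "distant-region": "https://far-region.example.com/health",
}

def measure_round_trip(url, attempts=5):
    # Average wall-clock time for a small GET request.
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=5) as resp:
            resp.read()
        total += time.perf_counter() - start
    return total / attempts

for name, url in ENDPOINTS.items():
    try:
        print(f"{name}: {measure_round_trip(url) * 1000:.1f} ms")
    except OSError as err:
        print(f"{name}: unreachable ({err})")
```

Note that this times the full HTTPS round trip, including connection setup and server processing, so it overstates raw network latency; but the relative gap between a nearby endpoint and a faraway one is exactly what edge providers are trying to close.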

There’s another advantage to cloud edge systems that has less to do with performance than with security. We’ve looked at how hackers can hijack unsecured IoT devices to launch massive DDoS attacks. These botnets have successfully brought down major pieces of internet infrastructure, but cloud edge services can act as a buffer against such high-volume attacks, absorbing malicious traffic and spreading it across their networks of data centers. As part of its network edge service, Cloudflare offers DDoS mitigation through its global network of data centers.

The private model of edge computing, which you can call “device edge,” is more suited to industrial IoT settings. Where the cloud edge generally provides additional processing power nearer to the site of use, the device edge typically brings a subset of services associated with IaaS providers closer to the devices that produce the data.

In this model, the additional computing power often takes the form of extra hardware installed near where it will be used, running software that can handle a number of typical IaaS tasks, like messaging and analytics. For instance, a robotic assembly line with a number of IoT-connected sensors might route its data through a Raspberry Pi that acts as a gateway to AWS or Microsoft Azure.
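As an illustration, here is a minimal sketch of what such a gateway might do, assuming a hypothetical HTTPS ingestion endpoint and stubbed-out sensor reads. A real deployment would typically use the cloud provider’s own IoT SDK and authentication rather than a plain POST.

```python
import json
import statistics
import time
import urllib.request

CLOUD_ENDPOINT = "https://iot.example.com/ingest"   # hypothetical ingestion URL

def read_line_sensors():
    # Stand-in for polling the assembly line's sensors over the local network.
    return [{"sensor_id": i, "temp_c": 40.0 + i} for i in range(8)]

def summarize(window):
    # Local analytics: reduce many raw readings to one compact summary.
    temps = [r["temp_c"] for sample in window for r in sample]
    return {
        "ts": time.time(),
        "count": len(temps),
        "mean_temp_c": statistics.mean(temps),
        "max_temp_c": max(temps),
    }

def upload(summary):
    # Ship only the summary upstream; raw readings never leave the site.
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(summary).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    window = [read_line_sensors() for _ in range(60)]   # roughly a minute of samples
    print(upload(summarize(window)))
```

The point is the division of labor: raw readings stay on the factory floor, while only a compact summary travels to the cloud for longer-term storage and analytics.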

Want to Learn More?

If you’re looking to move all or part of your operation to the cloud, you may want to consult with an expert in AWS or Google Cloud Platform. Or if you’re looking for ways to secure your IoT devices and mitigate damage from DDoS attacks, talk to a cloud security specialist.


This article was written by Tyler Keenan from Business2Community and was legally licensed through the NewsCred publisher network. Please direct all licensing questions to legal@newscred.com.