With the industry experiencing massive adoption of the public cloud, it is clear that enterprises have crossed the cloud chasm. Some of the most skeptical CXOs from the financial and public sector domains are now convinced of the value of the cloud. With cloud computing becoming mainstream, what’s next for enterprise infrastructure?
The next big thing for enterprise IT comes in the form of edge computing – a paradigm where compute moves closer to the source of data. Edge computing addresses many of the challenges enterprise IT faces when running data-centric workloads in the cloud: it reduces the amount of data that flows back and forth between the datacenter and the public cloud, cuts the latency involved in dealing with public cloud platforms, and enables IT to retain sensitive data on-premises while still taking advantage of the elasticity offered by the public cloud.
There is a misconception that edge computing is designed only for the Internet of Things. In fact, edge computing is set to become the preferred architecture for running data-driven, intelligent applications. Though it is ideal for IoT solutions, it also offers tremendous value for departmental and traditional line-of-business applications.
The edge computing layer will run closer to the data sources. Each unit of edge computing will have its own set of resources in the form of compute, storage, and networking, and each will be configured for a particular function that becomes the primary job of the device. Edge computing devices will be primarily responsible for handling network switching, routing, load balancing, security, and audit trails, while also doubling up to run data processing pipelines.
The cluster of edge computing devices becomes the local ingestion point for data originating from a variety of sources. Each data point will be analyzed by a complex event processing engine that decides the path it takes. Based on predefined policies and rules, a data point may be processed locally or sent to the public cloud for further processing. The “hot data,” which is critical to the operations of the local infrastructure, will be analyzed, stored, and processed immediately by the edge computing layer. The “cold data,” which contributes to long-term analytics based on historical trends, moves to the public cloud for batch processing.
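The hot/cold split described above can be sketched as a simple rules-based router. This is a minimal, illustrative example, not any specific product’s API; the temperature field, the threshold, and the queue structure are all assumptions made for the sake of the sketch.

```python
# Hypothetical edge-side router: classify each reading as "hot"
# (processed locally, right away) or "cold" (queued for batch upload
# to the public cloud). The rule below is an illustrative assumption.

HOT_TEMP_THRESHOLD = 75.0  # hypothetical policy: readings above this need immediate action

def route_event(event, local_queue, cloud_queue):
    """Apply a predefined rule to decide where an event is processed."""
    if event.get("temperature", 0.0) > HOT_TEMP_THRESHOLD:
        local_queue.append(event)   # hot path: analyze on the edge now
    else:
        cloud_queue.append(event)   # cold path: ship to the cloud for batch analytics

local, cloud = [], []
for reading in [{"temperature": 80.2}, {"temperature": 21.5}, {"temperature": 76.0}]:
    route_event(reading, local, cloud)

print(len(local), len(cloud))  # 2 hot readings, 1 cold reading
```

In a real deployment the rules would live in a complex event processing engine and the cold queue would feed an upload pipeline, but the routing decision itself is this simple: policy in, path out.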
The future applications built for edge computing will be based on a three-tier architecture, which is fundamentally different from the three-tier architecture of the ’90s. During the transition from client/server to distributed computing architectures, Microsoft, Sun, IBM, and Oracle advocated a pattern where the user interface, business logic, and databases ran in separate layers. This was the traditional three-tier architecture that many J2EE architects are familiar with. But the emerging three-tier architecture for future applications will bear no resemblance to the design patterns of the past. It’s a brand-new paradigm structured around cutting-edge technologies based on cloud, machine learning, and fast data.
The emerging three-tier architecture will consist of the following logical layers:
Data Sources – Computing is becoming increasingly data-driven. From TVs and smartphones to industrial equipment and CRM, SCM, and ERP systems, everything is a source of data. With compute and storage becoming affordable, it is easier to acquire and store data from a variety of sources. Combining and correlating these datasets helps us unlock new insights. The data source tier includes anything that can generate data, including machine logs, click streams, social media feeds, RDBMS, unstructured data, and structured data. In the new three-tier architecture, the source of data becomes the first tier.
Intelligence Layer – Machine Learning (ML) is becoming an integral part of the user experience. Microsoft, Google, Amazon, and IBM are working towards embedding ML in mobile phones, applications, platforms, and the cloud. In the contemporary three-tier architecture, machine learning will span both the edge computing layer and the cloud computing platform to deliver intelligence. Data scientists will exploit the power of the cloud to create ML models, which require access to the raw computing power available in the public cloud. The innovations around GPUs, FPGAs, and custom chips will make it feasible to create trained ML models based on large datasets and complex algorithms. These models, which are tested in the public cloud, will be moved to the edge location to deal with real-time datasets. Whenever a new model needs to be created, or an existing ML model requires optimization, the workload goes back to the public cloud. Thus, the public cloud will tackle the heavy lifting while the edge computing layer deals with the production datasets. This intelligence layer, which cuts across the edge layer and the public cloud, is the second tier of our architecture.
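The train-in-cloud / infer-at-edge split can be illustrated with a deliberately trivial model: fit it where compute is plentiful, serialize the artifact, and ship the frozen bytes to the edge for real-time scoring. Everything here (the least-squares toy model, the function names, the use of `pickle` as the serialization format) is an assumption for illustration; real deployments use trained networks and formats such as ONNX or TensorFlow SavedModel.

```python
# Toy sketch of the model lifecycle: heavy training in the "cloud",
# lightweight scoring at the "edge" using the serialized artifact.
import pickle

def train_in_cloud(samples):
    """'Cloud' step: fit y = w*x by least squares on (x, y) pairs."""
    sx2 = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    return {"w": sxy / sx2}  # the trained "model"

def infer_at_edge(model_bytes, x):
    """'Edge' step: load the shipped model and score a live reading."""
    model = pickle.loads(model_bytes)
    return model["w"] * x

# The cloud does the heavy lifting once, then the artifact travels to the edge.
model_bytes = pickle.dumps(train_in_cloud([(1, 2), (2, 4), (3, 6)]))
print(infer_at_edge(model_bytes, 10))  # prints 20.0
```

When the model drifts or needs retraining, only the training step returns to the cloud; the edge keeps serving the last shipped artifact in the meantime.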
Operational and Actionable Insights – This layer is responsible for taking actions based on the intelligence offered by the previous tier. Enterprise decision makers will be able to gain accurate insights based on the analysis delivered by the intelligence layer, accelerating the decision-making process for CXOs and executives. This tier can also be authorized to act on behalf of the user. For example, when a particular condition is evaluated as true by the rules engine, a component in this tier can control the machinery or equipment. In short, this is where users will access rich dashboards with KPIs.
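The act-on-behalf-of-the-user pattern in this tier amounts to pairing conditions with actions. Below is a minimal sketch of such a rules engine; the rule name, the pressure field, and the `shut_down_pump` actuator callback are hypothetical stand-ins for whatever the real equipment interface would be.

```python
# Minimal sketch of the action tier: when a rule's condition evaluates
# to true, the paired action runs automatically on the user's behalf.

def make_rule(name, condition, action):
    return {"name": name, "condition": condition, "action": action}

def evaluate(rules, reading, log):
    """Run every rule whose condition matches the incoming reading."""
    for rule in rules:
        if rule["condition"](reading):
            rule["action"](reading, log)  # act without waiting for a human

def shut_down_pump(reading, log):
    # Hypothetical actuator; a real system would call equipment APIs here.
    log.append(f"pump stopped at pressure {reading['pressure']}")

rules = [make_rule("overpressure", lambda r: r["pressure"] > 120, shut_down_pump)]
log = []
evaluate(rules, {"pressure": 135}, log)  # triggers the action
evaluate(rules, {"pressure": 90}, log)   # no rule fires
print(log)  # ['pump stopped at pressure 135']
```

The same evaluation loop that drives automated actions can also feed the dashboards, since every fired rule leaves an auditable record.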
The affordability of compute and storage combined with the rise of machine learning will drive the adoption of edge computing. It’s not just IoT; even traditional business applications will start to take advantage of this architecture.
This article was written by Janakiram Msv from Forbes and was legally licensed through the NewsCred publisher network.