Considerations for Orchestrating Geo-Distributed Edge Applications
This blog is the first in a series that addresses the challenges of orchestrating geo-distributed edge applications and integrating them with the public cloud.
Executive Summary
The rapid evolution of cloud computing and the emergence of 5G technology have increased the need for effective orchestration of geo-distributed edge applications. This blog post highlights the challenges and use cases associated with integrating these applications with public cloud infrastructure.
Key catalysts for standardizing edge infrastructure include latency, context/proximity, privacy/legal considerations, and bandwidth optimization.
The article emphasizes the importance of developing standardized APIs for seamless integration with public cloud providers (hyperscalers) and addresses potential challenges, including public cloud management interfaces, API management, and security guidelines. By overcoming these challenges, industry bodies can promote the development of competitive offerings for consumers, enterprises, and various industry end-user segments, ultimately paving the way for more efficient and dynamic edge application ecosystems.
Introduction
Telco CSPs have started realizing the benefits of cloud computing and the faster time to market enabled by transforming their applications for the cloud. There is now a renewed push from customers to run computation-intensive tasks closer to the devices where data is being generated.
The 5G era is revolutionizing various industries by enabling the rapid progression of emerging technologies such as augmented reality, autonomous vehicles, drones, and smart cities, addressing sub-millisecond latencies by moving computation closer to devices and users. A whole ecosystem is being built around multi-site deployment of interconnected network functions across on-demand distributed clouds, providing API-driven access to the edge.
To standardize composite applications across geographical locations based on deployment intent, there are four common catalysts, described below:
- Latency — The demand for new low-latency application use cases, such as AR/VR, necessitates ultra-low latency responses.
- Context/Proximity — Executing portions of the application on edge servers close to the user is crucial when local context is required, e.g., in gaming.
- Privacy/Legal — Certain data might be subject to legal obligations mandating that it remain within a specific geographical area, e.g., PCI-DSS compliance.
- Bandwidth — Processing data at the edge eliminates the expense of transferring data to cloud-based systems for processing.
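As an illustration, the catalysts above can be folded into a machine-readable deployment intent that a placement engine evaluates against candidate edge sites. The following is a minimal sketch, assuming a hypothetical `EdgeSite` model and illustrative thresholds; none of these names or numbers come from a standard.

```python
# Illustrative sketch: the latency, privacy/legal, and bandwidth catalysts
# expressed as constraints in a deployment intent. All names and thresholds
# are hypothetical assumptions for the example.

from dataclasses import dataclass

@dataclass
class EdgeSite:
    name: str
    region: str                # geographic region, e.g. "eu-west"
    latency_ms: float          # round-trip latency from the user
    egress_cost_per_gb: float  # cost of shipping data out of the site

# Deployment intent derived from the catalysts above.
intent = {
    "max_latency_ms": 10,         # Latency: ultra-low latency budget
    "required_region": "eu-west", # Privacy/Legal: data must stay in-region
    "max_egress_cost": 0.05,      # Bandwidth: cap data-transfer cost
}

def eligible_sites(sites, intent):
    """Return the sites that satisfy every constraint in the intent."""
    return [
        s for s in sites
        if s.latency_ms <= intent["max_latency_ms"]
        and s.region == intent["required_region"]
        and s.egress_cost_per_gb <= intent["max_egress_cost"]
    ]

sites = [
    EdgeSite("edge-paris", "eu-west", 6.0, 0.02),
    EdgeSite("edge-virginia", "us-east", 80.0, 0.01),
    EdgeSite("edge-dublin", "eu-west", 18.0, 0.02),
]

print([s.name for s in eligible_sites(sites, intent)])  # ['edge-paris']
```

A real placement engine would also weigh context/proximity signals and available capacity, but the same constraint-filtering idea applies.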
To support the above catalysts, a fully integrated edge solution is needed. This requires work in the following areas:
- Faster Edge Innovation — Faster innovation through incorporating hardware acceleration, software-defined networking, and other emerging capabilities into a modern Edge stack.
- End-to-End Ecosystem — Definition and certification of H/W stacks, configurations, and Edge CNFs.
- User Experience — Address both operational and user use-cases.
- End-to-End Stack — An end-to-end integrated solution with demonstrable use cases.
- Complete Life Cycle Management — LCM of composite applications on multiple Kubernetes clusters across the central cloud and the edge, with structured automation around Day 0/1/2…n operations.
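To make the Day 0/1/2 idea concrete, here is a minimal sketch of structured lifecycle automation driving the same ordered operations across a central cloud and several edge clusters. The phase contents are illustrative assumptions, not a prescribed catalog.

```python
# Hypothetical sketch of Day-0/1/2 automation for a composite application
# spread over multiple Kubernetes clusters. The operation names per phase
# are illustrative, not from any framework.

LIFECYCLE = {
    "day0": ["register_clusters", "install_cni", "apply_security_baseline"],
    "day1": ["deploy_composite_app", "configure_service_mesh"],
    "day2": ["upgrade", "scale", "backup", "collect_metrics"],
}

def run_phase(phase, clusters, executed):
    """Apply every operation of a phase to every cluster, in order."""
    for op in LIFECYCLE[phase]:
        for cluster in clusters:
            executed.append((phase, op, cluster))

executed = []
clusters = ["central-cloud", "edge-site-1", "edge-site-2"]
for phase in ("day0", "day1", "day2"):
    run_phase(phase, clusters, executed)

print(len(executed))  # 9 operations x 3 clusters = 27
```

The point is the structure: each phase fans the same ordered operations out to N clusters, which is exactly what becomes unmanageable by hand as N grows.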
Emerging areas for the Edge
Some of the emerging areas being enabled are explained below.
- Telco NFV Edge Infrastructure — Running cloud infrastructure at the network edge allows for the virtualization of applications key to running 5G mobility networks at greater scale and density and at lower cost using commodity hardware. This infrastructure can also enable the virtualization of wireline services and enterprise IP services, and even supports the virtualization of customer premises equipment. This reduces the time to provision new services and, in some cases, lets customers self-provision their service changes.
- Autonomous devices — Drones, autonomous vehicles, industrial robots, and similar devices require substantial compute power to support video processing, analytics, and more. Edge computing enables such devices to offload processing to the edge within the required latency limits.
- Immersive Experiences — Devices like Virtual Reality (VR) headsets and Augmented Reality applications on user’s mobile devices also require extremely low levels of latency to prevent lag that would degrade their user experience. To ensure this experience is optimal, placing computing resources close to the end user to ensure the lowest latencies to and from their devices is critical.
- IoT & Analytics — Emerging technologies in the Internet of Things (IoT) demand lower latencies and accelerated processing at the edge.

For starters, a brief overview of the definition of the edge is depicted in the diagram below, which shows the segregation between the User Edge (dedicated, self-operated) and the Service Provider Edge (shared, XaaS).

Edge Integration with Public Clouds (Hyperscalers)
There is a need to reduce the complexity of managing a large number of edge sites, each running a significant number of Kubernetes clusters. This can be achieved with one-click deployment of complex applications and simple dashboards that show the status of complex deployments. This necessitates a Multi-Edge/Multi-Cloud distributed application orchestrator (MEC orchestrator).
To enable a smooth MEC orchestrator, a public cloud (hyperscaler) edge integration blueprint is essential, along with a collection of open APIs that facilitate smooth cooperation between the different functional entities and domains.
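As a sketch of what such open APIs enable, the orchestrator below drives every hyperscaler edge through one abstract contract instead of N provider-specific interfaces. The `EdgeProvider` interface and the mock provider are purely illustrative assumptions; real provider SDKs differ.

```python
# Illustrative sketch: a MEC orchestrator talking to hyperscaler edges
# (Wavelength, Anthos, Edge Zones, ...) through one uniform, open contract.
# All classes and method signatures here are hypothetical, not real SDKs.

from abc import ABC, abstractmethod

class EdgeProvider(ABC):
    """Uniform contract a hyperscaler edge integration would implement."""

    @abstractmethod
    def deploy(self, app: str, zone: str) -> str: ...

    @abstractmethod
    def status(self, deployment_id: str) -> str: ...

class MockWavelengthProvider(EdgeProvider):
    """Stand-in for a real provider integration, for demonstration only."""
    def __init__(self):
        self._deployments = {}

    def deploy(self, app, zone):
        dep_id = f"wl-{app}-{zone}"
        self._deployments[dep_id] = "RUNNING"
        return dep_id

    def status(self, deployment_id):
        return self._deployments.get(deployment_id, "UNKNOWN")

class MECOrchestrator:
    """Drives many providers through the one shared API."""
    def __init__(self, providers):
        self.providers = providers

    def deploy_everywhere(self, app, zone):
        return {name: p.deploy(app, zone)
                for name, p in self.providers.items()}

orch = MECOrchestrator({"aws-wavelength": MockWavelengthProvider()})
ids = orch.deploy_everywhere("vr-renderer", "zone-a")
print(ids)  # {'aws-wavelength': 'wl-vr-renderer-zone-a'}
```

Adding a second hyperscaler then means writing one more `EdgeProvider` implementation, not changing the orchestrator, which is the whole argument for standardizing the interface.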
Telco/Mobile Network Operator (MNO) Edge deployments present numerous collaborative opportunities to provide enhanced service quality for end-users and applications.
Integration Challenges — Edge with Public Cloud (Hyperscalers)
To bring edge capabilities and integrations with Public Cloud Providers (Hyperscalers) into the mainstream, industry bodies are working on blueprints aimed at maturing the industry and addressing various challenges, which include:
- Standardization of public cloud management interface with the telco orchestration interface.
- Expanding MNO capabilities to Public Cloud / 3rd-Party Edge Compute/Applications (and vice versa) through the DevOps model.
- Efficient management and monitoring of different APIs.
- Blueprinting security guidelines, e.g., around preventing DDoS or SQL injection attacks on the telco core network.
- Utilizing interconnection and network capabilities for delivering value-added services.
- Orchestrating suitable compute and network hardware resources for Edge functions/applications.
- Flexible API architecture for multi-MNO, multi-Cloud, and multi-MEC (Multi-access Edge Compute) collaboration.
Some Use Cases which can be built on top of such an integration
Below are some common use cases (capabilities) that these standardization initiatives across the various industry bodies envision enabling by solving the above challenges.
- UPF Distribution/Shunting — Automated distribution of User Plane Functions (UPFs) to suitable data center facilities on qualified compute hardware, directing traffic to the desired applications and network/processing functions.
- Local Break-Out (LBO) — Examples include video traffic offload, low latency services, and roaming optimization.
- Location Services — Identifying the location of a specific User Equipment (UE) or UEs within a geographical area and facilitating server-side application workload distribution based on UE and infrastructure resource location.
- QoS Acceleration/Extension — Offering low latency and high throughput for edge applications, e.g., maintaining the QoS provisioned for subscribers in the MNO domain across the interconnection/networking domain for end-to-end QoS functionality.
- Network Slicing Provisioning and Management — Ensuring continuity of network slices instantiated in the MNO domain across Public Cloud Core/Edge and third-party Edge domains, providing dedicated resources tailored for specific application and functional needs (e.g., security).
- Mobile Hybrid/Multi-Cloud Access — Providing multi-MNO, multi-Cloud, and multi-MEC access for mobile devices (including IoT) and Edge services/applications.
- Enterprise Wireless WAN Access — Offering high-speed Fixed Wireless Access to enterprises with the capability to interconnect with Public Cloud and third-party Edge Functions, including Network Functions such as SD-WAN.
- Security — Enabling security services, such as firewall service insertion.
Conclusion — Need for Standardized APIs
There is a need for the industry to work together and define a standard set of APIs that can be exposed for interoperability between Public Cloud and third-party MEC service provider instances at the edge.
Open APIs that facilitate information exchange will enable competitive offerings for consumers, enterprises, and various industry end-user segments, and pave the way for easier deployment on public cloud edge compute platforms such as Google Cloud Platform (GCP) Anthos, AWS Wavelength, Microsoft Azure Edge Zones, and others.
These APIs will extend beyond basic connectivity services to include predictable data rates, latency, reliability, service insertion, security, AI, RAN analytics, network slicing, and more.
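As a sketch of what a request against such an API might carry beyond basic connectivity, the snippet below bundles data rate, latency, reliability, slicing, and service insertion into one request with minimal gateway-side validation. Every field name here is an illustrative assumption; no standards body has defined this schema.

```python
# Hypothetical sketch of an edge service request covering the capability
# areas listed above. The schema and field names are invented for
# illustration and do not come from any published API.

service_request = {
    "app": "drone-fleet-control",
    "connectivity": {
        "min_data_rate_mbps": 50,   # predictable data rate
        "max_latency_ms": 20,       # latency bound
        "reliability": 0.999,       # target availability
    },
    "network_slice": {"slice_type": "URLLC", "isolation": "dedicated"},
    "service_insertion": ["firewall"],   # security service chaining
    "analytics": {"ran_metrics": True},  # RAN analytics exposure
}

def validate(req):
    """Minimal validation an API gateway might perform on such a request."""
    conn = req.get("connectivity", {})
    errors = []
    if conn.get("max_latency_ms", float("inf")) <= 0:
        errors.append("max_latency_ms must be positive")
    if not 0 < conn.get("reliability", 1) <= 1:
        errors.append("reliability must be in (0, 1]")
    return errors

print(validate(service_request))  # []
```

Agreeing on a schema like this, and on the semantics of each field, is precisely the interoperability work the industry bodies above are pursuing.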
Disclaimer:
This article was inspired by the amazing work being done at The Linux Foundation.