September 30, 2022
Edge computing plays an important role in advanced embedded applications such as sensor fusion and AI. These systems do more than offer a local compute node that expands the capabilities of an embedded device: engineered within a larger network, they can provide local context for captured data, delivered services, or the state of the network as a whole. This concept of “local context” is not something network operators or digital service providers have had to consider in the past, but it can be valuable for applications running at the edge.
How can companies take advantage of localized contextual information in high-compute applications without leveraging the cloud? When compute is placed at the edge, new localized contextual data becomes usable in certain applications, but the right chipset is needed to process it and deliver results to end users quickly. Scaling down data center computing infrastructure to small edge server units is not always the best strategy for taking advantage of that information.
While it may not be obvious, many applications already take advantage of local contextual information as part of the digital services they provide. Currently, that contextual data can be processed in only two locations: on the end user’s device or in the cloud. Edge computing adds a third option: the processing can occur in an edge server close to the end user, without relying on the cloud to execute high-compute workloads.
Compute workloads used by digital services are only projected to grow, and without a massive increase in backhaul bandwidth, that growth will congest the network backhaul and reduce throughput per user. Edge computing offers another option: compute resources sit closer to the end user, applications that only deliver or require local information process it locally, and only critical results are delivered to a larger cloud platform.
What types of applications might need only local context? There are several examples among the services that currently leverage the cloud, from sensor fusion in embedded systems to AI inference for end-user devices.
Even when information is captured and processed locally, the cloud can still play a role. Data gathered in this paradigm is essentially pre-processed before the results are delivered to an end user, where they are used in an application as part of service delivery. If the data is also required by a larger application, it can still be sent to an application that lives in the cloud.
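As a concrete illustration of this pre-process-then-forward pattern, the sketch below aggregates raw sensor samples at the edge, always delivers the summarized result locally, and forwards it upstream only when it crosses a threshold. Every name here (read_sensor, deliver_locally, forward_to_cloud, the window size and threshold) is a hypothetical stand-in for whatever the real application and transport would be.

```c
/*
 * Minimal sketch of edge pre-processing with conditional cloud forwarding.
 * All functions and constants are hypothetical placeholders, not a vendor API.
 */
#include <stdio.h>
#include <stdlib.h>

#define WINDOW_SIZE 64          /* raw samples aggregated per local result */
#define CRITICAL_THRESHOLD 0.9  /* illustrative cutoff for cloud forwarding */

/* Stub: a real system would read from a local sensor or bus. */
static double read_sensor(void) {
    return (double)rand() / RAND_MAX;
}

/* Stub: deliver a pre-processed result to the local application/end user. */
static void deliver_locally(double result) {
    printf("local result: %.3f\n", result);
}

/* Stub: forward only critical results upstream to a cloud application. */
static void forward_to_cloud(double result) {
    printf("forwarded to cloud: %.3f\n", result);
}

int main(void) {
    for (int window = 0; window < 10; window++) {
        /* Pre-process at the edge: reduce a raw window to one summary value. */
        double sum = 0.0;
        for (int i = 0; i < WINDOW_SIZE; i++)
            sum += read_sensor();
        double mean = sum / WINDOW_SIZE;

        /* The end user always gets the local result... */
        deliver_locally(mean);

        /* ...but only critical results consume backhaul bandwidth. */
        if (mean > CRITICAL_THRESHOLD)
            forward_to_cloud(mean);
    }
    return 0;
}
```

The backhaul saving comes from the reduction step: the upstream link carries one summary value per window, and only when it matters, instead of every raw sample.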
Edge compute nodes are sometimes viewed as miniature data centers, so it is tempting to build an edge server with the same high-compute-density architecture used in a cloud server. When applications require local context, this is not always the right approach, for a simple reason: the compute workloads are highly specific, and general-purpose chipsets do not implement an application-specific digital architecture.
Implementing a CPU and GPU architecture in an edge server is fine when general-purpose computing resources are needed. Developing and manufacturing custom silicon lets developers optimize the hardware architecture, but at significant development, verification, and manufacturing cost. When highly specific compute is required, FPGAs are a much better option: they can be configured as application-specific processors, giving much lower latency when local context is needed.
FPGAs are often seen as more difficult from a development standpoint, requiring knowledge of HDLs such as VHDL, synthesis toolchains, and interconnect architecture. While some specialized, highly optimized application development is still needed, much of the remaining hardware development (interfaces, core architecture, ISA, etc.) can be sped up. Today, FPGA vendors provide development tools that accelerate much of the SoC implementation in an FPGA, so developers can focus on building a specialized application on top of the core ISA and digital architecture implemented in the chip.
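To make that division of labor concrete, here is a minimal sketch of what such a specialized application might look like on the embedded Linux side, driving a custom accelerator block through memory-mapped registers. The base address and register offsets are hypothetical placeholders; in a real design they come from the FPGA’s address map and device tree, and production code would typically use a kernel driver or UIO rather than raw /dev/mem access.

```c
/*
 * Sketch of a Linux userspace application driving a custom FPGA
 * accelerator block through memory-mapped registers. The address map
 * below is invented for illustration.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define ACCEL_BASE   0x43C00000UL  /* hypothetical bus address of the block */
#define REG_CTRL     0x00          /* write 1 to start processing */
#define REG_STATUS   0x04          /* bit 0 set when done */
#define REG_DATA_IN  0x08
#define REG_DATA_OUT 0x0C

int main(void) {
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, ACCEL_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    /* Hand one word to the specialized data-processing block... */
    regs[REG_DATA_IN / 4] = 42;
    regs[REG_CTRL / 4] = 1;

    /* ...poll until the hardware signals completion... */
    while ((regs[REG_STATUS / 4] & 1) == 0)
        ;

    /* ...and read back the result with deterministic, low latency. */
    printf("accelerator result: %u\n", regs[REG_DATA_OUT / 4]);

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}
```

The software stays short because the heavy lifting lives in the fabric; the application only moves data across the register interface that the vendor tools and IP expose.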
Edge computing systems based on FPGAs can be designed as highly application-specific systems, with the hardware and chipset dedicated to very specific workloads. Vendor IP can accelerate development of these systems through instantiation of specialized data-processing blocks, high-speed digital interfaces, and a general-purpose ISA that supports an embedded OS like Linux (built with Yocto). Make sure your FPGA vendor can provide the development tools needed to quickly get a new edge computing system to market.