Edge Computing with FPGAs

Embedded devices, consumer products, and data centers contain more computing power than ever before—and yet they can still struggle to deliver low-latency services and experiences to large numbers of end devices.

The problem? There is a computing gap between data centers, where the majority of the world’s processing power is concentrated, and end devices that provide data-intensive services. Edge computing has arisen to fill this gap.

Edge computing is a distributed computing paradigm that offloads data-intensive tasks required in service provision to an intermediate location that exists at the far reaches of the network. It provides low-latency, high-bandwidth access to compute resources required in modern digital services. Placing high-bandwidth compute resources at the far end of the network brings new intelligence to end users at the network edge without requiring additional computing power in end user devices.

To bring edge computing into service provision, companies need to build an edge architecture that combines the best characteristics of data centers, embedded systems, and application-specific compute architectures. FPGAs are a competitive solution for these systems as they can be used in all three areas—and have unique advantages over more familiar computing architectures based around GPUs or CPUs.

What is Edge Computing?

At the hardware and application level, edge computing systems act as intermediaries between the data center, where high compute activities are implemented, and the client, where services are delivered to the end user.

Within a large, cloud-connected network, edge compute systems offer a hub for time-critical and high-bandwidth tasks, easing the processing and storage burdens on the data center and alleviating traffic on the network—as illustrated in the graphic below.

(Figure: edge computing within a cloud-connected network)

Edge nodes primarily facilitate communication with end clients on a network for moderate compute operations. They can also offload compute tasks to each other in applications requiring parallel compute. Edge nodes will delegate high compute operations required in service delivery to the data center.

Consider sensor fusion as one advanced application. The edge node is primarily responsible for capturing data from clients and performing the necessary processing operations on it. The data center may then aggregate data from multiple edge nodes and use it in a more advanced application. On a factory floor, for instance, an edge compute node might gather processing or quality information that requires low-latency, real-time processing, with results delivered directly to end devices (IIoT modules in this case). The data aggregated in the edge node can also be sent back to the data center or a cloud service for further processing, such as predictive maintenance.
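A minimal sketch of this division of labor, with hypothetical function names and made-up sensor values: the edge node fuses raw samples locally with low latency, and only compact summaries travel back to the data center.

```python
# Illustrative only: edge_fuse and cloud_aggregate are hypothetical names,
# not part of any real edge framework.
from statistics import mean

def edge_fuse(samples_by_sensor):
    """Low-latency local step: reduce raw samples to one fused value per sensor."""
    return {sensor: mean(values) for sensor, values in samples_by_sensor.items()}

def cloud_aggregate(summaries):
    """Central step: merge the compact summaries arriving from many edge nodes."""
    merged = {}
    for summary in summaries:
        for sensor, value in summary.items():
            merged.setdefault(sensor, []).append(value)
    return {sensor: mean(values) for sensor, values in merged.items()}

# Two edge nodes on a factory floor, each fusing its own IIoT sensor data.
node_a = edge_fuse({"temperature": [20.1, 20.3], "vibration": [0.02, 0.04]})
node_b = edge_fuse({"temperature": [21.5, 21.7]})
fleet_view = cloud_aggregate([node_a, node_b])
```

In a real deployment the summaries would be batched over a network link, but the division of labor is the same: high-rate raw data stays at the edge, and only low-rate aggregates reach the cloud.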

Construction of Edge Computing Systems

Within an edge node, individual edge servers do not resemble the types of rack-mount servers you would find in a data center. Edge compute systems are very different because the deployment environment is different.

Deployment outside the confines of a data center, sometimes in remote areas, requires much more rugged construction that can support high-performance computing with reasonable power consumption. Some of the form factor requirements include:

  • Rugged form factor - Edge servers are normally designed for deployment in uncontrolled or harsh environments, so they must have rugged construction.
  • Expansion card/daughtercard - Some edge nodes can integrate directly with other equipment through a backplane or in an expansion card configuration.
  • Deployment environment - Edge servers may be located in a base station or integrated into infrastructure, so form factors must account for any hazards found in the deployment environment.
  • Thermal management - Edge servers will not have access to the standard cooling measures found in data centers and instead may rely on conductive cooling.
  • Small footprint - Edge compute systems may be deployed in confined areas, so they must have a small device footprint or system footprint.
  • Network connectivity - As intermediaries between end users and the cloud, edge servers must support multiple networking protocols.
  • Reconfigurable - Edge servers should be designed to be future-proofed as much as possible, both at the hardware and software level. Another useful feature is remote reconfigurability down to the hardware level, especially when deployed in areas that are difficult to access.

These challenges motivate any means possible to maximize compute density and minimize power consumption. When we look at popular applications for edge computing, we can see where different computing architectures have their advantages. In most cases, an FPGA offers many advantages as the primary processor, as an accelerator or peripheral ASIC, or as part of a coprocessor architecture.

Applications of Edge Computing

Edge computing is highly useful in any area where network connectivity to the end user enables service delivery, particularly when the service or user experience is data intensive. The high-compute applications where this is necessary generally involve aggregation, storage, management, and processing of data with low latency.

(Figure: Edge Computing)

Data aggregated and processed in these applications tends to have some local context, meaning it only needs to be captured and used locally, most often with the lowest possible latency and highest possible network uptime. It therefore does not make sense to send this data back to a data center; processing and service delivery can instead be implemented at the network edge.

Some of these applications and their characteristics are described below:

  • Sensor fusion and processing at the edge - This is the broadest edge computing application area; it relies on high-bandwidth data aggregation from multiple sources. Example application areas include vision systems, medical devices, IoT devices, and much more. Edge computing systems can accept sensor data from multiple sources and provide fast processing as part of a larger application.
  • Industrial systems, including IIoT - Advanced manufacturing systems require dedicated network connectivity and processing with near-real-time response. Edge computing provides a dedicated option enabling industrial systems connectivity without sending data back to the cloud or exposing it on public networks. These systems often implement sensor fusion as well, particularly when integrated with IIoT devices.
  • Smart infrastructure - Embedded systems used for smart infrastructure require low-latency connectivity and data processing that can extend out to remote areas. Edge computing is ideal for service provisioning in smart infrastructure systems where data collection and processing are required for time-critical services. Traffic monitoring, autonomous vehicles, utility monitoring, and advanced emergency services can be made more reliable with edge computing.
  • AI at the edge - AI involves inference, which may be a low-compute process, and training, which is almost always a high-compute process. Distributed groups of end users on a network may not have enough processing power to perform inference and training on client devices, but these functions can be offloaded to edge nodes (inference) and the cloud (training).
  • 5G and mobile services - Edge nodes enable content delivery and localized data processing in client applications and services being accessed by end users. With the expected massive increase in 5G-capable devices, edge computing systems reduce network traffic and workloads by distributing computing power closer to large groups of end users.
  • Security - Municipal, commercial, government, and military security systems require significant data collection and interoperability that is very challenging to implement in the cloud. Edge computing systems can enable this interoperability in a sensor fusion model, with low-latency data processing enabling fast alerts and decision making. These systems also implement AI elements for threat identification and monitoring.
  • Banking - Applications in banking are probably not obvious until you look at the ever-growing need for real-time asset management, visibility, and security. Edge nodes provide all these benefits with the additional advantage of supporting remotely located end users; companies may not need to be located across the street from an exchange or data center to access the high compute needed for real-time management tasks.
  • Defense systems - Autonomous defense systems require extremely low latency and high compute in scenarios where secure access to the cloud is often denied. Deploying edge servers in the field provides support for autonomous systems, security systems, sensor networks, navigation systems, communications, and much more.
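The AI split described above can be sketched in a few lines. This is purely illustrative: the threshold "model" stands in for a trained network, the function names are hypothetical, and a real edge deployment would use an FPGA-accelerated inference runtime rather than plain Python.

```python
# Training is a high-compute step done centrally; inference is a light
# step an edge node can run per request, without a cloud round trip.

def cloud_train(labeled_readings):
    """High-compute step (cloud): derive a decision threshold from labeled data."""
    positives = [x for x, anomaly in labeled_readings if anomaly]
    negatives = [x for x, anomaly in labeled_readings if not anomaly]
    return (min(positives) + max(negatives)) / 2

def edge_infer(threshold, reading):
    """Low-compute step (edge): flag a reading locally, at edge-node latency."""
    return reading >= threshold

model = cloud_train([(0.2, False), (0.4, False), (0.8, True), (0.9, True)])
alert = edge_infer(model, 0.85)  # decided at the edge node
```

The model is trained once (or periodically) in the cloud and pushed to edge nodes, which then serve many low-latency inference requests against it.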

Within these application areas, there is a very clear trend where edge computing plays an important role. Edge computing platforms enable low-latency, moderate-compute portions of a client’s application or services, while the cloud provides broader storage and high-latency/high-compute operations.

The classic computing architecture in data centers is still needed within this computing paradigm. However, the design and construction of edge nodes requires rethinking system architecture to enable these advanced application areas.

FPGAs provide an alternative to the fixed sequential/combinational logic implemented in legacy processor architectures. Using an FPGA has many benefits, ranging from reduced power consumption to reconfigurability and customizability. FPGAs can play either a central or supporting role in edge computing applications.

Edge Computing Architecture with FPGAs

Edge computing servers can range from highly application-specific systems supporting compute-intensive applications to general-purpose servers with a rugged form factor. Advanced technologies need the higher compute and lower latency provided by FPGAs, but FPGAs might not always play the central role in an edge compute node. Some example system architectures where FPGAs play a role are listed below. Depending on your definition of the edge, these architectures can be implemented directly in an IoT device, in mid-sized ruggedized server form factors, or as an FPGA sitting on the same board as an ARM processor.

Single FPGA

An individual FPGA can be used as the main system processor in an edge server that requires lower power consumption and smaller footprint than a traditional server architecture. Most vendors provide IP required for interfacing with memory, NICs, high-speed peripherals, and analog peripherals, which can be instantiated in the FPGA.

The advantage of the FPGA in this architecture is the ability to instantiate application-specific compute directly on silicon, rather than as instructions executed from memory. This decreases latency and power consumption.

Co-processor Architecture

An FPGA can be used within a coprocessor architecture alongside another processor, either as the master compute element or as a supporting peripheral. A typical combination could involve a CPU, GPU, FPGA, and possibly an SoC. In this instance, you might see the FPGA as the main processor communicating with a GPU for certain tasks. Alternatively, the FPGA could be configured as an application-specific compute element on an expansion card.

In either case, the FPGA provides the application-specific capabilities that are needed in advanced edge compute applications.

SiP With FPGA Processing Block

While this architecture requires some custom development of the FPGA, the SiP packaging, and the processor die itself, it is one of the most flexible of all edge computing chipset options. The FPGA block in this architecture can be developed as a highly specialized processing block to target compute-intensive tasks that would be inefficient in standard sequential/combinational logic.

Example tasks in this type of processor include AI compute, collection and processing of video data streams, and highly parallelized signal processing.

(Table: mapping of the application areas above (sensor fusion; industrial systems, including IIoT; smart infrastructure; AI at the edge; 5G and mobile services; security; and banking) to candidate architectures: Single FPGA, Multiple FPGAs, Co-Processor, and SiP + FPGA.)

Benefits of FPGAs in Edge Computing

In each of the above application areas, the use of an FPGA provides specific benefits in an edge compute application:

  • Decreased power consumption through the instantiation of compute directly on silicon
  • Faster processing and delivery of services to users by eliminating unnecessary or redundant logic, and by providing highly parallel application-specific compute
  • An interoperable, reconfigurable component that can interface with legacy systems to provide edge compute services
  • Comparable or smaller footprint compared to a traditional processor or chipset for embedded computing
  • Reduced supply chain risk by using commodity parts to create custom silicon functions on FPGAs from the same vendor rather than requiring low-volume custom silicon
  • A faster path to custom silicon, either with a standalone component or as an integrated block in a custom SiP
  • Hardware-level security through elimination of external peripherals, and reduced signal exposure on the PCB

For edge computing, arguably the most important benefit listed above is reduced power consumption. Edge devices are not located in air-conditioned data center racks and might be co-located in a base station or other confined space, so reducing heat generation and managing heat are critical design aspects. Thanks to their vastly lower power consumption compared to GPUs, FPGAs are a superior choice for space-constrained and heat-constrained applications like edge computing.

Systems Design for Edge Computing with FPGAs

If you’ve opted to incorporate an FPGA in your edge computing system, either as the main processor or as a peripheral, there is the physical implementation to consider. FPGAs can be incorporated in the same ways as GPUs: as expansion cards or as on-board elements. Edge compute systems built with FPGAs also require other standard peripherals, as well as network connectivity elements and specialized peripherals for the specific application areas.

Physical Design and Layout

Typical implementation of FPGAs in an edge server involves three possible design directions:

  • Directly on a custom motherboard with interfaces instantiated on silicon with vendor IP. Custom logic can also be implemented in some logic blocks for dedicated compute within the required application or service area. This would typically be used when an edge server has a dedicated FPGA or SiP as the main processor.
  • As an expansion card that is plugged into a board-to-board connector (mezzanine or edge connector). This type of physical layout would be used in a coprocessor architecture to provide application-specific compute alongside general-purpose server functionality.
  • As a system-on-module (SoM) that attaches to a baseboard with board-to-board connectors. This is the typical approach when integrating evaluation products from FPGA vendors or third-party FPGA SoMs into a custom design. These designs can provide some level of futureproofing and enable easier maintenance within a deployed edge node.

Application developers, board designers, and mechanical engineers will have to work together to determine the required architecture that will meet the edge server application requirements. Mechanical fit and form can limit certain options listed above.

Board Design Challenges with FPGAs

Once the system architecture and form factor are determined, the entire system will need to be built into the required single-board or multi-board system.

Individual Motherboard: This path can be taken when space savings are the main factor because everything can be consolidated onto a single PCB. With this direction, you retain full reconfigurability of the FPGA, but the entire board may need to be replaced in the event of malfunction or when upgrades are required.

This design direction requires complete consolidation onto a PCB and may require elimination of unnecessary peripherals. One advantage of FPGAs in this type of system is peripherals can always be instantiated and reconfigured in silicon if there is sufficient access to I/O on the FPGA package.

Co-processor Form Factor: When used as an expansion card or SoM as a co-processor, the FPGA will need high-speed PCIe or SerDes interfaces to route data to the other processor in the system. Within a commercial edge node, there is generally limited space for additional peripherals and expansion products like NICs, memories, or GPUs. It is recommended to consolidate as much as possible onto the motherboard to reduce system size.
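As a rough illustration of sizing such a link, the back-of-envelope check below asks whether a PCIe Gen3 x4 connection could carry four uncompressed 1080p60 video streams from a hypothetical FPGA capture card. The workload numbers are made up for illustration; the PCIe Gen3 line rate of 8 GT/s with 128b/130b encoding is standard.

```python
# Protocol overhead beyond line encoding is ignored for brevity.
GEN3_GTPS_PER_LANE = 8.0   # PCIe Gen3 line rate per lane, GT/s
ENCODING = 128 / 130       # 128b/130b encoding efficiency
LANES = 4

link_gbps = GEN3_GTPS_PER_LANE * ENCODING * LANES   # ~31.5 Gbps usable
stream_gbps = 1920 * 1080 * 60 * 24 / 1e9           # 24-bit RGB at 1080p60
payload_gbps = 4 * stream_gbps                      # ~11.9 Gbps total

fits = payload_gbps < link_gbps
```

The same arithmetic, with real workload figures, is worth doing early: it determines how many PCIe lanes or SerDes channels the board must route to the FPGA.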

Modular Form Factor: Other application areas like defense and aerospace prefer a modular approach to embedded computing. Here, an FPGA-based edge node would normally be used as a daughtercard in a modular form factor enclosure, such as OpenVPX.

SoMs enable an additional type of modular construction as these are more easily upgradable without entire replacement of a motherboard. These systems can be more easily upgraded and reconfigured at the application level, on the FPGA silicon, and in terms of peripherals that might be separated onto the base board or backplane.

As high-speed digital systems, motherboards and expansion cards for FPGA-based edge servers carry several challenges in terms of design, verification, and manufacturing. The board design and construction will depend on the specific interfaces instantiated in the FPGA to support routing and placement of peripherals. Typical interfaces and peripherals found in these systems include:

  • A DDR3 or faster memory interface to connect with on-board RAM chips or through a SODIMM connector
  • Multi-gigabit Ethernet (greater than 10 Gbps) over copper or fiber
  • PCIe 3 or faster interfaces to access data from peripherals
  • Multiple LVDS lanes for data capture from high-speed sensors and peripherals
  • Wireless capabilities for in-node networking, or possibly a cellular module if used in some IoT applications

These boards typically have high layer counts (sometimes 16 or more) to support the impedance-controlled digital interfaces listed above. The advanced applications listed above generally require high-ball-count FPGAs with sub-1 mm ball pitch. Careful collaboration with electronics manufacturing service providers is needed to ensure FPGA boards for edge servers meet channel compliance and EMC requirements.

We can already see several essential benefits of adopting FPGAs in edge compute systems without added system complexity at the board level. Make sure you consider the appropriate role for your edge server and the other components needed to deliver services on the network before deciding the role of an FPGA in your system. Also, make sure to determine the IP available from the vendor, as this will reduce development time and risk.

Efinix for Edge Computing

Efinix is an innovative FPGA vendor targeting edge, wearable, IoT, sensor, and machine vision markets. Efinix’s two FPGA product lines target both ends of the compute spectrum: one family for end client devices and a larger family for edge computing and high-compute embedded systems.

Titanium FPGAs

A larger FPGA solution for more advanced applications requiring more logic capacity and I/O count. These devices are built on a similar architecture to Trion products, with broader support for standard data/memory interfaces as well as high-data-rate SerDes interfaces.

  • Up to 1M logic cells
  • 4-lane MIPI, DDR4/LPDDR4, up to 25.8 Gbps SerDes, PCIe Gen4 interfaces supported
  • DSP block optimized for high compute applications like AI
  • Efinix RISC-V core with integrated audio and vision interfaces
  • Ideal for high compute industrial systems, larger vision products, and edge AI applications

The Titanium FPGA is best suited as the primary system controller in an edge server, either as a dedicated FPGA or in a co-processor architecture with high-compute peripherals like GPUs. Its larger logic cell count and additional interfaces are ideal when an FPGA is desired as the main system controller.

Trion® FPGAs

A smaller FPGA option for embedded systems, these components take up small amounts of board space yet provide access to high I/O count and high logic cell count. These devices include standard integrated interfaces and will support additional high-speed interfaces required in high compute applications:

  • Up to 120k logic cells
  • 4-lane MIPI, LVDS, and up to DDR3/LPDDR3 interfaces supported
  • Dedicated interface block
  • Efinix RISC-V core with integrated audio and vision processing
  • Ideal for mobile/IoT products, smaller vision products, and edge AI applications

In an edge compute system, the Trion FPGA finds its home as an accelerator or expansion element for dedicated application-specific compute operations. It could also be used in smaller edge nodes as a slave co-processor or to instantiate various peripherals that would normally be placed as ASICs—this reduces system size and component count.

Developer Resources

FPGA developers can accelerate innovation cycles and the path to market with developer resources supporting advanced application areas. To help designers get started, Efinix offers libraries that quickly specialize Trion or Titanium FPGAs for popular applications.

The Efinity IDE is Efinix’s development environment for Titanium and Trion FPGAs, providing a complete flow for developing edge compute applications on Efinix devices. It offers access to comprehensive design libraries and IP through a GUI with command-line scripting support. Some of the major development features of the Efinity IDE include:

  • Project management dashboard
  • Interface and floorplan design tool for logic design, pin assignment, and routing
  • Timing analysis features to analyze and optimize device performance
  • Hardware debuggers for logic analysis
  • Simulation support with ModelSim, NCSim, and iVerilog simulators

One-year licenses are available to anyone who buys a development kit. The license includes one year of upgrades, and a free maintenance renewal is available upon request.

RISC-V SoC - This development library allows users to implement a RISC-V core in Titanium or Trion FPGAs. Based on the VexRiscv core, this development library provides high I/O count, user peripherals (SPI, I2C, etc.) on an APB bus, and high-speed interfaces and memory controllers on an AXI bus. The result is a highly integrated SoC that still enables the customizability of any other FPGA platform.

Why Efinix?

Efinix has partnered with a broad range of customers and focuses its product lines on applications requiring high compute in small form factor devices. Its customers are innovative industrial, consumer, and medical device manufacturers, as well as high-end automotive companies.

Ideal applications include vision, sensor fusion, and on-device AI/ML. The low power consumption and small footprint of Efinix’s flagship products make them perfect for surveillance and industrial automation applications.

Efinix customers tend to require a tailored small form-factor FPGA solution that helps eliminate significant R&D costs while still allowing product customization and reconfiguration. The RISC-V and Edge Vision SoC libraries are instrumental for Efinix customers as they help expedite the creation of new designs and reduce time to market.

Developers requiring a faster path to market with an alternative to larger FPGA platforms can leverage the significant developer resources and powerful compute capabilities provided by Efinix FPGAs. As a smaller company, Efinix takes a high-touch approach that leads to longer-term relationships and innovative FPGA applications.

Get Started with Efinix

To get started with Efinix FPGAs, take a look at our development kits and developer resources.