
The Edge is a Dangerous Place:
Security Considerations

October 7, 2022

Cybersecurity threats are a constant problem, reaching from major websites down to individual user devices and everywhere in between. As new computing paradigms bring advanced digital services to businesses and consumers, many of these services rely on edge computing, where processing resources and application execution are placed closer to end users. Like any computing platform, edge computing resources can be vulnerable to security threats that compromise user data and access to services.

Edge computing faces all of the typical cybersecurity challenges and more, especially because edge nodes may be deployed in locations where physical access is much easier to obtain. Edge computing systems require security measures in both hardware and software to fully protect user data. Software developers have their own challenges to address, but hardware designers can also take an active role in implementing security measures for edge computing infrastructure.

Security Challenges in Edge Computing

Edge computing promises to deliver a range of new services and experiences, as well as faster delivery of existing digital services, to end users with lower latency and higher throughput. However, edge compute nodes also enlarge the attack surface by providing a new entry point into a larger network. Given the role of edge compute systems in larger networks, a vulnerability in an edge compute node could provide access to sensitive application data that would normally be distributed across a cloud network.

  • Software access - Vulnerabilities in an application can allow network access through an edge compute node, just as with any other network server. Through an edge compute node, attackers can gain direct access to user data or crucial application data, or they could reach the broader network.
  • Physical access - In a cloud data center, physical access is very difficult to obtain because the facilities are well secured. Edge servers, by contrast, may be distributed across many locations, such as in cellular base stations or built into other infrastructure. These sites may be unoccupied, making physical access much easier to obtain.

Because of the hardware-level security challenges present in edge computing systems, it is tempting to avoid off-the-shelf components in favor of custom silicon or a hardware-programmable option like an FPGA. Rather than trying to scale down data center security measures and chip architectures to edge compute nodes, the FPGA offers a superior approach thanks to its development and security benefits.

Security Benefits of FPGAs

Interconnections are Proprietary

With an off-the-shelf component, or even with custom silicon, anyone can put the chip under a microscope and work out its logic structure. The same applies to exposed interfaces, which are specified in a datasheet. It is often quite easy to reverse engineer the functionality of a chip, and of the application, by examining these exposed interfaces and the information in a datasheet.

With an FPGA, that knowledge is proprietary and does not have to be exposed to anyone, including the end customer. Neither users nor attackers can see the custom logic implemented in the FPGA, which makes it much harder to exploit. In some cases, extracting the configuration would require physically removing the device from its board, or destroying the device altogether, either of which could destroy the very data an attacker is trying to expose. Taken together, these factors make reverse engineering the logic in an FPGA much more difficult.

Reduced Signal Exposure

Standard processors (CPUs, GPUs, etc.) require external components to implement many basic system functions, which can result in a large bill of materials (BOM). Every signal sent to those other components has to leave the main processor and is exposed somewhere on a PCB or on a cable or connector, making the data much easier to probe once physical access is obtained.

With an FPGA, it is possible to instantiate most or all of that external logic in the FPGA fabric. Rather than sending data between peripheral components and the FPGA, instantiation eliminates those external components and keeps data processing inside the FPGA. As a result, the attack surface of an FPGA-based system is reduced because signals carrying sensitive data never need to be exposed outside the device.

Reconfigurable

When a security vulnerability is found in software, it is relatively simple to develop a patch or update that eliminates the vulnerability. The same can be said for firmware used in an embedded application processor such as a microcontroller. For hardware, the situation is different. With custom silicon, fixing a vulnerability requires a complete revision of the device, leaving products in the field exposed to the security threat and often forcing an expensive upgrade.

With an FPGA, the hardware itself can be updated by modifying the binaries that define the logic implemented in the FPGA fabric. If a hardware-level vulnerability is discovered in an FPGA-based system, the hardware implementation can be patched and the vulnerability eliminated. This flexibility spans from the interconnect level all the way down to the ISA used to define a soft processor implementation.
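
To picture what this looks like in practice, consider the kind of update flow an edge device could run when a patched hardware image arrives. The sketch below is a minimal outline in C; the functions it calls (verify_bitstream_signature, write_config_flash, trigger_reconfiguration) are hypothetical placeholders standing in for a vendor's configuration and cryptographic libraries, not a real API.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical platform hooks. A real design would call the FPGA vendor's
     * configuration and cryptographic libraries; these prototypes only mark
     * where those calls would go. */
    bool verify_bitstream_signature(const uint8_t *image, size_t len);
    bool write_config_flash(const uint8_t *image, size_t len);
    void trigger_reconfiguration(void);

    /* Apply a patched hardware image received in the field. Returns true only
     * if the image is authentic and was stored successfully. */
    bool apply_hardware_patch(const uint8_t *image, size_t len)
    {
        if (!verify_bitstream_signature(image, len))
            return false;              /* reject unsigned or tampered images */
        if (!write_config_flash(image, len))
            return false;              /* storage failure: keep the old image */
        trigger_reconfiguration();     /* patched logic takes effect on reload */
        return true;
    }

The important point is that a hardware fix can ride the same kind of secure update path that software patches already use.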

Hardware-Level Encryption

With so many attacks and successful breaches, OEMs are beginning to demand that security be incorporated at the hardware level while a product is still in the design phase. Software-level measures are important for implementing network and security policies. However, hardware-level measures go deeper, applying security policies directly on an embedded device.

Today’s best practices dictate establishing trusted platform modules (TPMs), where cryptographic keys are integrated into processors so that they can be accessed and used within the hardware layer as well as the software layer. Security functions that can be addressed in this way include device authentication and encryption/decryption without transmitting keys on a bus. When these measures are combined with application-level and network-level policies, hardware-level tampering in an edge computing system becomes much easier to prevent.
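
To make the idea concrete, the hypothetical C sketch below shows a device answering an authentication challenge. Application code only ever holds a handle to the device key; the signing operation runs inside the secure hardware block, so the private key never crosses the system bus. The hw_key_handle_t type and hw_sign function are invented for illustration and do not represent any specific TPM or FPGA security API.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define SIG_LEN 64  /* example signature size in bytes */

    /* Hypothetical handle to a key stored inside the hardware root of trust.
     * The private key material itself is never readable by software. */
    typedef uint32_t hw_key_handle_t;

    /* Hypothetical hardware API: sign a buffer with the key identified by
     * 'handle'. The operation runs entirely inside the secure block. */
    bool hw_sign(hw_key_handle_t handle, const uint8_t *msg, size_t msg_len,
                 uint8_t sig[SIG_LEN]);

    /* Answer a server's authentication challenge by signing its nonce with
     * the device identity key. Only the signature is returned to software;
     * the key never leaves the hardware. */
    bool answer_challenge(hw_key_handle_t device_key, const uint8_t *nonce,
                          size_t nonce_len, uint8_t sig[SIG_LEN])
    {
        return hw_sign(device_key, nonce, nonce_len, sig);
    }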

Other Reasons to Use FPGAs

The benefits of FPGAs in edge computing don’t end with security. Beyond the advantages described above, there are several other reasons to incorporate an FPGA into an edge computing system:

  • FPGAs can be used as a main processor, co-processor, or accelerator
  • Availability of vendor IP speeds up hardware development
  • The most compute-intensive logic can be made much more efficient by instantiating it in hardware
  • Greatly reduced costs for a custom processor compared to custom silicon

Depending on the specific application being served by an edge compute node, vendor IP can help speed up development of a customized processor core on an FPGA. Today’s embedded developers are leveraging RISC-V, an open-source ISA, to build soft processor cores that can run Linux and serve many different application areas. Before you start developing an edge computing system, make sure your vendor can provide IP that supports your embedded application development alongside your custom logic implementation.
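
As a rough illustration of how software on a RISC-V soft core might hand work to custom logic in the fabric, the snippet below drives a hypothetical accelerator through memory-mapped registers. The base address, register offsets, and bit definitions are invented for this example; a real design would follow the address map generated for the specific FPGA project.

    #include <stdint.h>

    /* Hypothetical memory-mapped register block for a custom accelerator
     * instantiated in the FPGA fabric next to the RISC-V soft core. */
    #define ACCEL_BASE      0x80001000u
    #define ACCEL_SRC_ADDR  (*(volatile uint32_t *)(ACCEL_BASE + 0x00))
    #define ACCEL_LEN       (*(volatile uint32_t *)(ACCEL_BASE + 0x04))
    #define ACCEL_CTRL      (*(volatile uint32_t *)(ACCEL_BASE + 0x08))
    #define ACCEL_STATUS    (*(volatile uint32_t *)(ACCEL_BASE + 0x0C))

    #define ACCEL_CTRL_START   0x1u
    #define ACCEL_STATUS_DONE  0x1u

    /* Offload processing of a data buffer to the in-fabric accelerator and
     * busy-wait until the hardware signals completion. */
    void accel_process(const uint8_t *buf, uint32_t len)
    {
        ACCEL_SRC_ADDR = (uint32_t)(uintptr_t)buf;  /* where the data lives */
        ACCEL_LEN      = len;                       /* bytes to process */
        ACCEL_CTRL     = ACCEL_CTRL_START;          /* start the hardware */

        while ((ACCEL_STATUS & ACCEL_STATUS_DONE) == 0) {
            /* spin until the custom logic finishes */
        }
    }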

Get Started with Efinix

View our FPGA and RISC-V family pages to see the portfolio of Efinix solutions.