TinyML Platform

Designed for Compute Acceleration

Acceleration for AI Models
on the Sapphire RISC-V SoC

Efinix offers a TinyML platform based on an open-source TensorFlow Lite for Microcontrollers (TFLite Micro) C++ library running on the Sapphire RISC-V SoC with the Efinix TinyML Accelerator.

Open Source

Field Reconfigurable

Free AI Framework

High Performance and Low Power

There is a drive to push Artificial Intelligence (AI) closer to the network edge, where it can operate on data with lower latency and greater context. At the edge, however, power and compute resources are at a premium, and compute-hungry AI algorithms struggle to deliver the required performance. The open-source community developed TensorFlow Lite, which creates quantized versions of standard TensorFlow models and, using a library of functions, enables them to run on microcontrollers at the far edge. The Efinix TinyML platform takes these TensorFlow Lite models and, using the custom instruction capability of the Sapphire core, accelerates them in FPGA hardware to dramatically improve performance while retaining low power and a small footprint.
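The quantization that TensorFlow Lite applies maps 32-bit floating-point tensor values to 8-bit integers using a scale and a zero point. A minimal sketch of that affine int8 scheme (the values and function names here are illustrative, not part of the Efinix or TensorFlow APIs):

```python
def quantize(x, scale, zero_point):
    """Affine int8 quantization: q = round(x / scale) + zero_point, clamped to int8 range."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Recover an approximate float value from the int8 representation."""
    return (q - zero_point) * scale

# Example: a tensor whose values lie roughly in [-1.0, 1.0]
scale, zero_point = 1.0 / 127, 0
q = quantize(0.5, scale, zero_point)   # 64
x = dequantize(q, scale, zero_point)   # ~0.504
```

Running inference on these int8 values instead of floats is what lets the models fit on microcontrollers, and it is these integer operations that the Efinix TinyML Accelerator speeds up in FPGA hardware.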

Advantages of Efinix TinyML Platform

  • Flexible AI solutions with a configurable Sapphire RISC-V SoC, the Efinix TinyML Accelerator, an optional user-defined accelerator, and a hardware accelerator socket to cater to various application needs.
  • Supports all AI inferences supported by the TFLite Micro library, which is maintained by the open-source community.
  • Multiple acceleration options with different performance-to-design-effort ratios to speed up overall AI inference deployment.

Efinix TinyML Flow

TensorFlow, the TensorFlow logo and any related marks are trademarks of Google Inc.

AI Face Recognition
AI Object Recognition
AI Medical
AI Audio Recognition

Acceleration Strategies

Efinix presents a flexible and scalable RISC-V-based TinyML platform with various acceleration strategies:

  • The open-source TFLite Micro C++ library running on the user-configurable Efinix Sapphire RISC-V SoC.
  • The Efinix TinyML Accelerator for accelerating commonly used AI inference layers/operations.
  • An optional user-defined accelerator for other compute-intensive layers/operations, to be determined per application needs.
  • A pre-defined hardware accelerator socket, connected to a Direct Memory Access (DMA) controller and the SoC slave interface for data transfer and CPU control, which may be used for pre-processing or post-processing before or after the AI inference.

Efinix provides an end-to-end design flow that facilitates the deployment of TinyML applications on Efinix FPGAs. The design flow encompasses everything from AI model training and post-training quantization all the way to running inference on RISC-V with a custom TinyML accelerator. In addition, it shows the steps to deploy TinyML on the highly flexible Efinix domain-specific framework.


To further explore the Efinix TinyML Platform:

  • Training and Quantization
    • For users interested in exploring the model training and post-training quantization flow, refer to the Efinix Model Zoo in the model_zoo directory to get started.
    • Users who would like to skip training and quantization can proceed to the TinyML Hello World design for static-input AI inference on FPGAs. Pre-trained, quantized models are included in the TinyML Hello World example designs.
  • AI Inference on FPGAs
    • A GUI-based Efinix TinyML Generator in the tools/tinyml_generator directory is provided for generating model data files and for customizing the Efinix TinyML Accelerator with different accelerator modes and levels of parallelism. Users may skip this step for an initial exploration of the example designs provided by the Efinix TinyML Platform.
    • The TinyML Hello World design lets users run AI inference on FPGAs using the TFLite Micro library with the Efinix TinyML Accelerator.
    • AI inference with static input is crucial for verifying a model against its golden reference model. In addition, profiling can be performed to identify compute-intensive operations/layers for acceleration.
    • Refer to TinyML Hello World to get started.
  • TinyML Solution on FPGAs
    • A flexible domain-specific framework is vital for quick deployment of TinyML solutions on FPGAs.
    • To leverage the Efinix domain-specific framework for TinyML vision solution deployment, refer to the Edge Vision TinyML Framework to get started.
Efinix TinyML Generator

Open-Source Code

Download the open-source code, example designs, and supporting materials for the TinyML platform from GitHub.