
Figure 2. Inference Flow.
The hardware accelerator must meet a set of basic requirements:
• Deliver high computational throughput with low latency (a simple measurement sketch follows this list).
• Support neural networks exactly as defined by AI scientists, without changes that would force a time-consuming redesign or endless retraining.
• Adapt to different workloads without long delays when switching between models.
• Be a proven hardware solution that can run 24/7 without interruption.
• Be flexible enough to keep pace with constantly evolving neural network technology.
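To make the first requirement measurable, here is a minimal sketch of how throughput and latency are typically quantified for CNN inference. It is not taken from this paper: the use of PyTorch, the ResNet-50 model, and the batch size are illustrative assumptions, and the same method applies to any framework or accelerator.

```python
# Hypothetical benchmark sketch: per-batch latency and image throughput
# for CNN inference. PyTorch, ResNet-50, and the batch size are assumptions.
import time
import torch
import torchvision.models as models

model = models.resnet50(weights=None).eval()  # untrained weights are fine for timing
batch = torch.randn(8, 3, 224, 224)           # batch of 8 RGB 224x224 images

with torch.no_grad():
    for _ in range(5):                        # warm-up runs to stabilize timing
        model(batch)
    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"latency: {elapsed / runs * 1000:.1f} ms/batch, "
      f"throughput: {runs * batch.shape[0] / elapsed:.0f} images/s")
```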
FPGA devices are an excellent fit for these tasks. They deliver high processing power, especially for fixed-point computation, offer high adaptability, and consume little power. Because DL is evolving rapidly, an FPGA can accommodate new requirements without silicon re-spins, drastically reducing the cost of ownership. FPGAs also come in various sizes, making them suitable for most of the places where inference happens, from IoT devices and embedded applications to the field, data centers, and clouds.
However, for all their noteworthy characteristics, FPGAs have traditionally been complex to program, requiring uncommon skills and knowledge. New solutions available in the marketplace are making implementation much easier.
The focus of this paper is on FPGA-accelerated inference of Convolutional Neural Networks (CNNs), specifically on how Zebra from Mipsology enables ALVEO boards to perform this task in Dell PowerEdge servers.
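To ground the "without changes" requirement, below is an ordinary framework-level inference script. It is hypothetical and not taken from Mipsology's documentation: the Keras MobileNetV2 model and the random input batch are illustrative assumptions. Mipsology's stated approach is that code like this runs unmodified, with Zebra transparently mapping the CNN computation onto the ALVEO FPGA underneath the framework, so nothing in the script refers to the accelerator.

```python
# Ordinary framework inference code (illustrative; nothing Zebra-specific).
# Per Mipsology's claim, such a script needs no changes to run on a
# Zebra-enabled ALVEO board: acceleration happens below the framework layer.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)     # any trained CNN would do
images = np.random.rand(4, 224, 224, 3).astype("float32")   # stand-in input batch

scores = model.predict(images, verbose=0)
print(scores.shape)  # (4, 1000): one class-score vector per image
```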