Overview of Deep Learning
The deployment of a Deep Learning (DL) algorithm proceeds in two stages: training and
inference. As illustrated in Figure 1, training sets the parameters of the neural network
model that implements the algorithm through a learning process that iterates over a large
dataset while minimizing a loss function [1]; in general, the larger and more representative
the dataset, the higher the accuracy of the model. The output of this stage, the trained
model, is then used in the inference stage to make predictions on new data.
The major difference between training and inference is that training employs both forward
propagation and backward propagation, whereas inference consists mostly of forward
propagation [2].
Figure 1. Deep Learning phases.
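To make this distinction concrete, the sketch below contrasts one training step (forward propagation, loss computation, backward propagation, parameter update) with an inference call (forward propagation only). It is a minimal illustration in PyTorch; the model architecture, optimizer, and stand-in data are assumptions for illustration, not taken from this paper.

```python
import torch
import torch.nn as nn

# Illustrative model, loss, and optimizer (not from the paper).
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training: forward propagation produces predictions and a loss;
# backward propagation computes gradients used to update the parameters.
def train_step(inputs, labels):
    optimizer.zero_grad()
    outputs = model(inputs)          # forward propagation
    loss = loss_fn(outputs, labels)  # loss function over the batch
    loss.backward()                  # backward propagation
    optimizer.step()                 # parameter update
    return loss.item()

# Inference: forward propagation only; no gradients are needed.
@torch.no_grad()
def infer(inputs):
    model.eval()
    return model(inputs).argmax(dim=1)  # predicted class per sample

# Example usage with random stand-in data.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(train_step(x, y))
print(infer(x))
```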
Deep Learning Inferencing
Upon completing training, a model can be deployed on a variety of hardware platforms such
as CPUs, GPUs, FPGAs, or special-purpose devices to perform a specific business-logic
function or task such as identification, classification, recognition, or segmentation (Figure 2).
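As a generic illustration of deploying a trained model for such a task, the sketch below loads a model with ONNX Runtime and runs a classification forward pass. The file name model.onnx, the input shape, and the execution provider are assumptions for illustration only; this is one possible deployment path, not the Mipsology/Alveo flow covered in this paper.

```python
import numpy as np
import onnxruntime as ort

# Load a previously trained and exported model (file name is hypothetical).
session = ort.InferenceSession(
    "model.onnx",
    providers=["CPUExecutionProvider"],  # select the provider for the target device
)

input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in image batch

# Inference is forward propagation only: compute class scores, then take the argmax.
scores = session.run(None, {input_name: batch})[0]
print("predicted class:", scores.argmax(axis=1))
```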