White Papers

PowerEdge Product Group
Direct from Development

Building the Optimal Machine Learning Platform
Tech Note by: Austin Shelnutt, Paul Steeves
SUMMARY
Machine Learning customers have more choices than ever for neural network models and frameworks. Those choices impact the type, number, and form factor of the preferred accelerator; the dataflow topology between accelerators and CPUs; the amount and speed of direct-attached storage; and the necessary bandwidth of I/O devices.
This tech note provides a brief overview of some of the basic principles of Machine Learning and describes the challenges and trade-offs involved in constructing the optimal Machine Learning platform for different use cases.
While various forms of machine learning have existed for several decades, the
past few years of development have yielded some extraordinary progress in
democratizing the capabilities and use cases for artificial intelligence in a wide
multitude of industries. Image classification, voice recognition, fraud detection,
medical diagnostics, and process automation are just a handful of the
burgeoning use cases for machine learning that are reinventing the very world
we live in. This tech note provides a brief overview of some of the basic principles of Machine Learning and describes the challenges and trade-offs involved in constructing the optimal Machine Learning platform for different use cases.
Neural Networks are key to machine learning
At the center of the growth in machine learning is a modeling technique referred
to as neural networks (also known as deep neural networks, or deep learning),
which is based on our understanding of how the human brain learns and
processes information. Neural networks are not a new concept; they have been proposed as models for computational learning since the 1940s. What makes neural networks so attractive for machine learning is that they provide a mathematical ecosystem that allows the decision-making accuracy of a computer to scale beyond explicit programming rules and, in a sense, learn from experience.
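To make "learning from experience" concrete, here is a minimal illustrative sketch (not taken from this tech note): a single artificial neuron with a sigmoid activation learns the logical AND function from labeled examples via gradient descent, rather than from hand-written rules. The dataset, learning rate, and epoch count are all hypothetical choices for the toy example.

```python
import math
import random

# Labeled training data: inputs and the desired AND output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # weights, randomly initialized
b = 0.0                                        # bias
lr = 0.5                                       # learning rate (hypothetical)

def predict(x):
    """Weighted sum of inputs passed through a sigmoid activation."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Training loop: repeatedly nudge weights to reduce prediction error.
for epoch in range(2000):
    for x, y in data:
        err = predict(x) - y          # gradient of log-loss w.r.t. the pre-activation
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# After training, the neuron reproduces AND without explicit rules.
print([round(predict(x)) for x, _ in data])  # → [0, 0, 0, 1]
```

Scaled up to millions of such neurons arranged in layers, this same gradient-driven weight adjustment is what a deep neural network does during training, which is also why the process is so computation- and data-hungry.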
Historically, the limiting factor of neural network models has been that they are extremely computation-intensive and require a tremendous amount of labeled input data to be able to "learn". This double hurdle of processing power and available data had prevented them from becoming broadly relevant until now.
© 2018 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries
