AI is NOT new, but advances in compute technology and data are making it broadly usable now: The field of artificial intelligence, including machine learning, deep learning and other variations of AI, has existed for over 50 years. Recently, however, the availability of huge data sets to train systems, together with sufficiently powerful compute capacity to execute the algorithms, has made it possible to exploit AI in an increasingly large set of enterprise use cases.
AI is an enabling technology within almost all IT and business processes and systems: AI is not a new workload or application but rather a new capability within existing and new systems, one that shifts the interpretation of data from humans to the underlying IT infrastructure. This has a significant impact: it reduces or eliminates the level of human involvement in reasoning across data, making business and other decisions, and even engaging with other humans.
AI success is dependent on expertise, data, code and hardware innovations: There are many AI/DL/ML software frameworks, ranging from TensorFlow (Google driven) to Caffe. To exploit these frameworks, businesses must have the expertise to define their architecture, pick the AI algorithms and write code to incorporate these frameworks into their workloads, AND they must have access to large data sets for training. Finally, AI systems are both data and compute intensive and require high performance, scalable compute and storage to function.
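To ground the "write code" part of this, the sketch below fits a one-variable linear model by gradient descent in plain Python. It is a deliberately tiny stand-in (all names and numbers are illustrative) for the training loops that frameworks like TensorFlow automate at much larger scale.

```python
# Minimal sketch: fitting y = w*x + b by gradient descent in pure Python.
# Frameworks such as TensorFlow automate exactly this kind of loop (plus
# differentiation, batching, and hardware acceleration) at far larger scale.

def train(data, lr=0.01, epochs=500):
    """Fit a 1-D linear model to (x, y) pairs by mean-squared-error descent."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Training data generated from y = 2x + 1 (a "large data set", in miniature).
points = [(x / 10, 2 * (x / 10) + 1) for x in range(20)]
w, b = train(points, lr=0.1, epochs=2000)
print(round(w, 2), round(b, 2))  # recovers roughly w ~ 2, b ~ 1
```

The same three ingredients the text names appear even here: an algorithm choice (gradient descent on squared error), code to implement it, and data to train on.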
AI is mostly being targeted at core mission critical business processes and functions, and that is very good for Dell Technologies: Unlike early big data efforts, which mostly started in new areas and non-core use cases using “good enough” infrastructure, AI is being embedded into core business processes such as patient care, financial trading, customer care and fleet management. This means that the increased compute and storage demands of AI enhanced systems drive consumption of enterprise class servers, storage, networking and software, and that these systems are expected to deliver enterprise grade resiliency, availability and performance.
AI will drive a new innovation cycle in hardware: AI/ML/DL systems are incredibly compute and storage intensive. To keep up, we already see GPU and FPGA acceleration being used to outperform general purpose CPUs, and we have line of sight (and investments in many) to new custom silicon optimized for running high precision, graph-processing-oriented AI algorithms. It is estimated that by 2025, 60% of AI will run on optimized silicon. This is also good for Dell Technologies, as every one of these optimized processing chips will reside in an enterprise class server.
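A rough, hypothetical calculation illustrates why these workloads invite acceleration: even a small stack of dense layers costs tens of millions of multiply-accumulate operations per input sample. The layer sizes below are invented for illustration only.

```python
# Back-of-envelope sketch of why DL workloads invite custom silicon:
# a dense (fully connected) layer with n_in inputs and n_out outputs costs
# roughly n_in * n_out multiply-accumulate operations (MACs) per sample.

def dense_layer_macs(n_in, n_out):
    """Multiply-accumulate count for one dense layer, one input sample."""
    return n_in * n_out

# Illustrative mid-size network: three dense layers.
layers = [(4096, 4096), (4096, 4096), (4096, 1000)]
macs_per_sample = sum(dense_layer_macs(i, o) for i, o in layers)

# At a batch of 256 samples per training step:
macs_per_step = macs_per_sample * 256
print(f"{macs_per_step:,} MACs per forward pass of one batch")
```

Because each of those operations is an independent multiply-add over a regular grid of numbers, they map far better onto GPUs, FPGAs and custom silicon than onto general purpose CPU pipelines.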
The hyper scale cloud providers have deep investments in AI, but most AI is better delivered on premises: AWS, IBM, Google and Microsoft all have large scale investments in adding AI/ML/DL capabilities to their public cloud offerings. While public clouds are capable of delivering AI for enterprises, the criticality, performance, reliability and sensitivity of the core business processes using AI mean that most production AI is better delivered on dedicated private infrastructure. AI deployments will bias toward hybrid or private infrastructure when used in core business processes, but training is likely to be suitable for both dedicated and shared public cloud infrastructure.
1 | P a g e 4/5/2017
Office of the CTO
1. What are these terms: Artificial Intelligence, Machine Learning, Deep Learning,
Deep Neural Networks?
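As a partial, informal illustration of the last of these terms, the toy sketch below (hand-picked weights, purely illustrative names) shows a neural network as a stack of layers, each a weighted sum followed by a nonlinearity; "deep" simply means several such layers composed together.

```python
# Illustrative sketch of the term "deep neural network": a network is a
# stack of layers, each a linear transform followed by a nonlinearity
# (here ReLU); "deep" means several such layers composed in sequence.

def relu(values):
    """Rectified linear unit: clamp negatives to zero, elementwise."""
    return [max(0.0, v) for v in values]

def layer(weights, bias, inputs):
    """One layer: weighted sums plus bias, then the ReLU nonlinearity."""
    return relu([sum(w * x for w, x in zip(row, inputs)) + b
                 for row, b in zip(weights, bias)])

def deep_network(layers, inputs):
    """Compose layers: the output of each layer feeds the next."""
    for weights, bias in layers:
        inputs = layer(weights, bias, inputs)
    return inputs

# A toy two-layer network with hand-picked (not trained) weights.
net = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),  # layer 1: 2 inputs -> 2 outputs
    ([[1.0, 1.0]], [0.1]),                    # layer 2: 2 inputs -> 1 output
]
print(deep_network(net, [2.0, 1.0]))
```

Machine learning is the broader practice of fitting such weights from data rather than hand-picking them, and deep learning is the subset that uses many-layered networks like this one, scaled up enormously.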
1) There are many (mostly open source) AI frameworks available today. They all differ in both how they operate and what they are optimized for. We expect that many of these will disappear and new ones will emerge. This creates complexity for customers, as they need to select a framework for their project and it is not always clear which one is best. The good news is that almost all of the frameworks are open, and almost all of them benefit from hardware acceleration (GPU, FPGA, etc.) and thus drive demand for enterprise class products.
2) What server products does Dell EMC have today to enable Machine Learning?
Machine learning systems need huge data sets to be properly trained. That data varies dramatically in structure but almost always includes huge quantities of unstructured data (video, audio, images, logs, etc.) that must be aggregated and stored so that it is available for the training algorithms to use. Additionally, machine learning models are very large (multi gigabyte), and the data being processed in real time also tends to be extremely high volume. The unstructured storage portfolio of Isilon and ECS is optimized for aggregating petabytes or exabytes of unstructured data, enabling machine learning to become accurate and to store its results. All flash storage systems such as VMAX All Flash, XtremIO, Unity, and SC are optimized to provide high throughput access both into and out of the inferencing elements of a real time deep learning system.
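As a back-of-envelope illustration of why aggregate storage throughput matters for training (all numbers below are hypothetical), consider the time it takes to stream a training corpus once per epoch:

```python
# Rough illustration (hypothetical numbers) of training storage throughput:
# how long it takes to read an entire training corpus once per epoch.

def hours_per_epoch(dataset_tb, throughput_gb_s):
    """Hours to read dataset_tb terabytes at throughput_gb_s gigabytes/second."""
    seconds = dataset_tb * 1000 / throughput_gb_s
    return seconds / 3600

# A 500 TB corpus read at 2 GB/s vs 40 GB/s aggregate bandwidth:
print(round(hours_per_epoch(500, 2), 1))   # modest aggregate bandwidth
print(round(hours_per_epoch(500, 40), 1))  # scale-out / all-flash tier
```

The difference between roughly three days and a few hours per pass over the data is exactly the gap that scale-out and all-flash storage is meant to close.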
5. What are some innovation areas Dell Technologies is focused on to move machine learning forward?
a. Server Innovation: 14G servers will increase compute capacity and allow customers to exploit Intel innovations in acceleration going forward.
b. Silicon Investments: Dell Technologies has invested in numerous semiconductor
startups developing new processor models to accelerate machine learning use
cases. These innovations will all reside within enterprise class servers.
c. Stream Processing Storage: ECS is developing unstructured storage optimized to handle streams of data, not just objects (project Nautilus). Since most machine learning deals with time sequence streams, this optimization will increase performance for AI use cases.
d. GPU as a service: OCTO is working with GE, Walmart and other key customers to develop new cloud abstraction software that makes GPUs, FPGAs and other accelerators available as logical pools of capacity, versus the model today where, even in public clouds, these hardware elements are limited to a single user (and very low utilization). This will allow for dramatic improvements in the utilization of high performance hardware in AI use cases. This capability will also be contributed to Pivotal Cloud Foundry.
e. Pivotal Labs has data scientists and machine learning expertise to help
customers build their AI optimized cloud native applications.
f. Internal use of AI: In R&D, Services, and many other areas we are working with AI systems to enhance fault prediction, customer care, inventory management and even assessment of advanced technology data.
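The "logical pools of capacity" idea in item (d) can be sketched as a simple allocator that hands accelerators out on demand and reclaims them when workloads finish. This is purely illustrative and is not based on the actual OCTO or Pivotal Cloud Foundry software.

```python
# Hypothetical sketch of accelerators as a logical pool: instead of pinning
# a GPU to one user, a pool hands devices out on demand and reclaims them,
# which is how utilization improves. Names and API are illustrative only.

class AcceleratorPool:
    def __init__(self, device_ids):
        self.free = list(device_ids)
        self.in_use = {}  # device id -> owning workload

    def acquire(self, owner):
        """Hand the next free device to a workload, or None if exhausted."""
        if not self.free:
            return None
        dev = self.free.pop()
        self.in_use[dev] = owner
        return dev

    def release(self, dev):
        """Return a device to the pool when its workload finishes."""
        self.in_use.pop(dev, None)
        self.free.append(dev)

pool = AcceleratorPool(["gpu0", "gpu1"])
a = pool.acquire("training-job")
b = pool.acquire("inference-job")
print(pool.acquire("third-job"))  # pool exhausted -> None
pool.release(a)                   # training done; GPU goes back to the pool
print(pool.acquire("third-job"))  # now succeeds
```

The utilization gain comes from the release step: a device idles only while the pool is genuinely over-provisioned, not whenever its single assigned user is idle.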