FREE! Easy-to-use, powerful software tools
https://www.intel.com/content/www/us/en/programmable/solutions/acceleration-hub/downloads.html
DE-Series Development Boards Designed for
Student & Maker Projects
Source: https://www.terasic.com
Total access to all
developer resources
▪ Documentation
▪ Design examples
▪ Support community
▪ Virtual or on-demand training
SAMPLE COURSE: Foundations of Digital Logic
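A foundations-of-digital-logic course typically starts from building blocks such as the one-bit full adder. As a hypothetical illustration (the adder itself is standard material; its use here as a course example is an assumption), its behavior can be sketched in a few lines:

```python
# Hypothetical course example: a one-bit full adder.
# Inputs a, b, and carry-in cin are each 0 or 1.
def full_adder(a, b, cin):
    """Return (sum, carry_out) for one-bit inputs."""
    s = a ^ b ^ cin                    # sum bit: XOR of all three inputs
    cout = (a & b) | (cin & (a ^ b))   # carry-out: majority of the inputs
    return s, cout

# Enumerate the full truth table.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))
```

For example, `full_adder(1, 1, 1)` returns `(1, 1)`: binary 1 + 1 + 1 = 3, i.e. sum 1 with carry 1.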
Important Links and References
Course Outline
Develop programmable solutions and validate your workloads on leading FPGA hardware, with tools optimized for Intel® technology. Use this cloud solution in the classroom to support an acceleration-engineering curriculum.
[Figure: Intel® AI acceleration portfolio, spanning training to inference]
▪ Training: workstation and data center/learning; mainstream and intensive training
▪ Inference: workstation and data center; mainstream AI inference; Intel® GNA (IP)
▪ Use cases: flexible acceleration, mainstream inference, higher-throughput inference, vision (1-20 W), speech/audio (1-100+ mW), autonomous driving, custom inference
▪ FPGA products: Mustang-F100-A10*, Intel® FPGA PAC N3000 for networking, Intel® FPGA PAC with Arria® 10 GX, Intel® FPGA PAC D5005 for the data center
* Other names and brands may be claimed as the property of others.
Accelerating workload applications
Intel Workload Acceleration Solutions – Ready Now!
▪ Deterministic low latency at higher queries/second: est. 80% TCO savings
▪ High ingest rate with fast query/second: est. 50% TCO savings
▪ Speedup of the broad GATK pipeline: est. 60% TCO savings
▪ Transcode images faster within Spark: est. 45% TCO savings
▪ Real-time AI inference within the BigDL framework: est. 50% TCO savings
▪ Risk analytics within Spark: est. 50% TCO savings
▪ Deep packet inspection at 40 Gbps, lossless: est. 75% TCO savings
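The TCO percentages above are the deck's own estimates. As a minimal sketch of how such a figure translates into cost, assuming a hypothetical $100,000 baseline (the baseline and workload labels here are illustrative, not from the deck):

```python
# Sketch: apply the deck's estimated TCO-savings percentages to a
# hypothetical baseline cost. Percentages come from the table above;
# the $100,000 baseline is an illustrative assumption.
EST_TCO_SAVINGS = {
    "Deterministic low latency": 0.80,
    "High ingest rate": 0.50,
    "GATK pipeline speedup": 0.60,
    "Spark image transcode": 0.45,
    "BigDL AI inference": 0.50,
    "Risk analytics in Spark": 0.50,
    "Deep packet inspection": 0.75,
}

def projected_cost(baseline, savings_fraction):
    """Remaining total cost of ownership after the estimated savings."""
    return baseline * (1.0 - savings_fraction)

for workload, frac in EST_TCO_SAVINGS.items():
    print(f"{workload}: ${projected_cost(100_000, frac):,.0f}")
```

So an 80% estimated savings on a $100,000 baseline would leave roughly $20,000 of TCO.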
Mainstream Adoption by Worldwide Server OEMs
Growing list of OEM partners for the Intel® Programmable Acceleration Card*
▪ Supported platforms for FPGA: Intel® Programmable Acceleration Card with Intel Arria® 10 GX FPGA; Mustang-F100
▪ Currently manufactured by Intel1
1Please contact an Intel representative for a complete list of original design manufacturers (ODMs). *Other names and brands may be claimed as the property of others.
Speed Deployment with Pre-Trained Models and Samples
Expedite development, accelerate deep learning inference performance, and speed production deployment.
FPGAs Are Ideal Devices for the Data Centric World
• https://www.intel.com/content/www/us/en/products/programmable.html
Why Intel® FPGAs for Machine Learning?
Convolutional neural networks are compute intensive.
▪ Highly parallel architecture: facilitates efficient low-batch video stream processing and reduces latency
▪ Configurable distributed floating-point DSP blocks: FP32 at 9 TFLOPS, plus FP16 and FP11; accelerates computation by tuning compute performance
▪ Tightly coupled high-bandwidth memory: >50 TB/s on-chip SRAM bandwidth with random access; reduces latency and minimizes external memory access
▪ Massive parallelism
– Millions of logic elements
– Thousands of embedded memory blocks
– Thousands of variable-precision DSP blocks
– Programmable routing
– Dozens of high-speed transceivers
– Various built-in hardened IP
[Figure: Adaptive Logic Module (ALM)]
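To make "compute intensive" concrete, the multiply-accumulate (MAC) count of a single convolution layer can be sketched as below. The layer shape is an illustrative assumption (a ResNet-style 3×3 convolution), not a figure from the deck:

```python
# Sketch: count multiply-accumulate (MAC) operations for one 2-D
# convolution layer. Layer dimensions are illustrative assumptions.
def conv2d_macs(h_out, w_out, c_in, c_out, k):
    """MACs for a k x k conv producing an h_out x w_out x c_out output."""
    # Each output element needs k*k*c_in multiply-accumulates.
    return h_out * w_out * c_out * (k * k * c_in)

# One 3x3, 256-in/256-out channel layer on a 56x56 feature map:
macs = conv2d_macs(56, 56, 256, 256, 3)
print(f"{macs:,} MACs")  # about 1.85 billion MACs for a single layer
```

A full network stacks dozens of such layers, which is why the massive parallelism and on-chip memory bandwidth listed above matter for inference throughput and latency.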
Important Links and References