
Deep Learning

Identify and build the right neural network for solving your biggest challenges today—and in the future.
Overview
Deep Learning is making the impossible a reality. Applications of artificial neural networks are wide-reaching and include solutions for problems in language (speech recognition, translation), transportation (autonomous driving, real-time analysis), imaging (disease diagnosis, facial recognition), and many more areas across sports, healthcare, and industry.

Expertise in deep learning is therefore quickly becoming an in-demand requirement for technical positions in software engineering and data science. Whether you are in an engineering or analytical role, this 10-week online program from Carnegie Mellon University School of Computer Science Executive Education will help you to understand how neural networks operate and to identify the right architecture for addressing your current and future challenges. By the end of this program, you can expect to have the confidence needed to continue your own learning pursuits and to generate and refine deep learning models capable of solving prediction and identification problems on your own.

This program begins with a gradual introduction to basic neural network architecture and then progresses to more advanced models (CNNs, RNNs) and the more sophisticated concepts necessary to solve time-based problems. With this knowledge, you will be better prepared to discuss and tackle challenging technical problems central to many global industries.

Key Takeaways

In this program, you will:

Develop an understanding of deep learning techniques

Understand the structure, function, and training of key neural network architectures for building tools and systems

Build the confidence to apply deep learning methods to real-world problems

Why Deep Learning From Carnegie Mellon University School of Computer Science?

Carnegie Mellon University's School of Computer Science racked up five top-ranked specialty areas and maintained its No. 1 overall ranking for computer science in U.S. News and World Report's 2022 Best Colleges rankings.

The school ranked first in the specialty areas of artificial intelligence (AI), cybersecurity, game development, programming languages, and software engineering.

Who Should Attend?


This program is meant for participants who seek a deeper understanding of
neural networks and wish to develop the technical ability to solve bigger
problems, such as those relating to image classification, natural language
processing, speech recognition, and more.

This program is most suitable for:

Data Scientists & Teams

AI & Machine Learning (ML) Professionals

Software Engineers & Software Developers

Technology Professionals

Program Curriculum

Module 1: Introduction and Universal Approximation

Receive an introduction to the foundational concepts of neural networks, the basic architecture of a neuron, and the history of the field.

Describe significant impacts and successful use cases of neural networks in contemporary society

Describe the origins of modern connectionist neural networks with respect to early models of human cognition

Describe the structure and function of a perceptron (see the sketch below)
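As a concrete companion to that last objective, here is a minimal sketch (not course material) of a single perceptron in Python with NumPy. The weights, bias, and step activation are illustrative assumptions chosen so the unit behaves as a logical AND gate.

```python
import numpy as np

def perceptron(x, w, b):
    """A single perceptron: weighted sum of inputs plus bias,
    passed through a threshold (step) activation."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Illustrative weights and bias chosen so the unit acts as a logical AND gate.
w = np.array([1.0, 1.0])
b = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(np.array(x), w, b))
```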

Module 2: Training Multilayer Perceptrons

Explore the concept of learning as it relates to multilayer perceptrons, including model parameters, gradients, and loss functions.

Describe a multilayer perceptron as a function with tunable parameters

Characterize learning as the process of estimating model parameters to minimize error between the network function and the target function

Define a gradient as a vector of partial derivatives (see the sketch below)
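To make the idea of a network as a function with tunable parameters, and of a gradient as a vector of partial derivatives, more tangible, here is a small illustrative sketch in NumPy. The architecture, layer sizes, and data are assumptions for the example, and the gradient is estimated by finite differences purely for clarity; it is not the program's training procedure.

```python
import numpy as np

def mlp(x, params):
    """A tiny multilayer perceptron viewed as a function with tunable parameters."""
    W1, b1, W2, b2 = params
    h = np.tanh(W1 @ x + b1)      # hidden layer
    return W2 @ h + b2            # linear output

def loss(params, x, y):
    """Squared error between the network output and the target."""
    return float(np.sum((mlp(x, params) - y) ** 2))

def numerical_gradient(params, x, y, eps=1e-5):
    """Gradient as a vector of partial derivatives, estimated by finite differences."""
    grads = []
    for p in params:
        g = np.zeros_like(p)
        for idx in np.ndindex(p.shape):
            old = p[idx]
            p[idx] = old + eps; up = loss(params, x, y)
            p[idx] = old - eps; down = loss(params, x, y)
            p[idx] = old
            g[idx] = (up - down) / (2 * eps)
        grads.append(g)
    return grads

rng = np.random.default_rng(0)
params = [rng.normal(size=(3, 2)), np.zeros(3), rng.normal(size=(1, 3)), np.zeros(1)]
x, y = np.array([0.5, -1.0]), np.array([1.0])
print(loss(params, x, y), [g.shape for g in numerical_gradient(params, x, y)])
```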

Module 3: Stochastic Gradient Descent and Optimizers

Learn about the use of incremental updates and methods for tuning convergence for optimal model performance.

Describe the relationship of the learning rate and the ability of gradient descent to converge to a global minimum of a function

Describe incremental updates (i.e., stochastic gradient descent) to learn network parameters (see the sketch below)

Explain the necessary conditions for stochastic gradient descent to converge
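The toy sketch below, an illustration rather than course code, shows stochastic gradient descent making one incremental parameter update per training example while fitting a line, and hints at how the learning rate governs convergence. The model, data, and learning rates are invented for the example.

```python
import numpy as np

def sgd_fit(xs, ys, lr=0.1, epochs=50, seed=0):
    """Fit y ~ w*x + b with stochastic gradient descent:
    one incremental parameter update per training example."""
    rng = np.random.default_rng(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(xs)):      # visit examples in random order
            err = (w * xs[i] + b) - ys[i]       # prediction error on one example
            w -= lr * 2 * err * xs[i]           # partial derivative w.r.t. w
            b -= lr * 2 * err                   # partial derivative w.r.t. b
    return w, b

xs = np.linspace(-1, 1, 20)
ys = 3.0 * xs + 0.5
# A small learning rate converges toward w=3, b=0.5; a rate that is too
# large (e.g. lr=2.0) makes the updates overshoot and diverge instead.
print(sgd_fit(xs, ys, lr=0.1))
```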

Module 4: Basics of Convolutional Neural Networks (CNNs)

Explore convolution and its role in ensuring that neural networks are invariant with respect to target pattern location, and how shared parameters decrease computational complexity, leading to model performance gains.

Describe the need for shift-invariant pattern detection

Describe the process of shift-invariant pattern detection using a scanning neural network (see the sketch below)

Explain the training of neural networks with shared parameters
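The following sketch, again only an illustration, scans one shared set of weights across a 1-D signal, the same operation CNN layers apply in higher dimensions, to show why shared parameters give shift-invariant pattern detection. The kernel and signals are made-up values.

```python
import numpy as np

def scan_shared_weights(signal, kernel):
    """Scan one shared set of weights (the kernel) across the signal.
    Because the same parameters are reused at every position, the
    response to a pattern does not depend on where the pattern occurs."""
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

pattern = np.array([1.0, -1.0, 1.0])
signal_a = np.concatenate([pattern, np.zeros(5)])   # pattern at the start
signal_b = np.concatenate([np.zeros(5), pattern])   # same pattern, shifted
print(scan_shared_weights(signal_a, pattern).max())  # both maxima are equal,
print(scan_shared_weights(signal_b, pattern).max())  # illustrating shift invariance
```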

Module 5: CNNs: Training and Variants

This module examines CNN layers that perform a variety of operations, including alternating pooling and convolution.

Summarize the backpropagation process through flat, convolutional, and pooling layers of a convolutional neural network

Explain how to compute derivatives for the affine map in convolutional layers of a convolutional neural network through backpropagation

Explain the dependency paths between individual elements of an activation map and the loss

The time allotted to complete assignments has been extended after week 5.

Module 6: Basics of Recurrent Neural Networks (RNNs)

Explore RNNs and the kinds of problems that are better suited to these models.

Describe the types of problems that require recurrence in neural networks

Explain the pictorial representation of recurrent neural networks used in this program

Explain why recurrent connections are needed to refer to historical trends and patterns (see the sketch below)
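As a minimal illustration of recurrence, the sketch below implements a single tanh recurrent cell whose hidden state is fed back at every time step, which is what lets the network refer to earlier parts of the sequence. The weight shapes and random inputs are assumptions for the example, not the program's notation.

```python
import numpy as np

def rnn_forward(inputs, W_x, W_h, b):
    """A simple recurrent cell: the hidden state at each step depends on the
    current input and on the previous hidden state, carrying historical
    context forward through the sequence."""
    h = np.zeros(W_h.shape[0])
    states = []
    for x in inputs:
        h = np.tanh(W_x @ x + W_h @ h + b)    # recurrent connection: reuse of h
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
W_x, W_h, b = rng.normal(size=(4, 2)), rng.normal(size=(4, 4)), np.zeros(4)
sequence = rng.normal(size=(6, 2))                 # six time steps, two features each
print(rnn_forward(sequence, W_x, W_h, b).shape)    # (6, 4): one hidden state per step
```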

Module 7: Connectionist Temporal Classification and Sequence-to-Sequence Models

Learn about variable-timing sequence problems and the sequence-to-sequence model used for translation problems.

Describe the architecture and training process for a time-synchronous recurrent neural network

Describe the greedy approach to decoding the output of an order-synchronous but time-asynchronous recurrent network

Identify the role of alignment in computing the divergence between input and output for an order-synchronous but time-asynchronous recurrent network

Module 8: Attention and Translation

Delve into deeper concepts related to natural language processing and the use of encoder-decoder networks for translation.

Explain how to compute embeddings of words from one-hot encodings, using language prediction with neural networks (see the sketch below)

Describe the architecture of a recurrent neural network used for language prediction

Describe the synchrony problem of sequence-to-sequence models
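To illustrate the first objective, the toy sketch below shows that multiplying a one-hot word encoding by an embedding matrix is the same as looking up a row of that matrix. The vocabulary and randomly initialized embedding matrix are hypothetical; in practice the matrix is learned as the first layer of a language-prediction network.

```python
import numpy as np

# Hypothetical toy vocabulary; the embedding matrix would normally be learned.
vocab = ["the", "cat", "sat"]
rng = np.random.default_rng(0)
E = rng.normal(size=(len(vocab), 4))          # one 4-dimensional vector per word

def one_hot(index, size):
    v = np.zeros(size)
    v[index] = 1.0
    return v

idx = vocab.index("cat")
# Multiplying a one-hot vector by the embedding matrix simply selects one row,
# so the embedding of a word is the corresponding row of E.
via_one_hot = one_hot(idx, len(vocab)) @ E
via_lookup = E[idx]
print(np.allclose(via_one_hot, via_lookup))   # True
```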

Module 9: Representations and Autoencoders

Consider more deeply what networks learn at each layer.

Describe the problem of overlapping classes

Explain the relationship between the logistic regression model and a perceptron with a sigmoid activation function (see the sketch below)

Describe how the maximum likelihood estimate is used to learn the parameters of a logistic regression model
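The sketch below illustrates, under toy assumptions, how a perceptron-style unit with a sigmoid activation computes exactly the logistic regression probability, and how maximum likelihood estimation corresponds to minimizing the negative log-likelihood (the cross-entropy loss). The data and parameter values are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_unit(x, w, b):
    """A perceptron-style unit with a sigmoid activation: an affine function
    of the inputs passed through the logistic function. This is exactly the
    logistic regression model for P(class = 1 | x)."""
    return sigmoid(np.dot(w, x) + b)

def negative_log_likelihood(w, b, X, y):
    """Maximum likelihood estimation of (w, b) amounts to minimizing this
    negative log-likelihood (cross-entropy) over the training data."""
    p = sigmoid(X @ w + b)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))                       # toy two-feature inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)         # toy binary labels
w, b = np.array([1.0, 1.0]), 0.0
print(sigmoid_unit(X[0], w, b), negative_log_likelihood(w, b, X, y))
```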

Module 10: Transformers and Graph Networks, Variational Autoencoders, Generative Adversarial Networks

Explore additional topics in deep learning, gaining practical information that prepares you for the next phase of your deep learning journey.

Describe the architecture and purpose of a multi-head self-attention block in the context of encoders

Summarize the steps used to train the encoder and decoder of a variational autoencoder

Contrast properties of variational autoencoders with generative adversarial networks

Program Experience

Office Hours With Learning Facilitators

Three Two-part Graded Programming Assignments

Graphic-Rich Lecture Videos

Knowledge Checks

Dedicated Program Support Team

Mobile Learning App

Peer Discussion

Capstone Project

Bonus Content on Advanced Topics

The Carnegie Mellon University School of Computer Science

Expertise: A blend of thought leadership and practical experience, taught by expert instructors

Connections: A suite of interconnected learning modules that leverage resources from across Carnegie Mellon University School of Computer Science

Interaction: Small-group learning that allows direct interaction with both instructors and peers

Reputation: Recognized worldwide as a leader in academic research

Program Faculty

Bhiksha Raj
Professor, Language Technologies Institute, School of
Computer Science, Carnegie Mellon University

Bhiksha Raj works on a variety of areas related to AI, with a focus on speech processing, and more generally on intelligent systems that can learn to understand and respond to their acoustic environment.

He also works on a variety of problems related to the basic theory and applications of neural networks, such as the design of optimal losses, adversarial defenses, and novel problems that may be amenable to a solution that involves neural networks.

At CMU, he teaches the primary Introduction to Deep Learning course, which is taken by several hundred students every year, both within CMU and across the world. He also teaches courses on ML for signal processing and quantum computing.

Rita Singh
Professor, School of Computer Science, Carnegie
Mellon University

Rita Singh works on research in the area of AI-based techniques for advanced human sensing and is currently focused on AI-driven voice forensics. She has worked in the areas of speech and audio processing for more than two decades. In 2019, the first edition of her book, Profiling Humans from their Voice, was published. It focuses on techniques that apply AI to deduce a wide range of information about a person based on their voice.

She currently teaches three graduate-level courses at CMU: Computational Forensics and AI; Multimedia Processing; and Quantum Computing. She is also a co-instructor for CMU's Deep Learning course.

Program Prerequisites
The subject matter in this program is rigorous. To ensure success, participants
must have a strong working knowledge of linear algebra, calculus, statistics,
probability, and object-oriented programming including Python.

There are three two-part, extensive, and challenging programming assignments in this program, all of which are required in order to achieve a passing grade and receive the certification. Your success is supported through a variety of channels, such as live office hours with subject matter experts, in-depth resource guides, highly visual graphical depictions of course concepts, and peer learning opportunities.

Certificate
Upon successful completion of the program, participants will receive a verified
digital certificate of completion from the Carnegie Mellon University School of
Computer Science Executive Education.

[Sample certificate: "This document confirms that [Recipient Name] has successfully completed the program Deep Learning." Signed by Ram Konduru, Director of Executive Education, and Professor Stephanie Rosenthal & Professor Reid Simmons, Machine Learning Department.]

About Carnegie Mellon University’s School of Computer Science

The School of Computer Science (SCS) at Carnegie Mellon University is recognized and respected internationally as a center for unparalleled research and education in computer science. A home to world-class faculty, SCS offers undergraduate and graduate education and research opportunities that are second to none, along with executive education programs designed for today’s professionals who work in a variety of technical leadership roles. SCS is known for being at the forefront—often setting the course for advanced computer science disciplines, including AI, computational biology, human-computer interaction, language technologies, ML, robotics, and software research.

About Emeritus

Carnegie Mellon University’s School of Computer Science is collaborating with online education provider Emeritus to offer a portfolio of high-impact online programs. By working with Emeritus, we are able to broaden access beyond our on-campus offerings in a collaborative and engaging format that stays true to the quality expected from the CMU School of Computer Science Executive Education.

The Emeritus approach to learning is grounded in a cohort-based design to maximize peer-to-peer sharing and includes live teaching with world-class faculty and hands-on project-based learning. In the last year, more than 100,000 students from over 80 countries have benefited professionally from Emeritus courses.

DURATION
10 Weeks

FORMAT
Online

PROGRAM FEE
US$2,500

CONNECT WITH A
PROGRAM ADVISOR

Email: CMUSCS@emeritus.org
Phone: +1 412-314-2432

Easily schedule a call with a program advisor to learn more.

SCHEDULE A CALL

You can enroll in the program here.

ENROLL
