
6 Builders' Guide

Alongside giant datasets and powerful hardware, great software tools have played an indis-
pensable role in the rapid progress of deep learning. Starting with the pathbreaking Theano
library released in 2007, flexible open-source tools have enabled researchers to rapidly proto-
type models, avoiding repetitive work when recycling standard components while still main-
taining the ability to make low-level modifications. Over time, deep learning’s libraries have
evolved to offer increasingly coarse abstractions. Just as semiconductor designers went from
specifying transistors to logical circuits to writing code, neural networks researchers have
moved from thinking about the behavior of individual artificial neurons to conceiving of net-
works in terms of whole layers, and now often design architectures with far coarser blocks in
mind.
So far, we have introduced some basic machine learning concepts, ramping up to fully-
functional deep learning models. In the last chapter, we implemented each component of
an MLP from scratch and even showed how to leverage high-level APIs to roll out the same
models effortlessly. To get you that far that fast, we called upon the libraries, but skipped
over more advanced details about how they work. In this chapter, we will peel back the cur-
tain, digging deeper into the key components of deep learning computation, namely model
construction, parameter access and initialization, designing custom layers and blocks, read-
ing and writing models to disk, and leveraging GPUs to achieve dramatic speedups. These
insights will move you from end user to power user, giving you the tools needed to reap the
benefits of a mature deep learning library while retaining the flexibility to implement more
complex models, including those you invent yourself! While this chapter does not introduce
any new models or datasets, the advanced modeling chapters that follow rely heavily on these
techniques.

6.1 Layers and Modules

When we first introduced neural networks, we focused on linear models with a single output.
Here, the entire model consists of just a single neuron. Note that a single neuron (i) takes some
set of inputs; (ii) generates a corresponding scalar output; and (iii) has a set of associated
parameters that can be updated to optimize some objective function of interest. Then, once
we started thinking about networks with multiple outputs, we leveraged vectorized arithmetic
to characterize an entire layer of neurons. Just like individual neurons, layers (i) take a set
of inputs, (ii) generate corresponding outputs, and (iii) are described by a set of tunable
parameters. When we worked through softmax regression, a single layer was itself the model.
However, even when we subsequently introduced MLPs, we could still think of the model as
retaining this same basic structure.
Interestingly, for MLPs, both the entire model and its constituent layers share this structure.
The entire model takes in raw inputs (the features), generates outputs (the predictions), and
possesses parameters (the combined parameters from all constituent layers). Likewise, each
individual layer ingests inputs (supplied by the previous layer), generates outputs (the inputs
to the subsequent layer), and possesses a set of tunable parameters that are updated according
to the signal that flows backwards from the subsequent layer.

While you might think that neurons, layers, and models give us enough abstractions to go
about our business, it turns out that we often find it convenient to speak about components that
are larger than an individual layer but smaller than the entire model. For example, the ResNet-
152 architecture, which is wildly popular in computer vision, possesses hundreds of layers.
These layers consist of repeating patterns of groups of layers. Implementing such a network
one layer at a time can grow tedious. This concern is not just hypothetical—such design
patterns are common in practice. The ResNet architecture mentioned above won the 2015
ImageNet and COCO computer vision competitions for both recognition and detection (He
et al., 2016) and remains a go-to architecture for many vision tasks. Similar architectures in
which layers are arranged in various repeating patterns are now ubiquitous in other domains,
including natural language processing and speech.
To implement these complex networks, we introduce the concept of a neural network mod-
ule. A module could describe a single layer, a component consisting of multiple layers, or
the entire model itself! One benefit of working with the module abstraction is that modules can
be combined into larger artifacts, often recursively. This is illustrated in Fig. 6.1.1. By defin-
ing code to generate modules of arbitrary complexity on demand, we can write surprisingly
compact code and still implement complex neural networks.

Figure 6.1.1 Multiple layers are combined into modules, forming repeating patterns of larger models.

From a programming standpoint, a module is represented by a class. Any subclass of it must
define a forward propagation method that transforms its input into output and must store any
necessary parameters. Note that some modules do not require any parameters at all. Finally, a
module must possess a backpropagation method, for purposes of calculating gradients. Fortu-
nately, due to some behind-the-scenes magic supplied by automatic differentiation (introduced
in Section 2.5), when defining our own module we only need to worry about parameters and
the forward propagation method.
To begin, we revisit the code that we used to implement MLPs (Section 5.1). The following
code generates a network with one fully connected hidden layer with 256 units and ReLU
activation, followed by a fully connected output layer with 10 units (no activation func-
tion).

import torch
from torch import nn
from torch.nn import functional as F

net = nn.Sequential(nn.LazyLinear(256), nn.ReLU(), nn.LazyLinear(10))

X = torch.rand(2, 20)
net(X).shape


torch.Size([2, 10])

In this example, we constructed our model by instantiating an nn.Sequential, with the layers
passed as arguments in the order in which they should be executed. In short, nn.Sequential defines
a special kind of Module, the class that represents a module in PyTorch. It maintains an ordered
list of constituent Modules. Note that each of the two fully connected layers is an instance of
the Linear class, which is itself a subclass of Module. The forward propagation (forward)
method is also remarkably simple: it chains each module in the list together, passing the
output of each as input to the next. Note that until now, we have been invoking our models
via the construction net(X) to obtain their outputs. This is actually just shorthand for net.
__call__(X).
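
As a quick sanity check (a minimal sketch of our own, using the net and X defined above), we can confirm that net(X) and net.__call__(X) produce the same result and inspect the ordered submodules that Sequential maintains.

print(torch.equal(net(X), net.__call__(X)))
# Sequential registers its constituent modules under string keys like '0', '1', '2'
print(list(net.named_children()))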

6.1.1 A Custom Module

Perhaps the easiest way to develop intuition about how a module works is to implement one
ourselves. Before we implement our own custom module, we briefly summarize the basic
functionality that each module must provide:
1. Ingest input data as arguments to its forward propagation method.
2. Generate an output by having the forward propagation method return a value. Note that
the output may have a different shape from the input. For example, the first fully connected
layer in our model above ingests an input of arbitrary dimension but returns an output of
dimension 256.
3. Calculate the gradient of its output with respect to its input, which can be accessed via its
backpropagation method. Typically this happens automatically.
4. Store and provide access to those parameters necessary to execute the forward propagation
computation.
5. Initialize model parameters as needed.
In the following snippet, we code up a module from scratch corresponding to an MLP with
one hidden layer with 256 hidden units, and a 10-dimensional output layer. Note that the MLP
class below inherits the class that represents a module. We will heavily rely on the parent
class’s methods, supplying only our own constructor (the __init__ method in Python) and
the forward propagation method.

class MLP(nn.Module):
    def __init__(self):
        # Call the constructor of the parent class nn.Module to perform
        # the necessary initialization
        super().__init__()
        self.hidden = nn.LazyLinear(256)
        self.out = nn.LazyLinear(10)

    # Define the forward propagation of the model, that is, how to return the
    # required model output based on the input X
    def forward(self, X):
        return self.out(F.relu(self.hidden(X)))

Let’s first focus on the forward propagation method. Note that it takes X as input, calculates
the hidden representation with the activation function applied, and outputs its logits. In this
MLP implementation, both layers are instance variables. To see why this is reasonable, imagine
instantiating two MLPs, net1 and net2, and training them on different data. Naturally, we
would expect them to represent two different learned models.
We instantiate the MLP’s layers in the constructor and subsequently invoke these layers on
each call to the forward propagation method. Note a few key details. First, our customized
__init__ method invokes the parent class’s __init__ method via super().__init__()
sparing us the pain of restating boilerplate code applicable to most modules. We then instan-
tiate our two fully connected layers, assigning them to self.hidden and self.out. Note
that unless we implement a new layer, we need not worry about the backpropagation method
or parameter initialization. The system will generate these methods automatically. Let’s try
this out.

net = MLP()
net(X).shape

torch.Size([2, 10])

A key virtue of the module abstraction is its versatility. We can subclass a module to create
layers (such as the fully connected layer class), entire models (such as the MLP class above),
or various components of intermediate complexity. We exploit this versatility throughout the
following chapters, such as when addressing convolutional neural networks.
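
To make this versatility concrete, here is a small sketch of our own (the Block class below is an illustration, not part of the library): a reusable component of intermediate complexity that groups two fully connected layers and can be stacked inside a Sequential, much like the repeated groups of layers in ResNet.

class Block(nn.Module):
    """A reusable group of layers: two fully connected layers with ReLU."""
    def __init__(self, num_hiddens):
        super().__init__()
        self.net = nn.Sequential(nn.LazyLinear(num_hiddens), nn.ReLU(),
                                 nn.LazyLinear(num_hiddens), nn.ReLU())

    def forward(self, X):
        return self.net(X)

# Stack several identical blocks, followed by an output layer
deep_net = nn.Sequential(Block(64), Block(64), Block(64), nn.LazyLinear(10))
deep_net(X).shape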

6.1.2 The Sequential Module

We can now take a closer look at how the Sequential class works. Recall that Sequential
was designed to daisy-chain other modules together. To build our own simplified MySequential,
we just need to define two key methods:
1. A method to append modules one by one to a list.
2. A forward propagation method to pass an input through the chain of modules, in the same
order as they were appended.
The following MySequential class delivers the same functionality as the default Sequential
class.

class MySequential(nn.Module):
    def __init__(self, *args):
        super().__init__()
        for idx, module in enumerate(args):
            self.add_module(str(idx), module)

    def forward(self, X):
        for module in self.children():
            X = module(X)
        return X

In the __init__ method, we add every module by calling the add_module method. These
modules can be accessed by the children method later. In this way the system knows the
added modules, and it will properly initialize each module's parameters.
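
The following sketch (our own check, not part of the original code) shows the registration at work: the modules added via add_module appear under their string keys, and their parameters are visible to the parent module.

tiny = MySequential(nn.LazyLinear(4), nn.ReLU())
tiny(torch.rand(2, 8))  # trigger lazy initialization
print([name for name, _ in tiny.named_children()])    # ['0', '1']
print([name for name, _ in tiny.named_parameters()])  # ['0.weight', '0.bias']
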
When our MySequential’s forward propagation method is invoked, each added module is
executed in the order in which they were added. We can now reimplement an MLP using our
MySequential class.

net = MySequential(nn.LazyLinear(256), nn.ReLU(), nn.LazyLinear(10))
net(X).shape

torch.Size([2, 10])

Note that this use of MySequential is identical to the code we previously wrote for the
Sequential class (as described in Section 5.1).

6.1.3 Executing Code in the Forward Propagation Method

The Sequential class makes model construction easy, allowing us to assemble new archi-
tectures without having to define our own class. However, not all architectures are simple
daisy chains. When greater flexibility is required, we will want to define our own blocks. For
example, we might want to execute Python’s control flow within the forward propagation
method. Moreover, we might want to perform arbitrary mathematical operations, not simply
relying on predefined neural network layers.

You might have noticed that until now, all of the operations in our networks have acted upon
our network’s activations and its parameters. Sometimes, however, we might want to incor-
porate terms that are neither the result of previous layers nor updatable parameters. We call
these constant parameters. Say for example that we want a layer that calculates the function
f (x, w) = c · w⊤ x, where x is the input, w is our parameter, and c is some specified con-
stant that is not updated during optimization. So we implement a FixedHiddenMLP class as
follows.

class FixedHiddenMLP(nn.Module):
    def __init__(self):
        super().__init__()
        # Random weight parameters that will not compute gradients and
        # therefore stay constant during training
        self.rand_weight = torch.rand((20, 20))
        self.linear = nn.LazyLinear(20)

    def forward(self, X):
        X = self.linear(X)
        X = F.relu(X @ self.rand_weight + 1)
        # Reuse the fully connected layer. This is equivalent to sharing
        # parameters between two fully connected layers
        X = self.linear(X)
        # Control flow
        while X.abs().sum() > 1:
            X /= 2
        return X.sum()

In this FixedHiddenMLP model, we implement a hidden layer whose weights (self.rand_weight)
are initialized randomly at instantiation and are thereafter constant. This weight is not a model
parameter and thus it is never updated by backpropagation. The network then passes the out-
put of this “fixed” layer through a fully connected layer.

Note that before returning the output, our model did something unusual. We ran a while-loop,
testing whether the ℓ1 norm of the output is larger than 1, and dividing our output vector by 2
until the norm no longer exceeded 1. Finally, we returned the sum of the entries in X. To our knowledge,
no standard neural network performs this operation. Note that this particular operation may
not be useful in any real-world task. Our point is only to show you how to integrate arbitrary
code into the flow of your neural network computations.

net = FixedHiddenMLP()
net(X)

tensor(-0.0969, grad_fn=<SumBackward0>)

We can mix and match various ways of assembling modules together. In the following ex-
ample, we nest modules in some creative ways.

class NestMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.LazyLinear(64), nn.ReLU(),
                                 nn.LazyLinear(32), nn.ReLU())
        self.linear = nn.LazyLinear(16)

    def forward(self, X):
        return self.linear(self.net(X))

chimera = nn.Sequential(NestMLP(), nn.LazyLinear(20), FixedHiddenMLP())
chimera(X)

tensor(0.0363, grad_fn=<SumBackward0>)

6.1.4 Summary

Layers are modules. A module can comprise many layers. A module can comprise many
modules.
A module can contain code. Modules take care of lots of housekeeping, including parame-
ter initialization and backpropagation. Sequential concatenations of layers and modules are
handled by the Sequential module.

6.1.5 Exercises

1. What kinds of problems will occur if you change MySequential to store modules in a
Python list?
2. Implement a module that takes two modules as arguments, say net1 and net2, and
returns the concatenated output of both networks in the forward propagation. This is also
called a parallel module.
3. Assume that you want to concatenate multiple instances of the same network. Implement
a factory function that generates multiple instances of the same module and build a larger
network from it.
Discussions

6.2 Parameter Management

Once we have chosen an architecture and set our hyperparameters, we proceed to the train-
ing loop, where our goal is to find parameter values that minimize our loss function. After
training, we will need these parameters in order to make future predictions. Additionally, we
will sometimes wish to extract the parameters either to reuse them in some other context, to
save our model to disk so that it may be executed in other software, or for examination in the
hope of gaining scientific understanding.
Most of the time, we will be able to ignore the nitty-gritty details of how parameters are de-
clared and manipulated, relying on deep learning frameworks to do the heavy lifting. How-
ever, when we move away from stacked architectures with standard layers, we will sometimes
need to get into the weeds of declaring and manipulating parameters. In this section, we cover
the following:
• Accessing parameters for debugging, diagnostics, and visualizations.
• Sharing parameters across different model components.
We start by focusing on an MLP with one hidden layer.

import torch
from torch import nn

net = nn.Sequential(nn.LazyLinear(8), nn.ReLU(), nn.LazyLinear(1))

X = torch.rand(size=(2, 4))
net(X).shape


torch.Size([2, 1])

6.2.1 Parameter Access

Let’s start with how to access parameters from the models that you already know.
When a model is defined via the Sequential class, we can first access any layer by indexing
into the model as though it were a list. Each layer's parameters are conveniently located in its
attributes.
We can inspect the parameters of the second fully connected layer as follows.

net[2].state_dict()

OrderedDict([('weight',
              tensor([[ 0.1653, -0.3052, 0.2309, -0.2187, 0.1273, -0.2385, 0.2225, 0.1569]])),
             ('bias', tensor([-0.2281]))])

We can see that this fully connected layer contains two parameters, corresponding to that
layer’s weights and biases, respectively.

Targeted Parameters

Note that each parameter is represented as an instance of the parameter class. To do anything
useful with the parameters, we first need to access the underlying numerical values. There
are several ways to do this. Some are simpler while others are more general. The following
code extracts the bias from the second neural network layer, which returns a parameter class
instance, and further accesses that parameter’s value.

type(net[2].bias), net[2].bias.data

(torch.nn.parameter.Parameter, tensor([-0.2281]))

Parameters are complex objects, containing values, gradients, and additional information.
That’s why we need to request the value explicitly.
In addition to the value, each parameter also allows us to access the gradient. Because we
have not invoked backpropagation for this network yet, it is in its initial state.

net[2].weight.grad == None

True

All Parameters at Once

When we need to perform operations on all parameters, accessing them one-by-one can grow
tedious. The situation can grow especially unwieldy when we work with more complex mod-
ules (e.g., nested modules), since we would need to recurse through the entire tree to extract
each sub-module’s parameters. Below we demonstrate accessing the parameters of all lay-
ers.

[(name, param.shape) for name, param in net.named_parameters()]

[('0.weight', torch.Size([8, 4])),
 ('0.bias', torch.Size([8])),
 ('2.weight', torch.Size([1, 8])),
 ('2.bias', torch.Size([1]))]

6.2.2 Tied Parameters

Often, we want to share parameters across multiple layers. Let’s see how to do this elegantly.
In the following we allocate a fully connected layer and then use its parameters specifically
to set those of another layer. Here we need to run the forward propagation net(X) before
accessing the parameters.

# We need to give the shared layer a name so that we can refer to its
# parameters
shared = nn.LazyLinear(8)
net = nn.Sequential(nn.LazyLinear(8), nn.ReLU(),
                    shared, nn.ReLU(),
                    shared, nn.ReLU(),
                    nn.LazyLinear(1))
net(X)
# Check whether the parameters are the same
print(net[2].weight.data[0] == net[4].weight.data[0])
net[2].weight.data[0, 0] = 100
# Make sure that they are actually the same object rather than just having the
# same value
print(net[2].weight.data[0] == net[4].weight.data[0])

tensor([True, True, True, True, True, True, True, True])
tensor([True, True, True, True, True, True, True, True])

This example shows that the parameters of the second and third layer are tied. They are not
just equal, they are represented by the same exact tensor. Thus, if we change one of the
parameters, the other one changes, too.
You might wonder: when parameters are tied, what happens to the gradients? Since the model
parameters contain gradients, the gradients of the second hidden layer and the third hidden
layer are added together during backpropagation.
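
The short check below (our own sketch, reusing the shared network and X from above) makes this concrete: because both occurrences refer to one and the same Parameter, there is a single gradient buffer, and the contributions from both places accumulate into it during the backward pass.

net.zero_grad()
net(X).sum().backward()
# One shared Parameter means one shared gradient buffer
print(net[2].weight.grad is net[4].weight.grad)
print(net[2].weight.grad.shape)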

6.2.3 Summary

We have several ways to access and tie model parameters.

6.2.4 Exercises

1. Use the NestMLP model defined in Section 6.1 and access the parameters of the various
layers.
2. Construct an MLP containing a shared parameter layer and train it. During the training
process, observe the model parameters and gradients of each layer.
3. Why is sharing parameters a good idea?
Discussions

6.3 Parameter Initialization

Now that we know how to access the parameters, let’s look at how to initialize them properly.
We discussed the need for proper initialization in Section 5.4. The deep learning framework
provides default random initializations to its layers. However, we often want to initialize our
weights according to various other protocols. The framework provides the most commonly used
protocols, and also allows us to create a custom initializer.
By default, PyTorch initializes weight and bias matrices uniformly by drawing from a range
that is computed according to the input and output dimension. PyTorch’s nn.init module
provides a variety of preset initialization methods.

import torch
from torch import nn

net = nn.Sequential(nn.LazyLinear(8), nn.ReLU(), nn.LazyLinear(1))

X = torch.rand(size=(2, 4))
net(X).shape


torch.Size([2, 1])
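
Before overriding the defaults, we can peek at what they produce. The check below is a rough sketch based on our reading of PyTorch's default scheme for fully connected layers, namely weights drawn uniformly from roughly (-1/sqrt(fan_in), 1/sqrt(fan_in)); treat the exact bound as an assumption rather than a documented guarantee.

# The first layer saw inputs with 4 features, so fan_in = 4 and the default
# uniform range should be about (-0.5, 0.5) for its weights
fan_in = net[0].weight.shape[1]
bound = fan_in ** -0.5
print(net[0].weight.data.abs().max() <= bound)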

6.3.1 Built-in Initialization

Let’s begin by calling on built-in initializers. The code below initializes all weight parameters
as Gaussian random variables with standard deviation 0.01, while bias parameters cleared to
zero.

def init_normal(module):
    if type(module) == nn.Linear:
        nn.init.normal_(module.weight, mean=0, std=0.01)
        nn.init.zeros_(module.bias)

net.apply(init_normal)
net[0].weight.data[0], net[0].bias.data[0]

(tensor([-0.0055, -0.0004, -0.0033, -0.0091]), tensor(0.))

We can also initialize all the parameters to a given constant value (say, 1).

def init_constant(module):
    if type(module) == nn.Linear:
        nn.init.constant_(module.weight, 1)
        nn.init.zeros_(module.bias)

net.apply(init_constant)
net[0].weight.data[0], net[0].bias.data[0]

(tensor([1., 1., 1., 1.]), tensor(0.))

We can also apply different initializers for certain blocks. For example, below we initialize
the first layer with the Xavier initializer and initialize the second layer to a constant value of
42.

def init_xavier(module):
    if type(module) == nn.Linear:
        nn.init.xavier_uniform_(module.weight)

def init_42(module):
    if type(module) == nn.Linear:
        nn.init.constant_(module.weight, 42)

net[0].apply(init_xavier)
net[2].apply(init_42)
print(net[0].weight.data[0])
print(net[2].weight.data)

tensor([-0.2813, 0.6403, -0.3291, 0.1083])


tensor([[42., 42., 42., 42., 42., 42., 42., 42.]])

Custom Initialization

Sometimes, the initialization methods we need are not provided by the deep learning frame-
work. In the example below, we define an initializer for any weight parameter w using the
following strange distribution:


$$w \sim \begin{cases} U(5, 10) & \text{with probability } \frac{1}{4} \\ 0 & \text{with probability } \frac{1}{2} \\ U(-10, -5) & \text{with probability } \frac{1}{4} \end{cases} \qquad (6.3.1)$$

Again, we implement a my_init function to apply to net.

def my_init(module):
    if type(module) == nn.Linear:
        print("Init", *[(name, param.shape)
                        for name, param in module.named_parameters()][0])
        nn.init.uniform_(module.weight, -10, 10)
        module.weight.data *= module.weight.data.abs() >= 5

net.apply(my_init)
net[0].weight[:2]

Init weight torch.Size([8, 4])
Init weight torch.Size([1, 8])

tensor([[ 7.4425, -8.1811, -7.0599, 6.4645],
        [ 9.4552, 0.0000, -0.0000, -0.0000]], grad_fn=<SliceBackward0>)

Note that we always have the option of setting parameters directly.

net[0].weight.data[:] += 1
net[0].weight.data[0, 0] = 42
net[0].weight.data[0]

tensor([42.0000, -7.1811, -6.0599, 7.4645])

6.3.2 Summary

We can initialize parameters using built-in and custom initializers.

6.3.3 Exercises

Look up the online documentation for more built-in initializers.


Discussions

6.4 Lazy Initialization

So far, it might seem that we got away with being sloppy in setting up our networks. Specifi-
cally, we did the following unintuitive things, which might not seem like they should work:
• We defined the network architectures without specifying the input dimensionality.
• We added layers without specifying the output dimension of the previous layer.
• We even “initialized” these parameters before providing enough information to determine
how many parameters our models should contain.
You might be surprised that our code runs at all. After all, there is no way the deep learning
framework could tell what the input dimensionality of a network would be. The trick here is
that the framework defers initialization, waiting until the first time we pass data through the
model, to infer the sizes of each layer on the fly.
Later on, when working with convolutional neural networks, this technique will become even
more convenient since the input dimensionality (i.e., the resolution of an image) will affect
the dimensionality of each subsequent layer. Hence, the ability to set parameters without the
need to know, at the time of writing the code, what the dimensionality is can greatly simplify
the task of specifying and subsequently modifying our models. Next, we go deeper into the
mechanics of initialization.

To begin, let’s instantiate an MLP.

import torch
from torch import nn
from d2l import torch as d2l

net = nn.Sequential(nn.LazyLinear(256), nn.ReLU(), nn.LazyLinear(10))


At this point, the network cannot possibly know the dimensions of the input layer’s weights
because the input dimension remains unknown.

Consequently the framework has not yet initialized any parameters. We confirm by attempting
to access the parameters below.

net[0].weight

<UninitializedParameter>

Next let’s pass data through the network to make the framework finally initialize parame-
ters.

X = torch.rand(2, 20)
net(X)

net[0].weight.shape

torch.Size([256, 20])

As soon as we know the input dimensionality, 20, the framework can identify the shape of the
first layer’s weight matrix by plugging in the value of 20. Having recognized the first layer’s
shape, the framework proceeds to the second layer, and so on through the computational
graph until all shapes are known. Note that in this case, only the first layer requires lazy ini-
tialization, but the framework initializes sequentially. Once all parameter shapes are known,
the framework can finally initialize the parameters.

The following method passes dummy inputs through the network for a dry run to infer
all parameter shapes and subsequently initializes the parameters. It will be used later when
default random initializations are not desired.

@d2l.add_to_class(d2l.Module) #@save
def apply_init(self, inputs, init=None):
    self.forward(*inputs)
    if init is not None:
        self.net.apply(init)
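
As a usage sketch (the TinyMLP class below is our own toy example, not part of the d2l library, and assumes d2l.Module can be subclassed directly as in earlier chapters), apply_init can be combined with an initializer function like those in Section 6.3:

class TinyMLP(d2l.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.LazyLinear(8), nn.ReLU(), nn.LazyLinear(1))

    def forward(self, X):
        return self.net(X)

def init_constant(module):
    if type(module) == nn.Linear:
        nn.init.constant_(module.weight, 1)
        nn.init.zeros_(module.bias)

model = TinyMLP()
# The dry run infers all parameter shapes, after which init_constant is applied
model.apply_init([torch.rand(2, 20)], init_constant)
model.net[0].weight.data[0]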

6.4.1 Summary

Lazy initialization can be convenient, allowing the framework to infer parameter shapes au-
tomatically, making it easy to modify architectures and eliminating one common source of
errors. We can pass data through the model to make the framework finally initialize param-
eters.

6.4.2 Exercises

1. What happens if you specify the input dimensions to the first layer but not to subsequent
layers? Do you get immediate initialization?

2. What happens if you specify mismatching dimensions?

3. What would you need to do if you have input of varying dimensionality? Hint: look at the
parameter tying.
Discussions

6.5 Custom Layers

One factor behind deep learning’s success is the availability of a wide range of layers that can
be composed in creative ways to design architectures suitable for a wide variety of tasks. For
instance, researchers have invented layers specifically for handling images, text, looping over
sequential data, and performing dynamic programming. Sooner or later, you will encounter
or invent a layer that does not exist yet in the deep learning framework. In these cases, you
must build a custom layer. In this section, we show you how.

6.5.1 Layers without Parameters

To start, we construct a custom layer that does not have any parameters of its own. This
should look familiar if you recall our introduction to modules in Section 6.1. The following
CenteredLayer class simply subtracts the mean from its input. To build it, we simply need
to inherit from the base layer class and implement the forward propagation method.

import torch
from torch import nn
from torch.nn import functional as F
from d2l import torch as d2l

class CenteredLayer(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, X):
        return X - X.mean()

Let’s verify that our layer works as intended by feeding some data through it.

layer = CenteredLayer()
layer(torch.tensor([1.0, 2, 3, 4, 5]))

tensor([-2., -1., 0., 1., 2.])

We can now incorporate our layer as a component in constructing more complex mod-
els.

net = nn.Sequential(nn.LazyLinear(128), CenteredLayer())


As an extra sanity check, we can send random data through the network and check that the
mean is in fact 0. Because we are dealing with floating point numbers, we may still see a very
small nonzero number due to finite precision.

Y = net(torch.rand(4, 8))
Y.mean()

tensor(4.6566e-09, grad_fn=<MeanBackward0>)

6.5.2 Layers with Parameters

Now that we know how to define simple layers, let’s move on to defining layers with parame-
ters that can be adjusted through training. We can use built-in functions to create parameters,
which provide some basic housekeeping functionality. In particular, they govern access, ini-
tialization, sharing, saving, and loading model parameters. This way, among other benefits,
we will not need to write custom serialization routines for every custom layer.
Now let’s implement our own version of the fully connected layer. Recall that this layer re-
quires two parameters, one to represent the weight and the other for the bias. In this im-
plementation, we bake in the ReLU activation as a default. This layer requires two input
arguments: in_units and units, which denote the number of inputs and outputs, respec-
tively.

class MyLinear(nn.Module):
    def __init__(self, in_units, units):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_units, units))
        self.bias = nn.Parameter(torch.randn(units,))

    def forward(self, X):
        linear = torch.matmul(X, self.weight.data) + self.bias.data
        return F.relu(linear)

Next, we instantiate the MyLinear class and access its model parameters.

linear = MyLinear(5, 3)
linear.weight

Parameter containing:
tensor([[ 0.6095, 1.6360, 0.9545],
        [ 0.1530, -1.1826, -0.0388],
        [-0.6582, -2.1849, -2.1861],
        [ 1.0998, -0.8887, 0.4570],
        [ 0.7781, 0.8461, -0.7845]], requires_grad=True)

We can directly carry out forward propagation calculations using custom layers.

linear(torch.rand(2, 5))

tensor([[0.8122, 0.0000, 0.7459],
        [0.2869, 0.0000, 0.0000]])

We can also construct models using custom layers. Once we have that we can use it just like
the built-in fully connected layer.

net = nn.Sequential(MyLinear(64, 8), MyLinear(8, 1))
net(torch.rand(2, 64))

tensor([[0.],
[0.]])

6.5.3 Summary

We can design custom layers via the basic layer class. This allows us to define flexible new
layers that behave differently from any existing layers in the library. Once defined, custom
layers can be invoked in arbitrary contexts and architectures. Layers can have local parame-
ters, which can be created through built-in functions.

6.5.4 Exercises

1. Design a layer that takes an input and computes a tensor reduction, i.e., it returns
$y_k = \sum_{i,j} W_{ijk} x_i x_j$.

2. Design a layer that returns the leading half of the Fourier coefficients of the data.

Discussions

6.6 File I/O

So far we have discussed how to process data and how to build, train, and test deep learning models.
However, at some point, we will hopefully be happy enough with the learned models that we
will want to save the results for later use in various contexts (perhaps even to make predictions
in deployment). Additionally, when running a long training process, the best practice is to
periodically save intermediate results (checkpointing) to ensure that we do not lose several
days worth of computation if we trip over the power cord of our server. Thus it is time to
learn how to load and store both individual weight vectors and entire models. This section
addresses both issues.

6.6.1 Loading and Saving Tensors

For individual tensors, we can directly invoke the load and save functions to read and write
them respectively. Both functions require that we supply a name, and save requires as input
the variable to be saved.

import torch
from torch import nn
from torch.nn import functional as F

x = torch.arange(4)
torch.save(x, 'x-file')

We can now read the data from the stored file back into memory.

x2 = torch.load('x-file')
x2

tensor([0, 1, 2, 3])

We can store a list of tensors and read them back into memory.

y = torch.zeros(4)
torch.save([x, y],'x-files')
x2, y2 = torch.load('x-files')
(x2, y2)

(tensor([0, 1, 2, 3]), tensor([0., 0., 0., 0.]))

We can even write and read a dictionary that maps from strings to tensors. This is convenient
when we want to read or write all the weights in a model.

mydict = {'x': x, 'y': y}


torch.save(mydict, 'mydict')
mydict2 = torch.load('mydict')
mydict2

{'x': tensor([0, 1, 2, 3]), 'y': tensor([0., 0., 0., 0.])}

6.6.2 Loading and Saving Model Parameters

Saving individual weight vectors (or other tensors) is useful, but it gets very tedious if we
want to save (and later load) an entire model. After all, we might have hundreds of parameter
groups sprinkled throughout. For this reason the deep learning framework provides built-in
functionalities to load and save entire networks. An important detail to note is that this saves
model parameters and not the entire model. For example, if we have a 3-layer MLP, we need
to specify the architecture separately. The reason for this is that the models themselves can
contain arbitrary code, hence they cannot be serialized as naturally. Thus, in order to reinstate
a model, we need to generate the architecture in code and then load the parameters from disk.
Let’s start with our familiar MLP.

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.LazyLinear(256)
        self.output = nn.LazyLinear(10)

    def forward(self, x):
        return self.output(F.relu(self.hidden(x)))

net = MLP()
X = torch.randn(size=(2, 20))
Y = net(X)


Next, we store the parameters of the model as a file with the name “mlp.params”.

torch.save(net.state_dict(), 'mlp.params')

To recover the model, we instantiate a clone of the original MLP model. Instead of randomly
initializing the model parameters, we read the parameters stored in the file directly.

clone = MLP()
clone.load_state_dict(torch.load('mlp.params'))
clone.eval()

MLP(
(hidden): LazyLinear(in_features=0, out_features=256, bias=True)
(output): LazyLinear(in_features=0, out_features=10, bias=True)
)

Since both instances have the same model parameters, the computational result of the same
input X should be the same. Let’s verify this.

Y_clone = clone(X)
Y_clone == Y

tensor([[True, True, True, True, True, True, True, True, True, True],
[True, True, True, True, True, True, True, True, True, True]])
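
As a brief aside on the checkpointing practice mentioned at the start of this section, a common pattern (sketched here under the assumption that training uses a torch.optim optimizer; the names trainer and epoch are placeholders) is to save a dictionary holding the epoch counter together with the model and optimizer state:

trainer = torch.optim.SGD(net.parameters(), lr=0.1)  # hypothetical optimizer
checkpoint = {'epoch': 0,
              'model_state_dict': net.state_dict(),
              'optimizer_state_dict': trainer.state_dict()}
torch.save(checkpoint, 'mlp.checkpoint')

# Later, restore the training state from the checkpoint
checkpoint = torch.load('mlp.checkpoint')
net.load_state_dict(checkpoint['model_state_dict'])
trainer.load_state_dict(checkpoint['optimizer_state_dict'])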

6.6.3 Summary

The save and load functions can be used to perform file I/O for tensor objects. We can save
and load the entire sets of parameters for a network via a parameter dictionary. Saving the
architecture has to be done in code rather than in parameters.

6.6.4 Exercises

1. Even if there is no need to deploy trained models to a different device, what are the prac-
tical benefits of storing model parameters?
2. Assume that we want to reuse only parts of a network to be incorporated into a network
of a different architecture. How would you go about using, say the first two layers from a
previous network in a new network?

3. How would you go about saving the network architecture and parameters? What restric-
tions would you impose on the architecture?

Discussions

6.7 GPUs

In Section 1.5, we discussed the rapid growth of computation over the past two decades. In
a nutshell, GPU performance has increased by a factor of 1000 every decade since 2000.
This offers great opportunities but it also suggests that there is significant demand for such
performance.

In this section, we begin to discuss how to harness this computational performance for your
research, first by using a single GPU and, at a later point, multiple GPUs and multiple servers
(each with multiple GPUs).

Specifically, we will discuss how to use a single NVIDIA GPU for calculations. First, make
sure you have at least one NVIDIA GPU installed. Then, download the NVIDIA driver and
CUDA and follow the prompts to set the appropriate path. Once these preparations
are complete, the nvidia-smi command can be used to view the graphics card informa-
tion.

!nvidia-smi

Wed Dec 14 05:37:30 2022


+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.106.00 Driver Version: 460.106.00 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... Off | 00000000:00:1B.0 Off | 0 |
| N/A 23C P0 48W / 300W | 1224MiB / 16160MiB | 10% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... Off | 00000000:00:1D.0 Off | 0 |
| N/A 23C P0 49W / 300W | 1436MiB / 16160MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 Tesla V100-SXM2... Off | 00000000:00:1E.0 Off | 0 |
| N/A 20C P0 48W / 300W | 0MiB / 16160MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 22466 C ...l-en-release-1/bin/python 1221MiB |
+-----------------------------------------------------------------------------+

In PyTorch, every array has a device; we often refer to it as a context. So far, by default, all vari-
ables and associated computation have been assigned to the CPU. Typically, other contexts
might be various GPUs. Things can get even hairier when we deploy jobs across multiple
servers. By assigning arrays to contexts intelligently, we can minimize the time spent trans-
ferring data between devices. For example, when training neural networks on a server with a
GPU, we typically prefer for the model’s parameters to live on the GPU.

To run the programs in this section, you need at least two GPUs. Note that this might be
extravagant for most desktop computers but it is easily available in the cloud, e.g., by using
the AWS EC2 multi-GPU instances. Almost all other sections do not require multiple GPUs.
Instead, this is simply to illustrate how data flow between different devices.

6.7.1 Computing Devices

We can specify devices, such as CPUs and GPUs, for storage and calculation. By default,
tensors are created in the main memory and the CPU is used for calculation.

In PyTorch, the CPU and GPU can be indicated by torch.device('cpu') and torch.
device('cuda'). It should be noted that the cpu device means all physical CPUs and mem-
ory. This means that PyTorch’s calculations will try to use all CPU cores. However, a gpu
device only represents one card and the corresponding memory. If there are multiple GPUs,
we use torch.device(f'cuda:{i}') to represent the ith GPU (i starts from 0). Also,
cuda:0 and cuda are equivalent.

import torch
from torch import nn
from d2l import torch as d2l

def cpu(): #@save
    return torch.device('cpu')

def gpu(i=0): #@save
    return torch.device(f'cuda:{i}')

cpu(), gpu(), gpu(1)

(device(type='cpu'),
device(type='cuda', index=0),
device(type='cuda', index=1))

We can query the number of available GPUs.

def num_gpus(): #@save
    return torch.cuda.device_count()

num_gpus()

Now we define two convenient functions that allow us to run code even if the requested GPUs
do not exist.

def try_gpu(i=0): #@save
    """Return gpu(i) if exists, otherwise return cpu()."""
    if num_gpus() >= i + 1:
        return gpu(i)
    return cpu()

def try_all_gpus(): #@save
    """Return all available GPUs, or [cpu(),] if no GPU exists."""
    return [gpu(i) for i in range(num_gpus())]

try_gpu(), try_gpu(10), try_all_gpus()



(device(type='cuda', index=0),
device(type='cpu'),
[device(type='cuda', index=0), device(type='cuda', index=1)])

6.7.2 Tensors and GPUs

By default, tensors are created on the CPU. We can query the device where the tensor is
located.

x = torch.tensor([1, 2, 3])
x.device

device(type='cpu')

It is important to note that whenever we want to operate on multiple terms, they need to
be on the same device. For instance, if we sum two tensors, we need to make sure that both
arguments live on the same device—otherwise the framework would not know where to store
the result or even how to decide where to perform the computation.

Storage on the GPU

There are several ways to store a tensor on the GPU. For example, we can specify a storage
device when creating a tensor. Next, we create the tensor variable X on the first gpu. The tensor
created on a GPU only consumes the memory of this GPU. We can use the nvidia-smi
command to view GPU memory usage. In general, we need to make sure that we do not
create data that exceeds the GPU memory limit.

X = torch.ones(2, 3, device=try_gpu())
X

tensor([[1., 1., 1.],
        [1., 1., 1.]], device='cuda:0')

Assuming that you have at least two GPUs, the following code will create a random tensor
on the second GPU.

Y = torch.rand(2, 3, device=try_gpu(1))
Y

tensor([[0.6673, 0.6182, 0.4598],
        [0.2127, 0.1105, 0.8601]], device='cuda:1')

Copying

If we want to compute X + Y, we need to decide where to perform this operation. For instance,
as shown in Fig.6.7.1, we can transfer X to the second GPU and perform the operation there.
Do not simply add X and Y, since this will result in an exception. The runtime engine would
not know what to do: it cannot find data on the same device and it fails. Since Y lives on the
second GPU, we need to move X there before we can add the two.

Figure 6.7.1 Copy data to perform an operation on the same device.

Z = X.cuda(1)
print(X)
print(Z)

tensor([[1., 1., 1.],
        [1., 1., 1.]], device='cuda:0')
tensor([[1., 1., 1.],
        [1., 1., 1.]], device='cuda:1')

Now that the data is on the same GPU (both Z and Y are), we can add them up.

Y + Z

tensor([[1.6673, 1.6182, 1.4598],
        [1.2127, 1.1105, 1.8601]], device='cuda:1')

Imagine that your variable Z already lives on your second GPU. What happens if we still call
Z.cuda(1)? It will return Z instead of making a copy and allocating new memory.

Z.cuda(1) is Z

True

Side Notes

People use GPUs to do machine learning because they expect them to be fast. But transferring
variables between devices is slow. So we want you to be 100% certain that you want to do
something slow before we let you do it. If the deep learning framework just did the copy
automatically without crashing then you might not realize that you had written some slow
code.
Also, transferring data between devices (CPU, GPUs, and other machines) is something that
is much slower than computation. It also makes parallelization a lot more difficult, since we
have to wait for data to be sent (or rather to be received) before we can proceed with more
operations. This is why copy operations should be taken with great care. As a rule of thumb,
many small operations are much worse than one big operation. Moreover, several operations
at a time are much better than many single operations interspersed in the code unless you
know what you are doing. This is the case since such operations can block if one device has
to wait for the other before it can do something else. It is a bit like ordering your coffee
in a queue rather than pre-ordering it by phone and finding out that it is ready when you
are.
Last, when we print tensors or convert tensors to the NumPy format, if the data is not in the
main memory, the framework will copy it to the main memory first, resulting in additional
transmission overhead. Even worse, it is now subject to the dreaded global interpreter lock
that makes everything wait for Python to complete.
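
A minimal sketch of the preferred pattern (our own illustration; the loss computation below is a stand-in for a real training loop): accumulate the running total on the GPU and transfer it once at the end, instead of calling item() on every step, which forces a synchronization and a device-to-host copy each time.

total = torch.zeros(1, device=try_gpu())
for _ in range(100):
    loss = (torch.rand(32, device=try_gpu()) ** 2).mean()  # stand-in for a real loss
    total += loss
print(total.item())  # a single transfer at the very end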

6.7.3 Neural Networks and GPUs

Similarly, a neural network model can specify devices. The following code puts the model
parameters on the GPU.

net = nn.Sequential(nn.LazyLinear(1))
net = net.to(device=try_gpu())


We will see many more examples of how to run models on GPUs in the following chapters,
simply since they will become somewhat more computationally intensive.

When the input is a tensor on the GPU, the model will calculate the result on the same
GPU.

net(X)

tensor([[0.2367],
[0.2367]], device='cuda:0', grad_fn=<AddmmBackward0>)

Let’s confirm that the model parameters are stored on the same GPU.

net[0].weight.data.device

device(type='cuda', index=0)

Next, we make the trainer support GPUs.

@d2l.add_to_class(d2l.Trainer) #@save
def __init__(self, max_epochs, num_gpus=0, gradient_clip_val=0):
    self.save_hyperparameters()
    self.gpus = [d2l.gpu(i) for i in range(min(num_gpus, d2l.num_gpus()))]

@d2l.add_to_class(d2l.Trainer) #@save
def prepare_batch(self, batch):
    if self.gpus:
        batch = [a.to(self.gpus[0]) for a in batch]
    return batch

@d2l.add_to_class(d2l.Trainer) #@save
def prepare_model(self, model):
    model.trainer = self
    model.board.xlim = [0, self.max_epochs]
    if self.gpus:
        model.to(self.gpus[0])
    self.model = model

In short, as long as all data and parameters are on the same device, we can learn models
efficiently. In the following chapters we will see several such examples.

6.7.4 Summary

We can specify devices for storage and calculation, such as the CPU or GPU. By default, data
is created in the main memory and then uses the CPU for calculations. The deep learning
framework requires all input data for calculation to be on the same device, be it CPU or the
same GPU. You can lose significant performance by moving data without care. A typical
mistake is as follows: computing the loss for every minibatch on the GPU and reporting it
back to the user on the command line (or logging it in a NumPy ndarray) will trigger a global
interpreter lock which stalls all GPUs. It is much better to allocate memory for logging inside
the GPU and only move larger logs.

6.7.5 Exercises

1. Try a larger computation task, such as the multiplication of large matrices, and see the
difference in speed between the CPU and GPU. What about a task with a small amount
of calculations?
2. How should we read and write model parameters on the GPU?
3. Measure the time it takes to compute 1000 matrix-matrix multiplications of 100 × 100
matrices and log the Frobenius norm of the output matrix one result at a time vs. keeping
a log on the GPU and transferring only the final result.
4. Measure how much time it takes to perform two matrix-matrix multiplications on two
GPUs at the same time vs. in sequence on one GPU. Hint: you should see almost linear
scaling.
Discussions
7 Convolutional Neural Networks

Image data is represented as a two-dimensional grid of pixels, be it monochromatic or in
color. Accordingly, each pixel corresponds to one or more numerical values. So far we have
ignored this rich structure and treated images as vectors of numbers by flattening them,
irrespective of the spatial relation between pixels. This deeply unsatisfying approach
was necessary in order to feed the resulting one-dimensional vectors through a fully connected
MLP.

Because these networks are invariant to the order of the features, we could get similar results
regardless of whether we preserve an order corresponding to the spatial structure of the pixels
or if we permute the columns of our design matrix before fitting the MLP’s parameters.
Preferably, we would leverage our prior knowledge that nearby pixels are typically related to
each other, to build efficient models for learning from image data.

This chapter introduces convolutional neural networks (CNNs) (LeCun et al., 1995), a power-
ful family of neural networks that are designed for precisely this purpose. CNN-based archi-
tectures are now ubiquitous in the field of computer vision. For instance, on the ImageNet col-
lection (Deng et al., 2009) it was only the use of convolutional neural networks, in short Con-
vNets, that provided significant performance improvements (Krizhevsky et al., 2012).

Modern CNNs, as they are called colloquially, owe their design to inspirations from biology,
group theory, and a healthy dose of experimental tinkering. In addition to their sample effi-
ciency in achieving accurate models, CNNs tend to be computationally efficient, both because
they require fewer parameters than fully connected architectures and because convolutions
are easy to parallelize across GPU cores (Chetlur et al., 2014). Consequently, practition-
ers often apply CNNs whenever possible, and increasingly they have emerged as credible
competitors even on tasks with a one-dimensional sequence structure, such as audio (Abdel-
Hamid et al., 2014), text (Kalchbrenner et al., 2014), and time series analysis (LeCun et al.,
1995), where recurrent neural networks are conventionally used. Some clever adaptations of
CNNs have also brought them to bear on graph-structured data (Kipf and Welling, 2016) and
in recommender systems.

First, we will dive more deeply into the motivation for convolutional neural networks. This
is followed by a walk through the basic operations that comprise the backbone of all con-
volutional networks. These include the convolutional layers themselves, nitty-gritty details
including padding and stride, the pooling layers used to aggregate information across adja-
cent spatial regions, the use of multiple channels at each layer, and a careful discussion of
the structure of modern architectures. We will conclude the chapter with a full working ex-
ample of LeNet, the first convolutional network successfully deployed, long before the rise
of modern deep learning. In the next chapter, we will dive into full implementations of some
popular and comparatively recent CNN architectures whose designs represent most of the
techniques commonly used by modern practitioners.

7.1 From Fully Connected Layers to Convolutions

To this day, the models that we have discussed so far remain appropriate options when we are
dealing with tabular data. By tabular, we mean that the data consist of rows corresponding to
examples and columns corresponding to features. With tabular data, we might anticipate that
the patterns we seek could involve interactions among the features, but we do not assume any
structure a priori concerning how the features interact.
Sometimes, we truly lack knowledge to guide the construction of craftier architectures. In
these cases, an MLP may be the best that we can do. However, for high-dimensional percep-
tual data, such structure-less networks can grow unwieldy.
For instance, let’s return to our running example of distinguishing cats from dogs. Say that
we do a thorough job in data collection, collecting an annotated dataset of one-megapixel
photographs. This means that each input to the network has one million dimensions. Even
an aggressive reduction to one thousand hidden dimensions would require a fully connected
layer characterized by $10^6 \times 10^3 = 10^9$ parameters. Unless we have lots of GPUs, a talent for
distributed optimization, and an extraordinary amount of patience, learning the parameters
of this network may turn out to be infeasible.
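
To make the arithmetic concrete, here is a tiny sketch that only counts the parameters of such a layer without ever allocating it:

# Fully connected layer mapping 10**6 input features to 10**3 hidden units
num_inputs, num_hiddens = 10**6, 10**3
num_params = num_inputs * num_hiddens + num_hiddens  # weights plus biases
print(f'{num_params:,}')  # roughly one billion parameters
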
A careful reader might object to this argument on the basis that one megapixel resolution may
not be necessary. However, while we might be able to get away with one hundred thousand
pixels, our hidden layer of size 1000 grossly underestimates the number of hidden units that
it takes to learn good representations of images, so a practical system will still require billions
of parameters. Moreover, learning a classifier by fitting so many parameters might require
collecting an enormous dataset. And yet today both humans and computers are able to dis-
tinguish cats from dogs quite well, seemingly contradicting these intuitions. That is because
images exhibit rich structure that can be exploited by humans and machine learning models
alike. Convolutional neural networks (CNNs) are one creative way that machine learning has
embraced for exploiting some of the known structure in natural images.

7.1.1 Invariance

Imagine that we want to detect an object in an image. It seems reasonable that whatever
method we use to recognize objects should not be overly concerned with the precise location
of the object in the image. Ideally, our system should exploit this knowledge. Pigs usually do
not fly and planes usually do not swim. Nonetheless, we should still recognize a pig were one
to appear at the top of the image. We can draw some inspiration here from the children’s
game “Where’s Waldo” (depicted in Fig.7.1.1). The game consists of a number of chaotic
scenes bursting with activities. Waldo shows up somewhere in each, typically lurking in some
unlikely location. The reader’s goal is to locate him. Despite his characteristic outfit, this
can be surprisingly difficult, due to the large number of distractions. However, what Waldo
looks like does not depend upon where Waldo is located. We could sweep the image with a
Waldo detector that could assign a score to each patch, indicating the likelihood that the patch
contains Waldo. In fact, many object detection and segmentation algorithms are based on this
approach (Long et al., 2015). CNNs systematize this idea of spatial invariance, exploiting it
to learn useful representations with fewer parameters.
We can now make these intuitions more concrete by enumerating a few desiderata to guide
our design of a neural network architecture suitable for computer vision:
1. In the earliest layers, our network should respond similarly to the same patch, regardless of
where it appears in the image. This principle is called translation invariance (or translation
equivariance).

Figure 7.1.1 An image of the "Where's Waldo" game.

2. The earliest layers of the network should focus on local regions, without regard for the
contents of the image in distant regions. This is the locality principle. Eventually, these
local representations can be aggregated to make predictions at the whole image level.

3. As we proceed, deeper layers should be able to capture longer-range features of the image,
in a way similar to higher level vision in nature.

Let’s see how this translates into mathematics.

7.1.2 Constraining the MLP

To start off, we can consider an MLP with two-dimensional images X as inputs and their im-
mediate hidden representations H similarly represented as matrices (they are two-dimensional
tensors in code), where both X and H have the same shape. Let that sink in. We now con-
ceive of not only the inputs but also the hidden representations as possessing spatial struc-
ture.

Let [X]i,j and [H]i,j denote the pixel at location (i, j) in the input image and hidden rep-
resentation, respectively. Consequently, to have each of the hidden units receive input from
each of the input pixels, we would switch from using weight matrices (as we did previously
in MLPs) to representing our parameters as fourth-order weight tensors W. Suppose that U
contains biases, we could formally express the fully connected layer as
$$\begin{aligned}
[\mathbf{H}]_{i,j} &= [\mathbf{U}]_{i,j} + \sum_k \sum_l [\mathsf{W}]_{i,j,k,l} [\mathbf{X}]_{k,l} \\
&= [\mathbf{U}]_{i,j} + \sum_a \sum_b [\mathsf{V}]_{i,j,a,b} [\mathbf{X}]_{i+a,j+b}.
\end{aligned} \tag{7.1.1}$$

The switch from W to V is entirely cosmetic for now since there is a one-to-one correspon-
dence between coefficients in both fourth-order tensors. We simply re-index the subscripts
(k, l) such that k = i+a and l = j+b. In other words, we set [V]i,j,a,b = [W]i,j,i+a,j+b . The
indices a and b run over both positive and negative offsets, covering the entire image. For any
given location (i, j) in the hidden representation [H]i,j , we compute its value by summing
over pixels in x, centered around (i, j) and weighted by [V]i,j,a,b . Before we carry on, let’s
consider the total number of parameters required for a single layer in this parametrization: a

1000 × 1000 image (1 megapixel) is mapped to a 1000 × 1000 hidden representation. This
requires $10^{12}$ parameters, far beyond what computers currently can handle.

Translation Invariance

Now let’s invoke the first principle established above: translation invariance (Zhang and oth-
ers, 1988). This implies that a shift in the input X should simply lead to a shift in the hidden
representation H. This is only possible if V and U do not actually depend on (i, j). As such,
we have [V]i,j,a,b = [V]a,b and U is a constant, say u. As a result, we can simplify the
definition for H:
$$[\mathbf{H}]_{i,j} = u + \sum_a \sum_b [\mathbf{V}]_{a,b} [\mathbf{X}]_{i+a,j+b}. \tag{7.1.2}$$

This is a convolution! We are effectively weighting pixels at $(i + a, j + b)$ in the vicinity
of location $(i, j)$ with coefficients $[\mathbf{V}]_{a,b}$ to obtain the value $[\mathbf{H}]_{i,j}$. Note that $[\mathbf{V}]_{a,b}$ needs
many fewer coefficients than $[\mathsf{V}]_{i,j,a,b}$ since it no longer depends on the location within the
image. Consequently, the number of parameters required is no longer $10^{12}$ but a much more
reasonable $4 \cdot 10^6$: we still have the dependency on $a, b \in (-1000, 1000)$. In short, we
have made significant progress. Time-delay neural networks (TDNNs) are some of the first
examples to exploit this idea (Waibel et al., 1989).

Locality

Now let’s invoke the second principle: locality. As motivated above, we believe that we should
not have to look very far away from location (i, j) in order to glean relevant information to
assess what is going on at [H]i,j . This means that outside some range |a| > ∆ or |b| > ∆,
we should set [V]a,b = 0. Equivalently, we can rewrite [H]i,j as


$$[\mathbf{H}]_{i,j} = u + \sum_{a=-\Delta}^{\Delta} \sum_{b=-\Delta}^{\Delta} [\mathbf{V}]_{a,b} [\mathbf{X}]_{i+a,j+b}. \tag{7.1.3}$$

This reduces the number of parameters from $4 \cdot 10^6$ to $4\Delta^2$, where $\Delta$ is typically smaller
than 10. As such, we reduced the number of parameters by another four orders of magnitude.
Note that (7.1.3), in a nutshell, is what is called a convolutional layer. Convolutional neural
networks (CNNs) are a special family of neural networks that contain convolutional layers.
In the deep learning research community, V is referred to as a convolution kernel, a filter, or
simply the layer’s weights that are learnable parameters.
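
To get a feel for these numbers, here is a short sketch (our own, not part of the book's code; the variable names and the choice of a hypothetical half-width of 5 are ours) that tallies the three parameter counts for the running 1000 × 1000 example:

n = 1000      # input/hidden height and width
delta = 5     # hypothetical half-width of the local window

fully_connected = (n * n) ** 2            # fourth-order tensor W: 10^12 parameters
translation_invariant = (2 * n - 1) ** 2  # V_{a,b} with a, b ranging over (-1000, 1000): ~4 * 10^6
local = (2 * delta + 1) ** 2              # V_{a,b} restricted to |a|, |b| <= delta: roughly 4 * delta^2

print(fully_connected, translation_invariant, local)  # 1000000000000 3996001 121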

While previously, we might have required billions of parameters to represent just a single
layer in an image-processing network, we now typically need just a few hundred, without al-
tering the dimensionality of either the inputs or the hidden representations. The price paid for
this drastic reduction in parameters is that our features are now translation invariant and that
our layer can only incorporate local information when determining the value of each hidden
activation. All learning depends on imposing inductive bias. When that bias agrees with real-
ity, we get sample-efficient models that generalize well to unseen data. But of course, if those
biases do not agree with reality, e.g., if images turned out not to be translation invariant, our
models might struggle even to fit our training data.

This dramatic reduction in parameters brings us to our last desideratum, namely that deeper
layers should represent larger and more complex aspects of an image. This can be achieved
by interleaving nonlinearities and convolutional layers repeatedly.

7.1.3 Convolutions

Let’s briefly review why (7.1.3) is called a convolution. In mathematics, the convolution be-
tween two functions (Rudin, 1973), say $f, g: \mathbb{R}^d \to \mathbb{R}$, is defined as

$$(f * g)(\mathbf{x}) = \int f(\mathbf{z}) g(\mathbf{x} - \mathbf{z}) \, d\mathbf{z}. \tag{7.1.4}$$

That is, we measure the overlap between f and g when one function is “flipped” and shifted by
x. Whenever we have discrete objects, the integral turns into a sum. For instance, for vectors
from the set of square summable infinite dimensional vectors with index running over Z we
obtain the following definition:

$$(f * g)(i) = \sum_a f(a) g(i - a). \tag{7.1.5}$$

For two-dimensional tensors, we have a corresponding sum with indices (a, b) for f and
(i − a, j − b) for g, respectively:
$$(f * g)(i, j) = \sum_a \sum_b f(a, b) g(i - a, j - b). \tag{7.1.6}$$

This looks similar to (7.1.3), with one major difference. Rather than using (i + a, j + b),
we are using the difference instead. Note, though, that this distinction is mostly cosmetic
since we can always match the notation between (7.1.3) and (7.1.6). Our original definition in
(7.1.3) more properly describes a cross-correlation. We will come back to this in the following
section.
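
To make the discrete case concrete, the following is a tiny sketch of (7.1.5) for finite vectors (our own example; entries outside the support of f and g are treated as zero):

import torch

f = torch.tensor([1.0, 2.0, 3.0])
g = torch.tensor([0.0, 1.0, 0.5])
out = torch.zeros(len(f) + len(g) - 1)
for i in range(len(out)):
    for a in range(len(f)):
        if 0 <= i - a < len(g):  # skip indices outside the support of g
            out[i] += f[a] * g[i - a]
print(out)  # tensor([0.0000, 1.0000, 2.5000, 4.0000, 1.5000])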

7.1.4 Channels

Returning to our Waldo detector, let’s see what this looks like. The convolutional layer picks
windows of a given size and weighs intensities according to the filter V, as demonstrated in
Fig.7.1.2. We might aim to learn a model so that wherever the “waldoness” is highest, we
should find a peak in the hidden layer representations.

Figure 7.1.2 Detect Waldo.

There is just one problem with this approach. So far, we blissfully ignored that images consist
of 3 channels: red, green, and blue. In sum, images are not two-dimensional objects but rather
third-order tensors, characterized by a height, width, and channel, e.g., with shape 1024 ×
1024 × 3 pixels. While the first two of these axes concern spatial relationships, the third can
be regarded as assigning a multidimensional representation to each pixel location. We thus
index X as [X]i,j,k . The convolutional filter has to adapt accordingly. Instead of [V]a,b , we
now have [V]a,b,c .

Moreover, just as our input consists of a third-order tensor, it turns out to be a good idea
to similarly formulate our hidden representations as third-order tensors H. In other words,
rather than just having a single hidden representation corresponding to each spatial location,
we want an entire vector of hidden representations corresponding to each spatial location.
We could think of the hidden representations as comprising a number of two-dimensional
grids stacked on top of each other. As in the inputs, these are sometimes called channels.
They are also sometimes called feature maps, as each provides a spatialized set of learned
features to the subsequent layer. Intuitively, you might imagine that at lower layers that are
closer to inputs, some channels could become specialized to recognize edges while others
could recognize textures.
To support multiple channels in both inputs (X) and hidden representations (H), we can add
a fourth coordinate to V: [V]a,b,c,d . Putting everything together we have:

$$[\mathsf{H}]_{i,j,d} = \sum_{a=-\Delta}^{\Delta} \sum_{b=-\Delta}^{\Delta} \sum_c [\mathsf{V}]_{a,b,c,d} [\mathsf{X}]_{i+a,j+b,c}, \tag{7.1.7}$$

where d indexes the output channels in the hidden representations H. The subsequent convo-
lutional layer will go on to take a third-order tensor, H, as input. Being more general, (7.1.7)
is the definition of a convolutional layer for multiple channels, where V is a kernel or filter
of the layer.
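
As a minimal sketch of (7.1.7) (our own illustration, not the implementation used later in this chapter; conv_multi_channel is a hypothetical helper, and offsets falling outside the image are simply skipped, which amounts to zero padding at the boundary), written with explicit loops for clarity:

import torch

# X has shape (height, width, c_in); V has shape (2*delta + 1, 2*delta + 1, c_in, c_out)
def conv_multi_channel(X, V, delta):
    h, w, c_in = X.shape
    c_out = V.shape[-1]
    H = torch.zeros(h, w, c_out)
    for i in range(h):
        for j in range(w):
            for a in range(-delta, delta + 1):
                for b in range(-delta, delta + 1):
                    if 0 <= i + a < h and 0 <= j + b < w:
                        # V[a + delta, b + delta] has shape (c_in, c_out) and maps the
                        # c_in-dimensional pixel vector at (i+a, j+b) to c_out values
                        H[i, j] += X[i + a, j + b] @ V[a + delta, b + delta]
    return H

X = torch.randn(6, 6, 3)      # a tiny three-channel input
V = torch.randn(3, 3, 3, 8)   # delta = 1, 3 input channels, 8 output channels
print(conv_multi_channel(X, V, delta=1).shape)  # torch.Size([6, 6, 8])
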
There are still many operations that we need to address. For instance, we need to figure
out how to combine all the hidden representations to a single output, e.g., whether there is
a Waldo anywhere in the image. We also need to decide how to compute things efficiently,
how to combine multiple layers, appropriate activation functions, and how to make reasonable
design choices to yield networks that are effective in practice. We turn to these issues in the
remainder of the chapter.

7.1.5 Summary and Discussion

In this section we derived the structure of convolutional neural networks from first principles.
While it is unclear whether this is what led to the invention of CNNs, it is satisfying to know
that they are the right choice when applying reasonable principles to how image processing
and computer vision algorithms should operate, at least at lower levels. In particular, trans-
lation invariance in images implies that all patches of an image will be treated in the same
manner. Locality means that only a small neighborhood of pixels will be used to compute the
corresponding hidden representations. Some of the earliest references to CNNs are in the
form of the Neocognitron (Fukushima, 1982).
A second principle that we encountered in our reasoning is how to reduce the number of
parameters in a function class without limiting its expressive power, at least, whenever certain
assumptions on the model hold. We saw a dramatic reduction of complexity as a result of
this restriction, turning computationally and statistically infeasible problems into tractable
models.
Adding channels allowed us to bring back some of the complexity that was lost due to the
restrictions imposed on the convolutional kernel by locality and translation invariance. Note
that channels are quite a natural addition beyond red, green, and blue. Many satellite images,
in particular for agriculture and meteorology, have tens to hundreds of channels, generating
hyperspectral images instead. They report data on many different wavelengths. In the follow-
ing we will see how to use convolutions effectively to manipulate the dimensionality of the
images they operate on, how to move from location-based to channel-based representations
and how to deal with large numbers of categories efficiently.

7.1.6 Exercises

1. Assume that the size of the convolution kernel is ∆ = 0. Show that in this case the
convolution kernel implements an MLP independently for each set of channels. This leads
to the Network in Network architectures (Lin et al., 2013).
2. Audio data is often represented as a one-dimensional sequence.
1. When might you want to impose locality and translation invariance for audio?
2. Derive the convolution operations for audio.
3. Can you treat audio using the same tools as computer vision? Hint: use the spectrogram.
3. Why might translation invariance not be a good idea after all? Give an example.
4. Do you think that convolutional layers might also be applicable for text data? Which prob-
lems might you encounter with language?
5. What happens with convolutions when an object is at the boundary of an image?
6. Prove that the convolution is symmetric, i.e., $f * g = g * f$.
7. Prove the convolution theorem, i.e., $f * g = \mathcal{F}^{-1}[\mathcal{F}[f] \cdot \mathcal{F}[g]]$. Can you use it to accel-
erate convolutions?


7.2 Convolutions for Images

Now that we understand how convolutional layers work in theory, we are ready to see how
they work in practice. Building on our motivation of convolutional neural networks as effi-
cient architectures for exploring structure in image data, we stick with images as our running
example.

7.2.1 The Cross-Correlation Operation

Recall that strictly speaking, convolutional layers are a misnomer, since the operations they
express are more accurately described as cross-correlations. Based on our descriptions of
convolutional layers in Section 7.1, in such a layer, an input tensor and a kernel tensor are
combined to produce an output tensor through a cross-correlation operation.
Let’s ignore channels for now and see how this works with two-dimensional data and hidden
representations. In Fig.7.2.1, the input is a two-dimensional tensor with a height of 3 and
width of 3. We mark the shape of the tensor as 3 × 3 or (3, 3). The height and width of the
kernel are both 2. The shape of the kernel window (or convolution window) is given by the
height and width of the kernel (here it is 2 × 2).

Figure 7.2.1 Two-dimensional cross-correlation operation. The shaded portions are the first output
element as well as the input and kernel tensor elements used for the output computation:
0 × 0 + 1 × 1 + 3 × 2 + 4 × 3 = 19.

In the two-dimensional cross-correlation operation, we begin with the convolution window
positioned at the upper-left corner of the input tensor and slide it across the input tensor,
both from left to right and top to bottom. When the convolution window slides to a certain
position, the input subtensor contained in that window and the kernel tensor are multiplied
elementwise and the resulting tensor is summed up yielding a single scalar value. This result
gives the value of the output tensor at the corresponding location. Here, the output tensor
has a height of 2 and width of 2 and the four elements are derived from the two-dimensional
cross-correlation operation:
$$\begin{aligned}
0 \times 0 + 1 \times 1 + 3 \times 2 + 4 \times 3 &= 19,\\
1 \times 0 + 2 \times 1 + 4 \times 2 + 5 \times 3 &= 25,\\
3 \times 0 + 4 \times 1 + 6 \times 2 + 7 \times 3 &= 37,\\
4 \times 0 + 5 \times 1 + 7 \times 2 + 8 \times 3 &= 43.
\end{aligned} \tag{7.2.1}$$
Note that along each axis, the output size is slightly smaller than the input size. Because
the kernel has width and height greater than one, we can only properly compute the cross-
correlation for locations where the kernel fits wholly within the image. The output size is
therefore given by the input size $n_h \times n_w$ minus the size of the convolution kernel $k_h \times k_w$ via

$$(n_h - k_h + 1) \times (n_w - k_w + 1). \tag{7.2.2}$$

This is the case since we need enough space to “shift” the convolution kernel across the image.
Later we will see how to keep the size unchanged by padding the image with zeros around its
boundary so that there is enough space to shift the kernel. Next, we implement this process
in the corr2d function, which accepts an input tensor X and a kernel tensor K and returns an
output tensor Y.

import torch
from torch import nn
from d2l import torch as d2l


def corr2d(X, K):  #@save
    """Compute 2D cross-correlation."""
    h, w = K.shape
    Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y

We can construct the input tensor X and the kernel tensor K from Fig.7.2.1 to validate the out-
put of the above implementation of the two-dimensional cross-correlation operation.

X = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])
K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
corr2d(X, K)

tensor([[19., 25.],
[37., 43.]])

7.2.2 Convolutional Layers

A convolutional layer cross-correlates the input and kernel and adds a scalar bias to produce an
output. The two parameters of a convolutional layer are the kernel and the scalar bias. When
training models based on convolutional layers, we typically initialize the kernels randomly,
just as we would with a fully connected layer.

We are now ready to implement a two-dimensional convolutional layer based on the corr2d
function defined above. In the __init__ constructor method, we declare weight and bias
as the two model parameters. The forward propagation function calls the corr2d function
and adds the bias.

class Conv2D(nn.Module):
    def __init__(self, kernel_size):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(kernel_size))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return corr2d(x, self.weight) + self.bias

In an h × w convolution or an h × w convolution kernel, the height and width of the convolution
kernel are h and w, respectively. We also refer to a convolutional layer with an h × w
convolution kernel simply as an h × w convolutional layer.

7.2.3 Object Edge Detection in Images

Let’s take a moment to parse a simple application of a convolutional layer: detecting the
edge of an object in an image by finding the location of the pixel change. First, we construct
an “image” of 6 × 8 pixels. The middle four columns are black (0) and the rest are white
(1).

X = torch.ones((6, 8))
X[:, 2:6] = 0
X

tensor([[1., 1., 0., 0., 0., 0., 1., 1.],


[1., 1., 0., 0., 0., 0., 1., 1.],
[1., 1., 0., 0., 0., 0., 1., 1.],
[1., 1., 0., 0., 0., 0., 1., 1.],
[1., 1., 0., 0., 0., 0., 1., 1.],
[1., 1., 0., 0., 0., 0., 1., 1.]])

Next, we construct a kernel K with a height of 1 and a width of 2. When we perform the
cross-correlation operation with the input, if the horizontally adjacent elements are the same,
the output is 0. Otherwise, the output is non-zero. Note that this kernel is a special case of a
finite difference operator. At location (i, j) it computes $x_{i,j} - x_{i,j+1}$, i.e., it computes the
difference between the values of horizontally adjacent pixels. This is a discrete approximation
of the (negative) first derivative in the horizontal direction. After all, for a function $f(i, j)$ its derivative
$-\partial_j f(i, j) = \lim_{\epsilon \to 0} \frac{f(i, j) - f(i, j + \epsilon)}{\epsilon}$. Let's see how this works in practice.

K = torch.tensor([[1.0, -1.0]])

We are ready to perform the cross-correlation operation with arguments X (our input) and K
(our kernel). As you can see, we detect 1 for the edge from white to black and -1 for the edge
from black to white. All other outputs take value 0.

Y = corr2d(X, K)
Y

tensor([[ 0., 1., 0., 0., 0., -1., 0.],
        [ 0., 1., 0., 0., 0., -1., 0.],
        [ 0., 1., 0., 0., 0., -1., 0.],
        [ 0., 1., 0., 0., 0., -1., 0.],
        [ 0., 1., 0., 0., 0., -1., 0.],
        [ 0., 1., 0., 0., 0., -1., 0.]])

We can now apply the kernel to the transposed image. As expected, it vanishes. The kernel K
only detects vertical edges.

corr2d(X.t(), K)

tensor([[0., 0., 0., 0., 0.],


[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]])

7.2.4 Learning a Kernel

Designing an edge detector by finite differences [1, -1] is neat if we know this is precisely
what we are looking for. However, as we look at larger kernels, and consider successive layers
of convolutions, it might be impossible to specify precisely what each filter should be doing
manually.
Now let’s see whether we can learn the kernel that generated Y from X by looking at the input–
output pairs only. We first construct a convolutional layer and initialize its kernel as a random
tensor. Next, in each iteration, we will use the squared error to compare Y with the output of
the convolutional layer. We can then calculate the gradient to update the kernel. For the sake
of simplicity, in the following we use the built-in class for two-dimensional convolutional
layers and ignore the bias.

# Construct a two-dimensional convolutional layer with 1 output channel and a
# kernel of shape (1, 2). For the sake of simplicity, we ignore the bias here
conv2d = nn.LazyConv2d(1, kernel_size=(1, 2), bias=False)

# The two-dimensional convolutional layer uses four-dimensional input and
# output in the format of (example, channel, height, width), where the batch
# size (number of examples in the batch) and the number of channels are both 1
X = X.reshape((1, 1, 6, 8))
Y = Y.reshape((1, 1, 6, 7))
lr = 3e-2  # Learning rate

for i in range(10):
    Y_hat = conv2d(X)
    l = (Y_hat - Y) ** 2
    conv2d.zero_grad()
    l.sum().backward()
    # Update the kernel
    conv2d.weight.data[:] -= lr * conv2d.weight.grad
    if (i + 1) % 2 == 0:
        print(f'epoch {i + 1}, loss {l.sum():.3f}')

epoch 2, loss 9.676
epoch 4, loss 3.275
epoch 6, loss 1.226
epoch 8, loss 0.483
epoch 10, loss 0.194

Note that the error has dropped to a small value after 10 iterations. Now we will take a look
at the kernel tensor we learned.

conv2d.weight.data.reshape((1, 2))

tensor([[ 1.0365, -0.9459]])

Indeed, the learned kernel tensor is remarkably close to the kernel tensor K we defined ear-
lier.

7.2.5 Cross-Correlation and Convolution

Recall our observation from Section 7.1 of the correspondence between the cross-correlation
and convolution operations. Here let’s continue to consider two-dimensional convolutional
layers. What if such layers perform strict convolution operations as defined in (7.1.6) instead
of cross-correlations? In order to obtain the output of the strict convolution operation, we
only need to flip the two-dimensional kernel tensor both horizontally and vertically, and then
perform the cross-correlation operation with the input tensor.
It is noteworthy that since kernels are learned from data in deep learning, the outputs of con-
volutional layers remain unaffected regardless of whether such layers perform strict convolution
operations or cross-correlation operations.
To illustrate this, suppose that a convolutional layer performs cross-correlation and learns the
kernel in Fig.7.2.1, which is denoted as the matrix K here. Assuming that other conditions
remain unchanged, when this layer performs strict convolution instead, the learned kernel K′
will be the same as K after K′ is flipped both horizontally and vertically. That is to say, when
the convolutional layer performs strict convolution for the input in Fig.7.2.1 and K′ , the same
output in Fig.7.2.1 (cross-correlation of the input and K) will be obtained.
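
As a quick check of this statement (our own sketch, reusing the corr2d function from Section 7.2.1 and the input and kernel of Fig.7.2.1 under fresh names), the strict convolution can be computed by flipping the kernel along both spatial axes before cross-correlating:

X_demo = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])
K_demo = torch.tensor([[0.0, 1.0], [2.0, 3.0]])

K_flipped = torch.flip(K_demo, dims=(0, 1))  # flip horizontally and vertically
print(corr2d(X_demo, K_demo))     # cross-correlation: tensor([[19., 25.], [37., 43.]])
print(corr2d(X_demo, K_flipped))  # strict convolution: tensor([[ 5., 11.], [23., 29.]])
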
In keeping with standard terminology in the deep learning literature, we will continue to refer
to the cross-correlation operation as a convolution even though, strictly speaking, it is slightly
different. In addition, we use the term element to refer to an entry (or component) of any tensor
representing a layer representation or a convolution kernel.

7.2.6 Feature Map and Receptive Field

As described in Section 7.1.4, the convolutional layer output in Fig.7.2.1 is sometimes called
a feature map, as it can be regarded as the learned representations (features) in the spatial
dimensions (e.g., width and height) to the subsequent layer. In CNNs, for any element x of
some layer, its receptive field refers to all the elements (from all the previous layers) that may
affect the calculation of x during the forward propagation. Note that the receptive field may
be larger than the actual size of the input.
Let’s continue to use Fig.7.2.1 to explain the receptive field. Given the 2 × 2 convolution
kernel, the receptive field of the shaded output element (of value 19) is the four elements
in the shaded portion of the input. Now let’s denote the 2 × 2 output as Y and consider a
deeper CNN with an additional 2 × 2 convolutional layer that takes Y as its input, outputting

a single element z. In this case, the receptive field of z on Y includes all the four elements of
Y, while the receptive field on the input includes all the nine input elements. Thus, when any
element in a feature map needs a larger receptive field to detect input features over a broader
area, we can build a deeper network.
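
A small sketch makes this growth explicit (our own helper, assuming unpadded convolutions with stride 1, for which each k × k layer extends the receptive field by k − 1 in each direction):

def receptive_field(kernel_sizes):
    r = 1
    for k in kernel_sizes:
        r += k - 1  # each stride-1 k x k layer widens the field by k - 1
    return r

print(receptive_field([2]))     # 2: one 2 x 2 layer, as for the shaded element of value 19
print(receptive_field([2, 2]))  # 3: two stacked 2 x 2 layers see a 3 x 3 region of the input
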
Receptive fields derive their name from neurophysiology. In a series of experiments (Hubel
and Wiesel, 1959, Hubel and Wiesel, 1962, Hubel and Wiesel, 1968) on a range of animals
and different stimuli, Hubel and Wiesel explored the response of what is called the visual
cortex to these stimuli. By and large they found that lower levels respond to edges and related
shapes. Later on, Field (1987) illustrated this effect on natural images with, what can only
be called, convolutional kernels. We reprint a key figure in Fig.7.2.2 to illustrate the striking
similarities.

Figure 7.2.2 Figure and caption taken from Field (1987): An example of coding with six different
channels. (Left) Examples of the six types of sensor associated with each channel. (Right)
Convolution of the image in (Middle) with the six sensors shown in (Left). The response
of the individual sensors is determined by sampling these filtered images at a distance
proportional to the size of the sensor (shown with dots). This diagram shows the response
of only the even symmetric sensors.

As it turns out, this relation even holds for the features computed by deeper layers of networks
trained on image classification tasks, as demonstrated e.g., in Kuzovkin et al. (2018). Suffice
it to say, convolutions have proven to be an incredibly powerful tool for computer vision, both
in biology and in code. As such, it is not surprising (in hindsight) that they heralded the recent
success in deep learning.

7.2.7 Summary

The core computation required for a convolutional layer is a cross-correlation operation. We
saw that a simple nested for-loop is all that is required to compute its value. If we have multiple
input and multiple output channels, we are performing a matrix-matrix operation between
channels. As can be seen, the computation is straightforward and, most importantly, highly
local. This affords significant hardware optimization and many recent results in computer
vision are only possible due to that. After all, it means that chip designers can invest into fast
computation rather than memory, when it comes to optimizing for convolutions. While this
may not lead to optimal designs for other applications, it opens the door to ubiquitous and
affordable computer vision.
In terms of convolutions themselves, they can be used for many purposes such as to detect

edges and lines, to blur images, or to sharpen them. Most importantly, it is not necessary
that the statistician (or engineer) invents suitable filters. Instead, we can simply learn them
from data. This replaces feature engineering heuristics by evidence-based statistics. Lastly,
and quite delightfully, these filters are not just advantageous for building deep networks but
they also correspond to receptive fields and feature maps in the brain. This gives us confidence
that we are on the right track.

7.2.8 Exercises

1. Construct an image X with diagonal edges.


1. What happens if you apply the kernel K in this section to it?
2. What happens if you transpose X?
3. What happens if you transpose K?
2. Design some kernels manually.
1. Given a directional vector v = (v1 , v2 ), derive an edge-detection kernel that detects
edges orthogonal to v, i.e., edges in the direction (v2 , −v1 ).
2. Derive a finite difference operator for the second derivative. What is the minimum size
of the convolutional kernel associated with it? Which structures in images respond most
strongly to it?
3. How would you design a blur kernel? Why might you want to use such a kernel?
4. What is the minimum size of a kernel to obtain a derivative of order d?
3. When you try to automatically find the gradient for the Conv2D class we created, what
kind of error message do you see?
4. How do you represent a cross-correlation operation as a matrix multiplication by changing
the input and kernel tensors?


7.3 Padding and Stride

Recall the example of a convolution in Fig.7.2.1. The input had both a height and width of 3
and the convolution kernel had both a height and width of 2, yielding an output representation
with dimension 2 × 2. Assuming that the input shape is nh × nw and the convolution kernel
shape is kh × kw , the output shape will be (nh − kh + 1) × (nw − kw + 1): we can only shift
the convolution kernel so far until it runs out of pixels to apply the convolution to.
In the following we will explore a number of techniques, including padding and strided con-
volutions, that offer more control over the size of the output. As motivation, note that since
kernels generally have width and height greater than 1, after applying many successive con-
volutions, we tend to wind up with outputs that are considerably smaller than our input. If
we start with a 240 × 240 pixel image, 10 layers of 5 × 5 convolutions reduce the image to
200 × 200 pixels, slicing off 30% of the image and with it obliterating any interesting infor-
mation on the boundaries of the original image. Padding is the most popular tool for handling
this issue. In other cases, we may want to reduce the dimensionality drastically, e.g., if we
find the original input resolution to be unwieldy. Strided convolutions are a popular technique
that can help in these instances.

7.3.1 Padding

As described above, one tricky issue when applying convolutional layers is that we tend to
lose pixels on the perimeter of our image. Consider Fig.7.3.1 that depicts the pixel utilization
as a function of the convolution kernel size and the position within the image. The pixels in
the corners are hardly used at all.

Figure 7.3.1 Pixel utilization for convolutions of size 1 × 1, 2 × 2, and 3 × 3 respectively.

Since we typically use small kernels, for any given convolution, we might only lose a few
pixels, but this can add up as we apply many successive convolutional layers. One straight-
forward solution to this problem is to add extra pixels of filler around the boundary of our
input image, thus increasing the effective size of the image. Typically, we set the values of
the extra pixels to zero. In Fig.7.3.2, we pad a 3 × 3 input, increasing its size to 5 × 5. The
corresponding output then increases to a 4 × 4 matrix. The shaded portions are the first out-
put element as well as the input and kernel tensor elements used for the output computation:
0 × 0 + 0 × 1 + 0 × 2 + 0 × 3 = 0.

Figure 7.3.2 Two-dimensional cross-correlation with padding.

In general, if we add a total of ph rows of padding (roughly half on top and half on bottom)
and a total of pw columns of padding (roughly half on the left and half on the right), the
output shape will be

$$(n_h - k_h + p_h + 1) \times (n_w - k_w + p_w + 1). \tag{7.3.1}$$

This means that the height and width of the output will increase by ph and pw , respec-
tively.
In many cases, we will want to set ph = kh − 1 and pw = kw − 1 to give the input and
output the same height and width. This will make it easier to predict the output shape of each
layer when constructing the network. Assuming that kh is odd here, we will pad ph /2 rows
on both sides of the height. If kh is even, one possibility is to pad ⌈ph /2⌉ rows on the top of
the input and ⌊ph /2⌋ rows on the bottom. We will pad both sides of the width in the same
way.
CNNs commonly use convolution kernels with odd height and width values, such as 1, 3, 5,
or 7. Choosing odd kernel sizes has the benefit that we can preserve the dimensionality while
padding with the same number of rows on top and bottom, and the same number of columns
on left and right.
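
A quick sanity check (our own sketch, plugging into (7.3.1); output_size is a hypothetical helper): for an odd kernel of size k, padding (k − 1)/2 rows or columns on each side leaves the output size equal to the input size.

def output_size(n, k, p_total):  # evaluates (7.3.1) along one dimension
    return n - k + p_total + 1

for k in (1, 3, 5, 7):
    print(k, output_size(8, k, 2 * ((k - 1) // 2)))  # the second value is always 8
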
Moreover, this practice of using odd kernels and padding to precisely preserve dimensionality
offers a clerical benefit. For any two-dimensional tensor X, when the kernel’s size is odd and

the number of padding rows and columns on all sides are the same, producing an output
with the same height and width as the input, we know that the output Y[i, j] is calculated
by cross-correlation of the input and convolution kernel with the window centered on X[i,
j].

In the following example, we create a two-dimensional convolutional layer with a height and
width of 3 and apply 1 pixel of padding on all sides. Given an input with a height and width
of 8, we find that the height and width of the output is also 8.

import torch
from torch import nn

# We define a helper function to calculate convolutions. It initializes
# the convolutional layer weights and performs corresponding dimensionality
# elevations and reductions on the input and output
def comp_conv2d(conv2d, X):
    # (1, 1) indicates that batch size and the number of channels are both 1
    X = X.reshape((1, 1) + X.shape)
    Y = conv2d(X)
    # Strip the first two dimensions: examples and channels
    return Y.reshape(Y.shape[2:])

# 1 row and column is padded on either side, so a total of 2 rows or columns
# are added
conv2d = nn.LazyConv2d(1, kernel_size=3, padding=1)
X = torch.rand(size=(8, 8))
comp_conv2d(conv2d, X).shape


torch.Size([8, 8])

When the height and width of the convolution kernel are different, we can make the output
and input have the same height and width by setting different padding numbers for height and
width.

# We use a convolution kernel with height 5 and width 3. The padding on
# either side of the height and width are 2 and 1, respectively
conv2d = nn.LazyConv2d(1, kernel_size=(5, 3), padding=(2, 1))
comp_conv2d(conv2d, X).shape

torch.Size([8, 8])

7.3.2 Stride

When computing the cross-correlation, we start with the convolution window at the upper-left
corner of the input tensor, and then slide it over all locations both down and to the right. In the
previous examples, we defaulted to sliding one element at a time. However, sometimes, either
for computational efficiency or because we wish to downsample, we move our window more
than one element at a time, skipping the intermediate locations. This is particularly useful if
the convolution kernel is large since it captures a large area of the underlying image.

We refer to the number of rows and columns traversed per slide as stride. So far, we have
used strides of 1, both for height and width. Sometimes, we may want to use a larger stride.

Fig.7.3.3 shows a two-dimensional cross-correlation operation with a stride of 3 vertically and
2 horizontally. The shaded portions are the output elements as well as the input and kernel
tensor elements used for the output computation: 0 × 0 + 0 × 1 + 1 × 2 + 2 × 3 = 8,
0 × 0 + 6 × 1 + 0 × 2 + 0 × 3 = 6. We can see that when the second element of the
first column is generated, the convolution window slides down three rows. The convolution
window slides two columns to the right when the second element of the first row is generated.
When the convolution window continues to slide two columns to the right on the input, there
is no output because the input element cannot fill the window (unless we add another column
of padding).

Figure 7.3.3 Cross-correlation with strides of 3 and 2 for height and width, respectively.

In general, when the stride for the height is sh and the stride for the width is sw , the output
shape is

$$\lfloor(n_h - k_h + p_h + s_h)/s_h\rfloor \times \lfloor(n_w - k_w + p_w + s_w)/s_w\rfloor. \tag{7.3.2}$$

If we set $p_h = k_h - 1$ and $p_w = k_w - 1$, then the output shape can be simplified to
$\lfloor(n_h + s_h - 1)/s_h\rfloor \times \lfloor(n_w + s_w - 1)/s_w\rfloor$. Going a step further, if the input height
and width are divisible by the strides on the height and width, then the output shape will be
$(n_h/s_h) \times (n_w/s_w)$.
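
To make these shape calculations easy to check, here is a tiny helper (our own sketch; conv_output_size is a hypothetical name, not part of the d2l library) that evaluates (7.3.2) along one dimension:

def conv_output_size(n, k, p, s):
    # n: input size, k: kernel size, p: total padding (both sides combined), s: stride
    return (n - k + p + s) // s

# Kernel 3, padding=1 per side (so p = 2), stride 2 on an 8 x 8 input:
print(conv_output_size(8, 3, 2, 2))  # 4, matching the result below
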
Below, we set the strides on both the height and width to 2, thus halving the input height and
width.

conv2d = nn.LazyConv2d(1, kernel_size=3, padding=1, stride=2)


comp_conv2d(conv2d, X).shape

torch.Size([4, 4])

Let’s look at a slightly more complicated example.

conv2d = nn.LazyConv2d(1, kernel_size=(3, 5), padding=(0, 1), stride=(3, 4))


comp_conv2d(conv2d, X).shape

torch.Size([2, 2])

7.3.3 Summary and Discussion

Padding can increase the height and width of the output. This is often used to give the output
the same height and width as the input to avoid undesirable shrinkage of the output. Moreover,
it ensures that all pixels are used equally frequently. Typically we pick symmetric padding
on both sides of the input height and width. In this case we refer to (ph , pw ) padding. Most
commonly we set ph = pw , in which case we simply state that we choose padding p.
A similar convention applies to strides. When the stride for the height $s_h$ and the stride for
the width $s_w$ match, we simply talk about stride s. The stride can reduce the resolution of the output, for

example reducing the height and width of the output to only 1/n of the height and width of
the input for n > 1. By default, the padding is 0 and the stride is 1.

So far all padding that we discussed simply extended images with zeros. This has significant
computational benefit since it is trivial to accomplish. Moreover, operators can be engineered
to take advantage of this padding implicitly without the need to allocate additional memory.
At the same time, it allows CNNs to encode implicit position information within an image,
simply by learning where the “whitespace” is. There are many alternatives to zero-padding.
(Alsallakh et al., 2020) provides an extensive overview of alternatives (albeit without a clear
case to use nonzero paddings unless artifacts occur).

7.3.4 Exercises

1. Given the last code example in this section with kernel size (3, 5), padding (0, 1), and
stride (3, 4), calculate the output shape to check if it is consistent with the experimental
result.

2. For audio signals, what does a stride of 2 correspond to?

3. Implement mirror padding, i.e., padding where the border values are simply mirrored to
extend tensors.

4. What are the computational benefits of a stride larger than 1?

5. What might be statistical benefits of a stride larger than 1?

6. How would you implement a stride of 1/2? What does it correspond to? When would this
be useful?

7.4 Multiple Input and Multiple Output Channels

While we described the multiple channels that comprise each image (e.g., color images have
the standard RGB channels to indicate the amount of red, green and blue) and convolutional
layers for multiple channels in Section 7.1.4, until now, we simplified all of our numerical
examples by working with just a single input and a single output channel. This allowed us to
think of our inputs, convolution kernels, and outputs each as two-dimensional tensors.

When we add channels into the mix, our inputs and hidden representations both become
three-dimensional tensors. For example, each RGB input image has shape 3 × h × w. We
refer to this axis, with a size of 3, as the channel dimension. The notion of channels is as old
as CNNs themselves. For instance LeNet5 (LeCun et al., 1995) uses them. In this section,
we will take a deeper look at convolution kernels with multiple input and multiple output
channels.

7.4.1 Multiple Input Channels

When the input data contains multiple channels, we need to construct a convolution kernel
with the same number of input channels as the input data, so that it can perform cross-
correlation with the input data. Assuming that the number of channels for the input data is

ci , the number of input channels of the convolution kernel also needs to be ci . If our convo-
lution kernel’s window shape is kh × kw , then when ci = 1, we can think of our convolution
kernel as just a two-dimensional tensor of shape kh × kw .
However, when ci > 1, we need a kernel that contains a tensor of shape kh × kw for ev-
ery input channel. Concatenating these ci tensors together yields a convolution kernel of
shape ci × kh × kw . Since the input and convolution kernel each have ci channels, we can
perform a cross-correlation operation on the two-dimensional tensor of the input and the
two-dimensional tensor of the convolution kernel for each channel, adding the ci results to-
gether (summing over the channels) to yield a two-dimensional tensor. This is the result of a
two-dimensional cross-correlation between a multi-channel input and a multi-input-channel
convolution kernel.
Fig.7.4.1 provides an example of a two-dimensional cross-correlation with two input chan-
nels. The shaded portions are the first output element as well as the input and kernel tensor
elements used for the output computation: (1 × 1 + 2 × 2 + 4 × 3 + 5 × 4) + (0 × 0 + 1 ×
1 + 3 × 2 + 4 × 3) = 56.

Figure 7.4.1 Cross-correlation computation with 2 input channels.

To make sure we really understand what is going on here, we can implement cross-correlation
operations with multiple input channels ourselves. Notice that all we are doing is performing
a cross-correlation operation per channel and then adding up the results.

import torch
from d2l import torch as d2l


def corr2d_multi_in(X, K):
    # Iterate through the 0th dimension (channel) of K first, then add them up
    return sum(d2l.corr2d(x, k) for x, k in zip(X, K))

We can construct the input tensor X and the kernel tensor K corresponding to the values in
Fig.7.4.1 to validate the output of the cross-correlation operation.

X = torch.tensor([[[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]],
                  [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]])
K = torch.tensor([[[0.0, 1.0], [2.0, 3.0]], [[1.0, 2.0], [3.0, 4.0]]])

corr2d_multi_in(X, K)

tensor([[ 56., 72.],


[104., 120.]])

7.4.2 Multiple Output Channels

Regardless of the number of input channels, so far we always ended up with one output
channel. However, as we discussed in Section 7.1.4, it turns out to be essential to have multiple

channels at each layer. In the most popular neural network architectures, we actually increase
the channel dimension as we go deeper in the neural network, typically downsampling to trade
off spatial resolution for greater channel depth. Intuitively, you could think of each channel
as responding to a different set of features. The reality is a bit more complicated than this.
A naive interpretation would suggest that representations are learned independently per pixel
or per channel. Instead, channels are optimized to be jointly useful. This means that rather
than mapping a single channel to an edge detector, it may simply mean that some direction
in channel space corresponds to detecting edges.

Denote by ci and co the number of input and output channels, respectively, and let kh and
kw be the height and width of the kernel. To get an output with multiple channels, we can
create a kernel tensor of shape ci × kh × kw for every output channel. We concatenate them
on the output channel dimension, so that the shape of the convolution kernel is co × ci × kh ×
kw . In cross-correlation operations, the result on each output channel is calculated from the
convolution kernel corresponding to that output channel and takes input from all channels in
the input tensor.

We implement a cross-correlation function to calculate the output of multiple channels as
shown below.

def corr2d_multi_in_out(X, K):
    # Iterate through the 0th dimension of `K`, and each time, perform
    # cross-correlation operations with input `X`. All of the results are
    # stacked together
    return torch.stack([corr2d_multi_in(X, k) for k in K], 0)

We construct a trivial convolution kernel with 3 output channels by concatenating the kernel
tensor for K with K+1 and K+2.

K = torch.stack((K, K + 1, K + 2), 0)
K.shape

torch.Size([3, 2, 2, 2])

Below, we perform cross-correlation operations on the input tensor X with the kernel ten-
sor K. Now the output contains 3 channels. The result of the first channel is consistent with
the result of the previous input tensor X and the multi-input channel, single-output channel
kernel.

corr2d_multi_in_out(X, K)

tensor([[[ 56., 72.],


[104., 120.]],

[[ 76., 100.],
[148., 172.]],

[[ 96., 128.],
[192., 224.]]])

7.4.3 1 × 1 Convolutional Layer

At first, a 1 × 1 convolution, i.e., kh = kw = 1, does not seem to make much sense. After all,
a convolution correlates adjacent pixels. A 1 × 1 convolution obviously does not. Nonethe-
less, they are popular operations that are sometimes included in the designs of complex deep

networks (Lin et al., 2013, Szegedy et al., 2017). Let's see in some detail what it actually
does.
Because the minimum window is used, the 1 × 1 convolution loses the ability of larger con-
volutional layers to recognize patterns consisting of interactions among adjacent elements in
the height and width dimensions. The only computation of the 1 × 1 convolution occurs on
the channel dimension.
Fig.7.4.2 shows the cross-correlation computation using the 1 × 1 convolution kernel with 3
input channels and 2 output channels. Note that the inputs and outputs have the same height
and width. Each element in the output is derived from a linear combination of elements at
the same position in the input image. You could think of the 1 × 1 convolutional layer as
constituting a fully connected layer applied at every single pixel location to transform the ci
corresponding input values into co output values. Because this is still a convolutional layer,
the weights are tied across pixel location. Thus the 1 × 1 convolutional layer requires co × ci
weights (plus the bias). Also note that convolutional layers are typically followed by nonlin-
earities. This ensures that 1 × 1 convolutions cannot simply be folded into other convolu-
tions.

Figure 7.4.2 The cross-correlation computation uses the 1 × 1 convolution kernel with 3 input channels
and 2 output channels. The input and output have the same height and width.

Let’s check whether this works in practice: we implement a 1 × 1 convolution using a fully
connected layer. The only thing is that we need to make some adjustments to the data shape
before and after the matrix multiplication.

def corr2d_multi_in_out_1x1(X, K):
    c_i, h, w = X.shape
    c_o = K.shape[0]
    X = X.reshape((c_i, h * w))
    K = K.reshape((c_o, c_i))
    # Matrix multiplication in the fully connected layer
    Y = torch.matmul(K, X)
    return Y.reshape((c_o, h, w))

When performing 1 × 1 convolutions, the above function is equivalent to the previously im-
plemented cross-correlation function corr2d_multi_in_out. Let’s check this with some
sample data.

X = torch.normal(0, 1, (3, 3, 3))
K = torch.normal(0, 1, (2, 3, 1, 1))

Y1 = corr2d_multi_in_out_1x1(X, K)
Y2 = corr2d_multi_in_out(X, K)
assert float(torch.abs(Y1 - Y2).sum()) < 1e-6
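
As a quick check of the parameter count discussed above (our own sketch, using torch.nn directly rather than the helpers in this section), a 1 × 1 convolutional layer with 3 input channels and 2 output channels carries co × ci = 6 weights plus co biases:

from torch import nn

conv1x1 = nn.Conv2d(in_channels=3, out_channels=2, kernel_size=1)
print(conv1x1.weight.shape, conv1x1.bias.shape)  # torch.Size([2, 3, 1, 1]) torch.Size([2])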

7.4.4 Discussion

Channels allow us to combine the best of both worlds: MLPs that allow for significant nonlin-
earities and convolutions that allow for localized analysis of features. In particular, channels

allow the CNN to reason with multiple features, such as edge and shape detectors at the same
time. They also offer a practical trade-off between the drastic parameter reduction arising
from translation invariance and locality, and the need for expressive and diverse models in
computer vision.

Note, though, that this flexibility comes at a price. Given an image of size $(h \times w)$, the cost
for computing a $k \times k$ convolution is $O(h \cdot w \cdot k^2)$. For $c_i$ and $c_o$ input and output channels
respectively this increases to $O(h \cdot w \cdot k^2 \cdot c_i \cdot c_o)$. For a 256 × 256 pixel image with a
5 × 5 kernel and 128 input and output channels respectively this amounts to over 53 billion
operations (we count multiplications and additions separately). Later on we will encounter
effective strategies to cut down on the cost, e.g., by requiring the channel-wise operations to
be block-diagonal, leading to architectures such as ResNeXt (Xie et al., 2017).
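
The figure quoted above follows from a quick back-of-the-envelope calculation (our own sketch, counting one multiplication and one addition per kernel entry and output location):

h, w, k, c_i, c_o = 256, 256, 5, 128, 128
ops = 2 * h * w * k**2 * c_i * c_o  # multiplications and additions counted separately
print(f'{ops:,}')                   # 53,687,091,200, i.e., over 53 billion operations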

7.4.5 Exercises

1. Assume that we have two convolution kernels of size k1 and k2 , respectively (with no
nonlinearity in-between).

1. Prove that the result of the operation can be expressed by a single convolution.

2. What is the dimensionality of the equivalent single convolution?

3. Is the converse true, i.e., can you always decompose a convolution into two smaller
ones?

2. Assume an input of shape ci × h × w and a convolution kernel of shape co × ci × kh × kw ,


padding of (ph , pw ), and stride of (sh , sw ).

1. What is the computational cost (multiplications and additions) for the forward propa-
gation?

2. What is the memory footprint?

3. What is the memory footprint for the backward computation?

4. What is the computational cost for the backpropagation?

3. By what factor does the number of calculations increase if we double the number of in-
put channels ci and the number of output channels co ? What happens if we double the
padding?

4. Are the variables Y1 and Y2 in the last example of this section exactly the same? Why?

5. Express convolutions as a matrix multiplication, even when the convolution window is not
1 × 1?

6. Your task is to implement fast convolutions with a k × k kernel. One of the algorithm
candidates is to scan horizontally across the source, reading a k-wide strip and computing
the 1-wide output strip one value at a time. The alternative is to read a k + ∆ wide strip
and compute a ∆-wide output strip. Why is the latter preferable? Is there a limit to how
large you should choose ∆?

7. Assume that we have a c × c matrix.

1. How much faster is it to multiply with a block-diagonal matrix if the matrix is broken
up into b blocks?
2. What is the downside of having b blocks? How could you fix it, at least partly?


7.5 Pooling

In many cases our ultimate task asks some global question about the image, e.g., does it
contain a cat? Consequently, the units of our final layer should be sensitive to the entire input.
By gradually aggregating information, yielding coarser and coarser maps, we accomplish
this goal of ultimately learning a global representation, while keeping all of the advantages
of convolutional layers at the intermediate layers of processing. The deeper we go in the
network, the larger the receptive field (relative to the input) to which each hidden node is
sensitive. Reducing spatial resolution accelerates this process, since the convolution kernels
cover a larger effective area.

Moreover, when detecting lower-level features, such as edges (as discussed in Section 7.2),
we often want our representations to be somewhat invariant to translation. For instance, if
we take the image X with a sharp delineation between black and white and shift the whole
image by one pixel to the right, i.e., Z[i, j] = X[i, j + 1], then the output for the new
image Z might be vastly different. The edge will have shifted by one pixel. In reality, objects
hardly ever occur exactly at the same place. In fact, even with a tripod and a stationary object,
vibration of the camera due to the movement of the shutter might shift everything by a pixel
or so (high-end cameras are loaded with special features to address this problem).

This section introduces pooling layers, which serve the dual purposes of mitigating the sensi-
tivity of convolutional layers to location and of spatially downsampling representations.

7.5.1 Maximum Pooling and Average Pooling

Like convolutional layers, pooling operators consist of a fixed-shape window that is slid over
all regions in the input according to its stride, computing a single output for each location
traversed by the fixed-shape window (sometimes known as the pooling window). However,
unlike the cross-correlation computation of the inputs and kernels in the convolutional layer,
the pooling layer contains no parameters (there is no kernel). Instead, pooling operators are
deterministic, typically calculating either the maximum or the average value of the elements
in the pooling window. These operations are called maximum pooling (max-pooling for short)
and average pooling, respectively.

Average pooling is essentially as old as CNNs. The idea is akin to downsampling an image.
Rather than just taking the value of every second (or third) pixel for the lower resolution
image, we can average over adjacent pixels to obtain an image with better signal to noise
ratio since we are combining the information from multiple adjacent pixels. Max-pooling was
introduced in (Riesenhuber and Poggio, 1999) in the context of cognitive neuroscience to
describe how information might be aggregated hierarchically for the purpose of object
recognition; an earlier version appeared in speech recognition (Yamaguchi et al., 1990). In
almost all cases, max-pooling is preferable.

In both cases, as with the cross-correlation operator, we can think of the pooling window as
starting from the upper-left of the input tensor and sliding across the input tensor from left
to right and top to bottom. At each location that the pooling window hits, it computes the
maximum or average value of the input subtensor in the window, depending on whether max
or average pooling is employed.

The output tensor in Fig.7.5.1 has a height of 2 and a width of 2. The four elements are

Figure 7.5.1 Max-pooling with a pooling window shape of 2 × 2. The shaded portions are the first
output element as well as the input tensor elements used for the output computation:
max(0, 1, 3, 4) = 4.

derived from the maximum value in each pooling window:

$$\begin{aligned}
\max(0, 1, 3, 4) &= 4,\\
\max(1, 2, 4, 5) &= 5,\\
\max(3, 4, 6, 7) &= 7,\\
\max(4, 5, 7, 8) &= 8.
\end{aligned} \tag{7.5.1}$$
More generally, we can define a p × q pooling layer by aggregating over a region of said size.
Returning to the problem of edge detection, we use the output of the convolutional layer as
input for 2 × 2 max-pooling. Denote by X the input of the convolutional layer input and Y
the pooling layer output. Regardless of whether or not the values of X[i, j], X[i, j + 1],
X[i+1, j] and X[i+1, j + 1] are different, the pooling layer always outputs Y[i, j] = 1.
That is to say, using the 2 × 2 max-pooling layer, we can still detect if the pattern recognized
by the convolutional layer moves no more than one element in height or width.

In the code below, we implement the forward propagation of the pooling layer in the pool2d
function. This function is similar to the corr2d function in Section 7.2. However, no kernel
is needed, computing the output as either the maximum or the average of each region in the
input.

import torch
from torch import nn
from d2l import torch as d2l


def pool2d(X, pool_size, mode='max'):
    p_h, p_w = pool_size
    Y = torch.zeros((X.shape[0] - p_h + 1, X.shape[1] - p_w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            if mode == 'max':
                Y[i, j] = X[i: i + p_h, j: j + p_w].max()
            elif mode == 'avg':
                Y[i, j] = X[i: i + p_h, j: j + p_w].mean()
    return Y

We can construct the input tensor X in Fig.7.5.1 to validate the output of the two-dimensional
max-pooling layer.

X = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])


pool2d(X, (2, 2))

tensor([[4., 5.],
[7., 8.]])

Also, we experiment with the average pooling layer.

pool2d(X, (2, 2), 'avg')

tensor([[2., 3.],
        [5., 6.]])
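
To make this shift tolerance concrete, the sketch below re-implements the cross-correlation function from Section 7.2 inline, detects a vertical edge in a toy input, shifts the input by one column, and applies the pool2d function defined above to both detection maps. The input and kernel are chosen purely for illustration.

def corr2d(X, K):
    """2D cross-correlation, mirroring the implementation in Section 7.2."""
    h, w = K.shape
    Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y

X1 = torch.zeros((6, 8))
X1[:, 2:6] = 1                          # a block whose borders form vertical edges
X2 = torch.roll(X1, shifts=1, dims=1)   # the same pattern shifted one column to the right
K = torch.tensor([[1.0, -1.0]])         # the edge-detection kernel from Section 7.2

Y1, Y2 = corr2d(X1, K).abs(), corr2d(X2, K).abs()
print(pool2d(Y1, (2, 2)))
print(pool2d(Y2, (2, 2)))

Despite the one-pixel shift of the input, the two pooled maps still contain 1s in the same columns covering the edges, illustrating the tolerance to small translations.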

7.5.2 Padding and Stride

As with convolutional layers, pooling layers change the output shape. And as before, we can
adjust the operation to achieve a desired output shape by padding the input and adjusting the
stride. We can demonstrate the use of padding and strides in pooling layers via the built-in
two-dimensional max-pooling layer from the deep learning framework. We first construct an
input tensor X whose shape has four dimensions, where the number of examples (batch size)
and number of channels are both 1.

X = torch.arange(16, dtype=torch.float32).reshape((1, 1, 4, 4))
X

tensor([[[[ 0.,  1.,  2.,  3.],
          [ 4.,  5.,  6.,  7.],
          [ 8.,  9., 10., 11.],
          [12., 13., 14., 15.]]]])

Since pooling aggregates information from an area, deep learning frameworks default to
matching the pooling window size and the stride. For instance, if we use a pooling window of
shape (3, 3), we get a stride of shape (3, 3) by default.

pool2d = nn.MaxPool2d(3)
# Pooling has no model parameters, hence it needs no initialization
pool2d(X)

tensor([[[[10.]]]])

As expected, the stride and padding can be manually specified to override framework defaults
if needed.

pool2d = nn.MaxPool2d(3, padding=1, stride=2)
pool2d(X)

tensor([[[[ 5.,  7.],
          [13., 15.]]]])

Of course, we can also specify a rectangular pooling window with arbitrary height and width,
as the example below shows.

pool2d = nn.MaxPool2d((2, 3), stride=(2, 3), padding=(0, 1))
pool2d(X)

tensor([[[[ 5.,  7.],
          [13., 15.]]]])

7.5.3 Multiple Channels

When processing multi-channel input data, the pooling layer pools each input channel sep-
arately, rather than summing the inputs up over channels as in a convolutional layer. This
means that the number of output channels for the pooling layer is the same as the number of
input channels. Below, we will concatenate tensors X and X + 1 on the channel dimension to
construct an input with 2 channels.

X = torch.cat((X, X + 1), 1)
X

tensor([[[[ 0.,  1.,  2.,  3.],
          [ 4.,  5.,  6.,  7.],
          [ 8.,  9., 10., 11.],
          [12., 13., 14., 15.]],

         [[ 1.,  2.,  3.,  4.],
          [ 5.,  6.,  7.,  8.],
          [ 9., 10., 11., 12.],
          [13., 14., 15., 16.]]]])

As we can see, the number of output channels is still 2 after pooling.

pool2d = nn.MaxPool2d(3, padding=1, stride=2)
pool2d(X)

tensor([[[[ 5.,  7.],
          [13., 15.]],

         [[ 6.,  8.],
          [14., 16.]]]])

7.5.4 Summary

Pooling is an exceedingly simple operation. It does exactly what its name indicates: it
aggregates results over a window of values. All convolution semantics, such as strides and
padding, apply in the same way as they did previously. Note that pooling is indifferent to
channels, i.e., it leaves the number of channels unchanged and applies to each channel
separately. Lastly, of the two popular pooling choices, max-pooling is preferable to average
pooling, as it confers some degree of invariance to small shifts in the input. A popular choice
is to pick a pooling window size of 2 × 2 to quarter the spatial resolution of the output.
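
For instance, using the built-in layer, a 2 × 2 window with stride 2 halves both spatial dimensions of a toy input:

pool = nn.MaxPool2d(kernel_size=2, stride=2)
X = torch.randn(1, 3, 28, 28)   # (batch, channels, height, width)
print(pool(X).shape)            # torch.Size([1, 3, 14, 14]): a quarter of the spatial positions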

Note that there are many more ways of reducing resolution beyond pooling. For instance,
in stochastic pooling (Zeiler and Fergus, 2013) and fractional max-pooling (Graham, 2014)
aggregation is combined with randomization. This can slightly improve the accuracy in some
cases. Lastly, as we will see later with the attention mechanism, there are more refined ways
of aggregating over outputs, e.g., by using the alignment between a query and representation
vectors.
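
To give a flavor of how randomization can enter the aggregation, the sketch below follows the spirit of stochastic pooling (Zeiler and Fergus, 2013): within each window, one activation is sampled with probability proportional to its (non-negative) value. The handling of all-zero windows is a simplifying assumption here, and the original method additionally uses a probability-weighted average at evaluation time, which we omit.

def stochastic_pool2d(X, pool_size=(2, 2)):
    """Sketch of stochastic pooling; follows the stride-1 convention of pool2d above."""
    p_h, p_w = pool_size
    Y = torch.zeros((X.shape[0] - p_h + 1, X.shape[1] - p_w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            window = X[i:i + p_h, j:j + p_w].reshape(-1)
            total = window.sum()
            if total > 0:
                idx = torch.multinomial(window / total, 1).item()
                Y[i, j] = window[idx]   # sample one activation, weighted by its magnitude
            # all-zero windows simply yield 0
    return Y

stochastic_pool2d(torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]]))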

7.5.5 Exercises

1. Implement average pooling through a convolution.

2. Prove that max-pooling cannot be implemented through a convolution alone.

3. Max-pooling can be accomplished using ReLU operations, i.e., ReLU(x) = max(0, x).

1. Express max(a, b) by using only ReLU operations.

2. Use this to implement max-pooling by means of convolutions and ReLU layers.


3. How many channels and layers do you need for a 2 × 2 convolution? How many for a
3 × 3 convolution?
4. What is the computational cost of the pooling layer? Assume that the input to the pooling
layer is of size c × h × w, the pooling window has a shape of p_h × p_w with a padding of
(p_h, p_w) and a stride of (s_h, s_w).
5. Why do you expect max-pooling and average pooling to work differently?
6. Do we need a separate minimum pooling layer? Can you replace it with another operation?
7. We could use the softmax operation for pooling. Why might it not be so popular?
Discussions

7.6 Convolutional Neural Networks (LeNet)

We now have all the ingredients required to assemble a fully-functional CNN. In our earlier
encounter with image data, we applied a linear model with softmax regression (Section 4.4)
and an MLP (Section 5.2) to pictures of clothing in the Fashion-MNIST dataset. To make
such data amenable, we first flattened each image from a 28 × 28 matrix into a fixed-length
784-dimensional vector, and thereafter processed them in fully connected layers. Now that
we have a handle on convolutional layers, we can retain the spatial structure in our images.
As an additional benefit of replacing fully connected layers with convolutional layers, we will
enjoy more parsimonious models that require far fewer parameters.
In this section, we will introduce LeNet, among the first published CNNs to capture wide
attention for its performance on computer vision tasks. The model was introduced by (and
named for) Yann LeCun, then a researcher at AT&T Bell Labs, for the purpose of recognizing
handwritten digits in images (LeCun et al., 1998). This work represented the culmination of
a decade of research developing the technology. In 1989, LeCun’s team published the first
study to successfully train CNNs via backpropagation (LeCun et al., 1989).
At the time, LeNet achieved outstanding results, matching the performance of support vector
machines, then a dominant approach in supervised learning, with an error rate of less than
1% per digit. LeNet was eventually adapted to recognize digits for processing deposits in
ATMs. To this day, some ATMs still run the code that Yann LeCun and his colleague Leon
Bottou wrote in the 1990s!

7.6.1 LeNet

At a high level, LeNet (LeNet-5) consists of two parts: (i) a convolutional encoder consisting
of two convolutional layers; and (ii) a dense block consisting of three fully connected layers.
The architecture is summarized in Fig.7.6.1.
The basic units in each convolutional block are a convolutional layer, a sigmoid activation
function, and a subsequent average pooling operation. Note that while ReLUs and max-
pooling work better, these discoveries had not yet been made at the time. Each convolutional
layer uses a 5 × 5 kernel and a sigmoid activation function. These layers map spatially ar-
ranged inputs to a number of two-dimensional feature maps, typically increasing the number
of channels. The first convolutional layer has 6 output channels, while the second has 16. Each
2 × 2 pooling operation (stride 2) reduces dimensionality by a factor of 4 via spatial down-
sampling. The convolutional block emits an output with shape given by (batch size, number
of channels, height, width).

Figure 7.6.1 Data flow in LeNet. The input is a handwritten digit, the output a probability over 10
possible outcomes.

In order to pass output from the convolutional block to the dense block, we must flatten each
example in the minibatch. In other words, we take this four-dimensional input and transform
it into the two-dimensional input expected by fully connected layers: as a reminder, the two-
dimensional representation that we desire uses the first dimension to index examples in the
minibatch and the second to give the flat vector representation of each example. LeNet’s dense
block has three fully connected layers, with 120, 84, and 10 outputs, respectively. Because we
are still performing classification, the 10-dimensional output layer corresponds to the number
of possible output classes.
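
For instance, nn.Flatten keeps the first (batch) dimension and flattens everything else; the shape below is a toy example matching what LeNet's convolutional block produces:

import torch
from torch import nn

X = torch.randn(2, 16, 5, 5)    # (batch, channels, height, width)
print(nn.Flatten()(X).shape)    # torch.Size([2, 400]): one 400-dimensional vector per example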

While getting to the point where you truly understand what is going on inside LeNet may have
taken a bit of work, hopefully the following code snippet will convince you that implementing
such models with modern deep learning frameworks is remarkably simple. We need only
to instantiate a Sequential block and chain together the appropriate layers, using Xavier
initialization as introduced in Section 5.4.2.

import torch
from torch import nn
from d2l import torch as d2l

def init_cnn(module): #@save
    """Initialize weights for CNNs."""
    if type(module) == nn.Linear or type(module) == nn.Conv2d:
        nn.init.xavier_uniform_(module.weight)

class LeNet(d2l.Classifier): #@save
    def __init__(self, lr=0.1, num_classes=10):
        super().__init__()
        self.save_hyperparameters()
        self.net = nn.Sequential(
            nn.LazyConv2d(6, kernel_size=5, padding=2), nn.Sigmoid(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.LazyConv2d(16, kernel_size=5), nn.Sigmoid(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Flatten(),
            nn.LazyLinear(120), nn.Sigmoid(),
            nn.LazyLinear(84), nn.Sigmoid(),
            nn.LazyLinear(num_classes))

We take some liberty in the reproduction of LeNet insofar as we replace the Gaussian activation
layer with a softmax layer. This greatly simplifies the implementation, not least because the
Gaussian decoder is rarely used nowadays. Other than that, this network matches the original
LeNet-5 architecture.

Let’s see what happens inside the network. By passing a single-channel (black and white)
28×28 image through the network and printing the output shape at each layer, we can inspect
the model to make sure that its operations line up with what we expect from Fig.7.6.2.

Figure 7.6.2 Compressed notation for LeNet-5.

@d2l.add_to_class(d2l.Classifier) #@save
def layer_summary(self, X_shape):
    X = torch.randn(*X_shape)
    for layer in self.net:
        X = layer(X)
        print(layer.__class__.__name__, 'output shape:\t', X.shape)

model = LeNet()
model.layer_summary((1, 1, 28, 28))

Conv2d output shape:     torch.Size([1, 6, 28, 28])
Sigmoid output shape:    torch.Size([1, 6, 28, 28])
AvgPool2d output shape:  torch.Size([1, 6, 14, 14])
Conv2d output shape:     torch.Size([1, 16, 10, 10])
Sigmoid output shape:    torch.Size([1, 16, 10, 10])
AvgPool2d output shape:  torch.Size([1, 16, 5, 5])
Flatten output shape:    torch.Size([1, 400])
Linear output shape:     torch.Size([1, 120])
Sigmoid output shape:    torch.Size([1, 120])
Linear output shape:     torch.Size([1, 84])
Sigmoid output shape:    torch.Size([1, 84])
Linear output shape:     torch.Size([1, 10])

Note that the height and width of the representation at each layer throughout the convolutional
block are reduced (compared with the previous layer). The first convolutional layer uses 2 pixels
of padding to compensate for the reduction in height and width that would otherwise result
from using a 5 × 5 kernel. As an aside, the image size of 28 × 28 pixels in the original MNIST
OCR dataset is the result of trimming 2 rows (and columns) of pixels from each side of the
original scans, which measured 32 × 32 pixels. This was done primarily to save space (roughly
a 23% reduction) at a time when megabytes mattered.

In contrast, the second convolutional layer forgoes padding, and thus the height and width are
both reduced by 4 pixels. As we go up the stack of layers, the number of channels increases
layer-over-layer from 1 in the input to 6 after the first convolutional layer and 16 after the
second convolutional layer. However, each pooling layer halves the height and width. Finally,
each fully connected layer reduces dimensionality, eventually emitting an output whose
dimension matches the number of classes.
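
As a quick sanity check of this shape arithmetic, the output size of a stride-1 convolution along one dimension is n - k + 2p + 1; the short sketch below reproduces the shapes reported by layer_summary.

def conv_out(n, k, p=0):
    """Output size along one dimension for a stride-1 convolution."""
    return n - k + 2 * p + 1

print(conv_out(28, 5, p=2))   # first convolutional layer: padding preserves the 28 x 28 size
print(conv_out(14, 5, p=0))   # second convolutional layer (after 2 x 2 pooling): 14 -> 10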

7.6.2 Training

Now that we have implemented the model, let’s run an experiment to see how the LeNet-5
model fares on Fashion-MNIST.
While CNNs have fewer parameters, they can still be more expensive to compute than sim-
ilarly deep MLPs because each parameter participates in many more multiplications. If you
have access to a GPU, this might be a good time to put it into action to speed up training.
Note that the d2l.Trainer class takes care of all details. By default, it initializes the model
parameters on the available devices. Just as with MLPs, our loss function is cross-entropy,
and we minimize it via minibatch stochastic gradient descent.

trainer = d2l.Trainer(max_epochs=10, num_gpus=1)
data = d2l.FashionMNIST(batch_size=128)
model = LeNet(lr=0.1)
model.apply_init([next(iter(data.get_dataloader(True)))[0]], init_cnn)
trainer.fit(model, data)
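
To substantiate the earlier claim that CNNs get by with fewer parameters than comparably sized MLPs, the sketch below counts LeNet's parameters after a dummy forward pass (which materializes the lazy layers) and compares them with those of an MLP using the same hidden widths of 120 and 84; this particular MLP is constructed only for the comparison.

model = LeNet()
model.net(torch.randn(1, 1, 28, 28))    # a dummy forward pass materializes the lazy layers
lenet_params = sum(p.numel() for p in model.net.parameters())

mlp = nn.Sequential(nn.Flatten(), nn.Linear(784, 120), nn.Sigmoid(),
                    nn.Linear(120, 84), nn.Sigmoid(), nn.Linear(84, 10))
mlp_params = sum(p.numel() for p in mlp.parameters())
print(lenet_params, mlp_params)   # expect roughly 61.7k vs. 105k parameters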

7.6.3 Summary

In this chapter we made significant progress. We moved from the MLPs of the 1980s to the
CNNs of the 1990s and early 2000s. The architectures proposed then, e.g., LeNet-5, remain
meaningful even to this day. It is worth comparing the error rates on Fashion-MNIST achievable
with LeNet-5 both to the very best possible with MLPs (Section 5.2) and to those achievable
with significantly more advanced architectures such as ResNet (Section 8.6). LeNet is much
more similar to the latter than to the former. One of the primary differences, as we shall see,
is that greater amounts of computation enabled significantly more complex architectures.
A second difference is the relative ease with which we were able to implement LeNet. What
used to be an engineering challenge worth months of C++ and assembly code, of engineering
to improve SN, an early Lisp-based deep learning tool (Bottou and Le Cun, 1988), and finally
of experimentation with models can now be accomplished in minutes. It is this incredible
productivity boost that has democratized deep learning model development tremendously. In
the next chapter we will go down this rabbit hole to see where it takes us.

7.6.4 Exercises

1. Let’s modernize LeNet. Implement and test the following changes:
