
Muzammil Asrar

Sp21-RCE-006

Assignment_03: Give a line-by-line explanation.

A convolution layer is returned by the function conv3x3 below. Default values are set for its parameters, and the comment next to each parameter explains what it means.

The function conv1x1 likewise returns a convolution layer, with default parameter values set in the same way as in conv3x3.
• The class BasicBlock is defined, and its variables are initialized when an object of the class is created. We can pass values for the parameters; if we don't, their default values are used.

• BasicBlock is a subclass of nn.Module and inherits from its parent class via super().

• If a norm_layer is passed it is used; otherwise a default is assigned. Dilation spaces out the elements of the kernel so the convolution covers a larger receptive field without adding any parameters.

• After the dilation check, the layers are defined. They take their values from the functions conv3x3 and conv1x1 defined above: when a layer is required, we just call the function and don't need to pass all the values again.

• The forward function runs the model: it passes the input through the layers defined above and returns the model's output. ReLU is used at the end as the activation function.
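To make the dilation point above concrete, here is a small sketch (not part of the original code): a 3x3 convolution with dilation=2 samples input pixels two apart, so it covers a 5x5 receptive field while still using only nine weights per channel, and setting padding equal to dilation keeps the spatial size unchanged.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)

# Plain 3x3 convolution: padding=1 preserves the 32x32 spatial size.
plain = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)

# Dilated 3x3 convolution: dilation=2 enlarges the receptive field to 5x5;
# padding=2 (i.e. padding=dilation) again preserves the spatial size.
dilated = nn.Conv2d(3, 8, kernel_size=3, padding=2, dilation=2, bias=False)

print(plain(x).shape)    # torch.Size([1, 8, 32, 32])
print(dilated(x).shape)  # torch.Size([1, 8, 32, 32])
```

Both layers produce the same output shape; only the spacing of the sampled input pixels differs.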
def conv3x3(in_planes: int, out_planes: int, stride: int = 1, groups: int = 1,
            dilation: int = 1) -> nn.Conv2d:
    """3x3 convolution with padding"""
    # in_planes = number of channels taken as input
    # out_planes = number of channels produced as output
    # stride = stride used in this particular layer
    # groups = number of groups for a grouped convolution (1 by default)
    # dilation = spacing between kernel elements; the ResNet architecture
    #            supports both dilated and non-dilated convolutions
    return nn.Conv2d(
        in_planes,
        out_planes,
        kernel_size=3,
        stride=stride,
        padding=dilation,
        groups=groups,
        bias=False,
        dilation=dilation,
    )
This returns an nn.Conv2d layer built from all the parameters conv3x3 received, with a fixed kernel size of 3x3 and bias=False added on top.
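A quick usage sketch of conv3x3 (the helper is copied from the function above so the snippet is self-contained; the input shape is an arbitrary example): with the default stride the spatial size is preserved, and with stride=2 it is halved.

```python
import torch
import torch.nn as nn

def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
    """3x3 convolution with padding (as defined above)."""
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=dilation, groups=groups, bias=False,
                     dilation=dilation)

x = torch.randn(1, 64, 56, 56)

# Default stride=1: padding=1 keeps the 56x56 spatial size.
print(conv3x3(64, 64)(x).shape)             # torch.Size([1, 64, 56, 56])

# stride=2: spatial size is halved, channels change from 64 to 128.
print(conv3x3(64, 128, stride=2)(x).shape)  # torch.Size([1, 128, 28, 28])
```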

-----------------------------------------------------------------------------------------------------------------

def conv1x1(in_planes: int, out_planes: int, stride: int = 1) -> nn.Conv2d:
    """1x1 convolution"""
    # in_planes = number of channels taken as input
    # out_planes = number of channels produced as output
    # stride = stride used in this particular layer or block
    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride,
                     bias=False)
This returns an nn.Conv2d layer built from all the parameters conv1x1 received, with a fixed kernel size of 1x1 and bias=False added on top.
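A small illustration of what a 1x1 convolution does (the channel counts are arbitrary examples): each output pixel depends on exactly one input pixel, so it only mixes channels, needs no padding, and leaves the spatial size unchanged at stride=1.

```python
import torch
import torch.nn as nn

# A 1x1 convolution acts as a per-pixel linear projection across channels:
# here it maps 64 input channels to 256 output channels.
proj = nn.Conv2d(64, 256, kernel_size=1, stride=1, bias=False)

x = torch.randn(1, 64, 56, 56)
print(proj(x).shape)  # torch.Size([1, 256, 56, 56])
```

In ResNet this is exactly the kind of layer used to match channel counts, e.g. on the shortcut path.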
-------------------------------------------------------------------------------------------------------------------

class BasicBlock(nn.Module):  # construct a class that inherits from nn.Module
    expansion: int = 1  # the expansion ratio is 1 throughout this class

    def __init__(
        self,
        inplanes: int,
        planes: int,
        stride: int = 1,
        downsample: Optional[nn.Module] = None,
        groups: int = 1,
        base_width: int = 64,
        dilation: int = 1,
        norm_layer: Optional[Callable[..., nn.Module]] = None,
    ) -> None:
        # The constructor takes multiple parameters; they are all initialized
        # when an object of the class is created.
        super().__init__()
        if norm_layer is None:  # if no normalization layer is passed in
            norm_layer = nn.BatchNorm2d  # use BatchNorm2d from the nn module
        if groups != 1 or base_width != 64:
            # BasicBlock only supports a single group with a base width of 64
            raise ValueError("BasicBlock only supports groups=1 and base_width=64")
        if dilation > 1:
            # dilation greater than 1 is not supported in BasicBlock
            raise NotImplementedError("Dilation > 1 not supported in BasicBlock")
        # Both self.conv1 and self.downsample layers downsample the input
        # when stride != 1
        self.conv1 = conv3x3(inplanes, planes, stride)  # call conv3x3 to create the layer
        self.bn1 = norm_layer(planes)  # create a batch-norm layer
        self.relu = nn.ReLU(inplace=True)  # ReLU activation
        self.conv2 = conv3x3(planes, planes)  # call conv3x3 to create the layer
        self.bn2 = norm_layer(planes)  # create a batch-norm layer
        self.downsample = downsample  # module used to downsample the shortcut
        self.stride = stride  # stride at this point of the network

    def forward(self, x: Tensor) -> Tensor:
        identity = x  # keep the input for the shortcut connection
        out = self.conv1(x)  # forward pass through the layers
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.downsample is not None:
            identity = self.downsample(x)
        out += identity  # residual (skip) connection
        out = self.relu(out)
        return out
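To see the forward pass and the downsample logic end to end, here is a condensed, runnable version of the class above (norm layer fixed to BatchNorm2d and the unsupported groups/base_width/dilation options dropped; the input shapes are arbitrary examples):

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Condensed BasicBlock mirroring the class above."""
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super().__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, 3, stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        if self.downsample is not None:
            # reshape the shortcut so it can be added to out
            identity = self.downsample(x)
        out += identity
        return self.relu(out)

x = torch.randn(1, 64, 56, 56)

# Identity case: same channels, stride 1, no downsample module needed.
y1 = BasicBlock(64, 64)(x)
print(y1.shape)  # torch.Size([1, 64, 56, 56])

# Stride-2 case: conv1 halves the spatial size and changes the channel
# count, so the shortcut must be downsampled to match before the addition.
down = nn.Sequential(nn.Conv2d(64, 128, 1, stride=2, bias=False),
                     nn.BatchNorm2d(128))
y2 = BasicBlock(64, 128, stride=2, downsample=down)(x)
print(y2.shape)  # torch.Size([1, 128, 28, 28])
```

Without the downsample module in the second case, `out += identity` would fail with a shape mismatch, which is why the comment in the original code says conv1 and downsample both downsample the input when stride != 1.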
