
Neural Style Transfer (NST)

NST is the technique of blending the style of one image into another image while keeping its content intact. Only the style of the image changes, giving it an artistic touch. The content image provides the layout or sketch, while the style image provides the painting or colors. NST is an application of computer vision that builds on image processing techniques and deep convolutional neural networks.
Neural Style Transfer deals with two input images: a content image and a style image. The technique recreates the content image in the style of the reference image, using neural networks to apply the artistic style of one image to another.
Neural style transfer opens up endless possibilities in design,
content generation, and the development of creative tools.
How does Style Transfer work?
The aim of Neural Style Transfer is to give a deep learning model the ability to separate the style representation of an image from its content representation.
NST employs a pre-trained Convolutional Neural Network with
added loss functions to transfer style from one image to another
and synthesize a newly generated image with the features we
want to add.
These two objectives are combined in a single loss formula,
where we can control how much we care about style
reconstruction and content reconstruction.
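This weighted combination can be sketched in a few lines (a minimal sketch; the `alpha` and `beta` weights and their default values here are illustrative assumptions, not fixed choices):

```python
def total_loss(content_loss: float, style_loss: float,
               alpha: float = 1.0, beta: float = 1e3) -> float:
    """Weighted sum of the two objectives; alpha and beta control how
    much we care about content vs. style reconstruction."""
    return alpha * content_loss + beta * style_loss

# With a large beta, the style term dominates the optimization.
print(total_loss(2.0, 0.001, alpha=1.0, beta=1000.0))  # 3.0
```

Raising `beta` relative to `alpha` pushes the generated image toward the style reference; raising `alpha` preserves more of the content layout.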
Here are the required inputs to the model for image style
transfer:

1. A Content Image – the image to which we want to transfer a style
2. A Style Image – the image whose style we want to transfer to the content image
3. An Input Image (generated) – the final blend of the content and style images
Neural Style Transfer basic structure

Training a style transfer model requires two networks: a pre-trained feature extractor and a transfer network.
The input image is transformed into representations that carry more information about the content of the image than about its detailed pixel values. The features obtained from the higher layers of the model can be considered more closely related to the content of the image.
To obtain a representation of the style of a reference
image, we use the correlation between different filter
responses.
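In practice the feature extractor is a pre-trained CNN such as VGG. As a self-contained sketch of the idea of collecting activations at several depths, here is a toy two-layer extractor in NumPy (the random weights stand in for pre-trained filters, and the naive convolution is for illustration only):

```python
import numpy as np

def conv_layer(x, w):
    """Naive 'valid' convolution followed by ReLU.
    x: (H, W, C_in) input volume; w: (k, k, C_in, C_out) filter bank."""
    k = w.shape[0]
    H, W = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.tensordot(x[i:i + k, j:j + k, :], w,
                                     axes=([0, 1, 2], [0, 1, 2]))
    return np.maximum(out, 0.0)  # ReLU

rng = np.random.default_rng(0)
image = rng.random((16, 16, 3))                  # toy RGB input
weights = [rng.standard_normal((3, 3, 3, 4)),    # stand-ins for pre-trained filters
           rng.standard_normal((3, 3, 4, 8))]

activations = []                                 # features collected layer by layer
x = image
for w in weights:
    x = conv_layer(x, w)
    activations.append(x)

# Deeper layers have smaller spatial extent but more channels:
print([a.shape for a in activations])            # [(14, 14, 4), (12, 12, 8)]
```

The activations collected this way are what the content and style losses below operate on; a real pipeline would read them from selected layers of a pre-trained network instead.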
Content loss
It helps to establish similarities between the content image and the generated image.
Content loss is calculated as the Euclidean distance between the intermediate higher-level feature representations of the input image (x) and the content image (p) at layer l:

L_content(p, x, l) = 1/2 · Σ_ij (F^l_ij − P^l_ij)²

where F^l and P^l are the feature representations of x and p at layer l.
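A minimal NumPy sketch of this content loss (the feature maps here are toy arrays standing in for real CNN activations):

```python
import numpy as np

def content_loss(F, P):
    """Half the squared Euclidean distance between the feature
    representations F (generated image) and P (content image) at one layer."""
    return 0.5 * np.sum((F - P) ** 2)

F = np.array([[1.0, 2.0], [3.0, 4.0]])   # toy features of the generated image
P = np.array([[1.0, 2.0], [3.0, 5.0]])   # toy features of the content image
print(content_loss(F, P))                # 0.5
```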

Style loss
Style loss is conceptually different from content loss: we cannot simply compare the intermediate features of the two images to obtain it. That is why we introduce a new term, the Gram matrix.
The Gram matrix is a way to capture the style information in an image, as it shows the overall distribution of features in a given layer. It measures the amount of correlation between the feature maps of a given layer.
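The Gram matrix can be sketched in NumPy as follows (the activation volume here is a toy array; the (height, width, channels) layout is an assumption of this sketch):

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of one layer's activations.

    features: (H, W, C) activation volume. The result is a (C, C) matrix
    whose entry (i, j) sums the products of feature maps i and j over
    all spatial positions."""
    H, W, C = features.shape
    F = features.reshape(H * W, C)   # one row per spatial position
    return F.T @ F

features = np.ones((2, 2, 3))        # 3 identical all-ones feature maps
print(gram_matrix(features))         # every entry is 4 (= H * W)
```

Because the spatial positions are summed out, the Gram matrix discards the layout of the image and keeps only which features tend to fire together, which is why it works as a style representation.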
Style loss is calculated as the distance between the Gram matrices (in other words, the style representations) of the generated image and the style reference image.
The contribution of each layer l to the style information is calculated by the formula:

E_l = 1/(4 N_l² M_l²) · Σ_ij (G^l_ij − A^l_ij)²

where G^l and A^l are the Gram matrices of the generated and style images at layer l, N_l is the number of feature maps, and M_l is the size (height × width) of each feature map.
Thus, the total style loss across all layers is expressed as:

L_style(a, x) = Σ_l w_l E_l

where the contribution of each layer to the style loss is weighted by a factor w_l.
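Both formulas can be put together in NumPy as follows (the Gram matrices, layer sizes N_l and M_l, and the layer weights w_l below are toy values chosen for illustration):

```python
import numpy as np

def layer_style_loss(G, A, N, M):
    """E_l = 1 / (4 N^2 M^2) * sum_ij (G_ij - A_ij)^2."""
    return np.sum((G - A) ** 2) / (4.0 * N ** 2 * M ** 2)

def total_style_loss(gen_grams, style_grams, weights, Ns, Ms):
    """L_style = sum_l w_l * E_l over all chosen layers."""
    return sum(w * layer_style_loss(G, A, N, M)
               for w, G, A, N, M in zip(weights, gen_grams, style_grams, Ns, Ms))

G = np.array([[2.0, 0.0], [0.0, 2.0]])   # toy Gram matrix, generated image
A = np.array([[0.0, 0.0], [0.0, 0.0]])   # toy Gram matrix, style image
# One layer with N = 2 feature maps, M = 1 position, weight w = 1:
print(total_style_loss([G], [A], [1.0], [2], [1]))  # 0.5
```

The 1/(4 N² M²) factor normalizes each layer's contribution by its size, so layers with many large feature maps do not automatically dominate the loss.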
Neural Style Transfer use cases

Proposed agricultural issues classification taxonomy


Many agricultural CNN solutions have been developed for specific agricultural issues. For the purpose of this study, a classification taxonomy tailored to CNN applications in smart farming was developed. In this section, we categorize state-of-the-art CNN applications by the agricultural issue they solve:
(a) Plant management includes solutions geared towards crop welfare and production, covering classification (species), detection (disease and pest), and prediction (yield).
(b) Livestock management addresses solutions for livestock production (prediction and quality management) and animal welfare (animal identification, species detection, and disease and pest control).
(c) Environment management addresses solutions for land and water management.
Summary

1. Neural Style Transfer blends a content image with a style reference image so that the content appears painted in the given style.
2. NST employs a pre-trained convolutional neural network for feature extraction and for separating the content and style representations of an image.
3. The NST network has two inputs, a content image and a style image. The content image is recreated as a newly generated image, which is the only trainable variable in the network.
4. The model is trained using two loss terms: content loss and style loss.
5. Content loss is calculated by measuring the difference between the higher-level intermediate-layer feature maps.
6. Style loss is measured by the degree of correlation between the responses of different filters at a given layer.
