
Project Report

On

DOOGLE
Submitted to:

Panjab University, Chandigarh

In partial fulfillment of the requirements for the degree of

Bachelor of Computer Applications (B.C.A.)


(Session – 2018-2019)

Under the Supervision of:
Mrs. Gunjan
Deptt. of Computer Sc.

Submitted by:
Aman Kumar Singh - 16091067
Rakshit Kumar - 16091155
B.C.A. 6th Semester

DAV COLLEGE, SECTOR 10, CHANDIGARH.


DAV COLLEGE, SECTOR 10, CHANDIGARH

CERTIFICATE

This is to certify that Mr. Aman Kumar Singh and Mr. Rakshit Kumar, Class Roll Nos. 8061 and 8071, bonafide students of B.C.A. 6th Sem being run by DAV College, Chandigarh, of batch 2018-2019, have completed the project entitled “DOOGLE” under my supervision & guidance. It is further certified that the work done in this project is a result of the candidates’ own efforts.
I wish them all success in their lives.

Date: Mrs. Gunjan


Asstt. Prof.
Deptt. of Comp. Sc.
ACKNOWLEDGEMENT

I express my heartfelt indebtedness and owe a deep sense of gratitude to my project guide, Mrs. Gunjan, for her sincere guidance and inspiration in completing this project, without whom it could not have been possible. She helped me over a variety of hurdles with implicit patience throughout my project, and her deep involvement and interest in the project infused in me great inspiration and confidence in taking the study in the right direction.

I am highly obliged to take this opportunity to sincerely thank our HOD, Mrs. Meenakshi Bhardwaj, for providing us this opportunity to develop our own project, and for continually challenging us to improve, refine and extend our thinking. I also acknowledge our Principal, Dr. Pawan Kumar Sharma, for giving this opportunity to our BCA programme and project, and for providing us all the facilities that were required.

In the present world of competition there is a race for existence, in which only those who have the will to come forward succeed. A project is like a bridge between theoretical and practical working. I would like to thank the supreme power, the almighty God, who has always guided me to work on the right path of life. Next to Him are my parents, to whom I am greatly indebted for bringing me up with love and encouragement to this stage.
INTRODUCTION

Doogle’s goal is to build a model capable of classifying a dog’s breed by just “looking” at its image. We started thinking about possible approaches to building such a model and what accuracy it might achieve. It appears that with modern machine learning frameworks like PyTorch, and with publicly available datasets and pre-trained models for image recognition, the task can be solved with pretty good accuracy without too much effort, time or resources.
Who's a good dog? Who likes ear scratches? Well, it seems those fancy deep neural networks don't have all the answers. However, maybe they can answer that ubiquitous question we all ask when meeting a four-legged stranger: what kind of good pup is that?
How well can you tell your Norfolk Terriers from your Norwich Terriers? With 120 breeds of dogs and a limited number of training images per class, you might find the problem more, err, ruff than you anticipated.

The dataset we use provides images of 133 different dog breeds. At the end of this project, our code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog’s breed; if no dog is found, it predicts which object it found in the image. It is important to mention that the task of assigning a breed to a dog from an image is considered exceptionally challenging. To see why, consider that even a human would have trouble distinguishing between a Brittany and a Welsh Springer Spaniel. Likewise, recall that Labradors come in yellow, chocolate, and black. It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).

How well can you tell your Norfolk Terriers from your Norwich Terriers? It is hard to identify similar dog breeds which look the same. To see why, consider that even a human would have trouble distinguishing between a Curly-Coated Retriever and an American Water Spaniel, and it is not difficult to find other pairs with minimal inter-class variation, like the German Shepherd and the Belgian Shepherd.

Also, searching on different web search engines may give a mixed and not very useful response, which is often just the first matching result rather than a complete and correct answer.

Many people in their day-to-day life see dogs which they think they know, without any idea that their assumptions are very wrong. It is very hard to spot a particular breed of dog in a heap of dogs, as most of them seem similar, with a black nose and a furry tail!
We are going to take our first step towards building a Shazam-like application; though, our goal is not to detect songs but to detect dog breeds. The dataset provides images of 133 different dog breeds. At the end of this project, our code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog’s breed. If a human is detected, it will provide an estimate of the dog breed the person most resembles.
This application is a dog-breed classifier. It takes an image as input and detects whether it is an image of either a human or a dog; if it is either of those, it finds the dog-breed classification that the subject of the image most resembles. If it is neither a human nor a dog, it emits an error message. To do this we are going to try two libraries each for the human face detectors and dog detectors, and we are also going to try three neural networks to classify the dog breeds.
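
In outline, the flow just described can be sketched in Python as below. This is only an illustrative sketch: detect_dog, detect_human and classify_breed are placeholder callables standing in for the detectors and networks discussed later in this report, not functions from the project code.

def identify(image, detect_dog, detect_human, classify_breed):
    # Route an input image through the detectors described above.
    if detect_dog(image):
        return classify_breed(image)   # estimate of the dog's breed
    if detect_human(image):
        return classify_breed(image)   # breed the person most resembles
    raise ValueError("Neither a dog nor a human was detected in the image.")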
OBJECTIVE

The app can identify 133 breeds of dog. If the dog’s breed is unknown, the app will show a percentage match with the closest breed.
The main objective of the project is to understand (learn) deep learning, the use of pretrained models, fine-tuning of models, and deploying a model to the web using Flask.
The aim of this project is to provide users a web app which they can use to identify the breed of a dog, and to provide information about that breed so that they can know about it.
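
The percentage for the closest breed can be obtained from the softmax of the classifier’s outputs. A minimal sketch (illustrative only; the function name and tensor shape are assumptions, not part of the project code):

import torch
import torch.nn.functional as F

def closest_breed(logits: torch.Tensor):
    # logits: raw 1 x 133 classifier outputs for one image
    probs = F.softmax(logits, dim=1)        # normalize logits to probabilities
    conf, idx = torch.max(probs, dim=1)     # most probable breed and its score
    return idx.item(), 100.0 * conf.item()  # class index, confidence in percent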
REQUIREMENT ANALYSIS

PROBLEM ANALYSIS

We are working on a very subjective topic: dog breed identification using artificial intelligence.
This project will help those who are unfamiliar with all the dog breeds they meet every day on the streets and in offices, parks and schools.
Looking at a bunch of dogs and not being able to distinguish between familiar-looking dogs always leaves us searching for the breed, its background, its features and its history. This project is designed to solve such issues and save our efforts.
Our main idea behind this project is to make an AI-based, efficient, web-based application, so that it is available on all devices without demanding hardware specifications.
A few problems our project solves are:
• Distinguishing between similar-looking breeds of dogs.
• Spreading awareness about everyday but unfamiliar dog breeds.

INTRODUCTION TO SRS

A software requirement specification (SRS) is a document that describes the nature of a project, software or application. In simple words, an SRS document is a manual for a project, provided it is prepared before you kick-start the project/application. This document is also known as an SRS report or software document. A software document is primarily prepared for a project’s software or any kind of application.
General Description Of Project

Functions:
Unlike regular image searching, this project has easier and more efficient functions and features, such as:
• A simple and easy-to-understand user interface.
• An artificial-intelligence-based approach to minimize human effort.
• Can be connected to a specified database or directly to a network search.
• Can directly access a freshly clicked picture of a dog.
• Can also take an image stored on the device to search.
• A dynamic and graphical user interface for a better experience.

Characteristics
The main characteristics of this project are:
• Quick search for a dog’s breed.
• Easy and user-friendly interface for a better experience.
• Saves previously and frequently searched dog breeds.
• Gives fairly accurate results for device-uploaded images.
• Artificial intelligence makes the user experience more effortless.

Constraints
Nothing is 100% perfect, and neither is this project:
• No specified database; dependent on the network.
• No feature to access it without a network.
• No guaranteed authenticity of the result.
• Limited dog breed dataset.
Assumptions
• The image being uploaded is of a dog.
• A clear and identifiable image is being uploaded.
• The device used fully supports image upload.
• The network connection is good.

CERTAIN SPECIFIC REQUIREMENTS

System Requirements

• Hardware Requirements
Considering this project, strictly defining hardware would not be appropriate. It essentially requires only a system running Google Chrome (preferred) or any other browser, as this project is web-based software and all of the processing is done on the server.

• Software Requirements
As this is a packaged piece of software, the only software requirement is a working operating system, as follows:
• Windows XP with Service Pack 2 or later.
• Linux, 2010 onwards.
• Mac: OS X version 10.5 or later.

Development Environment Requirements


Major requirements are:
• Code editor (VS Code, Jupyter Notebook)
• Python 3.6 or later with Flask
• PyTorch 1.0
• Google Chrome (preferred) / any browser
Functional Requirements
• Adding images:
• The user can add images of dogs either directly by capturing them through the camera or by uploading them from the device.
• This is done by clicking on the “browse” button.
• Then by selecting and adding the chosen image.
• Searching: searching is done directly after the image is uploaded.

FEASIBILITY ANALYSIS

Economic Feasibility
The frameworks used in this software require no additional plugins, and hence no extra packages are required to run it. All the frameworks and development editors used are free and open source.

Technical Feasibility
Being web-based software gives this software very little chance of technical difficulty, and makes it compatible with all operating systems.

Social Feasibility
This software scores well on social feasibility, as dogs are among the most human-friendly animals, and hence most people will find it enjoyable and useful to use.

SOFTWARE DESIGN

SYSTEM DESIGN
Systems design is the process of defining the architecture, modules, interfaces, and
data for a system to satisfy specified requirements. Systems design could be seen as the
application of systems theory to product development. There is some overlap with the
disciplines of systems analysis, systems architecture and systems engineering.

Architectural Design
Architectural Design refers to the high level structures of a software system and the
discipline of creating such structures and systems. Each structure comprises software
elements, relations among them, and properties of both elements and relations. It functions as
a blueprint for the system and the developing project, laying out the tasks necessary to be
executed by the design teams.

DATA FLOW DIAGRAM
[Diagram showing the data flow between USER and SERVER]

Interface Design

User interface design (UI) or user interface engineering is the design of user interfaces
for machines and software, such as computers, home appliances, mobile devices, and other
electronic devices, with the focus on maximizing usability and the user experience. The goal
of user interface design is to make the user's interaction as simple and efficient as possible, in
terms of accomplishing user goals (user-centered design).
Front page of the app, which takes input from the user in the form of an image:

Result page, which shows the predicted breed:


CODING
APPROACH USED

The approach we used is top-down. We know that a system is composed of more than one sub-system and contains a number of components. Further, these sub-systems and components may have their own sets of sub-systems and components, creating a hierarchical structure in the system.
Top-down design takes the whole software system as one entity and then decomposes it into more than one sub-system or component based on some characteristics. Each sub-system or component is then treated as a system and decomposed further. This process keeps running until the lowest level of the system in the top-down hierarchy is reached.
Top-down design starts with a generalized model of the system and keeps defining more specific parts of it. When all components are composed, the whole system comes into existence.
Top-down design is more suitable when the software solution needs to be designed from scratch and specific details are unknown.
File structure:

Classes/
    vgg.txt
    breed.txt
Models/
    vgg.pth
    model.pt
Static/
    Css/
        Fonts/
        doggo.jpg
        styles.css
    Js/
        particles.min.js
        result.js
        script.js
Templates/
    index.html
    result.html
App.py
Main.py
Train.ipynb
Classes
The Classes folder contains the text files breed.txt and vgg.txt. The breed file contains the names of the 133 dog breeds, and vgg.txt contains the names of the 1000 classes of the ImageNet dataset. These text files are used in the main.py file to get the name of the dog breed, or of any other object, detected in the image.
Models
This folder contains two files, model.pt and vgg.pth, which are the weight files of the ResNet and VGG models used in main.py.

Static
The Static folder contains two folders, css and js, which hold the CSS and JavaScript for the frontend. As we used Flask, a separate static folder is needed in which the CSS and JS can be placed so that Flask can load them. The js folder contains result.js, which is used in result.html; particles.min.js, which is used to draw the moving particles in the background of index.html; and script.js, which is used in index.html.
The CSS used to give the app its design:
* {
padding: 0px;
margin: 0px;
}

@font-face {
font-family: 'Maxwell';
src: url('./fonts/MAXWELL\ BOLD.ttf') format('truetype');
}

@font-face {
font-family: 'Product Sans';
src: url('./fonts/Product\ Sans\ Regular.ttf') format('truetype');
}

@font-face {
font-family: 'Product Sans Bold';
src: url('./fonts/Product\ Sans\ Bold.ttf') format('truetype');
}
#particles-js {
height: 100%;
width: 100%;
position: fixed;
}

#container {
width: 100vw;
height: 100vh;
display: flex;
justify-content: center;
align-items: center;
background: url('./doggos.jpg');
background-size: cover;
}

#wrapper {
display: flex;
flex-direction: column;
align-items: center;
}
@-webkit-keyframes hue {
from {
-webkit-filter: hue-rotate(0deg) drop-shadow(5px 5px 4px rgba(0, 0, 0, 0.4));
}
to {
-webkit-filter: hue-rotate(-360deg) drop-shadow(5px 5px 4px rgba(0, 0, 0, 0.4));
}
}

@-webkit-keyframes hue2 {
from {
-webkit-filter: hue-rotate(-360deg) drop-shadow(5px 5px 4px rgba(0, 0, 0, 0.4));
}
to {
-webkit-filter: hue-rotate(0deg) drop-shadow(5px 5px 4px rgba(0, 0, 0, 0.4));
}
}

#mainHeading, #secHeading, #terHeading {


font-family: 'Product Sans Bold';
font-size: 120px;
background-image: -webkit-linear-gradient(120deg, #f35626, #feab3a);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
-webkit-animation: hue 15s infinite linear;
margin-top: -60px;
}

#terHeading {
-webkit-animation: hue2 15s infinite linear;
}

#secHeading {
font-size: 35px;
padding: 10px;
}

#terHeading {
font-family: 'Product Sans';
font-size: 18px;
margin: -20px -270px 0px 0px;
}

form {
display: flex;
flex-direction: column;
align-items: center;
}

.input-container {
margin-top: 60px;
max-width: 300px;
background-color: #EDEDED;
border: 2px solid #161616;
border-radius: 50px;
padding-left: 20px;
}

input[type='file'] {
display: none;
}

.file-info {
font-family: 'Product Sans';
font-size: 20px;
}

.browse-btn {
color: #fff;
background-color: #161616;
padding: 8px 10px 8px 10px;
border: none;
border-radius: 50px;
font-family: 'Product Sans';
font-size: 16px;
transform: scale(1.2) translateX(10px);
cursor: pointer;
transition: 0.2s;
border: 1px solid #fff;
}

.browse-btn:hover {
color: #161616;
background: #fff;
}

#submitBtn {
border-style: none;
border: 1px solid;
border-radius: 20px;
margin-top: 26px;
padding: 10px 12px 10px 12px;
font-family: 'Product Sans';
font-size: 16px;
background-color: transparent;
color: #fff;
transition: ease all 0.2s;
cursor: pointer;
outline: none;
z-index: 2;
}

#submitBtn:hover {
color: #161616;
background-color: #fff;
}

/* Results Page */
#resultContainer {
width: 100vw;
height: 100vh;
display: flex;
justify-content: center;
align-items: center;
background: url(./doggos.jpg);
background-size: cover;
}

#resultWrapper {
width: 100%;
height: 100%;
display: flex;
align-items: center;
justify-content: center;
flex-direction: column;
}

#processingIndicator {
background-color: rgba(0, 0, 0, 0.5);
font-size: 15px;
font-family: 'Product Sans Bold';
text-align: center;
width: 100%;
position: fixed;
bottom: 0px;
color: rgba(240, 248, 255, 0.397);
box-sizing: border-box;
padding: 6px;
}

#dogCard {
background-color: white;
padding: 10px;
border-radius: 20px;
display: flex;
flex-direction: column;
align-items: center;
box-shadow: 0px 10px 20px 0px rgba(0, 0, 0, 0.4);
}

#dogImage {
width: 200px;
height: 200px;
background-size: cover;
border-radius: 14px;
border: 2px solid #161616;
}
/* #dogImageWrapper {
height: 240px;
width: fit-content;
}

#dogImageWrapper img {
width: auto;
height: 100%;
border-radius: 14px;
border: 2px solid #161616;
} */

#dogParticles {
width: inherit;
height: inherit;
border-radius: inherit;
position: absolute;
z-index: 2;
}

#dogBreed {
margin: 14px 0px 0px 0px;
font-family: Maxwell;
color: #161616;
font-size: 20px;
text-align: center;
}

#dogInfoHeading {
font-family: 'Product Sans Bold';
font-size: 20px;
margin-bottom: 10px;
background-image: -webkit-linear-gradient(120deg, #f35626, #feab3a);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
-webkit-animation: hue 15s infinite linear;
width: 100%;
text-align: center;
filter: drop-shadow(5px 5px 4px rgba(0, 0, 0, 0.4));
}
particles.min.js is used to draw the moving lines in the background of the front page of the app, which gives it an AI effect. This is open-source JavaScript by Vincent Garreau (vincentgarreau.com) and can be found on GitHub: github.com/VincentGarreau/particles.js
[Minified source of particles.min.js omitted; the full, readable library is available at github.com/VincentGarreau/particles.js.]

result.js: this JavaScript is used to align and give effects to result.html, and script.js is used in both index.html and result.html.
var partJson = {
"particles": {
"number": {
"value": 300,
"density": {
"enable": true,
"value_area": 800
}
},
"color": {
"value": "#ffffff"
},
"shape": {
"type": "circle",
"stroke": {
"width": 0,
"color": "#000000"
},
"polygon": {
"nb_sides": 5
},
"image": {
"src": "img/github.svg",
"width": 100,
"height": 100
}
},
"opacity": {
"value": 1,
"random": true,
"anim": {
"enable": true,
"speed": 3.5,
"opacity_min": 0,
"sync": false
}
},
"size": {
"value": 3,
"random": true,
"anim": {
"enable": false,
"speed": 4,
"size_min": 0.3,
"sync": false
}
},
"line_linked": {
"enable": false,
"distance": 150,
"color": "#ffffff",
"opacity": 0.4,
"width": 1
},
"move": {
"enable": true,
"speed": 0.2,
"direction": "none",
"random": true,
"straight": false,
"out_mode": "out",
"bounce": false,
"attract": {
"enable": false,
"rotateX": 0,
"rotateY": 0
}
}
},
"interactivity": {
"detect_on": "canvas",
"events": {
"onhover": {
"enable": false,
"mode": "bubble"
},
"onclick": {
"enable": false,
"mode": "repulse"
},
"resize": true
},
"modes": {
"grab": {
"distance": 400,
"line_linked": {
"opacity": 1
}
},
"bubble": {
"distance": 250,
"size": 0,
"duration": 2,
"opacity": 0,
"speed": 3
},
"repulse": {
"distance": 400,
"duration": 0.4
},
"push": {
"particles_nb": 4
},
"remove": {
"particles_nb": 2
}
}
},
"retina_detect": true
}
var jsonUri = "data:text/plain;base64,"+window.btoa(JSON.stringify(partJson));
particlesJS.load('dogParticles', jsonUri);

script.js

var partJson = {
"particles": {
"number": {
"value": 60,
"density": {
"enable": true,
"value_area": 1000
}
},

"color": {
"value": "#ffffff"
},
"shape": {
"type": "circle",
"stroke": {
"width": 0,
"color": "#000000"
},
"polygon": {
"nb_sides": 5
},
"image": {
"src": "img/github.svg",
"width": 100,
"height": 100
}
},

"opacity": {
"value": 0.5,
"random": false,
"anim": {
"enable": false,
"speed": 1,
"opacity_min": 0.1,
"sync": false
}
},

"size": {
"value": 3,
"random": true,
"anim": {
"enable": false,
"speed": 40,
"size_min": 0.1,
"sync": false
}
},

"line_linked": {
"enable": true,
"distance": 150,
"color": "#ffffff",
"opacity": 0.4,
"width": 1
},

"move": {
"enable": true,
"speed": 6,
"direction": "none",
"random": false,
"straight": false,
"out_mode": "out",
"bounce": false,
"attract": {
"enable": false,
"rotateX": 600,
"rotateY": 1200
}
}
},

"interactivity": {
"detect_on": "canvas",
"events": {
"onhover": {
"enable": true,
"mode": "grab"
},

"onclick": {
"enable": true,
"mode": "push"
},
"resize": true
},

"modes": {
"grab": {
"distance": 170,
"line_linked": {
"opacity": 1
}
},

"bubble": {
"distance": 400,
"size": 40,
"duration": 2,
"opacity": 8,
"speed": 3
},
"repulse": {
"distance": 200,
"duration": 0.4
},

"push": {
"particles_nb": 4
},
"remove": {
"particles_nb": 2
}
}
},
"retina_detect": true
}

var jsonUri = "data:text/plain;base64,"+window.btoa(JSON.stringify(partJson));


particlesJS.load('particles-js', jsonUri);

const uploadButton = document.querySelector('.browse-btn');


const fileInfo = document.querySelector('.file-info');
const realInput = document.getElementById('real-input');

uploadButton.addEventListener('click', (e) => {


e.preventDefault();
realInput.click();
});

realInput.addEventListener('change', () => {
const name = realInput.value.split(/\\|\//).pop();
const truncated = name.length > 20
? name.substr(name.length - 20)
: name;

fileInfo.innerHTML = truncated;
});

Templates
This folder contains index.html and result.html, which are rendered by app.py. The front page is index.html, and result.html shows the result after processing the image. In the index file, a form is used to get the image input from the user.
index.html
<!doctype html>
<html>
<head>
<title>doogle.</title>
<link rel="stylesheet" href="{{ url_for('static',filename='css/styles.css') }}">
<script src="static/js/particles.min.js"></script>
</head>
<body>
<div id="container">
<div id="particles-js"></div>
<div id="wrapper">
<h1 id="secHeading">U・ᴥ・U</h1>
<h1 id="mainHeading">doogle.</h1>
<h1 id="terHeading">AI POWERED</h1>
<form method="post" enctype="multipart/form-data">
<div class="input-container">
<input type="file" name="file" id="real-input">
<span class="file-info">upload any dog image</span>
<button class="browse-btn">
browse
</button>
</div>
<input id="submitBtn" type="submit" value="what's the breed ?">
</form>
</div>
</div>
<script src="static/js/script.js"></script>
</body>
</html>

result.html
<!doctype html>
<html>
<head>
<title>doogle.</title>
<link rel="stylesheet" href="{{ url_for('static',filename='css/styles.css') }}">
<script src="{{ url_for('static',filename='js/particles.min.js') }}"></script>
</head>
<body>
<div id="resultContainer">
<div id="resultWrapper">
<div id="dogCard">
<div id="dogImage" style="background-image: url(static/css/doggo.jpg);">
<div id="dogParticles"></div>
</div>
<p id="dogBreed">{{breed_name}}</p>
</div>
<div id="dogInfo">
<p id="dogInfoHeading">More about {{breed_name}}</p>
{{info}}
</div>
<p id="processingIndicator">CPU | it might take some time to process
image.</p>
</div>
</div>
<script src="{{ url_for('static',filename='js/result.js') }}"></script>
</body>
</html>

App.py
In this file the code of the Flask app is written. Here index.html (the front page) and result.html (the result page) are rendered, and the functions from the main.py file which are responsible for detecting the dog in the image are called.

from flask import Flask, render_template, request

app = Flask(__name__)

from main import *


@app.route('/', methods=['GET', 'POST'])
def hello():
    if request.method == 'GET':
        return render_template('index.html')

    if request.method == 'POST':
        file = request.files['file']
        image = file.read()
        pas = breed(path=image)
        inf = wiki(pas)
        chip = device

        return render_template('result.html', breed_name=pas, info=inf, device=chip)


if __name__ == '__main__':
    app.run(debug=True)
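
With the files laid out as in the tree above, the app is started by running python App.py from the project root; app.run(debug=True) launches Flask's development server, which by default serves the app at http://127.0.0.1:5000/.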
Main.py
This file contains the code for loading the models; the functions to transform the image, to check whether a dog is detected in the image or not (and, if detected, what its breed is), to get information about the breed from Wikipedia, and to pass the image to the models to get the name of the breed. The functions in this file are called by app.py to perform these tasks.
The two deep convolutional neural networks used here are VGG19 and ResNet152.
VGG19 is pretrained on ImageNet and is responsible for detecting whether there is a dog in the image or not, and, if it is not a dog, what kind of thing/animal it is.

During training, the input to our ConvNets is a fixed-size 224×224 RGB image. The only pre-processing we do is subtracting the mean RGB value, computed on the training set, from each pixel. The image is passed through a stack of convolutional (conv.) layers, where we use filters with a very small receptive field: 3×3 (which is the smallest size to capture the notion of left/right, up/down, center). In one of the configurations we also utilise 1×1 convolution filters, which can be seen as a linear transformation of the input channels (followed by non-linearity). The convolution stride is fixed to 1 pixel; the spatial padding of conv. layer input is such that the spatial resolution is preserved after convolution, i.e. the padding is 1 pixel for 3×3 conv. layers. Spatial pooling is carried out by five max-pooling layers, which follow some of the conv. layers (not all the conv. layers are followed by max-pooling). Max-pooling is performed over a 2×2 pixel window, with stride 2. A stack of convolutional layers (which has a different depth in different architectures) is followed by three Fully-Connected (FC) layers: the first two have 4096 channels each, the third performs 1000-way ILSVRC classification and thus contains 1000 channels (one for each class). The final layer is the soft-max layer. The configuration of the fully connected layers is the same in all networks. All hidden layers are equipped with the rectification (ReLU (Krizhevsky et al., 2012)) non-linearity. We note that none of our networks (except for one) contain Local Response Normalisation (LRN) (Krizhevsky et al., 2012): as will be shown in Sect. 4, such normalisation does not improve the performance on the ILSVRC dataset, but leads to increased memory consumption and computation time. Where applicable, the parameters for the LRN layer are those of (Krizhevsky et al., 2012).
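
As an illustration of the pattern described above (3×3 convolutions with stride 1 and padding 1 that preserve spatial resolution, followed by 2×2 max-pooling with stride 2), here is a minimal PyTorch sketch of one VGG-style stage; the channel sizes are illustrative, not taken from the project:

import torch.nn as nn

# One VGG-style stage: two 3x3 convs (stride 1, padding 1 keep the spatial
# size), each followed by ReLU; the 2x2 max-pool then halves the resolution.
stage = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),
)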
The second model is ResNet152, which is fine-tuned (FC layer with 133 outputs) and trained on the dataset of 133 dog breeds.

Based on the plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn. (1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) the shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions, which introduces no extra parameter; (B) the projection shortcut in Eqn. (2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [41]. A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [13] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60×10⁴ iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [14], following the practice in [16]. In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully-convolutional form as in [41, 13], and average the scores at multiple scales (images are resized such that the shorter side is in {224, 256, 384, 480, 640}).
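
The training recipe quoted above maps directly onto PyTorch. Before returning to the project's own main.py listing below, here is a hedged sketch of the optimizer and schedule described (this is not the notebook's actual training code, and model stands for any network being trained):

import torch.optim as optim

# SGD with the hyper-parameters quoted above: lr 0.1, momentum 0.9,
# weight decay 1e-4; the learning rate is divided by 10 when the
# validation error plateaus.
optimizer = optim.SGD(model.parameters(), lr=0.1,
                      momentum=0.9, weight_decay=1e-4)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                                 factor=0.1, patience=10)
# after each validation pass: scheduler.step(val_loss)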

import os
import io
import torch
import torch.nn.functional as F
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as transforms
import numpy
from PIL import Image
import wikipedia

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)


# transforms and returns the image
def load_image(path):
    transform = transforms.Compose([
        transforms.Resize(size=(244, 244)),
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))])

    image = Image.open(io.BytesIO(path))
    image = transform(image)[:3, :, :].unsqueeze(0)
    return image


# loading the pretrained vgg19 model
model = models.vgg19(pretrained=False)
model.load_state_dict(torch.load('Models/vgg.pth', map_location=device))
model.to(device)
model.eval()

# loading resnet152 finetuned and trained on the dog breed dataset
resnet = models.resnet152(pretrained=False)
ftrs = resnet.fc.in_features  # gives input dimensions of the fully connected layer
resnet.fc = nn.Linear(ftrs, 133)
resnet.load_state_dict(torch.load('Models/model.pt', map_location=device))
resnet.to(device)
resnet.eval()  # eval mode for inference


# returns whether a dog is detected or not
def vgg(path):
    '''
    vgg19 is trained on imagenet containing 1000 classes,
    of which class nos. 151 to 277 represent the dogs (including wild)
    '''
    output = model(path)
    return torch.max(output, 1)[1].item()


# returns the predicted breed
def res(path):
    output = resnet(path)
    return torch.max(output, 1)[1].item()


# reads the class name from the vgg classes if the image is not a dog
def class_name_vgg(idx):
    file = open('classes/vgg.txt', 'r')
    lines = file.read().split('\n')
    lines = [x for x in lines]
    return lines[idx]


# returns the breed name from the text file
def breed_name(idx):
    file = open('classes/breed.txt', 'r')
    lines = file.read().split('\n')
    lines = [x for x in lines]
    return lines[idx]


# passes the image to the trained model and predicts the breed
def breed(path):
    in_img = load_image(path)
    a = vgg(in_img)

    if a >= 151 and a <= 280:
        class_no = res(in_img)
        name = breed_name(class_no)
    else:
        class_no = class_name_vgg(a)
        name = class_no  # returns the class from vgg to show what is in the image
    return name


# returns information from wikipedia
def wiki(info):
    return wikipedia.summary(info)
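
For reference, the functions above can be exercised without Flask. A minimal sketch (the file name sample_dog.jpg is hypothetical):

# read an image file as raw bytes, the same form Flask's file.read() provides
with open('sample_dog.jpg', 'rb') as f:
    img_bytes = f.read()

name = breed(img_bytes)  # breed name, or the VGG class if no dog is found
print(name)
print(wiki(name))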

Train.ipynb
This file is made in Jupyter Notebook. In it, the model is trained on the dataset of 133 dog breeds. The model was trained in Google Colaboratory.

import os
import numpy as np
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

import torch
import torch.nn.functional as F
import torch.nn as nn
import torch.optim as optim

import torchvision.models as models


from torchvision import datasets
import torchvision.transforms as transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


print(device)

OUTPUT:- cuda
# mounting google drive to colab
from google.colab import drive
drive.mount('/content/gdrive')

OUTPUT:- Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code

Enter your authorization code:
··········
Mounted at /content/gdrive

cd gdrive/My Drive

# transforms for the images of the train, validation and test datasets
transform = {'train': transforms.Compose([transforms.Resize((244, 244)),
                                          transforms.ToTensor(),
                                          transforms.Normalize((0.485, 0.456, 0.406),
                                                               (0.229, 0.224, 0.225))]),

             'valid': transforms.Compose([transforms.Resize((244, 244)),
                                          transforms.ToTensor(),
                                          transforms.Normalize((0.485, 0.456, 0.406),
                                                               (0.229, 0.224, 0.225))]),

             'test': transforms.Compose([transforms.Resize((244, 244)),
                                         transforms.ToTensor(),
                                         transforms.Normalize((0.485, 0.456, 0.406),
                                                              (0.229, 0.224, 0.225))])
             }
bs = 10
num_epochs = 22

# pytorch's ImageFolder to load the data
data = 'dogImages/'

train_data = datasets.ImageFolder(os.path.join(data, 'train/'), transform=transform['train'])
val_data = datasets.ImageFolder(os.path.join(data, 'valid/'), transform=transform['valid'])
test_data = datasets.ImageFolder(os.path.join(data, 'test/'), transform=transform['test'])

train_loader = torch.utils.data.DataLoader(train_data, batch_size=bs, num_workers=2, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_data, batch_size=bs, num_workers=2, shuffle=False)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=bs, num_workers=2, shuffle=False)

# save breed names in a text file
class_names = [item[4:].replace("_", " ") for item in train_loader.dataset.classes]

f = open("new.txt", "w")
for i in class_names:
    f.write(str(i) + "\n")

f.close()

# finetuning pretrained resnet152 model
model = models.resnet152(pretrained=True)

ftrs = model.fc.in_features  # gives input dimensions of the fully connected layer

model.fc = nn.Linear(ftrs, 133)  # redesigning the fully connected layer with 133 output nodes

model = model.to(device)
print(model)

ResNet(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): Bottleneck(
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(downsample): Sequential(
(0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): Bottleneck(
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)

(layer2): Sequential(
(0): Bottleneck(
(conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)

(4): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(5): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(6): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(7): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
(layer3): Sequential(
(0): Bottleneck(
(conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(downsample): Sequential(
(0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)

(1): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(4): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(5): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(6): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(7): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(8): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(9): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(10): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(11): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(12): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(13): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(14): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(15): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(16): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(17): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(18): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(19): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(20): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(21): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(22): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(23): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(24): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(25): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(26): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(27): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(28): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(29): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(30): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(31): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(32): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(33): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(34): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(35): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
(layer4): Sequential(
(0): Bottleneck(
(conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(downsample): Sequential(
(0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): Bottleneck(
(conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
(avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
(fc): Linear(in_features=2048, out_features=133, bias=True)
)
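The module tree printed above matches torchvision's pre-trained ResNet-152 (layer3 alone contains 36 bottleneck blocks), with the final fully connected layer replaced so that the 2048 extracted features map to the 133 breed classes. The sketch below shows a minimal way to set up such a model for transfer learning; whether the pre-trained layers were kept frozen during training is an assumption, not something stated at this point in the report.

import torch.nn as nn
from torchvision import models

model = models.resnet152(pretrained=True)  # load ImageNet weights

# freeze the pre-trained feature extractor (assumption: standard
# transfer-learning setup where only the new head is trained)
for param in model.parameters():
    param.requires_grad = False

# replace the classifier head: 2048 features -> 133 dog breeds
model.fc = nn.Linear(model.fc.in_features, 133)

model = model.to(device)  # device is defined earlier in the notebook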
# loss and optimizer for the model
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.002)

val_loss_min = np.Inf # start the minimum validation loss at infinity

for epoch in range(num_epochs):

    train_loss = 0.0 # running training loss
    val_loss = 0.0   # running validation loss

    # training phase
    model.train() # preparing model to train
    for inputs, labels in train_loader:
        inputs = inputs.to(device)
        labels = labels.to(device)

        optimizer.zero_grad()            # clear gradients from the last step
        output = model(inputs)
        loss = criterion(output, labels)
        loss.backward()                  # back-propagate the loss
        optimizer.step()                 # update the trainable weights
        train_loss += loss.item()*inputs.size(0)

    # validation phase
    model.eval() # preparing model for validation
    for inputs, labels in val_loader:
        inputs = inputs.to(device)
        labels = labels.to(device)

        output = model(inputs)
        loss = criterion(output, labels)
        val_loss += loss.item()*inputs.size(0)

    # average the running losses over the datasets
    train_loss = train_loss/len(train_loader.dataset)
    val_loss = val_loss/len(val_loader.dataset)

    print('Epoch:{}\nTraining Loss:{:6f}\tValidation Loss:{:6f}'.format(
        epoch, train_loss, val_loss))

    # save the model whenever the validation loss drops below its minimum
    if val_loss <= val_loss_min:
        print('Validation Loss Dec. {:6f}--->{:6f}\t...SAVING...'.format(
            val_loss_min, val_loss))
        torch.save(model.state_dict(), 'model.pt')
        val_loss_min = val_loss

Epoch:0
Training Loss:4.232339 Validation Loss:2.935760
Validation Loss Dec. inf--->2.935760 ...SAVING...
Epoch:1
Training Loss:2.757565 Validation Loss:1.484378
Validation Loss Dec. 2.935760--->1.484378 ...SAVING...
Epoch:2
Training Loss:1.832774 Validation Loss:0.871715
Validation Loss Dec. 1.484378--->0.871715 ...SAVING...
Epoch:3
Training Loss:1.316740 Validation Loss:0.564384
Validation Loss Dec. 0.871715--->0.564384 ...SAVING...
Epoch:4
Training Loss:1.001743 Validation Loss:0.405004
Validation Loss Dec. 0.564384--->0.405004 ...SAVING...
Epoch:5
Training Loss:0.792784 Validation Loss:0.297074
Validation Loss Dec. 0.405004--->0.297074 ...SAVING...
Epoch:6
Training Loss:0.636654 Validation Loss:0.218471
Validation Loss Dec. 0.297074--->0.218471 ...SAVING...
Epoch:7
Training Loss:0.519005 Validation Loss:0.161432
Validation Loss Dec. 0.218471--->0.161432 ...SAVING...
Epoch:8
Training Loss:0.427331 Validation Loss:0.118752
Validation Loss Dec. 0.161432--->0.118752 ...SAVING...
Epoch:9
Training Loss:0.361782 Validation Loss:0.087949
Validation Loss Dec. 0.118752--->0.087949 ...SAVING...
Epoch:10
Training Loss:0.290168 Validation Loss:0.061808
Validation Loss Dec. 0.087949--->0.061808 ...SAVING...
Epoch:11
Training Loss:0.249054 Validation Loss:0.049185
Validation Loss Dec. 0.061808--->0.049185 ...SAVING...
Epoch:12
Training Loss:0.207525 Validation Loss:0.036483
Validation Loss Dec. 0.049185--->0.036483 ...SAVING...
Epoch:13
Training Loss:0.176204 Validation Loss:0.026363
Validation Loss Dec. 0.036483--->0.026363 ...SAVING...
Epoch:14
Training Loss:0.154774 Validation Loss:0.020143
Validation Loss Dec. 0.026363--->0.020143 ...SAVING...
Epoch:15
Training Loss:0.131797 Validation Loss:0.015267
Validation Loss Dec. 0.020143--->0.015267 ...SAVING...
Epoch:16
Training Loss:0.115564 Validation Loss:0.012112
Validation Loss Dec. 0.015267--->0.012112 ...SAVING...
Epoch:17
Training Loss:0.103106 Validation Loss:0.009639
Validation Loss Dec. 0.012112--->0.009639 ...SAVING...
Epoch:18
Training Loss:0.091138 Validation Loss:0.008975
Validation Loss Dec. 0.009639--->0.008975 ...SAVING...
Epoch:19
Training Loss:0.082771 Validation Loss:0.007416
Validation Loss Dec. 0.008975--->0.007416 ...SAVING...
Epoch:20
Training Loss:0.075073 Validation Loss:0.006720
Validation Loss Dec. 0.007416--->0.006720 ...SAVING...
Epoch:21
Training Loss:0.067246 Validation Loss:0.005039
Validation Loss Dec. 0.006720--->0.005039 ...SAVING...

# loading the weights of the best saved model
model.load_state_dict(torch.load('model.pt'))

# accuracy of the model on the test data
correct = 0 # total no. of correct predictions
total = 0   # total no. of predictions

model.eval() # preparing model for testing
for data in test_loader:
    images, labels = data
    images = images.to(device)
    labels = labels.to(device)

    output = model(images)
    _, pred = torch.max(output, 1) # class with the highest score

    total += labels.size(0)
    correct += (pred == labels).sum().item()

acc = 100*correct/total
print('Accuracy of model:{} %'.format(acc))

Accuracy of model:99 %
Testing
Before deploying the final project, it is important to test whether the app works correctly.
The deep convolutional neural network that predicts the breed is tested and its accuracy is measured. After
the ResNet model has been trained on the dog breed dataset, it is tested on some random dog images from the
dataset. The testing of the trained model is done in the train.ipynb file.

Code for testing the model is:

correct = 0
total = 0

model.eval()
for data in test_loader:
    images, labels = data
    images = images.to(device)
    labels = labels.to(device)

    output = model(images)
    _, pred = torch.max(output, 1)

    total += labels.size(0)
    correct += (pred == labels).sum().item()

acc = 100*correct/total
print('Accuracy of model:{} %'.format(acc))

This produces the testing result as the accuracy of the model on the test dataset, which is 99%.
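For the final application, the model also has to classify a single user-supplied image rather than a batch from a loader. A minimal helper for this is sketched below; the preprocessing values and the class_names list are assumptions, since they depend on how the transforms and datasets were defined earlier in the notebook.

from PIL import Image
from torchvision import transforms
import torch

# assumed to mirror the transforms applied to the test images
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],  # ImageNet statistics
                         [0.229, 0.224, 0.225]),
])

def predict_breed(image_path, model, class_names):
    """Return the predicted breed name for a single image file."""
    image = Image.open(image_path).convert('RGB')
    batch = preprocess(image).unsqueeze(0).to(device)  # add batch dimension

    model.eval()
    with torch.no_grad():  # no gradients needed for inference
        output = model(batch)
        _, pred = torch.max(output, 1)
    return class_names[pred.item()]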
CONCLUSION

Doogle is a simple, easy-to-use web application with a clean and user-friendly interface. It uses deep
learning in the backend to identify the breed of the dog in an image uploaded by the user.
Because only a limited amount of dog breed data was available to train the deep learning model, it is not
100 percent accurate: it predicts the breed in the image, or the nearest breed that matches the dog in the
image. The dataset currently used covers 133 dog breeds, with only about 120 images per breed for training,
which is very little for a deep learning model. Doogle is a server-based application, so the image is
processed on the server, sparing the user the AI inference, which takes a lot of time on an ordinary CPU.
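As a rough illustration of this server-based design (the web framework, route name, and helper are assumptions; the report does not pin them down here), the upload-and-predict flow could look like the following sketch, reusing the predict_breed helper from the testing section above.

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/predict', methods=['POST'])
def predict():
    # inference runs on the server, so the user's device
    # is spared the slow CPU-bound AI processing
    file = request.files['image']
    file.save('upload.jpg')
    breed = predict_breed('upload.jpg', model, class_names)
    return jsonify({'breed': breed})

if __name__ == '__main__':
    app.run()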
FUTURE SCOPE

Any project developed focuses not only on present day market requirement but also on the future needs.
Same goes with our project “DOOGLE”.

• With the growth of handheld computers, there is and will be a great demand for quick answers to
day-to-day questions. “DOOGLE” is designed to simplify such searches for dog breeds.

• “DOOGLE” is written in Python, a future-safe language that is certain to stay in the market.
• The simplified algorithms used open the door to future improvements of the project.
• Artificial intelligence is the need of the hour. “DOOGLE” is built entirely around this technology and is
ready to provide more AI-powered features in the future.
• There are more than 200 dog breeds, divided into 8 classes: sporting, hound, working, terrier, toy,
non-sporting, herding, and miscellaneous, and many mixed breeds are waiting to be identified.
• The top five favorite dog breeds in the world are the Labrador Retriever, Golden Retriever, German
Shepherd, Beagle, and Dachshund. The craze for dogs is not going away anytime soon.
• The business of dog-related products is worth over $2.5 billion in India alone. Technology such as
“DOOGLE” will help this number grow, as more people every day become interested in learning about dogs.
• If we are going to start breeding for the future, we need to make some adjustments in the way we do
things and also learn some new techniques that will improve the set of tools breeders have to work with.
• Breeders will need to cooperate more, because they need to monitor and protect the gene pool of the
breed, without which there is nothing.
• There needs to be more transparency about health issues; we cannot manage them if we do not know about
them, and damage to the gene pool affects everybody.
BIBLIOGRAPHY
• Wikipedia: https://en.wikipedia.org/wiki/Category:Dog_breeds
• VGG paper: https://arxiv.org/pdf/1409.1556.pdf
• ResNet paper: https://arxiv.org/pdf/1512.03385v1.pdf
• particles.js: github.com/VincentGarreau/particles.js
