
A PROJECT STATUS REPORT

ON

“GPS Based campus tracker with Artificial Intelligence”


BY

Ayush Tiwari (1903480130017)


Darshan Singh Bisht (1903480130020)
Farogh Khan (1903480130024)
Sahil Gupta (1903480130050)

Bachelor of Technology
in
INFORMATION TECHNOLOGY
Under the supervision of
Mr. Amit Kumar Sharma

(Assistant Professor)

PSIT COLLEGE OF ENGINEERING, KANPUR


to the

Faculty of Information Technology


Dr. A.P.J. Abdul Kalam Technical University, Lucknow
(Formerly Uttar Pradesh Technical University)
December, 2022

DECLARATION
I hereby declare that this submission is my own work and that, to the best of my knowledge and belief, it contains no material previously published or written by any other person, nor material which to a substantial extent has been accepted for the award of any degree or diploma of the university or any other institute of higher learning, except where due acknowledgement has been made in the text.

Signature
Name: Ayush Tiwari
Roll No.: 1903480130017
Date :

Signature
Name: Darshan Singh Bisht
Roll No.: 1903480130020
Date :

Signature
Name: Farogh Khan
Roll No.: 1903480130024
Date :

Signature
Name: Sahil Gupta
Roll No.: 1903480130050
Date:

Mr. Amit Kumar Sharma

(Assistant Professor, Dept. of CSE)

CERTIFICATE

This is to certify that the project titled “GPS Based campus tracker with Artificial Intelligence” which is
submitted by

 Ayush Tiwari (1903480130017)


 Darshan Singh Bisht (1903480130020)
 Farogh Khan (1903480130024)
 Sahil Gupta (1903480130050)

in partial fulfillment of the requirement for the award of the degree of Bachelor of Technology in Information
Technology to PSIT College of Engineering, Kanpur, affiliated to Dr. A.P.J. Abdul Kalam Technical
University, Lucknow during the academic year 2020-21, is the record of candidate’s own work carried out by
him/her under my supervision. The matter embodied in this report is original and has not been submitted for
the award of any other degree.

Mr. Abhay Kumar Tripathi Mr. Amit Kumar Sharma


(Head of Dept., CSE) (Assistant Professor, Dept. of CSE)

ACKNOWLEDGEMENT

It gives us a great sense of pleasure to present the report of the B.Tech. project “GPS Based campus tracker with Artificial Intelligence” undertaken during the B.Tech. final year. We owe a special debt of gratitude to our project guide Mr. Amit Kumar Sharma (Assistant Professor, CSE), PSIT College of Engineering, Kanpur, for his constant support and guidance throughout the course of our work. His sincerity, thoroughness and perseverance have been a constant source of inspiration for us. It is only because of his cognizant efforts that our endeavours have seen the light of day.
We would also not like to miss the opportunity to acknowledge the contribution of all the faculty members of the department for their kind assistance and cooperation during the development of our project. Last but not least, we acknowledge our friends for their contribution to the completion of the project.

Signature:
Name : Ayush Tiwari
Roll No : 1903480130017
Date :

Signature:
Name : Darshan Singh Bisht
Roll No : 1903480130020
Date :

Signature:
Name : Farogh Khan
Roll No : 1903480130024
Date :

Signature:
Name : Sahil Gupta
Roll No : 1903480130050
Date :

ABSTRACT

GPS Based campus tracker with Artificial Intelligence is an assistant system that is used to find paths within the college and can also be used for automation purposes. The path finder feature helps to find the path between two places within the college, for example from the main gate to the admin. This application is very useful for people who are new to the organization, as they do not know the paths inside it. There exist many AI assistants that help people solve their problems; the implemented system solves path-related problems within the organization and also helps to automate the system.

The system is also used for automation, so the functionalities implemented are as follows: the system is capable of playing music when the user sends a query to play a song, the system is capable of browsing on Google as per the instructions of the user, and the system is capable of searching for or opening files as asked by the user.

The system is implemented with the help of artificial intelligence and takes voice input and provides voice output. The path finder functionality works as a guide for a newcomer in the college who wants to know the paths to reach various locations in PSIT. All the person needs to do is provide the location from where he wants to go and the destination, and the system provides the complete path in voice format as well as on the display, which removes all ambiguities.

TABLE OF CONTENTS

DECLARATION..................................................................................................... ii
CERTIFICATE........................................................................................................ iii
ACKNOWLEDGEMENT....................................................................................... iv
ABSTRACT............................................................................................................. v
TABLE OF CONTENT............................................................................................ vi
LIST OF TABLES.................................................................................................... vii
LIST OF FIGURES.................................................................................................. viii
LIST OF SYMBOLS................................................................................................ ix
LIST OF ABBREVIATIONS.................................................................................. x
CHAPTER 1 INTRODUCTION............................................................................. 1
1.1 PURPOSE………………………........................................................... 3
1.2 MOTIVATION……………………….................................................. 4
CHAPTER 2 FEASIBILITY REPORT................................................................... 5
2.1 ECONOMICAL FEASIBILITY............................................................ 5
2.2 TECHNICAL FEASIBILITY................................................................ 6
2.3 OPERATIONAL FEASIBILITY.......................................................... 6
2.4 TIME FEASIBILITY…………………………………………………. 6
CHAPTER 3 REQUIREMENT SPECIFICATION................................................ 7
3.1 FUNCTIONAL REQUIREMENT......................................................... 7
3.2 NON-FUNCTIONAL REQUIREMENT............................................... 8
3.2.1 SAFETY.................................................................................. 8
3.2.2 SECURITY.............................................................................. 8
3.2.3 SOFTWARE QUALITY......................................................... 8
3.3 HARDWARE REQUIREMENT........................................................... 9
3.4 SOFTWARE REQUIREMENT............................................................. 9
3.5 WATERFALL MODEL......................................................................... 9
CHAPTER 4 DESIGN DOCUMENT..................................................................... 11
4.1 CLASS DIAGRAM............................................................................... 11
4.2 USECASE DIAGRAM.......................................................................... 12
4.3 SEQUENCE DIAGRAM....................................................................... 13
CHAPTER 5 IMPLEMENTATION........................................................................ 15
5.1PROCESS………………………………………………….................... 15
5.2 QUERY PROCESSING……………………………………………… 16
5.3 SCREENS…………………………………………………………….. 17
5.4 PARTIAL CODING………………………………………………….. 22
CHAPTER 6 TESTING…………………………………………………………… 40
6.1 UNIT TESTING……………………………………………………… 40
6.2 BLACK BOX TESTING…………………………………………….. 41
6.3 WHITE BOX TESTING……………………………………………… 42
6.4 INTEGRATION TESTING………………………………………….. 42
6.5 TEST CASES………………………………………………………… 43
CONCLUSION AND FUTURE SCOPE................................................................ 45
CONCLUSION........................................................................................... 45
FUTURE SCOPE........................................................................................ 45
REFERENCES........................................................................................................ 47
APPENDIX.............................................................................................................. 49
LIST OF TABLES

Table 5.1 Complexity Comparison……………………………………………..... 30

Table 6.1 Test Cases…………………………………………………………....... 43

LIST OF FIGURES

Figure 1.1 Classification of system………………………………………………. 2


Figure 1.2 Components of AI……………………………………………………. 2
Figure 1.3 Other Functionalities………………………………………………….. 3
Figure 3.1 Waterfall Model………………………………………………………. 9
Figure 4.1 Class Diagram………………………………………………………… 11
Figure 4.2 Use Case Diagram…………………………………………………….. 12
Figure 4.3 Sequence Diagram……………………………………………………. 14
Figure 5.1 Input Screen…………………………………………………………... 17
Figure 5.2 Output Screen for Path Finder………………………………………... 18
Figure 5.3 Open Application……………………………………………………... 19
Figure 5.4 Web Crawler………………………………………………………….. 20
Figure 5.5 Play Song……………………………………………………………... 21
Figure 5.6 Search File……………………………………………………………. 22
Figure 6.1 Black Box Testing……………………………………………………. 41
Figure 6.2 White Box Testing……………………………………………………. 42
Figure 6.3 Integration Testing……………………………………………………. 43

LIST OF SYMBOLS

s source

d destination

e3 text area

path1 list of all nodes

audio voice input by user

LIST OF ABBREVIATIONS

Command Voice input by user

Source Starting location of user

Destination Location where user wants to go

CHAPTER 1

INTRODUCTION

GPS Based campus tracker with Artificial Intelligence is an assistant system that consists of two main parts: the PSIT Path Finder and other functionalities used to automate the system, as shown in Figure 1.3. The uniqueness of this system is the path finder, which is capable of providing paths within the college.

The system is completely voice automated. This means the system works through voice: it takes voice input from the user and provides the output in voice form. In order to remove ambiguities, the output is also displayed on screen so that the user is fully satisfied with the path provided by the system.

As the name of implemented system suggests, the vision was to develop a virtual assistant
for people who are new to the college to solve college path based queries and provide path
from the source to destination within college. The term ‘Virtual’ may be defined as “Not
physically existing as such but made by software to appear to do so.”

Thus, the implemented system is a virtual assistant, i.e., it is a software that uses artificial
intelligence to guide people and takes actions to effectively understand the queries and
respond to them rationally. The implemented system can assist the user by solving user
queries related to path within the college.

The other functionalities include:

a) The system is capable of browsing on Google as per the instructions of the user.
b) The system is capable of searching for or opening files as asked by the user.
c) The system is capable of playing music as the user sends a query to play a song.
d) The system is capable of opening applications as asked by the user.
Figure 1.1: Classification of the system (AI Driven Assistant System: PSIT Path Finder and Other Functionalities)

The main technologies used in the system are artificial intelligence and speech recognition. Nowadays, many applications of artificial intelligence are emerging, and these systems are capable of working in the same way as a human being does; computer programs are designed to make computers smarter. These systems are trained using a certain data set, and then, using their own intelligence, they answer the queries of the user. These answers are so convincing that it looks like the user is interacting with a human being.
Figure 1.2: Components of AI [1]

The path finder functionality uses Dijkstra’s algorithm to provide the shortest path within
the college to the user. Dijkstra's algorithm (or Dijkstra's Shortest Path First
algorithm, SPF algorithm) is an algorithm for finding the shortest paths between nodes in
a graph, which may represent, for example, road networks. It was conceived by computer
scientist Edsger W. Dijkstra in 1956 and published three years later.

The algorithm exists in many variants; Dijkstra's original variant found the shortest path
between two nodes, but a more common variant fixes a single node as the "source" node
and finds shortest paths from the source to all other nodes in the graph, producing
a shortest-path tree.

Figure 1.3: Other functionalities (Browser, File Search, Open Application, Play Song, Google Maps)

1.1 Purpose

The purpose of this project is to provide a solution to the path-related problems that newcomers and parents face, as they do not know the paths to the various places inside the college. Secondly, the other functionalities help to automate the system.
1.2 Motivation

At times we saw that when parents and students who are new to the college arrive, they face problems: they do not know the paths within the college, so they need to stop someone and ask for the path to a particular place. This gave us the idea of developing an assistant system that can help the people entering the college by telling them the paths within it.

This would be very helpful because-

a. They do not need to ask anyone else for the path.
b. They can find the path on their own.
c. As it is voice activated, it is easy to use.

The path-related information can be very helpful to newcomers. Hence, there should be an assistant system that can guide them within the college and provide them with path-related information.

CHAPTER 2

FEASIBILITY REPORT

The prime focus of the feasibility study is evaluating the practicality of the proposed system, keeping in mind a number of factors. A feasibility study aims to objectively and rationally uncover the strengths and weaknesses of an existing business or proposed venture, the opportunities and threats present in the environment, the resources required to carry it through, and ultimately the prospects for success. In its simplest terms, the two criteria to judge feasibility are the cost required and the value to be attained.

A well-designed feasibility study should provide a historical background of the business or project, a description of the product or service, accounting statements, details of the operations and management, marketing research and policies, financial data, legal requirements and tax obligations. Generally, feasibility studies precede technical development and project implementation.

A feasibility study might uncover new ideas that could completely change a project’s scope. It is best to make these determinations in advance, rather than jumping in and learning that the project just won’t work. Conducting a feasibility study is always beneficial to the project, as it gives you and other stakeholders a clear picture of the proposed project.

The following factors are taken into account before deciding in favor of the new system.

2.1 Economic Feasibility

Economic feasibility is also known as cost analysis. It includes the cost involved in the project. The cost involved in the implementation of this project is quite low.

Report generation in the proposed system is precise; that is, reports are generated as per user requirements, which reduces the use of paper and manual labor.

2.2 Technical feasibility

Keeping in view the above fact, nowadays all organizations are automating the repetitive and monotonous work done by humans. The key process areas of the current system are nicely amenable to automation, and hence the technical feasibility is proved beyond doubt.

As various APIs are available which help to implement the code in a proper way, the project is technically feasible.

2.3 Operational Feasibility

The present system has automated most of the manual tasks. Therefore the proposed
system will increase the operational efficiency of the instructors.

Operational feasibility refers to the measure of solving problems with the help of a new
proposed system. It helps in taking advantage of the opportunities and fulfills the
requirements as identified during the development of the project.

2.4 Time Feasibility

A time feasibility study takes into account the period that the project is going to take up to its completion. Typically this means estimating how long the system will take to develop. Time feasibility is a measure of how reasonable the project timetable is. As the project was completed within the given time, it is time feasible.

CHAPTER 3

REQUIREMENT SPECIFICATION

A requirement specification describes the intended purpose, requirements and nature of the software to be developed. This includes the functional and non-functional requirements, and the software and hardware requirements of the project. In addition, it also contains information about the environmental conditions required, safety and security requirements, software quality attributes of the project, etc.

3.1 Functional Requirements

A functional requirement defines a function of a system or its component, where a function is described as a specification of behavior between outputs and inputs. Functional requirements may involve calculations, technical details, data manipulation and processing, and other specific functionality that defines what a system is supposed to accomplish. Generally, functional requirements are expressed in the form "the system must do".

Various functionalities are as follows-

a. The system should help to find the path to a particular location as entered by the user.
b. The system should be capable of searching for or opening files as asked by the user.
c. The system should be capable of browsing on Google as per the instructions of the user.
d. The system should be capable of playing music as the user sends a query to play a song along with the song name.
e. The system should be capable of opening applications as asked by the user.

3.2 Non- Functional Requirements

A non-functional requirement is a requirement that specifies criteria that can be used to judge the operation of a system, rather than specific behaviors. The plan for implementing non-functional requirements is detailed in the system architecture, because they are usually architecturally significant requirements.

The non-functional requirements related to project are as follows-

3.2.1 Safety Requirements

If there is extensive damage to a wide portion of the database due to catastrophic failure,
such as a disk crash, the recovery method restores a past copy of the database that was
backed up to archival storage (typically tape) and reconstructs a more current state by
reapplying or redoing the operations of committed transactions from the backed up log, up
to the time of failure.

3.2.2 Security Requirements

As we are using the database to store the information related to the paths between locations within the college, proper authorization is enforced, and therefore there is no breach of the information provided to the user.

3.2.3 Software Quality Attributes

a. Availability: Since the project is uploaded on GitHub, once you download it, it will be available on your system all the time.
b. Correctness: The system should provide the correct, shortest path to the desired location.
c. Maintainability: The system should maintain the correct distances between the nodes and provide proper automation.
d. Usability: The system should satisfy the maximum number of user needs.

3.3 Hardware Requirements

a. Processor: Intel Core i3 or above
b. Processor Speed: 2.5 GHz or above
c. RAM: 4 GB or above
d. HDD: 40 GB or above
e. Monitor: 15 inch

3.4 Software Requirements

a. Language: Python
b. Editor: PyCharm
c. Operating System: Windows 7 and above
d. APIs such as the Google Speech Recognition API, which helps the system take input in the form of speech and provide output in the form of speech
e. Web Browser: Microsoft Internet Explorer, Mozilla Firefox, or Google Chrome
f. Tkinter (front-end)

3.5 Waterfall Model

Figure 3.1: Waterfall Model [2]


The waterfall model is a sequential design process, often used in software development
processes, in which progress is seen as flowing steadily downwards (like a waterfall)
through the phases of  Analysis, Requirement Specification, Design, Implementation,
Testing and Integration and Operation and Maintenance.
If failures are detected at the beginning of the project, it takes less effort (and therefore time and money) to correct them. In the waterfall model, each phase must be properly completed before proceeding to the next stage; it is believed that the phases are correct before proceeding to the next phase. The waterfall model lays emphasis on documentation. It is a straightforward method. The way of working ensures that there are specific phases, which makes it clear what stage the project is in, and milestones can be used to monitor and estimate the progress of the project.
In our project, all the requirements are clear and well known. All the activities in our project were carried out in the above-mentioned phases of the waterfall model.
 

CHAPTER 4

DESIGN DOCUMENTS

4.1 Class Diagram

The class diagram is the main building block of object-oriented modelling. It is used for general conceptual modelling of the structure of the application, and for detailed modelling, translating the models into programming code. Class diagrams can also be used for data modelling. The classes in a class diagram represent the main elements and interactions in the application, and the classes to be programmed.
In the design of a system, a number of classes are identified and grouped together in a class
diagram that helps to determine the static relations between them. With detailed modelling,
the classes of the conceptual design are often split into a number of subclasses.

Figure 4.1 Class Diagram

4.2 Use Case Diagram

A use case diagram is a dynamic or behavior diagram in UML. Use case diagrams model the functionality of a system using actors and use cases.
Use cases are a set of actions, services, and functions that the system needs to perform. In this context, a “system” is something being developed or operated, such as a web site. The “actors” are people or entities operating under defined roles within the system.
Use case diagrams also help identify any internal or external factors that may influence the system and should be taken into consideration. They provide a good high-level analysis from outside the system. Use case diagrams specify how the system interacts with actors without worrying about the details of how that functionality is implemented.

Figure 4.2 Use Case Diagram

4.3 Sequence Diagram

A sequence diagram simply depicts interaction between objects in a sequential order i.e.
the order in which these interactions take place. We can also use the terms event diagrams
or event scenarios to refer to a sequence diagram. Sequence diagrams describe how and in
what order the objects in a system function. These diagrams are widely used by
businessmen and software developers to document and understand requirements for new
and existing systems.
Sequence Diagram Notations –
a. Actors – An actor in a UML diagram represents a type of role where it interacts
with the system and its objects. It is important to note here that an actor is always
outside the scope of the system we aim to model using the UML diagram.
b. Lifelines – A lifeline is a named element which depicts an individual participant in
a sequence diagram. So basically each instance in a sequence diagram is
represented by a lifeline. Lifeline elements are located at the top in a sequence
diagram.
c. Messages – Communication between objects is depicted using messages. The
messages appear in a sequential order on the lifeline. We represent messages using
arrows. Lifelines and messages form the core of a sequence diagram.
d. Guards – To model conditions we use guards in UML. They are used when we
need to restrict the flow of messages on the pretext of a condition being met.
Guards play an important role in letting software developers know the constraints
attached to a system or a particular process.

Figure 4.3 Sequence Diagram

CHAPTER 5

IMPLEMENTATION

The system is built on the Python platform using the PyCharm editor. It uses Oracle for database connectivity and for fetching the data related to the path between two locations. The user interface is developed using Tkinter, which is the standard Python interface to the Tk GUI toolkit.

The initial step is to take voice input from the user, which contains the source and destination for which the user wants to know the path. This voice input is processed and converted into text form so that we get the source and destination. Then Dijkstra’s algorithm is applied to get the shortest path between the source and destination. After fetching the shortest path, it is presented to the user in voice form as well as in written form in the UI.

5.1 Process
When the user comes in front of the system, the system says “How can I help you” and provides various options, such as saying “Path Finder” to open the path finder, along with other options to automate the system.

Then the user provides input by saying “Path Finder”. The system then asks for the source, i.e., where the user currently is, and the user answers by saying, for example, “Main Gate”. The system then asks for the destination, and the user provides it, for example “Admin”.

Now the system processes this information by applying Dijkstra’s algorithm to the graph that is present within it. After fetching the shortest path, it uses the database to get the individual paths between the nodes. This information is provided to the user in voice form, and when the voice output is over, the output is also displayed on the user interface so that everything is clear to the user, because at times the user might not be able to understand everything from voice alone.

The system not only provides the overall path but also provides the node-to-node path, because the user is new to the place and will not know how to get to the next node. To remove these types of ambiguities, the node-to-node path is also provided in voice form as well as displayed on the screen.

There are various possible paths from the source to the destination, but the system provides the shortest path between the two points using Dijkstra’s algorithm, so that the user walks less and saves time.

Besides this, there are other functionalities too, which are used to automate the system.

Play song: the user needs to input the song name in voice form; the assistant then searches for the song within the system and, if the song is present, plays it for the user. So the process of searching for the song and then playing it is automated and done by the assistant; the user only needs to input the song name and his work is done.

Open application: the user needs to input the application name in voice form, and the assistant opens the application for the user.

Search file: the user needs to input the file name in voice form; the assistant then searches for the file within the system and, if the file is found, asks whether the user wants to open it. If the user says yes, the file is opened; if the file is not found, an appropriate “file not found” message is spoken by the system.

Web crawler: if the user wants to search for any information in the web browser, he needs to input the query in voice form, and the assistant opens the web browser with the searched information.

5.2 Query Processing

a. The user enters the source and destination in voice form; for this purpose the Google speech recognition API is used.

b. After this, the voice is converted into text form, and the source and destination are stored in variables, as they need to be passed to the function.

c. The source and destination are passed to the function where Dijkstra’s algorithm is implemented.

d. With the help of the nodes and the distances between them, the graph is formed, and, given the source and destination as parameters, the algorithm returns the shortest path between the nodes.

e. The returned information is processed so that it can be properly presented on the screen (a minimal sketch of this pipeline is given after this list).
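
A minimal sketch of this pipeline is shown below. It reuses names that appear in the code excerpts of Section 5.4 (myCommand(), the DirectedGraph instance g, and the path1 node list); present_path() is a hypothetical stand-in for the voice and Tkinter output code shown later.

# Minimal sketch of the query-processing pipeline described above (names as noted in the lead-in).
def handle_path_query():
    source = myCommand().upper()        # (a)-(b): voice input converted to an upper-case string
    destination = myCommand().upper()
    g.dijkstras(source, destination)    # (c)-(d): shortest path computed with Dijkstra's algorithm
    present_path(path1)                 # (e): node list spoken aloud and shown on the screen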

5.3 Screens

The screen shown in Figure 5.1 is the first screen displayed in front of the user when the system starts. Here the system says “How can I help you” and provides various options for voice input, which can be for the path finder or for the automation functionalities.

Figure 5.1: Input Screen

Figure 5.2: Output Screen for path finder

The screen shown in Figure 5.2 is displayed along with the output of the shortest path in voice form.

Figure 5.3 Open Application

Figure 5.3 represents the implementation of the open application functionality, which works as follows: the user says “open application” through voice, the system asks which application the user wants to open, the user says “notepad”, and as a result the system opens Notepad for the user.

So the process of opening an application is completely automated and done by the assistant; the user only needs to input the application name through voice.

Figure 5.4 Web Crawler

Figure 5.4 represents the implementation of the web crawler functionality, in which the user who wants to search for any information in the web browser inputs the query in voice form, and the assistant opens the web browser with the searched information.

So the process of searching for information in the web browser is completely automated and done by the assistant; the user only needs to input the query through voice.

Figure 5.5 Play Song

Figure 5.5 represents the implementation of the play song functionality: the user inputs the song name in voice form, the assistant searches for the song within the system, and if the song is present, the assistant plays it for the user.

So the process of searching for the song and then playing it is automated and done by the assistant; the user only needs to input the song name and his work is done.

Figure 5.6 Search File

Figure 5.6 represents the implementation of the search file functionality, in which the user simply says the name of the file to be searched; if the file is found, the assistant responds by asking whether the user wants to open it, otherwise it says “file not found” in voice form.

If the user says yes, the assistant opens the file; otherwise it does not open the file.

So this is how the searching and opening of a file is automated with the help of the assistant.

5.4 Partial Coding


The main Python file is a combination of multiple functions, as it integrates all the functionalities. There are various functions such as myCommand(), do_you_want_to_continue(), and assistant(command2).

Besides this, there are other Python files that are imported into the main Python file and integrated to work together as a complete system. The code present in these files is explained below in proper sequence.

Some APIs are also used, which are as follows:

a. speech_recognition
The speech recognition API is provided by Google and helps to take input in voice form. It uses a Recognizer() instance to understand the voice input from the user; a Recognizer represents a collection of speech recognition settings and functionality.
Google has a great Speech Recognition API. This API converts spoken words (from a microphone) into written text (Python strings), in short, speech to text. You can simply speak into a microphone and the Google API will translate this into written text. The API gives excellent results for the English language.
Google has also created the JavaScript Web Speech API, so you can recognize speech in JavaScript too if you want to.
The Recognizer exposes various parameters that control the voice input:

a. pause_threshold
Represents the minimum length of silence (in seconds) that will register as the end of a phrase; it can be changed.
Smaller values result in the recognition completing more quickly, but might result in slower speakers being cut off.

b. adjust_for_ambient_noise
Adjusts the energy threshold dynamically using audio from source (an
AudioSource instance) to account for ambient noise.
Intended to calibrate the energy threshold with the ambient energy level.
Should be used on periods of audio without speech - will stop early if any
speech is detected.
The duration parameter is the maximum number of seconds that it will
dynamically adjust the threshold for before returning. This value should be at
least 0.5 in order to get a representative sample of the ambient noise.
c. listen(source)
Records a single phrase from source (an AudioSource instance) into an
AudioData instance, which it returns.
This is done by waiting until the audio has an energy above
recognizer_instance.energy_threshold (the user has started speaking), and then
recording until it encounters recognizer_instance.pause_threshold seconds of
non-speaking or there is no more audio input. The ending silence is not
included.
The timeout parameter is the maximum number of seconds that it will wait for a phrase to start before giving up and throwing a speech_recognition.WaitTimeoutError exception. If timeout is None, it will wait indefinitely.

b. win32com.client
There are several APIs available to convert text to speech in Python. One such API is available in the Python library commonly known as the win32com library. It provides a bunch of methods, one of which is the Dispatch method. When the Dispatch method is passed the argument SAPI.SpVoice, it interacts with the Microsoft Speech SDK to speak out whatever text is passed to it.

This API is used to provide output in voice form. It uses an object named speaker, and the content passed to the speaker is presented to the user in voice form.

Example: speaker.Speak('Thank you for using the assistant')
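
As a minimal sketch (assuming the pywin32 package is installed on Windows), the speaker object used throughout the later code can be created as follows:

import win32com.client

# Create the SAPI text-to-speech voice object; Speak() reads the given string aloud.
speaker = win32com.client.Dispatch("SAPI.SpVoice")
speaker.Speak("Thank you for using the assistant")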

c. webbrowser
The webbrowser module in Python provides an interface for displaying web-based documents. The webbrowser module includes functions to open URLs in interactive browser applications. The module includes a registry of available browsers, in case multiple options are available on the system. It can also be controlled with the BROWSER environment variable.

webbrowser.open_new(url)
Open url in a new window of the default browser, if possible, otherwise,
open url in the only browser window.

webbrowser.open_new_tab(url)
Open url in a new page (“tab”) of the default browser, if possible,
otherwise equivalent to open_new()
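
For illustration, a minimal use of the module might look as follows (the query string is only an example):

import webbrowser
from urllib.parse import quote_plus

# Open a Google search for an example query in a new browser tab,
# falling back to a new window if tabs are not supported.
query = "PSIT Kanpur"
webbrowser.open_new_tab("https://www.google.co.in/search?q=" + quote_plus(query))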

def myCommand():
    global command
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print('I am ready for your command')
        r.pause_threshold = 0.8
        r.adjust_for_ambient_noise(source, duration=1)
        audio = r.listen(source)
    try:
        command = r.recognize_google(audio)
        print('You said:' + command + '\n')
    except sr.UnknownValueError:
        print('Say Again')
        assistant(myCommand())
    return command

The myCommand() function is used to take the user input in voice form. This is done with the help of the microphone, and the Google speech recognition API converts the voice into text in the form of a Python string, which can then be used further.
if any(['OPEN PATHFINDER' in command2, 'OPEN PATH FINDER' in command2]):
    try:
        print("Speak source: ")
        source = myCommand()
        print("Speak destination: ")
        destination = myCommand()
        source = source.upper()
        destination = destination.upper()
        print('Source is = ' + source + ' Destination is = ' + destination)
        print("Shortest path from " + source + " to " + destination + "\n")
        speaker.speak("Shortest path from" + source + "to" + destination)
        t.g.dijkstras(source, destination)
        path = t.path1
        for i in range(0, len(path)):
            if i < len(path) - 1:
                speaker.speak(path[i] + " to ")
            elif i < len(path):
                speaker.speak(path[i])
        for i in range(0, len(path)):
            if i < len(path) - 1:
                k = con.f1(path[i], path[i + 1])
                k1 = ','.join(k)
                print(path[i] + " --> " + path[i + 1] + " :- " + k1)
                speaker.speak(path[i] + " to " + path[i + 1] + " is" + k1)
        do_you_want_to_continue()

This part of the code is the implementation of the unique functionality of the project, the path finder, which provides paths within the college. The code first takes the source and the destination input from the user in voice form.
It then converts the source and destination into strings so that they can be used for processing in the variables source and destination.
After this, the dijkstras(source, destination) function is called with the parameters source and destination. This function returns the shortest path between the two nodes passed as parameters.
After getting the complete path, a loop is run such that the assistant system tells the node-wise path to the user, and after speaking the path it also prints the path on the screen to remove any type of ambiguity in the user’s mind related to the path.

The above-mentioned code uses Dijkstra’s algorithm implemented in the function named dijkstras. The function is explained below.

from collections import defaultdict
import math

path1 = []

class DirectedGraph(dict):

    def __missing__(self, key):
        value = self[key] = {key: 0}
        return value

    def connect(self, node1, node2, weight):
        nodes = self.keys()
        if node1 not in nodes:
            self[node1]
        if node2 not in nodes:
            self[node2]
        self[node1][node2] = weight

    def connected_nodes(self, node):
        return {k: v for k, v in self[node].items() if k != node}

    def dijkstras(self, start, end):

        def get_next():
            remaining = {k: v for k, v in distance.items() if k not in finished}
            next_node = min(remaining, key=remaining.get)
            return next_node

        def step(node):
            cost = 0 if parent[node] is None else distance[node]
            for n, w in self.connected_nodes(node).items():
                if distance[n] is None or distance[n] > w + cost:
                    parent[n] = node
                    distance[n] = cost + w
            finished.append(node)

        def print_path():
            path = [end]
            node = end
            while parent[node] != start:
                path.append(parent[node])
                node = parent[node]
            path.append(start)
            path.reverse()
            return path

        finished = []
        distance = {}
        parent = {}

        for node in self.keys():
            distance[node] = 0 if (node == start) else math.inf
            parent[node] = None

        step(start)
        while end not in finished:
            nextNode = get_next()
            step(nextNode)

        path_list = print_path()
        for i in path_list:
            path1.append(i)

        return distance

g = DirectedGraph()
g.connect('MAIN GATE', 'OLD ADMIN', 100)
g.connect('OLD ADMIN', 'MAIN GATE', 100)
g.connect('OLD ADMIN', 'R BLOCK', 20)
g.connect('R BLOCK', 'OLD ADMIN', 20)
g.connect('OLD ADMIN', 'S BLOCK', 20)
g.connect('S BLOCK', 'OLD ADMIN', 20)
g.connect('R BLOCK', 'PARKING', 10)
g.connect('PARKING', 'R BLOCK', 10)
g.connect('G BLOCK', 'S BLOCK', 500)
g.connect('S BLOCK', 'BARRICADE', 500)
g.connect('BARRICADE', 'S BLOCK', 500)
g.connect('BARRICADE', 'BUS STOP', 100)
g.connect('BUS STOP', 'BARRICADE', 100)
g.connect('BARRICADE', 'BOYS BASKETBALL COURT', 10)
g.connect('BOYS BASKETBALL COURT', 'BARRICADE', 10)
g.connect('BARRICADE', 'G BLOCK', 10)
g.connect('G BLOCK', 'BARRICADE', 10)
g.connect('BUS STOP', 'PLOT GATE', 650)
g.connect('PLOT GATE', 'BUS STOP', 650)
g.connect('G BLOCK', 'ADMIN', 20)
g.connect('ADMIN', 'G BLOCK', 20)
g.connect('G BLOCK', 'AUDI', 20)
g.connect('AUDI', 'G BLOCK', 20)
g.connect('ADMIN', 'CANTEEN', 5)
g.connect('CANTEEN', 'ADMIN', 5)
g.connect('AUDI', 'CANTEEN', 5)
g.connect('CANTEEN', 'AUDI', 5)
g.connect('ADMIN', 'J BLOCK', 20)
g.connect('J BLOCK', 'ADMIN', 20)
g.connect('AUDI', 'J BLOCK', 20)
g.connect('J BLOCK', 'AUDI', 20)
g.connect('J BLOCK', 'GIRLS BASKETBALL COURT', 10)
g.connect('GIRLS BASKETBALL COURT', 'J BLOCK', 10)
g.connect('PARKING', 'GIRLS VOLLEYBALL COURT', 480)
g.connect('GIRLS VOLLEYBALL COURT', 'PARKING', 480)
g.connect('GIRLS VOLLEYBALL COURT', 'GIRLS BASKETBALL COURT', 20)
g.connect('GIRLS BASKETBALL COURT', 'GIRLS VOLLEYBALL COURT', 20)
g.connect('PARKING', 'WHITE BUILDING', 40)
g.connect('WHITE BUILDING', 'PARKING', 40)
g.connect('PARKING', 'BANK', 50)
g.connect('BANK', 'PARKING', 50)

This part of the code is the implementation of Dijkstra’s algorithm, which is the key behind finding the shortest path between two nodes. The connect() calls at the end of the code are used to form a virtual weighted graph; the weight is the third parameter.
The connected_nodes() function returns the nodes connected to a given node.
The get_next() function is used to get the next node to process from the present node.
The print_path() function is used to get the entire path between the two nodes.

Table 5.1 Complexity comparison

Implementation                              Time complexity
Dijkstra’s original (simple list)           O(|V|²)
Fibonacci heap (min-priority queue)         O(|E| + |V| log |V|)

if command2 == 'PLAY SONG':
    print('Enter the song name')
    speaker.Speak('Enter the song name')
    song1 = myCommand()
    song = song1 + ".mp3"
    path3 = fs.find(song)
    subprocess.call([r"C:\Program Files (x86)\VideoLAN\VLC\vlc.exe", path3])
    do_you_want_to_continue()

This part of the code implements the play song functionality, as is clear from the first line of code. It takes the song name as input and searches for it on the system; if it is present, the song is played using an appropriate media player.

if command2 == 'FILE SEARCH':
    print('Which file do you want to search?')
    speaker.Speak('Which file do you want to search')
    file = myCommand()
    print(file)
    path3 = fs.find(file)
    print('File ' + file + ' found.\n')
    speaker.Speak('File found')
    print('And location of file is :- ')
    speaker.Speak('And location of file is')
    print(path3)
    print('Do you want to open this file?')
    speaker.Speak('Do you want to open this file')
    ch = myCommand()
    if ch == 'yes':
        os.startfile(path3)
    do_you_want_to_continue()

This part of the code implements the file search functionality, as is clear from the first line of code. It takes the file name as input and searches for it on the system; if it is present, it asks whether the user wants to open the file, and if the user says yes, the assistant opens it; otherwise an appropriate message is given about whether the file was found or not.
It is a step-by-step process: first the file name is taken, then the find(file) function is called, and after getting the path of the file it is opened with the help of the startfile(path) function, as shown in the code.

if command2 == 'OPEN BROWSER':
    print('What you want to search?')
    speaker.Speak('What you want to search')
    t_search = myCommand()
    print(t_search)
    f_text = 'https://www.google.co.in/search?q=' + t_search
    wb.get(chrome_path).open(f_text)
    do_you_want_to_continue()

This part of the code implements the web crawler functionality: the user who wants to search for any information in the web browser inputs the query in voice form, and the assistant opens the web browser with the searched information. The input in voice form is saved in the t_search string.

Then wb.get(chrome_path).open(f_text) is used to open the browser with the user’s query searched. As we can see, two functions, get and open, are used to implement the web crawler.

if command2 == 'OPEN APPLICATION':
    print('Which application do you want to open?')
    speaker.Speak('Which application do you want to open')
    app_name = myCommand()
    print(app_name)
    app_name = app_name.upper()
    if app_name == 'VLC':
        op.find('vlc.exe')
    if app_name == 'CALCULATOR':
        op.find('calc.exe')
    if app_name == 'GOOGLE CHROME':
        op.find('chrome.exe')
    if app_name == 'ECLIPSE':
        op.find('eclipse.exe')
    if app_name == 'CMD':
        op.find('cmd.exe')
    if app_name == 'NOTEPAD':
        op.find('notepad.exe')
    if app_name == 'VIRTUALDJ':
        op.find('virtualdj_home.exe')
    do_you_want_to_continue()

This part of the code implements the open application functionality, in which the user says the name of the application that he wants to open; this is saved in the app_name string, which can then be used for further processing.

After this, the op.find() function is called with the executable name (for example 'vlc.exe') as the parameter; the path of the application is searched, and then the application is opened to provide the output to the user.

if command2 == 'EXIT':
    print('Okay the assistant system is closing')
    speaker.Speak('Okay we are closing the assistant system')
    sys.exit(2)

This part of the code lets the user exit the system directly by simply saying “exit”; the system responds by saying ‘Okay we are closing the assistant system’ and stops functioning.

def do_you_want_to_continue():
    print('Do you want to continue Yes/No?')
    speaker.Speak('Do you want to continue Yes/No')
    ch = myCommand()
    if ch == 'yes':
        print(ch + '\n')
        print('Say Open Path Finder to open Path Finder')
        print('Say Open Browser to search on Google')
        print('Say File Search to search a file')
        print('Say Play Song to play a song')
        print('Say Open Application to open an application\n')
        print('I am ready for your next command')
        assistant(myCommand())
    else:
        print(ch)
        speaker.Speak('Thank you for using the system')
        raise SystemExit(0)  # SystemExit must be raised for the program to actually stop

This part of the code is used to ask the user whether he wants to continue with the operation being performed.

def find(name):
    # b is the list of drive letters returned by get_drives()
    for i in b:
        for root, dirs, files in os.walk(i + "\\"):
            if name in files:
                p = os.path.join(root, name)
                return p

This part of the code is used to find the path of a particular file present in the system. It provides the complete path of the saved file. The code makes use of a list b, which contains the value returned by the function get_drives(). This function is explained below.

from os.path import exists  # exists() is used to check whether a drive letter is present

def get_drives():
    drives_list = []
    for drive in range(ord('A'), ord('N')):
        if exists(chr(drive) + ':'):
            drives_list.append(chr(drive) + ":")
    return drives_list

The get_drives() function is used to get the list of drives present in the system. This code is required for searching for a file in the system.

import cx_Oracle

def f1(s, d):
    con = cx_Oracle.connect('rishabhtiwari/a@XE')
    cur = con.cursor()
    dataset = (s, d)
    cur.execute("select description from direction where source = :1 and destination = :2",
                dataset)
    row = cur.fetchone()
    cur.close()   # close the cursor and connection before returning the fetched row
    con.close()
    return row

This part of the code is used to maintain connectivity with the database. The database used in the project is Oracle. cur.fetchone() is used to fetch a single row from the table in the database.

The query "select description from direction where source = :1 and destination = :2" is used to get the required description of the route between the two nodes, the source and the destination.

from PIL import ImageTk, Image

def getcontents():
    readfile = open("pathfile.txt", "r")
    contents = readfile.read()
    e3.insert(END, contents + "\n")

if command2.strip() == "PATHFINDER" or command2.strip() == "PATH FINDER":
    pathfile = open("pathfile.txt", "w")
    try:
        speaker.Speak("From where would you like to start")
        source = myCommand()

        speaker.Speak("Where would you like to go")
        destination = myCommand()
        source = source.upper()
        destination = destination.upper()

        dirstring = "Shortest path from " + str(source) + " to " + str(destination) + " is: \n\n"

        e3.insert(END, "You want to go from " + source + " to " + destination + "\n")
        pathfile.write("You want to go from " + source + " to " + destination + "\n\n")
        speaker.Speak("You want to go from " + source + " to " + destination)

        t.g.dijkstras(source, destination)
        path = t.path1

        for i in range(0, len(path)):
            if i < len(path) - 1:
                pathfile.write(path[i] + " ---> ")
                speaker.speak(path[i] + " to ")
                # e3.insert(END, path[i] + " to ")
            elif i < len(path):
                pathfile.write(path[i] + "\n")
                speaker.speak(path[i])

        for i in range(0, len(path)):
            if i < len(path) - 1:
                k = con.f1(path[i], path[i + 1])
                k1 = ','.join(k)
                speaker.speak(path[i] + " to " + path[i + 1] + " is" + k1)
                dirstring += path[i] + " ---> " + path[i + 1] + " :- " + k1 + "\n"

        e3.insert(END, dirstring)

        readfile = open("pathfile.txt", "r")
        contents = readfile.read()
        e3.insert(END, contents + "\n")


def create_window():
    root = Tk()
    root.title("AI DRIVEN ASSISTANT SYSTEM")
    root.geometry("500x600")
    return root

win = create_window()
imgpath = Image.open("psit.jpg")
img = ImageTk.PhotoImage(imgpath)
panel = Label(win, image=img)
panel.place(x=10, y=10)

l0 = Label(win, text="WELCOME TO AI DRIVEN ASSISTANT SYSTEM")
l0.place(x=150, y=70)

global b2, e3
e3 = Text(win)
e3.place(x=10, y=150, width=480, height=380)

e3.insert(END, 'Say \'Path Finder\' to open Path Finder\n')
e3.insert(END, 'Say \'Google Maps\' to open maps\n')
e3.insert(END, 'Say \'Browser\' to search on Google\n')
e3.insert(END, 'Say \'File Search\' to search a file\n')
e3.insert(END, 'Say \'Play Song\' to play a song\n')
e3.insert(END, 'Say \'Application\' to open an application\n')
e3.insert(END, 'Say \'Exit\' to quit the application\n')
e3.insert(END, 'I am waiting for your command...\n\n')

b1 = Button(win, text="Start Application", command=getinputs)
b1.place(x=90, y=550)

b3 = Button(win, text="Quit", command=quit)
b3.place(x=350, y=550)

win.mainloop()

This part of the code is used to build the user interface through which the user interacts with the assistant system. The path that is displayed on the screen is implemented with the help of this code.

Figure 5.2 shows the result of the above-mentioned code, in which the user can see the path to the desired location.

from tkinter import *
from PIL import ImageTk, Image

def createwin():
    window = Tk()
    window.title("Join")
    window.geometry("300x300")
    window.configure(background='grey')
    return window

window = createwin()
path = "psit.jpg"
img = ImageTk.PhotoImage(Image.open(path))
panel = Label(window, image=img)
panel.pack(side="bottom", fill="both", expand="yes")
window.mainloop()

This part of the code is used to place images on the user interface and give it a proper appearance for the user.

CHAPTER 6

TESTING

The reason behind testing is to find errors. Every program or piece of software has errors in it, against the common view that there are no errors if the program or software is working. Testing is therefore executing the programs with the intention of finding the errors in them; hence a successful test is one that finds errors. Testing is an activity that is not restricted to being performed after the development phase is complete; it is carried out in parallel with all stages of system development, starting with requirement specification.

Test cases were devised with a purpose in mind. A test case is a set of data that the system will process as normal input. The software units developed in the system are modules and routines that are assembled and integrated to perform the required function of the system. Test results, once gathered and evaluated, provide a qualitative indication of the software quality and reliability, and serve as the basis for design modification if required. In this phase, testing is done at different levels. The testing phase ensures that the implementation works accurately and efficiently before live operation commences.

6.1 Unit Testing


Unit testing was done after the coding phase. The purpose of unit testing was to locate errors in the current module, independent of the other modules. Some changes in the coding were made during the testing phase. Finally, all the modules were individually tested following a bottom-up approach, starting with the smallest and lowest-level modules and then testing one at a time.
In our project we performed unit testing on each module by passing parameter values to the functions to check whether these units are able to provide the required output or not. We performed this on find(filename) to check whether it will find the file’s path or not.

We performed this on assistant(command) to check whether it performs the required functionality that the user requested or not.

We performed this on f1(s, d) to check whether it is able to fetch the correct description from the database between the source s and the destination d.

Example: f1("MAIN GATE", "ADMIN")
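
A minimal sketch of how such a unit check could be automated with Python’s unittest module is shown below; the module name connection for the f1() database helper is an assumption (the report refers to it only through the alias con), and the test only checks that a non-empty description row is returned for a known pair of locations.

import unittest

import connection as con  # assumed module name; the report imports the f1() helper as "con"

class TestPathDescription(unittest.TestCase):
    def test_f1_returns_description(self):
        # f1() should return a non-empty row describing the route between two known nodes.
        row = con.f1("MAIN GATE", "ADMIN")
        self.assertIsNotNone(row)

if __name__ == "__main__":
    unittest.main()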

6.2 Black Box Testing

This method of software testing tests the functionality of an application as opposed to its internal structures or workings (i.e. white box testing). Specific knowledge of the
application’s code/internal structure and programming knowledge, in general, is not
required. The figure 6.1 represents the process of black box testing. Test cases are built to
specifications and requirements, i.e., what the application is supposed to do. It uses
external descriptions of the software, including specifications, requirements, and design to
derive test cases. These tests can be functional or non-functional, though usually
functional. The test designer selects valid and invalid inputs and determines the correct
output. There is no knowledge of the test object’s internal structure.

Figure 6.1 Black Box Testing [3]

6.3 White Box Testing
This method of software testing tests internal structures or workings of an application, as
opposed to its functionality (i.e. black-box testing). In white-box testing, an internal
perspective of the system as well as programming skills are required and used to design
test cases. The tester chooses inputs to exercise paths through the code and determine the
appropriate outputs. The figure 6.2 represents the process of white box testing.

Figure 6.2 White Box Testing [4]

6.4 Integration Testing


Once the unit testing was over, all the modules were integrated for integration testing. It was verified that the external and internal interfaces are implemented and work as per the design, and that the performance of the modules is not degraded. Figure 6.3 represents the process of integration testing.
In our project we integrated various modules into a main module, and all this code was tested together to check whether it works on integration or not.

The main module in which all the unit tested modules were integrated was called
guipathfinder.py.

Figure 6.3 Integration Testing [5]

6.5 Test Cases

The test cases are shown below:-

Test Case Id | Test Case Scenario                      | Test Steps                                                                                        | Status
Case 1       | Speech recognition                      | 1. Give user input through voice. 2. Check if the assistant recognizes the correct words.        | Pass
Case 2       | Check the path finder functionality     | 1. Give source and destination in voice format. 2. Check if the assistant provides the shortest path. | Pass
Case 3       | Check the open application functionality| 1. Give application name in voice format. 2. Check if the assistant opens the application.       | Pass
Case 4       | Check the web crawler functionality     | 1. Give query to be searched in voice format. 2. Check if the assistant opens the browser with the correct query searched. | Pass
Case 5       | Check the file search functionality     | 1. Give file name in voice format. 2. Check if the assistant opens the correct file.             | Pass
Case 6       | Check the play song functionality       | 1. Give song name in voice format. 2. Check if the assistant plays the song.                     | Pass

Table 6.1 Test Cases

All the inputs are in voice form, and the output provided by the path finder functionality is in voice form and is also displayed on the screen.

CONCLUSIONS AND FUTURE SCOPE

Conclusion

AI DRIVEN ASSISTANT SYSTEM is an assistant system that is used to solve path-related queries of the user inside the college and is also used for automation purposes.

The assistant system proves to be helpful to people who are new to the college, that is, both parents and students. The newcomer only needs to input the source and destination, and the assistant system provides the shortest path between the two locations. This output is in voice form and is also displayed on the screen so that there is no ambiguity in the mind of the user.

The other functionalities include open application, web crawler, searching and opening files, and play song.

If these tasks were done by the user, many steps would need to be performed, but with the help of this assistant system these steps are automated and the work becomes easy for the user.

Taking the example of play song: the user first needs to search where the song is, then click on the song, and only then will it start, and many times the user does not remember where the song is saved. This assistant was made to provide a solution to that problem.

So the process is automated such that the user only needs to say the song name, and the assistant system will search for the song and play it for the user.

Future Scope

The assistant system would be useful to parents when they come to admit their children in the first year.

The system will help them reach their locations by telling them the path.

In the future, a link could be sent to parents through which their child’s number could be traced, so they would come to know where their child is at a particular moment.

This is how the parents will also be able to use the application.

REFERENCES

[1] Vishmita Yashwant Shetty, Nikhil Uday Polekar, Sandipan Utpal Das, Prof. Suvarna Pansambal, "Artificially Intelligent College Oriented Virtual Assistant", e-ISSN: 2320-8163, Volume 4, Issue 2, March-April 2016.

[2] Unnati Dhavare, Prof. Umesh Kulkarni, "Natural Language Processing Using Artificial Intelligence", International Journal of Emerging Trends & Technology in Computer Science (IJETTCS), ISSN 2278-6856.

[3] Arjun R. K., Pooja Reddy, Shama, M. Yamuna, "Research on the Optimization of Dijkstra's Algorithm and Its Applications", Volume No. 04, Special Issue No. 01, April 2015, ISSN (online): 2394-1537.

[4] Ravi, N., Sireesha, V., "Using Modified Dijkstra's Algorithm for Critical Path Method in a Project Network", International Journal of Computational and Applied Mathematics, Volume 5, Number 2, pp. 217-225, 2010.

[5] H. N. Ahuja, S. P. Dozzi, and S. M. Abourizk, Project Management, New York: Wiley, 1994.

[6] E. W. Dijkstra, "A Note on Two Problems in Connexion with Graphs", Numerische Mathematik, Vol. 1, pp. 269-271, 1959.

[7] M. Barbehenn, "A Note on the Complexity of Dijkstra's Algorithm for Graphs with Weighted Vertices", IEEE Transactions on Computers, Vol. 47, No. 2, 1998.

[8] T. A. J. Nicholson, "Finding the Shortest Route Between Two Points in a Network", Computer Journal, 9 (1966), pp. 275-280.

[9] Narsingh Deo, Graph Theory with Applications to Engineering and Computer Science, Prentice Hall of India, 1997 edition.

[10] A. V. Goldberg and C. Silverstein, "Implementations of Dijkstra's Algorithm Based on Multi-Level Buckets", Technical Report 95-187, NEC Research Institute, Inc., 1995.

[11] Ravi, N., Sireesha, V., "Using Modified Dijkstra's Algorithm for Critical Path Method in a Project Network", International Journal of Computational and Applied Mathematics, Volume 5, Number 2, pp. 217-225, 2010.

[12] Sudhakar, T. D., Vadivoo, N. S., Slochanal, S. M. R., Ravichandran, S., "Supply Restoration in Distribution Networks Using Dijkstra's Algorithm", International Conference on Power System Technology (PowerCon 2004), Singapore, 2004.

[13] Bhabad, S. S., and Kharate, G. K., "An Overview of Technical Progress in Speech Recognition", International Journal of Advanced Research in Computer Science and Software Engineering, 2013.

[14] Gamit, M. R., Dhameliya, P. K., and Bhatt, N. S., "Classification Techniques for Speech Recognition: A Review", Vol. 5, pp. 58-63, 2015.

[15] Gaikwad, S. K., Gawali, B. W., and Yannawar, P., "A Review on Speech Recognition Technique", International Journal of Computer Applications, 10(3), pp. 16-24, 2010.

[16] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer, "Deep Contextualized Word Representations", 2018.

APPENDIX

AI DRIVEN ASSISTANT SYSTEM has two main functionalities. The first is the path finder, and the second is used for automation purposes.

The path finder functionality is implemented with the help of Dijkstra’s algorithm, which provides the shortest path between two locations.

The algorithm is shown below-

function Dijkstra(Graph, source):

    create vertex set Q

    for each vertex v in Graph:
        dist[v] ← INFINITY
        prev[v] ← UNDEFINED
        add v to Q

    dist[source] ← 0

    while Q is not empty:
        u ← vertex in Q with min dist[u]
        remove u from Q

        for each neighbor v of u:        // only v that are still in Q
            alt ← dist[u] + length(u, v)
            if alt < dist[v]:
                dist[v] ← alt
                prev[v] ← u

    return dist[], prev[]

The main condition in the algorithm is dist[u] + length(u, v) < dist[v], which drives the relaxation step. Dijkstra’s original algorithm does not use a min-priority queue and runs in time O(|V|²), where V is the number of nodes. When implemented with a Fibonacci heap, the complexity becomes O(|E| + |V| log |V|), where E is the number of edges.
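
For comparison, a compact priority-queue variant can be sketched in Python with the standard heapq module. This is an illustrative sketch only, not the project’s implementation; it assumes the same adjacency-dictionary shape used by the DirectedGraph class in Chapter 5 (where each node may carry a zero-weight self entry).

import heapq
import math

def dijkstra_heap(graph, source):
    # graph: {node: {neighbour: weight, ...}, ...}
    dist = {node: math.inf for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                  # skip stale heap entries
        for v, w in graph[u].items():
            if v == u:
                continue              # ignore the zero-weight self entry
            alt = d + w
            if alt < dist[v]:
                dist[v] = alt
                heapq.heappush(heap, (alt, v))
    return dist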
