
A

Technical Seminar report

On

VIRTUAL PERSONAL ASSISTANT

Submitted for the partial fulfillment of requirements for the award of the

degree of

BACHELOR OF TECHNOLOGY
IN
ELECTRONICS AND COMMUNICATION ENGINEERING

Submitted by

PATURI SASIREKHA
20BF1A04H2

SRI VENKATESWARA COLLEGE OF ENGINEERING


DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
(AUTONOMOUS)
Karakambadi Road, TIRUPATI – 517507
2023-24

SRI VENKATESWARA COLLEGE OF ENGINEERING
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
(AUTONOMOUS) TIRUPATI – 517507

2023-2024

CERTIFICATE

This is to certify that the seminar report entitled "Virtual Personal Assistant" is a bonafide record of the technical seminar done and submitted by P. SASIREKHA, bearing 20BF1A04H2, in partial fulfillment of the requirements for the award of the B.Tech Degree in ELECTRONICS AND COMMUNICATION ENGINEERING of Sri Venkateswara College of Engineering, Tirupati.

SEMINAR COORDINATOR HEAD OF THE DEPARTMENT

ACKNOWLEDGEMENTS

I would like to express my gratitude and sincere thanks to Dr. D. Srinivasulu Reddy, Head of the Department of ELECTRONICS AND COMMUNICATION ENGINEERING, for his kind support and encouragement during the course of my study and in the successful completion of the technical seminar.

I would like to express my gratitude to Mr. P. Rajesh, Associate Professor and Seminar Coordinator, ECE Department, for his continuous follow-up and timely guidance in delivering seminar presentations effectively.

It is my pleasure to convey thanks to the faculty of the ECE Department for their help in selecting the right theme for the technical seminar.

I have great pleasure in expressing my hearty thanks to our beloved Principal, Dr. N. Sudhakar Reddy, for his support and encouragement.

I would like to thank my parents and friends, who have made the greatest contributions to all my achievements.

P.SASIREKHA

(20BF1A04H2)

ABSTRACT

Virtual Personal Assistants (VPAs) are AI applications that have revolutionized the way we
manage tasks and access information. These intelligent systems, found in smartphones, smart speakers, and
other devices, use artificial intelligence and natural language processing to provide personalized assistance.
VPAs can schedule appointments, send messages, answer questions, and control smart home devices, among
other tasks. Because VPAs are always listening and learning from what we say, they also raise privacy
concerns, so users should be careful about what information they share and how it is used. Despite these
concerns, VPAs have changed the way we interact with computers and have become a significant part of our
daily lives.

CONTENTS
CHAPTER DESCRIPTION PAGE NO

LIST OF TABLE vi

LIST OF FIGURES vii

1 INTRODUCTION 1-2

2 BACKGROUND

2.1 Investigational years: 1910s–1980s 3

2.2 The beginning of intelligent virtual assistants: 1990s–Present 4

3 BASIC CONCEPT USED

3.1 Natural Language Processing 5

3.2 Automatic Speech Recognition 6

3.3 Artificial Intelligence 7-8

3.4 Inter-Process Communication 9-10

4 WORKING 11-15

5 HARDWARE AND SOFTWARE REQUIREMENTS 16

6 FEATURES 17

7 COMPARISON OF NOTABLE VIRTUAL ASSISTANTS 18

8 MERITS AND DEMERITS 19

9 CONCLUSION 20

10 REFERENCES 21

LIST OF TABLES

TABLE 1 COMPARISON 18

LIST OF FIGURES

Figure 4.1 WORKING OF VPAs 13

Figure 4.2 STEPS OF NLP 14

Figure 4.3 WORKING OF SPEECH RECOGNITION 15

Figure 6.1 FEATURES 17

Virtual Personal Assistant

1. INTRODUCTION

What Is a Virtual Assistant?


A virtual assistant is an independent contractor who provides administrative services to
clients while operating outside of the client's office. A virtual assistant typically operates from a home office
but can access the necessary planning documents, such as shared calendars, remotely.

The application of Virtual Assistants (VAs) is growing fast in our personal and professional
lives. It was predicted that 25% of households using a VA would have two or more devices by 2021. A virtual
assistant is an intelligent application that can perform tasks or provide services for a person in response to
commands or inquiries. Some VAs can understand and respond to human speech using synthesized voices. Users
may use voice commands to ask their VA to answer questions, manage home appliances, control media
playback, and handle other essential activities such as reading email, creating to-do lists, and organizing
meetings on calendars. In the Internet of Things (IoT) world, a VA is a popular service for communicating
with users through voice commands.

VA capabilities and usage are rising rapidly, thanks to new technologies meeting people's
requirements and a strong focus on voice user interfaces. Samsung, Google, and Apple each have a
considerable smartphone user base. Microsoft's Windows-based personal computers, smartphones, and smart
speakers have a sizable installed base of intelligent VAs, as do Amazon's smart speakers. Over 100 million
people have used Conversica's short message and email Intelligent Virtual Assistant (IVA) services in their
companies.

Famous virtual assistants like Amazon Alexa and Google Assistant are typically cloud-based
for maximum performance and data management. Many behavioral traces, including the user’s voice activity
history with extensive descriptions, can be saved in a VA ecosystem’s remote cloud servers during this
process.
The VA story started in the 1910s, and the growth of technology has supported VAs'
improvement. The application of Artificial Intelligence (AI) was also a turning point in the VA journey:
using AI to develop VAs was a great leap in increasing their capabilities. Currently, VAs use narrow AI with
limited options.

DEPARTMENT OF ECE, SVCE, TIRUPATI 1


However, the use of general AI in the near future could revolutionize the quality of VAs'
services. Virtual assistants have become more prominent as small businesses and startups rely on virtual
offices to keep costs down and businesses of all sizes increase their use of the internet for daily operations.
Because a virtual assistant is an independent contractor, a business does not have to provide the same benefits
or pay the same taxes that it would for a full-time employee. A virtual assistant is different from a salaried
administrative assistant who works from home and would have the same compensation and same tax structure
as any other full-time employee.

Also, since the virtual assistant works offsite, there is no need for a desk or other workspace at
the company's office. A virtual assistant is expected to pay for and provide their own computer equipment,
commonly used software programs, and high-speed Internet service.

KEY TAKEAWAYS

 A virtual assistant is a self-employed worker who specializes in offering administrative services to


clients from a remote location, usually a home office.
 Typical tasks a virtual assistant might perform include scheduling appointments, making phone calls,
making travel arrangements, and managing email accounts.
 Some virtual assistants specialize in offering graphic design, blog writing, bookkeeping, social media,
and marketing services.
 For an employer, one advantage of hiring a virtual assistant is the flexibility to contract for just the
services they need.

People employed as virtual assistants often have several years of experience as an


administrative assistant or office manager. New opportunities are opening up for virtual assistants who are
skilled in social media, content management, blog post writing, graphic design, and internet marketing.
As working from home has become more accepted for both workers and employers, particularly in the
aftermath of the COVID-19 pandemic, the demand for skilled virtual assistants is expected to grow

2. BACKGROUND

2.1 Investigational years: 1910s–1980s


In 1922, an interesting toy named Radio Rex was introduced; it was the first voice-activated
toy. A dog-shaped toy would emerge from its den the moment its name was called.
Bell Labs introduced "Audrey," an Automatic Digit Recognition device, in 1952. It took up a
six-foot-high relay rack, consumed much power, had many wires, and suffered from all the issues that come
with complicated vacuum-tube electronics. Despite this, Audrey was able to discriminate between phonemes,
the basic components of speech. However, it was restricted to accurate digit recognition by designated
speakers. As a result, it could be used for voice dialing, although push-button dialing was generally
cheaper and faster than pronouncing the digits in order.
Another early device that could carry out digit recognition was the Shoebox, a voice-activated
calculator that IBM developed. After its first market debut in 1961, it was shown to the public during the
1962 Seattle World's Fair. This early machine, built nearly twenty years before IBM's first Personal
Computer debuted in 1981, was capable of recognizing sixteen spoken words and the digits 0 through 9.
ELIZA, the first Natural Language Processing (NLP) application or chatbot, was created at
MIT in the 1960s. ELIZA was designed to "show that man-machine interaction is essentially superficial."
It applied pattern matching and substitution procedures to written responses to simulate conversation,
creating the impression that the machine understood what was being said.
ELIZA was designed by Professor Joseph Weizenbaum. During ELIZA's development, his
assistant once asked him to leave the room so that she and ELIZA could chat. Professor Weizenbaum later
remarked that he had not realized that brief exposures to a relatively simple computer program could induce
powerful delusional thinking in otherwise normal people. The ELIZA effect, the tendency to instinctively
believe that machine behaviors are equivalent to human behaviors, was named after this.
Anthropomorphizing is a phenomenon that occurs in human interactions with VAs.
When DARPA funded a five-year Speech Understanding Research effort at Carnegie
Mellon in the 1970s, the goal was to reach a vocabulary of 1,000 words. Participants included IBM, Carnegie
Mellon University (CMU), and Stanford Research Institute, among many others.
The result was "Harpy," a system that could understand speech and knew around 1,000 words, roughly
equivalent to the vocabulary of a three-year-old.
To reduce speech recognition failures, it could also analyze speech against pre-programmed
vocabularies, pronunciations, and grammatical patterns to determine which word sequences made sense when
spoken.
An improvement on the Shoebox was released in 1986 with the Tangora, a speech recognition
typewriter. With a vocabulary of 20,000 words, it was able to anticipate the most likely outcome based on its
information, earning it the name "fastest typewriter." As part of its digital signal processing, IBM used a
Hidden Markov model, which integrates statistics to predict which phonemes are likely to follow a given
phoneme. However, every speaker had to train the typewriter to recognize his or her voice, and had to pause
between words.
2.2 The beginning of intelligent virtual assistants: 1990s–Present
To compete for customers in the 1990s, companies such as IBM, Philips, and Lernout &
Hauspie began integrating digital speech recognition into personal computers. The first smartphone, the IBM
Simon, introduced in 1994, laid the groundwork for today's smart virtual assistants.
In 1997, Dragon's NaturallySpeaking application was able to detect and transcribe natural human speech at
a pace of 100 words per minute, with no need for pauses between words. NaturallySpeaking is still available
for download, and many doctors in the United States and the United Kingdom continue to use it to keep track
of their medical records.
In 2001, Colloquis released SmarterChild on AIM and MSN Messenger, among other
platforms. SmarterChild could play games, check the weather, and look up data. It could even converse with
users to a certain extent, albeit only through text.
Siri, which debuted on October 4, 2011, as a feature of the iPhone 4S, was the first modern
digital VA installed on a smartphone. Siri was built after Apple Inc. purchased Siri Inc. in 2010, a spin-off
of SRI International, a research institute financed by DARPA and the US Department of Defense. It was
created to make texting, making phone calls, checking the weather, and setting alarms easier. It can now also
make restaurant recommendations, perform Internet searches, and offer driving directions.
Amazon debuted Alexa alongside the Echo in November 2014. Later, in April 2017, Amazon
launched a service that allows users to create conversational interfaces for any VA or interface.
From 2017 to 2021, all the VAs mentioned above were further developed, and ever more intelligent VAs are
being used for individual and professional activities. Companies in different fields use VAs to improve the
quality of their decisions at all levels, from operations to top management.


3. BASIC CONCEPTS USED

The working of a Virtual Assistant uses the following principles:

3.1 Natural Language Processing:

To understand the user's speech input.


Speech Recognition:
VPAs use speech recognition algorithms to convert spoken language into
text. This involves processing audio input to identify individual words and transcribe them into a
format that the VPA can understand.
Intent Recognition:
Intent recognition is a crucial aspect of NLP in VPAs. It involves
determining the user's intention behind a given command or query. For example, if a user says, "What's
the weather like today?" the VPA needs to recognize that the intent is to inquire about the weather.
Entity Recognition:
Once the intent is identified, entity recognition comes into play. This involves
identifying specific pieces of information within the user's input that are relevant to fulfilling the intent.
In the weather example, the entities might include "weather" and "today."
Context Awareness:
NLP helps VPAs maintain context during conversations. For instance, if a user asks,
"Who is the president?" and follows up with "Tell me more about him," the VPA needs to understand
that the second question is related to the first.
Language Understanding:
NLP algorithms enable VPAs to understand the intricacies of human language, such as
synonyms, homonyms, and variations in sentence structure. This is essential for accurately interpreting
user input.
Response Generation:
Once the user's intent and relevant entities are identified, the VPA generates a response.
This could involve accessing databases, web services, or built-in functionalities to provide information
or perform actions.
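The intent and entity recognition steps described above can be sketched with a minimal rule-based classifier. This is a simplified stand-in for the statistical models real VPAs use; all keyword lists and intent names here are illustrative:

```python
import re

# Illustrative intent definitions: each intent is triggered by keywords.
INTENT_KEYWORDS = {
    "get_weather": ["weather", "temperature", "forecast"],
    "set_reminder": ["remind", "reminder"],
    "play_music": ["play", "song", "music"],
}

def recognize(utterance: str) -> dict:
    """Return the recognized intent and any entities found in the text."""
    text = utterance.lower()
    intent = "unknown"
    for name, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            intent = name
            break
    # Entity recognition: pull out simple date words (illustrative only).
    entities = re.findall(r"\b(today|tomorrow|tonight)\b", text)
    return {"intent": intent, "entities": entities}

print(recognize("What's the weather like today?"))
# {'intent': 'get_weather', 'entities': ['today']}
```

A production system would replace the keyword lists with trained intent and entity models, but the input-to-structured-output shape of the interface is the same.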

Continuous Learning:
Some VPAs employ machine learning techniques to improve their performance over time.
They learn from user interactions, adapting to individual preferences and refining their language
understanding capabilities.
Multimodal Interaction:
Modern VPAs support multiple modes of interaction, including speech, text, and visual
inputs. NLP helps in seamlessly integrating these modalities, allowing users to interact in the way that
is most convenient for them.
Personalization:
NLP facilitates personalization by allowing VPAs to understand user preferences, adapt
to individual communication styles, and provide tailored responses based on historical interactions.
Integration with Other Services:
NLP is crucial for integrating VPAs with various third-party services. This involves
understanding and interpreting information from external sources, such as calendars, maps, or online
databases, to fulfill user requests effectively.
In summary, NLP in VPAs is a sophisticated process that involves several stages, from
recognizing speech and intent to understanding context and generating personalized and contextually
relevant responses. As technology advances, the capabilities of VPAs are expected to improve, offering
more natural and intuitive interactions with users.

3.2 Automatic Speech Recognition:

To understand commands according to the user's input.
Automatic Speech Recognition (ASR) is a technology that converts spoken language into written text.
It plays a crucial role in various applications, including virtual personal assistants, transcription
services, voice-activated devices, and more. Here's an in-depth look at the key aspects of Automatic
Speech Recognition:
Acoustic Processing:
ASR begins with the acquisition of audio data. Microphones capture the spoken words, and
the acoustic signal is processed to extract relevant features. This involves breaking down the audio
signal into smaller, manageable units.

Feature Extraction:
Feature extraction involves transforming the acoustic signal into a set of features that can be
used for analysis. Common features include spectrograms, mel-frequency cepstral coefficients
(MFCCs), and others that highlight relevant aspects of the speech signal.
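As a simplified illustration of feature extraction, the sketch below frames a signal into overlapping windows and computes per-frame log energy in pure Python. Real systems would compute MFCCs or spectrograms with DSP libraries; the frame and hop sizes here are just typical-looking values for 16 kHz audio:

```python
import math

def frame_signal(signal, frame_len=400, hop=160):
    """Split a sample sequence into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frames.append(signal[start:start + frame_len])
    return frames

def log_energy(frame):
    """Log energy of one frame -- a crude acoustic feature."""
    energy = sum(s * s for s in frame)
    return math.log(energy + 1e-10)  # epsilon avoids log(0) on silence

# Synthetic "audio": 1600 samples of a 100 Hz tone at 16 kHz (0.1 s).
signal = [math.sin(2 * math.pi * 100 * n / 16000) for n in range(1600)]
features = [log_energy(f) for f in frame_signal(signal)]
print(len(features))  # 8 frames from 1600 samples
```

Each frame's feature vector (here a single number; MFCCs would give a dozen or more per frame) is what the later phonetic and language-model stages consume.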
Phonetic Analysis:
ASR systems analyze the phonetic components of speech. Phonetic models map the extracted
features to phonemes, which are the smallest units of sound in a language. This mapping is crucial for
accurately recognizing spoken words.
Language Modeling:
Language models help ASR systems understand the structure and patterns of natural
language. They incorporate statistical or machine learning techniques to predict the likelihood of word
sequences. This helps in distinguishing between homophones and contextually selecting the most
probable words.
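A toy bigram model shows how language modeling lets an ASR system prefer one homophone over another given context. The tiny corpus below is invented for illustration; a real model would be trained on billions of words:

```python
from collections import defaultdict

# Tiny training corpus standing in for real language-model training data.
corpus = "i want to go i want to eat i want two apples".split()

# Count bigram and unigram occurrences.
bigrams = defaultdict(int)
unigrams = defaultdict(int)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[(w1, w2)] += 1
    unigrams[w1] += 1

def bigram_prob(w1, w2):
    """P(w2 | w1); zero if w1 was never seen."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

# Disambiguate the homophones "to" vs "two" after the word "want".
print(bigram_prob("want", "to") > bigram_prob("want", "two"))  # True
```

Given identical acoustics for "to" and "two", the decoder multiplies the acoustic score by these context probabilities and picks the more probable word sequence.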
Hidden Markov Models (HMMs):
Many traditional ASR systems use Hidden Markov Models to represent the statistical
relationship between observed acoustic features and the underlying phonetic units. HMMs model the
dynamics of speech and aid in aligning the observed features with phonetic units.
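The alignment role of an HMM can be illustrated with a small Viterbi decoder that finds the most likely phoneme sequence for a sequence of observed acoustic symbols. All states, symbols, and probabilities below are made up for illustration; real systems use thousands of states and continuous emission densities:

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Most likely hidden state path for an observation sequence."""
    # V[t][s]: probability of the best path ending in state s at time t.
    V = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    path = {s: [s] for s in states}
    for obs in observations[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Choose the best predecessor state for s.
            prob, prev = max(
                (V[-2][p] * trans_p[p][s] * emit_p[s][obs], p) for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Toy model: two phoneme states emitting coarse acoustic symbols.
states = ["/k/", "/ae/"]
start_p = {"/k/": 0.6, "/ae/": 0.4}
trans_p = {"/k/": {"/k/": 0.3, "/ae/": 0.7}, "/ae/": {"/k/": 0.4, "/ae/": 0.6}}
emit_p = {"/k/": {"burst": 0.8, "voiced": 0.2}, "/ae/": {"burst": 0.1, "voiced": 0.9}}

print(viterbi(["burst", "voiced"], states, start_p, trans_p, emit_p))
# ['/k/', '/ae/']
```

This is the dynamic-programming alignment the text describes: at each time step the decoder keeps only the best path into each phonetic state.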
Deep Learning Approaches:
More recent advancements in ASR involve deep learning techniques, particularly deep
neural networks (DNNs) and recurrent neural networks (RNNs). Deep learning has shown significant
improvements in speech recognition accuracy by learning complex patterns and representations
directly from the data.
End-to-End ASR:
End-to-End ASR systems, often based on deep learning architectures like Long Short-Term
Memory (LSTM) networks or Transformer models, aim to directly map input audio to output text
without explicitly modeling intermediate linguistic units. This approach has gained popularity for its
simplicity and effectiveness.
Speaker Adaptation:
ASR systems can be adapted to individual speakers or specific environments. This involves
fine-tuning the model based on the characteristics of a particular speaker's voice or the acoustic
conditions in which the system is used.

Real-Time Processing:
ASR systems are often designed for real-time processing, especially in applications like
voice assistants and dictation services. Low-latency algorithms are crucial to provide quick and
responsive results.
Continuous Learning:
Some ASR systems incorporate continuous learning techniques to adapt to variations in
speech patterns over time. This allows the system to improve its performance with more exposure to
diverse speech data.
In summary, Automatic Speech Recognition is a complex process that involves capturing,
processing, and understanding spoken language. The evolution of ASR, from traditional approaches
using HMMs to modern deep learning techniques, has significantly improved the accuracy and
applicability of speech recognition systems in various domains.

3.3 Artificial Intelligence:


To learn things from the user and to store information about user behavior.

Artificial Intelligence (AI) is a fundamental component of virtual personal assistants (VPAs), enabling
them to understand, process, and respond to user commands and queries in a way that mimics human-like
interaction. Here are key aspects of AI in virtual personal assistants:
Natural Language Processing (NLP):
 Purpose: NLP is used to enable the VPA to understand and interpret human language. It
involves tasks such as speech recognition, intent recognition, entity recognition, and sentiment
analysis.
 How it Works: NLP algorithms analyze the structure, context, and meaning of user input,
allowing the VPA to discern the user's intent and provide relevant responses.
Speech Recognition:
 Purpose: To convert spoken language into text that the VPA can understand and process.
 How it Works: Speech recognition algorithms identify and transcribe spoken words, enabling
users to interact with the VPA through voice commands.
Machine Learning (ML):
 Purpose: ML is employed for tasks like personalization, continuous learning, and improving
the VPA's performance over time.
 How it Works: ML algorithms analyze user interactions and feedback, adapting the VPA to

individual preferences and enhancing its language understanding and response generation
capabilities.
Intent Recognition:
 Purpose: To determine the user's intention behind a given command or query.
 How it Works: Through the use of machine learning and NLP, the VPA identifies the purpose
or goal behind a user's input, enabling it to take appropriate actions.
Response Generation:
 Purpose: To generate coherent and contextually relevant responses based on user queries.
 How it Works: Using pre-defined responses, natural language generation, or accessing
external information sources, the VPA formulates replies that align with the user's intent.
Context Awareness:
 Purpose: To maintain context across multiple interactions and understand the meaning of
subsequent commands in relation to previous ones.
 How it Works: The VPA uses context-aware algorithms to remember and reference previous
interactions, providing a more coherent and personalized user experience.
Personalization:
 Purpose: To tailor the VPA's responses and functionalities to individual user preferences.
 How it Works: AI algorithms analyze user behavior, preferences, and historical data to
customize the VPA's interactions, making it more aligned with the user's needs and preferences.
Knowledge Integration:
 Purpose: To access and integrate information from various sources to provide comprehensive
answers.
 How it Works: AI algorithms enable the VPA to connect with databases, online services, and
other sources to retrieve relevant information and answer user queries effectively.
Multimodal Interaction:
 Purpose: To support interaction through various modes, such as voice, text, and visuals.
 How it Works: AI enables the VPA to seamlessly integrate and process inputs from different
modalities, providing users with a more versatile and natural interaction experience.
Security and Privacy:
 Purpose: To ensure the security of user data and uphold privacy standards.
 How it Works: AI algorithms are used to implement security measures, such as biometric
authentication and encryption, to protect user information and maintain privacy.

In essence, AI is the backbone of virtual personal assistants, empowering them to understand,
learn, and adapt to user needs while providing intelligent and contextually relevant assistance.
Advances in AI technologies continue to enhance the capabilities and performance of VPAs, making
them more integral to our daily lives.
3.4 Inter-Process Communication:

To get important information from other software applications.

Inter-Process Communication (IPC) is a set of techniques and mechanisms that allow different software
processes to communicate with each other. In the context of Virtual Personal Assistants (VPAs), IPC is
essential for coordinating the various components and services that contribute to the overall functionality
of the assistant. Here's how IPC is relevant in VPAs:
Microservices Architecture:
VPAs often follow a microservices architecture where different components or
services handle specific tasks, such as natural language processing, speech recognition, and external
integrations. IPC is used to facilitate communication between these microservices.
Speech Recognition and Natural Language Processing:
In a VPA, speech recognition and natural language processing components may be
separate processes. IPC enables these components to exchange information seamlessly. For example,
speech-to-text results from the recognition module are communicated to the NLP module for further
processing.
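The hand-off from speech recognition to NLP described above can be sketched as a producer-consumer pipeline. Here Python's thread-safe queues stand in for a real IPC channel such as sockets, pipes, or a message bus, and the component names are illustrative:

```python
import queue
import threading

# Thread-safe queues stand in for the IPC channel between components.
stt_to_nlp = queue.Queue()
results = queue.Queue()

def speech_recognizer(audio_chunks):
    """Producer: 'transcribes' audio chunks and hands text to NLP."""
    for chunk in audio_chunks:
        stt_to_nlp.put(f"transcript of {chunk}")
    stt_to_nlp.put(None)  # sentinel: no more input

def nlp_component():
    """Consumer: receives transcripts and emits parsed results."""
    while True:
        text = stt_to_nlp.get()
        if text is None:
            break
        results.put({"text": text, "intent": "unknown"})

producer = threading.Thread(target=speech_recognizer, args=(["chunk1", "chunk2"],))
consumer = threading.Thread(target=nlp_component)
producer.start(); consumer.start()
producer.join(); consumer.join()

parsed = []
while not results.empty():
    parsed.append(results.get())
print([p["text"] for p in parsed])
```

The same pattern generalizes to separate processes or machines: each stage only needs to agree on the message format, not on each other's internals.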
Intent Recognition and Response Generation:
The intent recognition component of a VPA needs to communicate with the module
responsible for generating appropriate responses. IPC ensures that the recognized intent is transferred
to the response generation module for the VPA to formulate and deliver a relevant reply.
External Service Integration:
VPAs often integrate with external services, such as calendars, weather APIs, or third-
party applications. IPC is employed to establish communication between the VPA and these external
services, allowing for data retrieval and updates.

Multimodal Interaction:
VPAs support various modes of interaction, including voice, text, and visuals. IPC helps
in coordinating and synchronizing these different modalities, ensuring a cohesive and integrated user
experience.

Context Maintenance:
To provide meaningful and context-aware responses, VPAs need to maintain information
about ongoing interactions. IPC helps in passing context information between different components,
allowing the VPA to remember previous user inputs and actions.
Continuous Learning and Personalization:
Components responsible for continuous learning and personalization in VPAs
communicate through IPC. This facilitates the sharing of user preferences, feedback, and behavior
patterns, allowing the assistant to adapt and improve over time.
Security and Authentication:
IPC is utilized to enforce security measures, such as secure communication channels and
authentication protocols, to protect sensitive information processed by the VPA. This is crucial,
especially when dealing with personal data and user accounts.
Real-Time Updates:
VPAs may need to provide real-time updates or notifications. IPC enables efficient
communication between components, allowing the VPA to push relevant information to the user
promptly.
Distributed Systems:
VPAs often operate as distributed systems where different components may run on
separate servers or devices. IPC ensures communication and coordination between these distributed
components for a seamless user experience.

In summary, Inter-Process Communication is a foundational aspect of Virtual Personal


Assistants, enabling the coordination and collaboration of various components to provide users with
intelligent, context-aware, and personalized assistance across different modes of interaction.

4. WORKING

Virtual personal assistants have practically become a basic requirement in all electronic gadgets,
allowing required tasks to be carried out quickly. More than just being a bot, a VPA can make life easier
for the user in various ways. Speech recognition is one of the relatively new integrations into the VPA;
although it is moderately efficient, it often goes unused because of its high error rate. Even though the
error percentage of upcoming VPAs is around 5 percent, it is still not quite up to the mark for the VPA to
become a basic part of the user's life. Thus, the project's aim is to build a VPA with speech recognition
that has a very minimal error percentage.

Voice recognition is a complex process using advanced concepts such as neural networks and
machine learning. The auditory input is processed, and a neural network with vectors for each letter and
syllable is created; this is called the data set. When a person speaks, the device compares the input to
these vectors, and the syllables with the highest correspondence are selected. As the car has evolved into
a mobile office, safety has become a major concern. According to Statista, there will be over 8 billion
digital voice assistants in use worldwide by 2024, roughly equal to the world population, and the market is
estimated to be worth several billion dollars, while indirect revenues for carriers will be several times
that. A few companies have started offering converging products in the VPA direction, e.g., Conita,
Wildfire, VoxSurf, VoiceGenie, VoiceTel, and Mitel Networks, though only one or two approaches will provide
solutions for the mobile carrier environment. Ultimately, a VPA provides hands-free, eyes-free access to the
web anywhere, anytime, from any phone. Thus, the project's goal is to create a very basic voice-activated
assistant (VPA) using speech recognition that can play songs from YouTube.

The following is the architecture diagram. The majority of current efforts have simply used
neural networks for speech recognition. Despite having a decent level of accuracy, these techniques are
neither efficient nor practical enough to be of meaningful value. They employ a few simple strategies,
including:

Speech-to-text: This allows applications to translate spoken words into digital signals. When
you talk, you create a series of vibrations. The software converts them into digital signals with an
analog-to-digital converter (ADC), extracts sounds, segments them, and matches them to existing phonemes.
Phonemes are the smallest units of a language capable of distinguishing the sound shells of different words.
The system creates a text version of what you said by comparing these phonemes to individual words and
phrases using intricate mathematical models.

Text-to-speech: This concept is the exact opposite of the previous one. This technology translates text into
voice output. The system must go through three steps to convert text to voice: first, it converts the text
into words; then it performs phonetic transcription; and finally, it converts the transcription into speech.
Speech-to-text (STT) and text-to-speech (TTS) are used in virtual assistant technology to ensure smooth and
efficient communication between users and applications. To turn a basic voice assistant with static commands
into a proper AI assistant, the program also needs the ability to interpret user requests with intelligent
tagging and heuristics.
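Putting STT, intent interpretation, and TTS together, the pipeline can be sketched as below. Plain strings stand in for the audio front-end and the speech synthesizer, and all commands and responses are illustrative:

```python
def speech_to_text(audio: str) -> str:
    """Stand-in for a real STT engine: here 'audio' is already text."""
    return audio.lower().strip()

def interpret(text: str) -> str:
    """Map a transcript to a canned response via simple keyword tagging."""
    if "time" in text:
        return "It is 6:30 AM."          # a real assistant would read the clock
    if "play" in text and "music" in text:
        return "Playing music from YouTube."
    if "weather" in text:
        return "It is sunny today."      # a real assistant would call a weather API
    return "Sorry, I can't help with that yet."

def text_to_speech(response: str) -> str:
    """Stand-in for a real TTS engine: return what would be spoken."""
    return f"[spoken] {response}"

# Simulated interaction loop: audio in, spoken response out.
for utterance in ["What time is it?", "Play some music"]:
    print(text_to_speech(interpret(speech_to_text(utterance))))
```

In a working VPA the two stand-in functions would be replaced by real STT and TTS engines, while `interpret` would grow into the NLP pipeline described in Chapter 3.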

Figure 4.1: Working of VPAs

Natural Language Processing: Natural Language Processing (NLP) refers to the AI method of
communicating with an intelligent system using a natural language such as English. Processing of natural
language is required when you want an intelligent system such as a robot to perform according to your
instructions, when you want to hear a decision from a dialogue-based clinical expert system, and so on.

Figure 4.2: Steps of NLP

Automatic Speech Recognition: To understand commands according to the user's input.


Figure 4.3: Working of Speech Recognition

Artificial Intelligence: the ability to learn from the user and store information about their behavior and relationships. It encompasses a system's capacity to calculate, reason, perceive relationships and analogies, learn from experience, store and retrieve information from memory, solve problems, comprehend complex ideas, use natural language fluently, classify, generalize, and adapt to new circumstances.

Inter-Process Communication: to obtain relevant information from other software applications.
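A minimal sketch of inter-process communication, assuming Python: the assistant spawns a second process (here, a stand-in for another application such as a calendar) and reads its answer over a pipe. The "appointment" reply is made up for illustration:

```python
import subprocess
import sys

def ask_other_process(question):
    # The child program below is a stand-in for a real application;
    # an actual assistant would talk to it over an IPC channel
    # (pipe, socket, or platform intent) rather than spawning it.
    child_program = "print('Next appointment: 10:00 dentist')"
    result = subprocess.run(
        [sys.executable, "-c", child_program],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(ask_other_process("What is my next appointment?"))
```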


5.HARDWARE AND SOFTWARE REQUIREMENTS

Hardware:
 A phone with a touch-screen interface.
 At least 512 MB of RAM.
 Internet connectivity.
 USB debugging mode enabled, for development and testing purposes.
Software:
 Operating system: Android 4.1, Windows 8.1, or iOS 6 or higher.
 Kernel version 3.0.16 or higher.
 Support for basic applications such as maps, calendar, camera, and web browsing.
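A sketch of how an app might check these minimums at startup, assuming simple dotted version strings; the device values in the example calls are hypothetical:

```python
# Compare dotted version strings numerically against the minimums
# listed above (Android 4.1, kernel 3.0.16, 512 MB RAM).

def version_tuple(version):
    return tuple(int(part) for part in version.split("."))

def meets_requirements(android_version, kernel_version, ram_mb):
    return (
        version_tuple(android_version) >= version_tuple("4.1")
        and version_tuple(kernel_version) >= version_tuple("3.0.16")
        and ram_mb >= 512
    )

print(meets_requirements("4.4", "3.4.0", 1024))  # a device that qualifies
print(meets_requirements("4.0", "3.4.0", 1024))  # Android version too old
```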

6.FEATURES

Some of the features you may ask a virtual assistant for in day-to-day
use are shown below:
• Make phone calls
• Schedule meetings and appointments
• Get directions
• Send messages
• Set reminders
• Ask questions
• Play music and videos
• Set alarms (e.g., "Wake me up at 6:30 AM")

Figure 6.1: Features
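Internally, features like these are often routed through a dispatch table that maps a recognized command to a handler. The sketch below is illustrative only; the handlers just return strings instead of calling real phone APIs:

```python
# Hypothetical handlers standing in for real phone features.
def make_call(contact):
    return f"Calling {contact}..."

def set_reminder(text):
    return f"Reminder set: {text}"

def play_music(track):
    return f"Playing {track}"

# Dispatch table from recognized command to handler function.
FEATURES = {
    "call": make_call,
    "remind": set_reminder,
    "play": play_music,
}

def handle(feature, argument):
    handler = FEATURES.get(feature)
    return handler(argument) if handler else "Sorry, I can't do that yet."

print(handle("call", "Mom"))
print(handle("dance", ""))
```

A table of handlers keeps the assistant easy to extend: adding a feature means registering one new function rather than editing a long chain of conditionals.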

7.COMPARISON OF NOTABLE VIRTUAL ASSISTANTS

Different AI products on the market serve as virtual assistants, each designed to provide assistant services for a specific product or ecosystem. Behind the different brands of VAs are companies that invest billions of dollars in this field every year. Table 1 lists the most widely used VAs and their capabilities.

Virtual Assistant      Developer        IoT   Chromecast Integration   Smartphone App
Alexa                  Amazon           Yes   No                       Yes
Alice                  Yandex           Yes   No                       Yes
AliGenie               Alibaba          Yes   No                       Yes
Assistant              Speaktoit        No    No                       Yes
Bixby                  Samsung          No    No                       Yes
BlackBerry Assistant   BlackBerry       No    No                       Yes
Braina                 Brainasoft       No    No                       Yes
Clova                  Naver            Yes   No                       Yes
Cortana                Microsoft        Yes   No                       Yes
Duer                   Baidu            N/A   N/A                      N/A
Evi                    Amazon           No    No                       Yes
Google Assistant       Google           Yes   Yes                      Yes
Google Now             Google           Yes   Yes                      Yes
M                      Facebook         N/A   N/A                      N/A
Mycroft                Mycroft          Yes   Yes                      Yes
SILVIA                 Cognitive Code   No    No                       Yes
Siri                   Apple Inc.       Yes   No                       Yes
Viv                    Samsung          Yes   No                       Yes
Xiaowei                Tencent          N/A   N/A                      N/A
Celia                  Huawei           Yes   No                       Yes

Table 1: Comparison of notable virtual assistants

8.ADVANTAGES AND DISADVANTAGES

ADVANTAGES:

 These applications let small, smart hand-held devices combine multiple features.
 They allow you to export and import data.
 They store various kinds of information.
 They make to-do lists.
 They recognize voice commands.
 They control various applications on the device.
 They provide services based on your location.
 They help you plan your whole day.
 They remind you of important things at the right time or location.

DISADVANTAGES:
 Listening problems.
A VPA struggles to process mispronounced or otherwise unclear words.
 Silent-mode support.
A VPA responds with voice output, so it does not work properly in silent mode.
 Language support.
Most VPAs understand only the English language.
 Internet access.
A VPA needs an internet connection to deliver the desired output.


9.CONCLUSION

The Virtual Personal Assistant offers an intelligent, computerized secretarial service. The service is based on the convergence of internet, mobile, and speech-recognition technology. The VPA provides a single point of communication for all of the user's messages, contacts, schedule, and information sources, reducing interruptions and improving time utilization. The paper also suggests a decision structure for handling appointment and meeting requests as well as call screening. The framework initially targets lawyers, doctors, sales personnel, small offices, maintenance crews, and so on, but millions of additional users are expected to adopt it as a standard feature. It avoids many of the problems of other solutions and is designed chiefly to make a VPA that works well enough to be used in more everyday situations. However, the system has limitations of its own: despite its high efficiency, the time it takes to complete each task may be longer than that of other VPAs, and the complexity of the algorithms and concepts makes it difficult to modify in the future.


