
Marathwada Shikshan Prasarak Mandal’s

Deogiri Institute of Engineering and Management Studies,


Aurangabad

Seminar Report

On

Google Assistant with Dialogflow

Submitted By

Rhugved Takalkar (36058)

Dr. Babasaheb Ambedkar Technological University


Lonere (M.S.)

Department of Computer Science and Engineering


Deogiri Institute of Engineering and Management Studies,
Aurangabad
(2019-2020)
Seminar Report
On

Google Assistant with Dialogflow


Submitted By

Rhugved Takalkar (36058)

In partial fulfillment of
Bachelor of Technology
(Computer Science & Engineering)

Guided By
Prof. Amruta Joshi

Department of Computer Science & Engineering


Deogiri Institute of Engineering and Management Studies,
Aurangabad
(2019-2020)
CERTIFICATE

This is to certify that the Seminar entitled “Google Assistant with Dialogflow”
submitted by Rhugved Takalkar is a bona fide work completed under my supervision and
guidance, in partial fulfillment of the requirements for the award of the Bachelor of Technology
(Computer Science and Engineering) degree of Dr. Babasaheb Ambedkar Technological University, Lonere.

Place: Aurangabad
Date: 17/10/2019

Prof. Amruta Joshi (Guide)
Mr. S.B. Kalyankar (Head)

Dr. Ulhas D. Shiurkar


Director,
Deogiri Institute of Engineering and Management Studies,
Aurangabad
ABSTRACT

Google Assistant is the virtual assistant found on smartphones, smart home devices, and a host
of other devices. It is the application that takes in voice commands and completes tasks based on
user input.

Actions on Google is the developer platform that allows you to build applications for Google
Assistant. This is going to be the central console for conversational application development.

Google Assistant is a personal voice assistant that offers a host of actions and integrations. From
sending texts and setting reminders, to ordering coffee and playing your favorite songs, the 1
million+ actions available suit a wide range of voice command needs. Google Assistant is
offered on Android and iOS, but it can even be integrated with other devices like smartwatches,
Google Homes, and Android TVs.

As we know, Actions is the central platform for developing Google Assistant applications.
Actions work with a number of human-computer interaction suites, which simplifies
conversational app development. Out of all the platforms, the most popular is Dialogflow, which
uses an underlying machine learning (ML) and natural language understanding (NLU) schema to
build rich Assistant applications.

This small project provides hands-on practice with Actions and Dialogflow by building an
Assistant application that generates quotes when prompted by a user. I will gain practical
knowledge of human-computer interaction suites, and by the end of this project I will have
built a fully-fledged Google Assistant application.
Contents

1. INTRODUCTION
   1.1 What is Google Assistant?

2. LITERATURE SURVEY
   2.1 Conversational Actions
   2.2 Use Cases
   2.3 How Conversational Actions work
   2.4 Building a Conversational Action
   2.5 Fulfillment using Dialogflow
   2.6 Fulfillment using Actions SDK
   2.7 Responses
   2.8 Entities
   2.9 Entity terminology

3. BRIEF ON SYSTEM
   3.1 Overview
   3.2 Understand how it works
   3.3 Create an Actions project
   3.4 Create a Dialogflow agent
   3.5 Starting a conversation
   3.6 Functioning

4. CONCLUSIONS
   4.1 Conclusion
   4.2 Applications

REFERENCES
ACKNOWLEDGEMENT
List of Screens

Figure          Illustration

Screen 1        An example of a Conversational Action
Screen 2        Conversation fulfillment is a JSON in-JSON out system
Screen 3        Conversation fulfillment when using Dialogflow
Screen 4        A Dialogflow agent parses a user query into structured data for Dialogflow fulfillment
Screens 5-16    Walkthrough screenshots: creating an Actions project, creating a Dialogflow agent, and testing the welcome intent in the Actions Console simulator
1. INTRODUCTION

Google Assistant is rolling out broadly to Android phones. Here is how to use it, along with the
other ways to search with Google on your mobile phone, including for iPhone users.

Google Assistant has now come to Android phones beyond Google's own Pixel. It is a significant
change to how Google has previously offered search on phones, and it is also the latest in a wide
variety of ways people can search with Google on Android.

Below is how Google Assistant fits in with Google's other search options on Android, along with
a refresher on getting the most out of Google on the iPhone.

1.1 What is Google Assistant?

Google Assistant is Google's next-generation way of searching with Google. Instead of giving
links to websites, Google Assistant is designed to hold conversations with you in order to
complete tasks.

Like Siri, Google Assistant can interact with your Android phone to perform a variety of tasks,
such as setting alarms or playing music. Like Siri, it can even control some home automation
devices. Google maintains a page explaining the various types of actions available.

Like Siri, you can ask Google Assistant general questions. Unlike Siri, you will likely find
that Google can handle a wider range of questions than Siri can. That is because Google Assistant
taps into Google's full, web-wide search results every single time you search, making it more
comprehensive.[1]

Siri does not go to the web every time you search. Instead, it tries to guess which of the various
sources it uses may have an answer to your question. If those do not, it sometimes turns to Bing's
comprehensive search results; sometimes it does not. Overall, this means that Siri may often fall
short on the kinds of searches where Google succeeds.

Google Assistant can also keep track of context as you converse with it. For instance, ask "how
old is Stephen Colbert," and it will give you his age. Then just ask "how tall is he," and it
understands that you want Colbert's height, even though you never said his name again. That said,
this is not something new or unique to Google Assistant: Google has handled spoken queries this
way since 2013, and Siri, likewise, is getting smarter about handling these kinds of
conversations.

Building on the overall description above, the purpose of the related project is to develop an
Android application providing an intelligent voice assistant with functionalities such as calling
services, message transformation, mail exchange, alarms, an event handler, location services, a
music playback service, weather checking, search engines (Google, Wikipedia), camera, Bing
Translator, Bluetooth headset support, a help menu, and Windows Azure cloud computing.

For many years, software programs were developed to run on desktop computers. Nowadays,
smartphones are widely used: about 35 percent of Americans own some sort of smartphone. The
market is growing fast, and this wide use also creates more opportunities for smartphone
software.

Software development for smartphones is therefore very promising. Smartphones are normally
operated through gestures and the keyboard, which is not always convenient when input is
completely manual. The most common way people communicate in daily life is through speech. If the
mobile phone can listen to the user's request, handle daily affairs, and give the right response,
it becomes much easier for users to communicate with their phone, and the phone becomes much more
of a "smart" human assistant.

This project focuses on Android development around voice control (recognizing speech, generating
and analyzing the corresponding commands, and responding intelligently and automatically); Google
products and their APIs (Google Maps, Google Weather, Google Search, etc.); the Wikipedia API;
and device features ranging from speech-to-text and text-to-speech technology to Bluetooth
headset support and the camera, together with more advanced techniques such as cloud computing,
multithreading, and image editing in Adobe Photoshop. With all these functionalities and services
explained, the main structure of the project and its goals have been broadly illustrated.

2. LITERATURE SURVEY

2.1 Conversational Actions

Conversational Actions extend the functionality of the Google Assistant by allowing developers
to create custom experiences, or conversations, for users on the Assistant. In a conversation, your
Conversational Action handles requests from the Assistant and returns responses with audio and
visual components. Conversational Actions can also connect to external services for added
conversational or business logic before returning a response.

For example, users can invoke your Conversational Action to get a response from your external
fulfillment service when they want to look up information, get a personalized recommendation,
or perform transactions involving digital payments.[2]

Screen 1. An example of a Conversational Action
2.2 Use cases

Conversational Actions work best for simple use cases that complement another experience.
Good Conversational Actions often fall into these general categories:

• Things people can easily answer. Actions that can be accomplished with familiar input like times or dates, like booking a flight.
• Quick, but compellingly useful Actions. These usually give users immediate benefit for very little time spent, like finding out when their favorite sports team plays next.
• Actions that are inherently better suited for voice. These are typically things you want to do hands-free, like receiving coaching during yoga or light exercise.

2.3 How Conversational Actions work

Unlike with traditional mobile and desktop apps, which use computer-centric paradigms, users
interact with Actions for the Assistant through natural-sounding, back and forth conversation.
Conversational Actions begin when invoked by a user and continue until the user chooses to exit
(using predetermined phrases) or your Conversational Action denotes the end of the
conversation.

During a conversation, user inputs are transformed from speech to text by the Assistant, and
formed into JSON requests for natural language processing. These requests are sent to what's
known as your conversation fulfillment.

Your conversation fulfillment parses the user's query into structured data, processes that data,
and returns a webhook JSON response to the Assistant. The Assistant then processes and
presents your response to the user.

Screen 2. Conversation fulfillment is a JSON in-JSON out system
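To make this JSON-in, JSON-out exchange concrete, below is a minimal webhook sketch in TypeScript using Express, assuming the Dialogflow v2 webhook format (the queryResult.intent.displayName and fulfillmentText fields come from that format; the endpoint path and reply strings are purely illustrative):

    import express from "express";

    const app = express();
    app.use(express.json()); // parse the incoming JSON request body

    // Dialogflow POSTs a JSON request describing the user's parsed query;
    // the webhook answers with a JSON body containing the text to present.
    app.post("/fulfillment", (req, res) => {
      const intentName = req.body.queryResult?.intent?.displayName;

      // Illustrative routing: pick a reply based on the matched intent.
      const reply =
        intentName === "Default Welcome Intent"
          ? "Welcome! What would you like to do?"
          : "Sorry, I didn't catch that.";

      res.json({ fulfillmentText: reply }); // JSON out
    });

    app.listen(8080);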

Building your own natural language processing service can be challenging, so we provide
Dialogflow as a way to handle it for you. For developers who cannot use Dialogflow, we also
provide the Actions SDK as a backup option with a separate, but related, development path.

Once you set up an agent in Dialogflow, your conversation fulfillment is augmented by
Dialogflow's features, including the ability to use Dialogflow fulfillment. This approach allows
you to isolate conversation fulfillment from other services you may need to provide users with
their desired outcome.

Screen 3. Conversation fulfillment when using Dialogflow

2.4 Building a Conversational Action

Most of building your Conversational Action is designing the conversation and building your
conversation fulfillment. Think of the conversation as the user interface for your Conversational
Action. You need to think about how users invoke your Actions project, the valid things that they
can say in a conversation, and how your Actions project responds to them.

In your Actions project, you provide metadata for publishing the project and specify a method of
conversation fulfillment. Developers using Dialogflow associate their Dialogflow agent with the
project, then build fulfillment through Dialogflow. For developers using the Actions SDK,
building conversation fulfillment involves coding and deploying in the Conversation
Webhook format.

When designing your conversation, we recommend using our processes and design principles.
Conversational interfaces are still a relatively new technology, and learning about best practices
can save you time in the future.

2.5 Fulfillment using Dialogflow

When integrating with a Dialogflow agent, the agent handles NLU for user queries in your
Conversational Action. Your Dialogflow agent does the following for you during this step:

1. Parses each incoming request from the Assistant based on training phrases you provide and
conversational context.
2. Matches each request to a Dialogflow intent (also known as an event).
3. Extracts parameters into Dialogflow entities.

Your Dialogflow agent can then call on its own fulfillment (deployed as a webhook) to carry out
some logic like calling a REST API or other backend service that generates a response to return
to the Assistant. This webhook is also known as your Dialogflow fulfillment.

Screen 4. A Dialogflow agent parses a user query into structured data for Dialogflow fulfillment

Building conversation fulfillment when using Dialogflow primarily consists of developing your
Dialogflow fulfillment webhook. In the Actions on Google documentation, you'll find resources
to help you design, build, and test your Dialogflow fulfillment webhook. Most notably, those
resources include the Node.js client library and the Java client library.

As you build with Dialogflow, you'll use the Dialogflow Console to create Dialogflow intents,
entities, and training phrases.
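As an illustration, a Dialogflow fulfillment webhook built with the Node.js client library (actions-on-google) might look roughly like the sketch below; dialogflow(), app.intent(), and conv.ask() come from that library, while the "get-weather" intent, its "city" parameter, and the response text are assumptions made for the example:

    import { dialogflow } from "actions-on-google";
    import express from "express";

    // Create a Dialogflow fulfillment app instance.
    const app = dialogflow();

    // Runs when the agent matches the (hypothetical) "get-weather" intent;
    // params holds the entities Dialogflow extracted from the user query.
    app.intent("get-weather", (conv, params) => {
      const city = String(params.city); // assumes a "city" parameter on the intent
      conv.ask(`Here is a sample forecast for ${city}: sunny and 25 degrees.`);
    });

    // Expose the app over HTTP (the library also plugs into Cloud Functions).
    express().use(express.json(), app).listen(8080);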

Note: Terminology in Actions on Google and Dialogflow can sometimes be very similar, but they
represent different sets of information. For example, intents (and built-in intents) in Actions on Google
are distinct from Dialogflow intents and instead map to events in Dialogflow.

For more general information about Dialogflow, you can read about the Actions on Google
integration in the Dialogflow documentation.

2.6 Fulfillment using Actions SDK


Building conversation fulfillment with the Actions SDK primarily consists of creating and
deploying your Action package. Action packages are created in the ActionPackage format and use
the Actions on Google Conversation HTTP/JSON Webhook API. An Action package contains
all Actions for a given Actions project.

The Assistant provides user queries to your conversation fulfillment using Actions on Google
intents. For each intent, your fulfillment webhook must parse the intent, process it, and return a
JSON response to the Assistant for the user.
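A rough sketch of this flow, using the Actions SDK support in the same Node.js client library, follows; actions.intent.MAIN and actions.intent.TEXT are the library's built-in intents for invocation and raw text input, while the echo behavior is an invented example:

    import { actionssdk } from "actions-on-google";

    const app = actionssdk();

    // Built-in intent fired when the user invokes the Action by name.
    app.intent("actions.intent.MAIN", (conv) => {
      conv.ask("Hi! Say anything, and I will echo it back.");
    });

    // Built-in intent carrying free-form user input as raw text.
    app.intent("actions.intent.TEXT", (conv, input) => {
      if (input === "bye") {
        conv.close("Goodbye!"); // end the conversation
      } else {
        conv.ask(`You said: ${input}`); // keep the conversation open
      }
    });

    // The app is then exposed over HTTP exactly as in the earlier Express sketch.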

2.7 Responses
When you build an Action for the Assistant, you design your conversations for a variety of
surfaces, such as a voice-centric conversation for voice-activated speakers or a visual
conversation on a surface that the Assistant supports. This approach lets users get things done
quickly through either voice or visual affordances.

As you build your fulfillment, you can select from a variety of engaging response types for the
Assistant to present to users. These range from chat bubbles containing simple text to media
responses, carousels, and even HTML using Interactive Canvas.
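For instance, a fulfillment handler using the Node.js client library could combine a plain chat bubble with richer visuals roughly as sketched below; BasicCard, Image, and Suggestions are response types the library provides, while the intent name, card contents, and image URL are placeholders:

    import { dialogflow, BasicCard, Image, Suggestions } from "actions-on-google";

    const app = dialogflow();

    app.intent("show-details", (conv) => {
      // Simple text chat bubble (also spoken on voice-only surfaces).
      conv.ask("Here is something you might like.");

      // Rich visual card, shown on surfaces with a screen.
      conv.ask(
        new BasicCard({
          title: "Sample item",
          text: "A short description of the item.",
          image: new Image({
            url: "https://example.com/item.png", // placeholder image URL
            alt: "Sample item image",
          }),
        })
      );

      // Tappable suggestion chips to guide the next turn.
      conv.ask(new Suggestions(["More info", "Exit"]));
    });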

2.8 Entities

Each intent parameter has a type, called the entity type, which dictates exactly how data from an
end-user expression is extracted.

Dialogflow provides predefined system entities that can match many common types of data. For
example, there are system entities for matching dates, times, colors, email addresses, and so on.
You can also create your own developer entities for matching custom data. For example, you
could define a vegetable entity that can match the types of vegetables available for purchase with
a grocery store agent.[3]

2.9 Entity terminology

The term entity is used in this documentation and in the Dialogflow Console to describe the
general concept of entities. When discussing entity details, it's important to understand more
specific terms:

• Entity type: Defines the type of information you want to extract from user input. For
example, vegetable could be the name of an entity type. Clicking Create Entity from the
Dialogflow Console creates an entity type. When using the API, the term entity type refers to
the EntityType type.
• Entity entry: For each entity type, there are many entity entries. Each entity entry provides a set
of words or phrases that are considered equivalent. For example, if vegetable is an entity type,
you could define these three entity entries:
  - carrot
  - scallion, green onion
  - bell pepper, sweet pepper
  When editing an entity type from the Dialogflow Console, each row of the display is an entity
  entry. When using the API, the term entity entry refers to the Entity type
  (EntityType.Entity or EntityType_Entity for some client library languages).
• Entity reference value and synonyms: Some entity entries have multiple words or phrases that
are considered equivalent, like the scallion example above. For these entity entries, you provide
one reference value and one or more synonyms.
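As a hedged sketch, the vegetable entity type above could also be created programmatically with the Dialogflow API's Node.js client (@google-cloud/dialogflow); the project ID is a placeholder, and KIND_MAP marks an entity type whose entries map synonyms to a reference value:

    import * as dialogflow from "@google-cloud/dialogflow";

    async function createVegetableEntityType(): Promise<void> {
      const client = new dialogflow.EntityTypesClient();
      // Placeholder project ID; the agent must already exist in this project.
      const parent = client.projectAgentPath("my-project-id");

      // Each entry pairs a reference value with one or more synonyms.
      const [entityType] = await client.createEntityType({
        parent,
        entityType: {
          displayName: "vegetable",
          kind: "KIND_MAP", // entries map synonyms to a reference value
          entities: [
            { value: "carrot", synonyms: ["carrot"] },
            { value: "scallion", synonyms: ["scallion", "green onion"] },
            { value: "bell pepper", synonyms: ["bell pepper", "sweet pepper"] },
          ],
        },
      });

      console.log(`Created entity type: ${entityType.name}`);
    }

    createVegetableEntityType();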

3. BRIEF ON SYSTEM

3.1 Overview

Actions on Google is a developer platform that lets you create software to extend the
functionality of the Google Assistant, Google's virtual personal assistant, across more than 1
billion devices, including smart speakers, phones, cars, TVs, headphones, and more.

Users engage Google Assistant in conversation to get things done, like buying groceries or
booking a ride. (For a complete list of what's possible now, see the Actions directory.) As a
developer, you can use Actions on Google to easily create and manage delightful and effective
conversational experiences between users and your own 3rd-party fulfillment service.

This codelab is part of a multi-module tutorial. Each module can be taken standalone or in a
learning sequence with other modules. In each module, we'll provide you with end-to-end
instructions on how to build Actions from given software requirements and how to test your
code. We'll also teach the necessary concepts and best practices for implementing Actions that
give users high-quality conversational experiences.

This codelab covers beginner-level concepts for developing with Actions on Google. You do not
need to have any prior experience with the platform to follow this codelab.

What you'll build

In this codelab, you'll build a simple conversational Action with these features (a fulfillment
sketch follows the list):

• Users can start a conversation by explicitly calling your Action by name, which then responds with a greeting message.
• Once in conversation, users are prompted to provide their favorite color. Your Action parses the user's input to extract the information it needs (namely, the color parameter).
• If a color is provided, your Action processes the color parameter to auto-generate a "lucky number" to send back to the user, and the conversation ends.
• If no color is provided, your Action sends the user additional prompts until the parameter is extracted.
• Users can explicitly leave the conversation at any time.
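To make the flow concrete, a minimal fulfillment sketch for such an Action, using the actions-on-google Node.js library, might look like the following; the "favorite color" intent name and the lucky-number rule (here simply the length of the color word) are assumptions about how the pieces are wired together:

    import { dialogflow } from "actions-on-google";

    const app = dialogflow();

    // Runs once Dialogflow has extracted the required "color" parameter.
    app.intent("favorite color", (conv, { color }) => {
      const luckyNumber = String(color).length; // assumed rule: length of the word
      // close() sends the final response and ends the conversation.
      conv.close(`Your lucky number is ${luckyNumber}.`);
    });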

3.2 Understand how it works

To start a conversation, the user needs to invoke your Action through the Assistant. Users say or
type a phrase like "Hey Google, talk to Google IO". This tells the Assistant the name of the
Action to talk to.

From this point onwards, the user is talking to your Action and giving it input. This conversation
continues as a two-way dialog until the user's intent is fulfilled or the conversation is finished.

Screen 5

Key terms:
• Intent: An underlying goal or task the user wants to do; for example, ordering coffee or finding a piece of music. In Actions on Google, this is represented as a unique identifier and the corresponding user utterances that can trigger the intent.
• Fulfillment: A service, app, feed, conversation, or other logic that handles an intent and carries out the corresponding Action.

Screen 6

• Your Actions run entirely in the cloud, even when users talk to them on their phone, smart home device, or watch.
• Every Action supports a specific intent and has a corresponding fulfillment that processes the intent.
• The user's device sends the user's utterance to the Google Assistant, which routes it to your fulfillment service via HTTP POST requests.
• Your fulfillment figures out a relevant response and sends that back to the Assistant, which ultimately returns it to the user.

3.3 Create an Actions project

Actions projects are containers for your Action(s) with the metadata (name, description,
category) that becomes your Actions directory listing.

To start building Actions, you'll first need to create an Actions project (Screen 7) as follows:

1. Open the Actions Console.


2. Click New project.
3. Type in a Project name, like "actions-codelab". This name is for your own internal
reference; later on, you can set an external name for your project.

Screen 8

4. Click Create Project.


5. Rather than pick a category, scroll down to the More options section and click on
the Conversational card.

Screen 9

6. Click Build your Action to expand the options and select Add Action(s).
7. Click Add your first action.
8. On the Create Action dialog, select Custom Intent and click Build. This will open the
Dialogflow Console in another tab.

Screen 10

How Actions work with Dialogflow

You might be wondering how the Assistant parses the semantic meaning of user input (such as spoken
utterances). This is done via natural language understanding (NLU), which allows Google's software to
derive meaning from the words a user says or types.

For your own Actions, Google provides a service called Dialogflow to let you handle NLU easily. Dialogflow
simplifies the task of understanding user input, extracting key words and phrases from the input, and returning
responses. You define how all this works within a Dialogflow agent.

Key terms:
• Dialogflow: A web-based service provided by Google that uses an agent to process user input. This service allows you to integrate conversational apps with the Assistant, as well as with other conversation platforms.
• NLU: Acronym for "Natural Language Understanding". This refers to the capability of software to understand and parse user input. Developers can choose to use Dialogflow or their own NLU solutions when creating Actions.

The following diagram shows an example of how Dialogflow (represented by the 2nd column from the right)
handles user input from the Assistant and sends requests to your fulfillment (represented by the rightmost
column).

Screen 11

3.4 Create a Dialogflow agent

Now that you've built your Actions project, create a Dialogflow agent and associate it with your project:

If using Google Chrome and signed into more than one account, ensure that you are using the
same account across consoles.
1. After following the steps above, you should already be in the Dialogflow Console with your Actions
project name at the top. You may need to authorize Dialogflow to use your Google account, and
accept the Terms of Service.

Not in the Dialogflow Console? Make sure you've completed all the steps in the "Create an
Actions project" section. Still don't see it? Then you can navigate to the Dialogflow Console and
select Create new agent in the left navigation.
2. Click Create.

Screen 12

If the agent creation is successful, you will be in the Intents page. You can now begin customizing how your
Dialogflow agent responds to user requests.

3.5 Starting a conversation
Users start the conversation with your Actions through invocation. (An example of an explicit
invocation is a phrase like "Hey Google, talk to MovieTime".) The Assistant then tries to match
the user's input to one of your Actions.

Create a welcome intent

Every Actions project must have a welcome intent that acts as an entry point for users to start
conversations. The welcome intent is triggered when users explicitly invoke an Action by
uttering its name.

By default, Dialogflow creates a welcome intent for us. For this codelab, you'll modify the
welcome intent that users trigger when they say "Hey Google, talk to my test app".

To modify the welcome intent:

1. In the Intents page of the Dialogflow Console, click on Default Welcome Intent.

Screen 13

2. Delete all of the other text responses by clicking the trash icon next to each one.

Screen 14

3. Under the Responses section, click Enter a text response and type "Welcome! What is
your favorite color?"
4. Click Save. You should see an "Intent saved" notification briefly pop up.
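The text response above lives entirely in the Dialogflow Console. A greeting like this can equally be served from webhook fulfillment, roughly as in this sketch with the actions-on-google library ("Default Welcome Intent" is the intent name Dialogflow creates by default; the wiring shown is an assumption of a typical setup):

    import { dialogflow } from "actions-on-google";

    const app = dialogflow();

    // Fires when the user says "Hey Google, talk to my test app".
    app.intent("Default Welcome Intent", (conv) => {
      // ask() keeps the microphone open so the user can answer.
      conv.ask("Welcome! What is your favorite color?");
    });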

Test the welcome intent

The Actions Console provides a web interface for testing called the simulator. This interface lets
you simulate hardware devices and their settings. You can also access debug information such as
the request and response that your fulfillment receives and sends.

Tip: You can find the most recent information about using the Actions Console simulator in
this guide. Please refer there if you run into any issues following the steps listed below.

To test out your Action in the Actions console simulator:

1. In the Dialogflow Console left navigation, click on Integrations. Then, click on Google
Assistant > Integration Settings.

2. Click Test to update your Actions project and load it into the Actions Console simulator.
(If you see a "Check auto-preview setting" dialog, you can leave the "Auto-preview
changes" option enabled, then click Continue.)
Screen 15

3. To test your Action in the simulator, type "Talk to my test app" into the Input field and
hit enter.

Screen 16

Notice that when your Action's welcome intent is invoked, the Assistant responds with the
greeting message and prompts the user to provide their favorite color.

If you see "Welcome" as the response in the simulator, without a prompt for your favorite color,
it may be because you didn't delete the other responses. Make sure to complete step 2 in the
‘Create a welcome intent' section above.

Debug your Action

The panel on the right shows debugging information that's useful when testing your Action:

• In the REQUEST and RESPONSE tabs, you can see the JSON request and response bodies exchanged between the Assistant and Dialogflow.
• In the AUDIO tab, you'll see your Action's speech response. You can change the response and update it to hear how it sounds when synthesized into speech.
• The DEBUG and ERRORS tabs provide additional information to help you troubleshoot issues with your Action's response.

In the next section of this codelab, we'll show how you can create dynamic, conversational
responses.

3.6 Functioning
The program must first be started on the Android phone. Its initial mode is voice mode, since the
program aims to be a voice assistant; however, a text mode is also available for users who prefer
to type their input manually. Once the program has started, the user gives a spoken command or
request to make the corresponding function work. The program includes the following functions and
services: calling, text message transformation, mail exchange, alarm, event handler, location
services, music player, weather checking, Google search, Wikipedia search, robot chat, camera,
Bing Translator, Bluetooth headset support, and a help menu. The details below explain how these
functions work and the different possibilities when facing different commands.

• Calling service: allows the user to call a person in the contacts. Given a correct command with a calling request for a stored person, the phone checks the contact list, gets the person's phone number, and dials the number found in the contacts.
• Text message transformation: enables the user to send an SMS to a person in the contacts. Given a correct command containing the SMS request keyword together with the destination person, the program navigates to the phone's messaging function with the phone number and message content filled in. The message is sent immediately once the user confirms the content.
• Mail exchange: the user can send mail to a person whose mail address is in the contacts. Given a correct command containing the mail request keyword together with the destination person, the program switches to the phone's mail function with the mail address and content filled in. If the content is detected correctly, the mail is delivered once the user chooses to send it; otherwise, the user can revise the content if the voice recognition did not capture it well.
• Alarm: the user can set an alarm through a command with the alarm keyword and a specific valid time. When the alarm request and time are detected, the program sets the alarm for the given hour, minute, and second; when the time comes, the alarm is triggered with a bell and an alert notification, which the user can dismiss to stop the alarm; otherwise it keeps ringing.
• Event handler: the application allows the user to set as many events as they want. The user sets an event with a title and content, the program switches to the event handler interface with that title and content, and the event is stored once the user confirms it. With the stored events, the event handler lets the user check all events, check a single event, modify a selected event, and delete all events.

4. CONCLUSIONS

4.1 Conclusion

- Project development and implementation

As previously stated, the program is mainly concerned with the techniques of Android
development, Java programming, database management, cloud computing, and various APIs for
Google products, Bing Translate, and so on. The program was developed by two developers
following the extreme programming model. During the eight weeks of development, the
developers repeated the same cycle in each phase: analyzing requirements, constructing the
design, implementing the solutions in pair-programming mode, and testing the result. The
development was carried out according to its initial planning, which guided how to work on the
program, how much time each developer should spend every week, the resources needed for
development, and how to handle problems as they came up. The project was completed
efficiently under this development model, and the resources found early on were very useful
when implementing the program.

- Project usage & prospect, potential

The project is very useful and has large potential in different industries. Although the program
is primarily concerned with building a personal assistant on an Android phone using the voice,
the concept of voice recognition can be applied across industries, as in many situations it is
more convenient, saves a lot of time, and is helpful especially for those who have difficulty
with manual operations. Here, however, the concept is applied only to the Android application.

The program itself is a collection of 15 functions that are frequently used on a mobile phone.
The user can enjoy different services within this one platform. It is therefore easy to use, with
simpler operation compared to the traditional approach, in which the user must know the phone
well to operate it.

In addition, a program operated by voice is helpful for those who prefer voice operation and
those who have difficulty or a disability with manual operations. The primary objective of the
program is to provide services using the voice, which enables more people to enjoy it.

Looking ahead, more applications and products could be developed using voice control, which
could in some sense change working practices in ways quite different from traditional forms.
Since people can easily operate such programs and have a lot of fun with them, the approach has
a bright prospect, much as Siri succeeded in attracting people in the market.

4.2 Applications
Google Assistant on phones

Google expanded its Google Assistant service in 2017 so that it would be available on more
mobile devices. That saw the roll-out of Assistant to most Android phones, with all recent
launches offering the AI system. Even devices that offer another AI system, like Samsung's
Bixby, also offer Google Assistant. Essentially, if your phone has Android, your phone has
Google Assistant, so the user base for Google Assistant is huge.

It's possible to have Assistant respond to you even when your Android phone is locked, if you opt
in through your settings, and you can also opt in to see answers to personal queries.

Google Assistant is also available on the iPhone, although there are some restrictions. So,
Google Assistant is no longer the preserve of Pixel phones; it's something that all Android users
and even iOS users can enjoy.[4]

Google Maps app

Google Assistant can help you navigate in Google Maps, on both Android and iOS devices. With
your voice, you can share your ETA with friends and family, reply to texts, play music and
podcasts, search for places along your route, or add a new stop, all in Google Maps. Google
Assistant can also auto-punctuate your message (on Android and iOS phones) and read back and
reply to all your notifications (Android only).

Assistant works with many popular messaging apps and SMS, including: WhatsApp, Messenger,
Hangouts, Viber, Telegram, Android Messages and more. When driving, Google Assistant can
auto-calculate your ETA from Google Maps and can send to friends too, if you have an Android
device.

Just say, "Hey Google, take me home" to open Google Maps and get started.

Google Home devices

Google Home is the company's direct competitor to the Amazon Echo. Google Home is
essentially a Chromecast-enabled speaker that serves as a voice-controlled assistant. It's the first
port of call for Google Assistant in the home, and it's an expanding ecosystem. There are
currently five devices available in the Google Home portfolio: the Google Home, Google Home Max,
Google Home Mini, the Nest Hub and the Nest Hub Max.

You can ask Google Home devices to do anything you'd ask Assistant to do on Android phones,
but moving into the home really puts the emphasis on other services and functions, like smart
home control, compatibility with Chromecast to send movies to your TV, and a whole lot more.


Wear OS

Google Assistant is also available on wearables running Wear OS. Using the wake words, you'll
be able to ask Assistant to perform a number of tasks from your watch, such as turning the
heating down, or replying to a message.

Android TV

Android TV also offers Google Assistant on a number of devices. Sony offers Android TV
across its models, for example. There's another dimension here though: Sony TVs not only run
Android TV, but they are also compatible with Google Home and Amazon Alexa, meaning you
can control your TV by talking to your speaker, as well as control your lights by talking to your
TV.

Set-top boxes like Nvidia Shield TV also support Google Assistant and the list of popular media
and entertainment devices supporting Assistant is constantly expanding, with TVs from Samsung
also in the mix, as well as DISH's Hopper family of receivers. With Google Assistant built-in,
you can use your voice to turn on the TV, change volume and channels, and switch between
inputs.

At CES 2019, new partners launching Android TV devices with the Google Assistant included
Sony, Hisense, Philips, TCL, Skyworth, Xiaomi, Haier, Changhong, JVC and Toshiba.

References

[1] https://en.wikipedia.org/wiki/Google_Assistant
[2] https://developers.google.com/assistant/sdk/guides/service/python
[3] https://cloud.google.com/natural-language/docs/reference/rest/v1/Entity
[4] https://www.home-assistant.io/integrations/google_assistant/
ACKNOWLEDGEMENT
I would like to place on record my deep sense of gratitude to Prof. Sanjay Kalyankar, HOD,
Dept. of Computer Science and Engineering, Deogiri Institute of Engineering and Management
Studies, Aurangabad, for his generous guidance, help and useful suggestions.

I express my sincere gratitude to Prof. Amruta Joshi, Dept. of Computer Science and
Engineering, Deogiri Institute of Engineering and Management Studies, Aurangabad, for her
stimulating guidance, continuous encouragement and supervision throughout the course of the
present work.

I am extremely thankful to Dr. Ulhas Shiurkar, Director, Deogiri Institute of Engineering and
Management Studies, Aurangabad, for providing the infrastructural facilities to work in, without
which this work would not have been possible.

Rhugved Takalkar (36058) Sign
