
SpeechParser

SpeechParser is a module for Premise Home Control that allows you to pass a natural language
string to Premise. Once the string is received, Premise will intelligently interpret it, performing
actions on objects in your home or providing a response to questions you ask. If a string is
understood, Premise will also create a natural language response so the user knows for sure that
an action was invoked.
SpeechParser is meant to work in conjunction with Android, running the following apps: Tasker,
Google Now and AutoVoice. When used with these apps, a user can tap the Google Voice Search
button and speak a phrase into their phone. When the user is done talking, the spoken phrase is
passed to Premise as a string. Premise then sends back a response that is read to the user on the
phone using text to speech.
It should be noted that SpeechParser is generic in nature. Any device or OS could be made to
work with it and Premise, provided the SDK or APIs exist and the device has a microphone.
There are a variety of ways not documented here to use SpeechParser; it can even be used with an
Android Wear device such as a Moto 360 smartwatch to control your home hands free!

Requirements
A free program called Motorola Premise Home Control found here:
http://cocoontech.com/wiki/Premise
Google Now installed on an Android device.
Tasker downloaded from the Google Play store.
AutoVoice downloaded from the Google Play store.

Instructions
1. From the Google Play store, install the Android applications mentioned above on your
phone. Everything should work without having to root your phone.
a. Open Tasker and enable the AutoVoice plugin: Misc->Allow External Access.
b. Open AutoVoice and click Google Now Integration.
c. Ensure the AutoVoice Google Now Integration accessibility service is enabled.
If it is not, enable the service by clicking on it and sliding the slider to On.
2. Edit the included Tasker JavaScript, tasker URL fetch.js. Change the IP address,
username, password and port to those of your Premise server. You'll also need to add the
GUID of the SpeechParser object you created in step 5 below. You can easily modify the
js file using any text editor; Notepad++ is the one I prefer.
3. Copy the tasker URL fetch.js script onto your phone under any folder and set up a
Tasker Task and Profile that will use it (detailed instructions are later in this document).
4. Import the SpeechParser module into Premise. If you are updating from a previous
version, remove it beforehand or you'll have duplicate GUIDs!
5. Right click under Home and create a new SpeechParser object. Now right click on the
new SpeechParser Home object and view its GUID. This GUID needs to be pasted into the
tasker URL fetch.js script (see step 2 above).
6. Note that your existing home objects (and any new objects you add later) will now have
two new properties under NaturalLanguage named VoiceExpression and
ResponseName. Read the detailed setup found later in this document, as these strings are
important. You'll need a unique, mutually exclusive VoiceExpression for each location
object. The ResponseName is an optional parameter.
7. Test everything by clicking the Google Voice Search button and talking into your phone.
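As a rough illustration of the edit in step 2, the settings you change might look like the sketch below. The variable names and URL format here are assumptions for illustration only; the shipped tasker URL fetch.js may use different names, so edit the actual values that script defines.

```javascript
// Illustrative settings for "tasker URL fetch.js" (names are assumptions, not
// necessarily the ones used in the shipped script).
var server = {
  ip: "192.168.1.10",   // your Premise server's IP address
  port: 80,             // the port your Premise server listens on
  username: "admin",    // Premise username
  password: "secret",   // Premise password
  guid: "{00000000-0000-0000-0000-000000000000}" // SpeechParser GUID from step 5
};

// Hypothetical helper showing how the spoken phrase could be URL-encoded and
// attached to a request aimed at the SpeechParser object.
function buildUrl(cfg, phrase) {
  return "http://" + cfg.ip + ":" + cfg.port + "/" + cfg.guid +
         "?speech=" + encodeURIComponent(phrase);
}
```

The only values you should need to touch are the five fields in the settings block.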

Detailed Setup
VoiceExpression Parameter
VoiceExpression is a string you are required to fill out for each room (i.e. location). The string
can take the form of a regular expression or plain text. However, remember that all names
should be mutually exclusive. So if you have a VoiceExpression of entry for one Home object
and a VoiceExpression of entry closet for a different object, you will need a regular
expression such as entry(?! closet) under the Entry home object; otherwise, when you say entry
closet, Premise will think you want to manipulate objects under both rooms: the closet and the
entry itself.
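The negative lookahead in that pattern can be sanity-checked quickly in any JavaScript console before pasting it into Premise:

```javascript
// entry(?! closet): match "entry" only when it is NOT followed by " closet".
var entryExpr = /entry(?! closet)/i;

entryExpr.test("turn off the entry light");        // true: the Entry room matches
entryExpr.test("turn off the entry closet light"); // false: only Entry Closet should match
```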
VoiceExpression extends Premise's tag class. This means that every home device object (e.g.
thermostats, lights, televisions, etc.) can also have a custom voice expression. The global
scripts under VoiceProcessing will automatically perform recursive searches for the intended
object, using any of the following: the object's type (e.g. light), the object's name, the object's
description, the object's display name and even the bound device object's name.
Therefore, unless there are two objects of the same type you want to differentiate between in your
speech pattern (e.g. two thermostat zones), you need not specify a VoiceExpression for the home
device objects, only for the home locations. In short, leave VoiceExpression blank for home
device objects unless you intend to never provide a location and object type in your spoken
sentence. If you do use VoiceExpression with a given command, it will take precedence over the
automatic recursion that the SpeechParser module performs, so a spoken room location plus object
type will not find an object match for the command.
ResponseName
ResponseName is a string that is used to form the natural language response describing what
actions were performed. This is optional. For simplicity, the home object name is not part of the
response phrase, only the home object type is.
A typical response phrase will look like this: Third bedroom light is off, where Third bedroom
is the ResponseName for the ThirdBedroom location, light is the object type, and off is the home
object's new state. Please remember that the ResponseName is optional. If the ResponseName
property is blank, Premise will attempt to use the location's display name. If the location's
DisplayName property is blank, Premise will use the location's name.
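The fallback order just described (ResponseName, then DisplayName, then Name) can be sketched as a small helper; this function is only an illustration of the rule, not code from the module:

```javascript
// Illustrative only: pick the name used in the spoken response, following the
// fallback order ResponseName -> DisplayName -> Name.
function responseNameFor(location) {
  return location.ResponseName || location.DisplayName || location.Name;
}
```

For the Third bedroom example above, a location with ResponseName set to Third bedroom produces the phrase Third bedroom light is off; with all three strings blank except Name, the raw object name ThirdBedroom would be used instead.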
User Customization
sys://Schema/Modules/SpeechParser/Classes/SpeechParser/UserCustomization
By default most home objects will work, so user customization is optional. However, if you have
special global scripts, scenes or macros for performing tasks such as a goodnight scene, speaking
the weather, etc., you may want to trigger these too.
To do this, visit the UserCustomization script at the address above in Premise Builder. Scroll
through the several examples that are included in the script. You will have to modify the lines of
code to work with your system, and delete or comment out any code that does not apply (e.g.
code that calls a global script not on your system, uses a weather module that is not present, etc.).


After studying the examples and modifying UserCustomization, you'll need to enable the script in
Premise Builder by visiting the properties for UserCustomization and enabling it. Don't forget to
commit the code changes when you're done by clicking the commit to server button in Premise
Builder.

How to Talk to Your System


Overview
COMMANDS: Unless a UserCustomization is used, whatever sentence you want Premise to
process must have each of the items below at a minimum (but in any order):
1. A location and/or explicit object name defined by its VoiceExpression property.
a. This can be a nickname for a room, the actual room name, etc. The location is
interpreted based on the required VoiceExpression string. If you do not give a
location (e.g. set the thermostat to 65 degrees), you must set a VoiceExpression
for thermostat.
2. An object type and/or property name.
a. If no object type is given, Premise will interpret the property name and attempt to
figure out what object you are talking about. For example, if you say only
brightness, Premise assumes you are talking about a light object.
b. If you want to perform audio/video commands (play, pause, etc.), you only need
to give the command name and the room. Premise will find the source selected
in the room. For example, if you say Mute the Theater, Premise will find the
appropriate object and mute it. However, you could also say Mute the receiver
in the theater; Premise would correctly interpret this too.
c. Since audio/video components and lights both use the PowerState property name,
you'll have to remember to say light when trying to turn a light on or off in a
room containing a media zone; without the word lamp or light, the media zone
will toggle on/off instead.
3. A property value.
a. For a few special cases, a property name is automatically implied by the property
value. For example, turn on means set the PowerState (property name) to true
(property value). The scripting is smart enough to figure out what you mean.
The global script gGetPropertyNameAndValue interprets what property and
value pair you want to modify.
b. The SpeechParser will automatically figure out the property value type for you
(percent, Boolean, etc.).
QUESTIONS: Questions need one of the following to be present in a phrase (in any order) for a
successful response.
1. Location plus object type or name plus property name (e.g. is the master bedroom light
on)
2. A phrase that matches a VoiceExpression property for an object plus a property name
(e.g. what is the downstairs thermostat temperature)
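As a hedged sketch of the kind of inference described above: the real work is done by gGetPropertyNameAndValue inside Premise, and the function and mappings below are illustrative assumptions only, not the module's code.

```javascript
// Illustrative sketch: map a spoken phrase to a property name/value pair, in the
// spirit of gGetPropertyNameAndValue (not the module's actual logic).
function inferPropertyNameAndValue(phrase) {
  if (/turn on/i.test(phrase))  return { name: "PowerState", value: true };
  if (/turn off/i.test(phrase)) return { name: "PowerState", value: false };

  var pct = phrase.match(/(\d+)\s*percent/i);  // "99 percent" -> 0.99
  if (pct) return { name: "Brightness", value: Number(pct[1]) / 100 };

  var deg = phrase.match(/(\d+)\s*degrees/i);  // "65 degrees" -> 65
  if (deg) return { name: "CurrentSetPoint", value: Number(deg[1]) };

  return null; // nothing implied by the value alone
}
```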
A few special notes
Whatever sentence you speak can include multiple locations, but must only include a single
object type (e.g. light, fan, TV, etc). However, you can combine user customizations in a
sentence along with a normal command. See the last example phrase in the next section for an
example, and also read the User Customization section.


The spoken text to speech (tts) response from your phone is not simply a repetition of what you
told it. Whatever response it gives is based on what state changes occurred on the Premise server.
To reinforce this, the word already will be added to a response if the desired property state for an
object matches its current value.
Some example phrases
For each example, the colored words below show what is interpreted by Premise. Red is the
location, orange is the object type, green is the property value, and purple is the property name.
Blue indicates some text used for a user customization.
Turn off all lights in the home: Premise interprets the word light and knows you want it to do
something with one or more light objects. The system then processes the words turn off and
interprets this to mean set the PowerState value to false for whatever object type was spoken (in
this case light). Lastly, the system sees the word home, and so it knows to find all lights in your
home and turn them off. The text to speech (tts) response given for this sentence, if successful, is:
All home lights are off.
Mute the TV in the living room, or simply Mute the living room: a typical tts response would
be Living television is mute, or Living is mute for the simpler phrase.
Set the upstairs thermostat to xx degrees, where xx is a number: notice no location is given;
this is because I have the following VoiceExpression defined for the upstairs thermostat:
((upstair|up stair|2nd|second).*(thermostat|temperature))|(temperature.*upstair). Also, note no
property is given. The module assumes you are talking about the CurrentSetPoint property for
the thermostat object.
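That VoiceExpression can be checked in any JavaScript console before pasting it into Premise, to see which spoken phrases it would match:

```javascript
// The upstairs thermostat VoiceExpression from the example above.
var upstairsThermostat =
  /((upstair|up stair|2nd|second).*(thermostat|temperature))|(temperature.*upstair)/i;

upstairsThermostat.test("set the upstairs thermostat to 65 degrees");      // true
upstairsThermostat.test("set the second floor temperature to 65 degrees"); // true
upstairsThermostat.test("set the downstairs thermostat to 65 degrees");    // false
```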
Pause the theater or Pause the media center in the media room, where the VoiceExpression
for Theater is (theat..|media room)(?! closet). The (?! closet) is there to exclude theater closet or
media room closet from being processed as the theater location too.
Set the third bedroom light to 99 percent brightness or Set the third bedroom brightness to 99
percent or Set the third bedroom light to 99 percent: a typical tts response would be Third
bedroom brightness is 99 percent. If the brightness was already 99 percent, the tts
response would be: Third bedroom brightness is already 99 percent.
Turn off the kitchen and the living room lights; also tell me what the weather will be like today:
note that for this example, the weather will be spoken and, as a consequence, you will receive no
text to speech response for the action on the living room and kitchen lights. However, the living
room and kitchen lights will go off.
Are the living room lights on: note this is interpreted as a question by the method
Modules.SpeechParser.Classes.SpeechParser.ProcessCmd_NaturalLanguage using a long regular
expression. Since the command begins with the word are, the module assumes it is a question.
Questions are appropriate when you want to know the state of a property, but they can also be
used with the UserCustomization method.
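The question check can be illustrated with a much shorter pattern than the module's: the regular expression below is a simplified stand-in, not the one ProcessCmd_NaturalLanguage actually uses.

```javascript
// Simplified stand-in for the module's question-detection regular expression:
// a phrase that starts with a question word is treated as a question.
var questionExpr = /^(is|are|was|were|what|when|how)\b/i;

questionExpr.test("are the living room lights on");                 // true: question
questionExpr.test("turn off the living room lights");               // false: command
questionExpr.test("what is the downstairs thermostat temperature"); // true: question
```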

Details about the SpeechParser Home object


Upon installation, you should see the home object below.


You should note the folder labeled ActionsPerformed. This folder contains all of the actions
that were previously performed based on the received natural language command. The items
under this folder are also used to form the response phrase. If more than one location is used, the
Responses folder will typically have multiple sentences, one for each location. However, the
response length can change, depending on the MaxSentencesPerLocation and AttemptToShorten
properties set under Home.SpeechParser.Responses.
The ActionsPerformed folder is very useful for debugging any issues you may experience. It will
take some practice to define VoiceExpressions for each room that do not overlap (e.g. are
mutually exclusive). You should probably study one of the many regular expressions tutorials on
the internet to learn more.

Notes About the Tasker Setup


The Tasker JavaScript will send EVERY Google Voice Search/Google Now phrase to your Premise
server. If the Premise server cannot process the Google Voice Search command (e.g. a command
like how tall is Obama), the script will continue and not minimize the Google search result,
which will result in Google reading the response aloud. If the command is successfully
interpreted by Premise, the Google search result will be minimized by using the Home command
in Tasker.
This is a very elegant way to perform spoken commands that doesn't break Google Now's
normal functionality. However, depending on your internet connection and/or cell provider, it
can mean there's a slight delay. Rest assured, the JavaScript has a short failsafe timeout of 1000
milliseconds, so no Google Now functionality is broken if your Premise server is offline.
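The 1000 millisecond failsafe works along these lines. This sketch is a generic illustration rather than the shipped script's code; the request function is passed in as a parameter because Tasker's actual HTTP call is not reproduced here.

```javascript
// Illustrative failsafe: invoke the callback with the server response, or with
// null if no response arrives within timeoutMs, so Google Now is never blocked.
function fetchWithTimeout(doRequest, timeoutMs, callback) {
  var done = false;
  var timer = setTimeout(function () {
    if (!done) { done = true; callback(null); } // server offline: give up quietly
  }, timeoutMs);
  doRequest(function (response) {
    if (!done) { done = true; clearTimeout(timer); callback(response); }
  });
}
```

Whichever path fires first wins; the done flag guarantees the callback runs exactly once, so a late server reply cannot interfere after the timeout has already let Google Now proceed.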
One interesting nuance of Tasker is that whatever task you trigger following a Google
Voice Search will cancel the previous task, even if it hasn't completed. To fix this, be sure to
change the Collision Handling Task setting to Run Both Together.
Setting Up Tasker
1. Add a new Task that will run the JavaScript included with this package by clicking the +
in Tasker while browsing the Tasks tab.
a. Name the task and click the check box.
b. Add a step to the task by clicking the + and typing javascript into the filter.
c. Point the JavaScript task to the correct file path.
2. Add a new profile that will be executed when AutoVoice receives any text.
a. In Tasker, click the + button under Profiles and add a new State profile.
b. Begin typing autovoice and select AutoVoice Recognized.
c. Edit the Configuration: leave Command Filter blank, set Do Google Now
Search to true, and click the check box.
d. Point to the profile task created in step 1.


Using the Module with Android Wear


There are currently two ways to do this.
1. Subscribe to AutoWear and download the alpha version.
2. Root your Android 4.4.4 or earlier phone and install the Xposed framework. Then install
the Google Search Now API found on xda developers.

Terms
The author shall not be responsible for any damage or loss, actual or implied, caused by the use of
this module.
Google Voice Search/Google Now may store sensitive information such as alarm and door lock
passcodes, and these may be visible to others depending on who uses your device, Chrome
browser, etc.
You will not use this module for for-profit use. This is strictly prohibited.
You are only authorized to install this module for your own personal use. If you are a business,
contact the author for licensing terms.
You will not distribute this module or any of its content.

2014 Ellery Coffman


If you like this module, please consider a Paypal donation to: ellery.coffman@gmail.com
