
APPROVAL SHEET

Title of Project: A Crowdsourced Speech Recommendation System for People with Aphasia

Name of Candidate: Ankita


M.S., 2015

Project and Abstract Approved:


_____________________________________
Shaun K. Kane
​Assistant Professor
​Department of Computer Science
University of Colorado Boulder

Marie desJardins
Professor
Department of Computer Science and
Electrical Engineering
University of Maryland Baltimore County

Date Approved: __5/6/2015_____________________________________



ABSTRACT

Title of Project: A Crowdsourced Speech Recommendation System for People with Aphasia

Ankita, M.S., 2015

Project directed by: Shaun K. Kane, Assistant Professor


Department of Computer Science

Marie desJardins, Professor


Department of Computer Science and
Electrical Engineering

Traditional communication aid technologies allow users to retrieve words and phrases

using built-in vocabulary lists. Although some pre-set vocabulary lists allow users to add

conversation topics, they require the user to manually input and pre-program the speech

options in advance. This especially raises concern for people with aphasia, who face

difficulty with vocabulary access and speech output, as it entails increased dependence on

others. Despite previous research indicating context awareness as an important factor for

improving usability of traditional vocabulary lists, less consideration has been given to

generating new context-relevant speech recommendations to support users’ needs each

time they make spontaneous visits.

Using Amazon’s Mechanical Turk (MTurk) crowdsourcing platform and Google Places

Application Programming Interface (API), we introduce the design of a crowdsourced

context-aware speech recommendation tool that generates speech suggestions for people

with aphasia by soliciting human contributions directly into the vocabulary lists.

A Crowdsourced Speech
Recommendation System for People with Aphasia

by
Ankita

Project submitted to the Faculty of the Graduate School


of the University of Maryland in partial fulfillment
of the requirements for the degree of
Master of Science
2015

© Copyright Ankita 2015



To friends and family


Table of Contents

Dedication ..........................................................................................................................i
Table of Contents ..............................................................................................................ii
List of Tables ....................................................................................................................iv
List of Figures .................................................................................................................. v
List of Abbreviations and Acronyms .............................................................................. vi

CHAPTER 1: INTRODUCTION​................................................................................... 1

1.1 Defining Aphasia..........................................................................................................1


1.2 Importance of Understanding Aphasia and the Application of Technology............... 2
1.3 Problem Definition.......................................................................................................3

CHAPTER 2: RELATED WORK​................................................................................ 5

2.1 Augmentative and Alternative Communication (AAC) ………................................. 5


2.2 Advances in electronic AAC …….............................................................................. 6
2.3 Location Awareness.................................................................................................... 7
2.4 Crowdsourcing……………........................................................................................ 9
2.5 Crowdsourcing for Accessibility.................................................................................11
2.6 Amazon’s Mechanical Turk........................................................................................12


CHAPTER 3: CROWDSOURCED SPEECH RECOMMENDATION SYSTEM.. 13

3.1 SpeakAhead…………………... ….……………………......................................... 13


3.2 System Description………………………………………………………………... 17
3.3 SpeakAhead Interface.............................................................................................. 18

CHAPTER 4: CONCLUSIONS …………………………............................................36

4.1 Discussion....................................................................................................................36
4.2 Conclusion ..................................................................................................................38
4.3 Future Work ................................................................................................................40

References ........................................................................................................................41
Appendices ...................................................................................................................... 51


List of Tables

Table 1: Differences between Static and Dynamic Types of AAC devices ................. 5

Table 2: Overview of the tables with sample data…………………………………… 24


List of Figures

Figure 1: A low-tech AAC like the Picture Exchange Communication System

(PECS) provides the user access to a limited number of printed

words to create simple requests .......................................................................6

Figure 2: A high-tech AAC like the iPad application Proloquo2Go with dynamic display

gives the user access to multiple pages of vocabulary in order to

form sentences……........................................................................................7

Figure 3: System Walkthrough of SpeakAhead speech recommendation tool …….. 15

Figure 4: Preliminary HIT Design created during prior versions of SpeakAhead......17

Figure 5: Data Flow diagram Level 1......................................................................... 19

Figure 6: Data Flow diagram Level 2........................................................................ 20

Figure 7: tbl_Location Structure..................................................................................24

Figure 8: tbl_HIT Structure........................................................................................ 25

Figure 9: tbl_Assignment Structure........................................................................... 26

Figure 10: tbl_Answer Structure……………………………………………………..27

Figure 11: E-R Diagram for the database....................................................................28

Figure 12: Home Page of the context aware speech recommendation tool.................30

Figure 13: Location Page of the context aware speech recommendation tool.............31

Figure 14: Location Image showing Place Icons using column toggle button.............32

Figure 15: Column toggle button showing layout with different columns​…………….33

Figure 16: Response page showing phrases likely to be spoken at a location..............34



List of Abbreviations and Acronyms

1. AAC - Augmentative and Alternative Communication

2. API - Application Programming Interface

3. HCI - Human Computer Interaction

4. HIT - Human Intelligence Task

5. PECS – Picture Exchange Communication System


Chapter 1

INTRODUCTION

This chapter first defines aphasia and its impact on everyday life, explains why it is

important to understand the role and challenges of providing technological support for

individuals with aphasia, and then introduces the problem definition of this project.

1.1 Defining Aphasia


 

According to the National Aphasia Association (2011), “Aphasia is an impairment of

language, affecting the production and comprehension of speech and the ability to read

and write.” While aphasia is often seen in older people and is commonly caused by

stroke, it can occur in people of all ages, races, nationalities and genders, and it affects

the person’s fluency in recalling nouns, reading, or thinking of words to express a thought.

More than 200,000 Americans are diagnosed with the disorder every year. Some of the

negative consequences of aphasia include diminished social and work relationships,

changes in self-image, emotional distress, frustration, lack of confidence, and increased

dependence on family members (National Aphasia Association, 2011).

There are different manifestations of aphasia with unique combinations of deficits.

However, two forms of aphasia that most directly stand to benefit from the work done in

my project are (1) anomic aphasia and (2) Broca’s aphasia. In both anomic aphasia and

Broca’s aphasia, individuals can generally read efficiently but require assistance with

vocabulary access and speech output. Anomic (or nominal) aphasia affects a person’s

ability to produce words in the context of speech and causes frustration. Those with anomic

aphasia can usually comprehend speech and written language but cannot produce clear,

coherent thoughts. Broca’s aphasia causes speech-production difficulties, and those with

Broca’s aphasia often understand speech but have significant difficulty with writing and

producing vocabulary (Yavuzer, 2010).

1.2 Importance of Understanding Aphasia and the Application of Technology

The Stroke Association (2013) asserts that there is no single solution for making information

understandable to everyone affected by aphasia. Since aphasia affects people

differently, it is important to identify how and where technology can fill the gap to

improve communication needs. Dawe (2006) asserts that even with many forms of

assistive technologies that are available to people with aphasia, 35% of devices are

abandoned shortly after their introduction as communication aids. Tentori and Hayes

(2010) suggest many reasons for lack of use of communication aids such as high cost of

purchase, lack of portability and issues with usability. This project addresses an issue

related to speech recommendations provided by vocabulary lists in traditional assistive

technologies and suggests an alternative technology to improve usability.



1.3 Problem Definition

Vocabulary lists in traditional communication aid technologies allow the user to retrieve

words and phrases using a pre-stored vocabulary catalog. These pre-stored catalogs

contain built-in words and phrases; e.g., a 'Food' category may always contain

fixed words like 'coffee', 'tea', 'burger', and 'sandwich'. While there are customizable vocabulary

lists that allow users to add words or phrases, they impose an

increased burden on users. This especially raises concern for people with aphasia who

face difficulty in vocabulary access and speech output as it entails dependence on others

and increases user effort to recall words and phrases and construct sentences.

Additionally, current technologies that incorporate contextual information to pick

conversational topics do not let users spontaneously request new sets of phrases

anywhere and at any time. One possible solution is the use of an on-demand human

workforce.

Since previous research by Kane, Linam, Althoff and McCall (2012), and by Epp,

Campigotto, Levy and Baecker (2011) used contextual information only to retrieve

previously-added phrases and words, this project goes further by introducing a novel method

that gathers the user's location-related contextual information (name, image, icon, and

website). Contributions from crowd workers are integrated directly into the user

interface to generate and display new location-relevant words and phrases to help people

with aphasia with their daily needs of communication.



Chapter 2

RELATED WORK

This chapter explores the intersection of aphasia and technology in improving

communication support for people with aphasia or other related disorders. The chapter

explains the mechanism of traditional Augmentative and Alternative Communication

(AAC) Systems, discusses previous research on adaptive AAC systems, and describes

how location awareness and crowdsourcing techniques can improve the resources

available to people with disabilities.

2.1 Augmentative and Alternative Communication (AAC)

People with aphasia rely on AAC because they struggle with language difficulties

ranging from word retrieval to sentence formation and vocal articulation, and they often cannot

effectively express themselves without an AAC device. A communication aid can be

anything that makes communication quicker and easier and can include communication

books and speech-generating devices that are used to transmit and receive messages

(Supporting People who use AAC Strategies: In the Home, School, and Community,

2008). ​People with aphasia often store and organize a huge list of vocabulary words and

maps to aid their communication (Accessing AAC through Medicaid and EPSDT

Disability Rights, 2008).



According to Visvader (2013), low-tech AAC aids that do not need batteries, electricity

or electronics can include books or vocabulary banks through which the person can

communicate a thought or idea (see Figure 1). On the other hand, high-tech AAC aids

that utilize power through batteries or electricity have the ability to store and produce

electronic messages, allowing the user to communicate using speech output and to

personalize communication through photos, videos or custom prompts (Crais, 1991).

The following table distinguishes the types of AAC devices based on language display:

Static (low-tech):
- Limited customizable word bank
- Functional-based
- Devices include: Picture Exchange Communication System (PECS), Big Mack, Rocking Plate Talker and Cheaptalk

Dynamic (high-tech):
- Memory-based customizable word bank
- Language generation-based
- Devices include: DynaVox Maestro, a Prentke Romich Accent, a Saltillo Nova-Chat or an iPad (with an appropriate AAC app)

Table 1. Differences between Static and Dynamic Types of AAC devices

Traditional high-tech electronic aids use a static display to present a limited number of

pre-assigned words and letters. On the other hand, dynamic display aids use a

memory-based word bank that allows the user to electronically access pages of

vocabulary and provide for custom display options to be stored by the user. Dynamic

displays are also considered better than static displays for individuals who have difficulty

recalling and learning a large number of vocabulary words (Visvader, 2013).



Figure 1: A low-tech AAC like the Picture Exchange Communication System (PECS)
provides the user access to a limited number of printed words to create simple requests.
(http://www.iocresco.it/images/stories/Articoli/PECS/pecs.jpg)

Figure 2: A high-tech AAC like the iPad application Proloquo2Go with dynamic display
gives the user access to multiple pages of vocabulary in order to form sentences.
(http://www.assistiveware.com/sites/default/files/ipad-mini-proloquo2go-3-core-home-6x
6-landscape-small.png)

2.2 Advances in Electronic AACs

People with aphasia identify computer-based therapy as a helpful way to improve

communication (Stroke Association, 2013). Software used for language formation helps

re-engage individuals with aphasia in social and community activities and make them feel

more confident about their overall communication skills (Golashesky, 2008). Recent

research on AAC devices and field studies collaborating with people with aphasia or

other related language disorders have helped to explore many avenues in which the

usability of AAC systems can be improved further. For example, Allen, McGrenere and

Purves (2007) introduced PhotoTalk, a mobile device application that allows people with

aphasia to capture and manage digital photographs to support face-to-face

communication. Voice4U, another AAC application, provides a portable and

customized communication aid to people with aphasia through customizable icons,

images, photos and audio clips (Willkomm, 2011). Applications designed to assist

individuals with aphasia are moving from image-based support to context-based support.

Two applications that are currently available or in development are (1) MyVoice

and (2) TalkAbout. Both systems utilize context sensitivity to

improve the usability of traditional AAC systems. MyVoice provides location-specific

adaptation for people with aphasia to suggest conversation topics (Epp, Campigotto,

Levy & Baecker, 2011). Going a step further, TalkAbout would also provide users

with a word list that is adapted to their current location as well as suggest the topics of

conversation previously discussed with the same conversation partner (Kane, Jayant,

Wobbrock, & Ladner, 2012). Inspired by these evolving applications and resources, my

current research project designed an application that provides users with spontaneous

speech recommendations based on location context. While MyVoice and TalkAbout

both utilize pre-stored vocabulary lists, SpeakAhead uses a crowdsourcing platform

to integrate human contributions.



2.3 Location Awareness

While the use of location awareness is not new, the use of location awareness in AAC

systems is a relatively new area in Human Computer Interaction, which focuses on using

mobile technologies to anticipate the needs of the user and suggest relevant conversation

topics. Kane et al. (2012) pointed out that “growing ubiquity of mobile devices and

applications has resulted in the introduction and widespread adoption of AAC software in

mainstream mobile devices, tablets and PCs.” Recent work on context awareness in AAC

devices (Kane et al., 2012, Epp et al., 2011, Wisenburn et al., 2008 ) has shown how

people with aphasia can benefit from context awareness in AAC devices. These

approaches were better than traditional AAC approaches in a variety of domains: (1)

These methods provide support for sorting through words and symbols within a given

context; and (2) These methods track the user's geographic context through GPS sensors,

image or speech recognition of the speaking partner to suggest relevant conversation

topics. Those with aphasia are most excited about applications that provide

context-based conversation prompts because these applications narrow down the scope of

phrases they may encounter on a specific subject (Kane et al., 2012).

Wisenburn and Higginbotham (2008) introduced a new AAC device for those with

communication difficulties known as voice-input voice-output communication aid or

VIVOCA. VIVOCA uses automatic speech recognition (ASR) techniques, based on

statistical hidden Markov models (HMMs), to generate coherent

speech from disordered attempts (Wisenburn & Higginbotham, 2008). The software

MarcoPolo by Epp et al. (2011) provides a tailored vocabulary list that can be arranged

according to a category and/or a geographic location.

Since the approaches above suggest conversation topics that are pre-programmed by

users in the word lists, my project focuses on designing an application that provides

AAC users with spontaneously generated new conversational phrases and thus

reduces the burden placed on people with aphasia while using traditional vocabulary lists.

2.4 Crowdsourcing

Crowdsourcing solicits labor from an online community. Small sub-tasks that can be

easily solved by human workers are outsourced to internet workers. Responses are

available on-demand from one of the many people in a given platform. This is an

especially important resource when required responses are specialized and not able to be

computed through algorithms.

Crowdsourcing applications can be used for a number of different needs and purposes.

Luis von Ahn introduced crowdsourcing in a game format: his “Games with a

Purpose” used the collective intelligence of participants to

complete tasks that would be extremely complex to solve with nonhuman intelligence

(von Ahn & Dabbish, 2004). Google applied a similar model of crowdsourcing to refine

search results for online images. Soylent, a word processing crowdsourcing interface,

calls on crowdsourced workers to edit a document with a Find-Fix-Verify pattern. The

initial worker locates errors in a document, a second worker edits the errors that were

identified, and a third worker verifies the edits (Bernstein et al., 2010).

As crowdsourcing applications become more varied, it is relevant to consider the

reliability and quality of the responses. The ESP Game used an agreement technique to validate

responses. Another technique developed by Rzeszotarski (2011) recorded key presses,

delay time, accuracy and other features of the responses in order to predict quality and

potential for mistakes.

2.5 Crowdsourcing for Accessibility

While crowdsourcing platforms can be used for a variety of purposes like image labeling,

writing, surveys and categorization, crowdsourcing technology has more recently been

identified as a potential model for problem-solving ​(​Brabham, 2008). Crowdsourcing can

be utilized as a resource in improving accessibility for individuals with disabilities. One

recent application that improves accessibility for its users is VizWiz, a crowdsourcing

application that allows blind users to receive

near real-time answers to their specific image-based questions. The quikTurkit algorithm

employed by VizWiz notifies workers in advance that a user is about to ask a question,

decreasing latency between when the question is posed and when an answer is received

(Bigham, 2010). My project introduces a novel method to outsource contextual

information like place names, images, website links, icons and types on Amazon

Mechanical Turk in order to create tasks that require crowd-workers to suggest specific

words and phrases likely to be used in the given location. The responses generated from

this approach are then used to generate vocabulary lists on our SpeakAhead interface and

thus recommend location-specific prompts to people with aphasia.

2.6 Amazon’s Mechanical Turk

Amazon’s Mechanical Turk (MTurk) and other crowdsourcing markets source online

human workers to complete tasks that computers cannot reliably finish accurately.

Amazon Mechanical Turk allows the Requester (a software application, an individual, or an

organization) to submit a task for human workers to complete. The Requester uses

the website to post a task and to check the status of the task. Workers complete and

submit the Human Intelligence Task (HIT) within the designated amount of time and are

rewarded for their submissions (Amazon Web Services 2014, Requester UI Guide (API)).

CHAPTER 3

CROWDSOURCED SPEECH RECOMMENDATION SYSTEM

This chapter describes the design of our crowdsourced speech recommendation system,

SpeakAhead. It begins with an explanation of the system followed by a brief overview of

the system components. This section also describes the data flow diagrams and major

modules of the proposed recommendation system. It then provides a detailed description

of the database and algorithms. Lastly, we discuss the technical and functional

specifications of the prototype and the limitations of the user interface.

3.1 SpeakAhead

The SpeakAhead client is a mobile-friendly location-aware web application that allows

people with aphasia to generate and manage common conversational words and phrases

to support their daily needs for communication. SpeakAhead proceeds in three steps:

identifying a location, creating Human Intelligence Tasks (HITs) that ask crowd workers

for phrases likely to be used at that location, and receiving responses (e.g., phrases and

words) for the given place.

The system components include:



(1) a local database that holds user-specific history of locations previously visited and

conversational phrases for that location stored by the application,

(2) a location identification service that identifies the current location of the user and

suggests nearby locations,

(3) a task-generation module that submits tasks to generate location-specific responses

from crowd workers and collects worker responses,

(4) a vocabulary-integration module that integrates and sorts words and phrases derived

from the database with responses generated by web workers to produce speech

recommendations to the user.
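Taken together, these components form one request cycle. The following Python sketch is illustrative only; the stub lambdas stand in for the real services (local database, Google Places, MTurk), and every name in it is hypothetical.

```python
# Illustrative sketch of one SpeakAhead request cycle; the stubs below
# stand in for the real Google Places and MTurk calls.

def speak_ahead_cycle(coords, database, identify_nearby, create_hit, collect_answers):
    places = identify_nearby(coords)          # (2) location identification
    place = places[0]                         # assume the user picks the nearest place
    hit_id = create_hit(place)                # (3) task generation on MTurk
    stored = database.get(place["name"], [])  # (1) user-specific history
    fresh = collect_answers(hit_id)           # (3) worker responses
    # (4) vocabulary integration: stored phrases first, new ones appended
    return stored + [p for p in fresh if p not in stored]

# Exercising the cycle with minimal stubs:
db = {"Subway": ["Can I get a refill for my drink?"]}
phrases = speak_ahead_cycle(
    (39.25, -76.71),
    db,
    identify_nearby=lambda c: [{"name": "Subway"}],
    create_hit=lambda place: "HIT-1",
    collect_answers=lambda h: ["A six-inch turkey sub, please."],
)
print(phrases)
```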



Figure 3. System Walkthrough of the Crowdsourced Speech Recommendation System


(SpeakAhead). The user first navigates to the application and selects his current location
from the list of nearby locations. SpeakAhead gathers the details of this location using

Google Places API (name, image, type, website) and uses this information to ask
crowd-workers to suggest common conversational phrases for that specific location.
SpeakAhead then searches for previously-stored recommendations in the database for
that location and combines this information with newly generated responses submitted by
crowd-workers to spontaneously generate a vocabulary list of location-specific cues.

For instance, Adam is a person with aphasia who has trouble recalling words to use in
conversation. If Adam is present inside Subway Restaurant and wants to communicate
with others, he first navigates to the website on his personal device (e.g. mobile phone,
tablet, laptop) to start the application. The ​Location Module of the application then traces
his nearby locations and services. SpeakAhead suggests nearby places based on their
proximity and Adam sees location prompts (Image, Name, Icon) for Subway restaurant
on top of his list of suggested places.

Once Adam taps on Subway to select it as his current location, SpeakAhead’s ​Task
Module submits a question to crowd-workers to recommend likely phrases usable at that
location (e.g. ordering a specific food item, asking for refills etc.). Once this question is
successfully posted on the Mechanical Turk website, Adam is redirected to a waiting
screen prompting him to wait while responses are generated. In the meantime, the
Vocabulary Integration Module of SpeakAhead collects related phrases previously
answered for that location from the database and displays them to Adam. As the new
location-specific cues arrive, they are appended to the response list to further improve
speech recommendations. While new phrases are added to Adam’s response list, he
clicks on “Can I get a refill for my drink?” from the list of phrases to make it audible to
his conversation partner.

Figure 4. Preliminary question layout used by crowd workers to answer location-specific cues on MTurk.

3.2 System Description


This section describes the database table structures and different modules used in

SpeakAhead. The table structure describes the database schema and the implementation

logic of how the data is used by the application.

3.2.1 Design

The SpeakAhead system has three modules:

a) Location Module – uses different APIs to gather information about the user's location.
b) Task Module – creates tasks for the crowd workers for a given location.
c) Vocabulary Module – interacts with the database and stores new responses from crowd workers.

a) Location Module

In this module, the HTML5 Geolocation API [1] (see Appendix 7: API Glossary) is used to

track the longitude and latitude of the user's current location. The coordinates of the user

are then used to create an HTTP request to Google Places [2] to search for 20 nearby

places. The place name, image, icon, website and type are requested for these locations.

Locations of the images are requested from Google Places; if Google Places does not

have an image, jpg.to [3] is used to search for an image by passing it the name of the location

(for example, Subway, Dunkin Donuts, or Safeway).
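The nearby-places request described above can be illustrated by constructing the query URL. This sketch uses the public Google Places Nearby Search endpoint; the API key is a placeholder, and the deployed system issues the equivalent request from the browser client.

```python
from urllib.parse import urlencode

# Sketch of the nearby-places request.  The endpoint and parameter names
# follow the public Google Places Nearby Search API; "YOUR_KEY" is a
# placeholder for a real API key.
def nearby_search_url(lat, lng, api_key="YOUR_KEY"):
    params = {
        "location": f"{lat},{lng}",  # coordinates from HTML5 geolocation
        "rankby": "distance",        # nearest places first
        "key": api_key,
    }
    return ("https://maps.googleapis.com/maps/api/place/nearbysearch/json?"
            + urlencode(params))

print(nearby_search_url(39.2556, -76.7110))
```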

The Location Module uses the Distance Matrix API [4] for sorting the list of locations

displayed to the user. The Distance Matrix service computes the distance between the

user's current location and the 20 nearby locations. The list of locations is then sorted in

ascending order of distance so that the nearest locations are listed first in the interface.
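The nearest-first ordering can be sketched as follows. The Distance Matrix API returns travel distances; for a self-contained illustration this sketch approximates with straight-line (haversine) distance, and the place records are hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

# Straight-line distance between two (lat, lng) points in kilometres.
def haversine_km(lat1, lng1, lat2, lng2):
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# Sort places so the nearest location is listed first, as the Location
# Module does with Distance Matrix results.
def sort_nearest_first(user, places):
    return sorted(places,
                  key=lambda p: haversine_km(user[0], user[1], p["lat"], p["lng"]))

places = [
    {"name": "Safeway", "lat": 39.26, "lng": -76.72},
    {"name": "Subway", "lat": 39.2557, "lng": -76.7111},
]
print([p["name"] for p in sort_nearest_first((39.2556, -76.7110), places)])
```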

Once the user selects an appropriate location, the details are passed to the Task Module

for further processing.

[1] http://www.w3schools.com/html/html5_geolocation.asp
[2] https://developers.google.com/places/documentation/
[3] http://jpg.to/
[4] https://developers.google.com/maps/documentation/distancematrix/

The following flow-chart (see Figure 5) displays the sequence of

steps performed by the Location Module:

Figure 5. Data Flow diagram Level 1 (Location Module), showing the logical components and flow of location identification.

The Location Module takes the coordinates of the current location as input and traces the two

nearest locations to the user. The Location Module then retrieves the contextual attributes

of these places from the Google Places API (name, type, image, and icon). The Module

then displays these locations to the user showcasing the nearest location first. Upon user

click, the Location module sends the name, image and type of place to the Task Module.

b) Task Module

In this module, the attributes of the location selected by the user (place id, name, type,

website, image and icon) are supplied as HITLayoutParameters [5] to the ‘CreateHIT’

method available in the MTurk SDK to create a new HIT. The Task Module also uses a

unique HIT layout identifier (HITLayoutId) (see Appendix 6: MTurk Terminologies) to

use pre-designed HIT layouts managed in the requester's MTurk website account. MTurk

provides an interface for designing HIT layouts on the requester's website, and a layout can be

accessed through the methods provided by the MTurk SDK using its unique

HITLayoutId.

The ‘CreateHIT’ method first assigns the location attribute values (location name,

website, and type of place) to the designated placeholders positioned inside the selected

HIT layout and then publishes the HIT on the Mechanical Turk website.

[5] http://docs.aws.amazon.com/AWSMechTurk/latest/AWSMturkAPI/ApiReference_HITLayoutParameterArticle.html
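The layout-parameter payload passed to ‘CreateHIT’ can be sketched as below. The {"Name": ..., "Value": ...} shape mirrors the MTurk API's HITLayoutParameter records; the placeholder names and the place record are hypothetical.

```python
# Build the HITLayoutParameter list for a selected place.  The Name/Value
# record shape follows the MTurk API; the placeholder keys are hypothetical
# and must match those defined in the requester's HIT layout.
def hit_layout_parameters(place):
    return [{"Name": key, "Value": str(value)} for key, value in place.items()]

place = {
    "location_name": "Subway",
    "location_type": "restaurant",
    "location_website": "http://subway.com",
}
params = hit_layout_parameters(place)
print(params[0])

# A requester client would then publish the HIT, roughly:
#   client.create_hit(HITLayoutId=LAYOUT_ID, HITLayoutParameters=params, ...)
```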

These HITs are then used to ask workers to write words or phrases for a specific location.

While SpeakAhead waits for the responses from MTurk, the Task Module calls the

GetAssignment operation every 3 minutes (see Appendix 6: MTurk Terminologies) to

retrieve newly submitted responses. Once the Vocabulary Integration Module returns the

list of responses, the incoming responses from MTurk are then appended to the response

list and displayed to the user. The response list consists of a textual list of

location-relevant phrases and words. When the user selects a specific response from the list,

the worker who submitted it is approved for payment on MTurk. The selected response is

saved in the database for future use. The following flow-chart (see Figure 6) displays

the sequence of steps performed by the Task Module and Vocabulary Module of

SpeakAhead:

Figure 6. Data Flow diagram Level 2 (Task Module, Vocabulary Module)


Showing logical components and flow for Generating and Integrating Responses.

The Task Module (1. Creating Response Writing Task; 2. Searching for Responses in Database) takes place-related information as input from the Location Module and creates human intelligence tasks (HITs) on MTurk to collect responses submitted by workers. The Vocabulary Module takes location attributes from the Task Module as input and searches for location-relevant responses in the database. The Vocabulary Module (3. Integrating Pre-stored and New Responses) then integrates existing responses retrieved from the database with responses newly submitted by workers on MTurk.
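The polling step performed by the Task Module can be sketched as follows. This is an illustrative Python sketch, not the project's C# implementation: the `client` object and its `get_assignments` method are stand-ins for the MTurk SDK's GetAssignment operation, and the assignment field names are assumptions.

```python
import time

POLL_INTERVAL_SECONDS = 180  # the Task Module polls MTurk every 3 minutes

def fetch_new_responses(client, hit_id, seen_assignment_ids):
    """Return responses from assignments that have not been processed yet.

    `client` is any object with a get_assignments(hit_id) method returning
    dicts with 'AssignmentId' and 'AnswerText' keys (a stand-in for the
    real MTurk GetAssignment operation).
    """
    new_responses = []
    for assignment in client.get_assignments(hit_id):
        if assignment["AssignmentId"] not in seen_assignment_ids:
            seen_assignment_ids.add(assignment["AssignmentId"])
            new_responses.append(assignment["AnswerText"])
    return new_responses

def poll_forever(client, hit_id, response_list):
    """Append newly submitted MTurk responses to the displayed list."""
    seen = set()
    while True:
        response_list.extend(fetch_new_responses(client, hit_id, seen))
        time.sleep(POLL_INTERVAL_SECONDS)
```

Tracking already-seen assignment IDs ensures that each poll appends only genuinely new responses to the user's list.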

c) Vocabulary Module

The Vocabulary Module queries the database engine to search for responses for a specific

location. The Vocabulary Module first queries the database using the unique location

identifier (LocationId). Each location has a unique Id given by Google Places API. The

Vocabulary Module then uses ‘LocationName’ to search for responses. If the database

search using ‘LocationId’ and ‘LocationName’ returns no results, then the Vocabulary

Module queries the database to find words and phrases specified for similar locations

(using LocationType). Meanwhile, a request is made to MTurk to get responses for the

HIT created for the particular location. The pre-stored responses and new responses

coming from MTurk are then sorted in an order where pre-stored responses come first

and then the new responses follow in the response list. If one or more responses are

retrieved from the local database, responses are sorted in the order of highest answer

counter value first. Answer counter is a count of the number of times the user has

selected that phrase from the response list. If two or more responses have the same

counter value, then the most recently selected response is listed first. Once all the

pre-stored responses are added to the response list, new responses coming from MTurk

are appended to the end of this list based on their arrival time. ​The user clicks on the

responses to speak the intended words or phrases. ​The answer counter value in the

database (AnswerCounter) is incremented each time the user selects a pre-stored

response. Selected responses are made audible to others using .NET's built-in speech synthesizer: the SpeechSynthesizer class, part of the .NET System.Speech assembly, is used in SpeakAhead to read aloud the selected phrase from the response list.
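The ordering described above — pre-stored responses sorted by highest answer counter, ties broken by most recent selection, with new MTurk responses appended in arrival order — can be sketched as follows. This is an illustrative Python sketch; the field names (`text`, `counter`, `last_selected`) are assumptions standing in for the database columns.

```python
def build_response_list(prestored, mturk_responses):
    """Order responses for display as the Vocabulary Module does.

    `prestored` items are dicts with 'text', 'counter' (times selected),
    and 'last_selected' (a sortable timestamp); `mturk_responses` is a
    list of texts already in arrival order.
    """
    # Highest counter first; ties broken by most recently selected first.
    ordered = sorted(
        prestored,
        key=lambda r: (-r["counter"], -r["last_selected"]),
    )
    # New MTurk responses go at the end, in arrival order.
    return [r["text"] for r in ordered] + list(mturk_responses)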



3.2.2 Database

As words and phrases are generated from Mechanical Turk, a local database is used to store this data and to retrieve previously collected responses that the user has selected from the SpeakAhead interface. For this purpose, SpeakAhead uses the SQLite database engine. SQLite provides a self-contained, server-less, zero-configuration, transactional SQL database engine that is well suited to mobile device applications.

3.2.2.1 Database Schema

The database contains four tables (see Table 2). The tables used to store and retrieve information for making speech recommendations are:

a. tbl_Location
b. tbl_HIT
c. tbl_Assignment
d. tbl_Answer

The following chart provides a summary of the tables with sample data.

Table Name       Table Description                                          Sample Data

tbl_Location     Stores all the location details for which                  Subway Restaurant,
                 responses are selected by the user                         Arbutus Church, Giant Foods

tbl_HIT          Stores the list of HITs with unique HIT IDs                3E22YV8G..,
                 generated by the MTurk system for a given location         3G1D61SJ8…, 4RUGX8T….

tbl_Assignment   Stores the list of unique assignments retrieved            3G1D61SJ8….,
                 from each worker                                           6R9UGX8T….

tbl_Answer       Stores the list of responses selected by the user          "Welcome all",
                 from the SpeakAhead interface for a given location         "Praise", "Church"

Table 2. Overview of the tables with sample data

6 http://www.sqlite.org/

a. tbl_Location - This table contains the list of locations for which responses are
generated. The primary key for the table is ‘LocationId’. The structure of the table is as
follows –

Figure 7. tbl_Location Structure

The column details stored in the above table (tbl_Location) are as follows:

a) LocationId – (Primary key). This column stores a fixed ID to uniquely identify each location.

b) LocationName – Stores the name of the place selected by the user.

c) LocationType – Stores the Place Type listed by the Google Places API. When multiple values are available, place types are stored as comma-separated values.

d) ​CreatedDateTime​ - Timestamp when table entry is made. A table

entry is made every time the user selects a new response from the

response list.

b. tbl_HIT - This table gives the relationship between HITs created and locations

selected. The table contains the list of HITs submitted by the application on MTurk

for a particular location. The primary key for the table is HITId. The structure of

the table is as follows:

Figure 8. tbl_HIT Structure


Descriptions of the HIT details stored in the table columns are as follows:

a) HITId – (Primary key). Stores a unique ID for each HIT from the MTurk response.

b) ​CreatedDate​ - Timestamp when HIT was created. An HIT is

created when the user selects a specific location.

c) ​LocationId​ – (Foreign Key). Stores LocationId referenced from

tbl_Location.

7 https://developers.google.com/places/documentation/supported_types
8 https://developers.google.com/places/

c. tbl_Assignment - This table is a relationship table between HITId and


AssignmentId. The table contains the list of assignments completed by workers for a particular HIT. The primary key for the table is AssignmentId. The structure of the table is as follows:

Figure 9. tbl_Assignment Structure

Descriptions of the table columns of ‘tbl_Assignment’ are as follows:

a) AssignmentId – (Primary key) Stores a unique identifier for each assignment received from MTurk.

b) ​ApprovedDate​ - Stores the timestamp when Assignment

was approved for payment.

c) ​SubmittedDate​ - Stores timestamp when assignment was submitted

by the worker.

d) HITId – (Foreign Key) Stores the HIT ID referenced from tbl_HIT.

e) CreatedDateTime – Timestamp when the table entry was made.

d. tbl_Answer –

This table contains the list of answer texts, from the worker assignments, that were selected by the user from the response page. When the user selects a response that does not exist in the local database, a new entry is inserted into the table and the answer counter for that response is set to 1. When the user taps a pre-existing response to speak it, the answer counter for that response is incremented by one and the table is updated. The answer counter (see Figure 10) value is used to keep track of the number of times a phrase was selected for a given location. This value is later used by the Vocabulary Module for sorting responses: the system sorts the responses for a particular location by Answer Counter in decreasing order and shows the top 5 answers.

Descriptions of the Answer details stored in the table columns are as follows:

a) AnswerId – (Primary key) Stores a unique identifier for each answer or response.

b) AnswerText – Stores the text of the response present in the assignment.

c) AnswerCounter – Counts the number of times the user has selected the phrase. A new answer is initialized to 1, and the counter is incremented by one each time the user selects the same answer again.

d) AssignmentId – (Foreign Key) Stores the AssignmentId referenced from tbl_Assignment.

e) CreatedDateTime – Timestamp of when a response was saved for the first time.
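The insert-or-increment behavior of tbl_Answer and the top-5 query can be sketched with Python's built-in sqlite3 module (the prototype itself accesses SQLite from C#). The column set is reduced to the ones the logic needs, so this is an illustrative sketch rather than the project's actual data-access code.

```python
import sqlite3

def open_db():
    """Create an in-memory database with a minimal tbl_Answer."""
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE tbl_Answer (
        AnswerId INTEGER PRIMARY KEY,
        AnswerText TEXT,
        AnswerCounter INTEGER,
        AssignmentId TEXT,
        CreatedDateTime TEXT DEFAULT CURRENT_TIMESTAMP)""")
    return conn

def record_selection(conn, answer_text, assignment_id):
    """Insert a new answer with counter 1, or increment an existing one."""
    updated = conn.execute(
        "UPDATE tbl_Answer SET AnswerCounter = AnswerCounter + 1 "
        "WHERE AnswerText = ?", (answer_text,)).rowcount
    if updated == 0:
        conn.execute(
            "INSERT INTO tbl_Answer (AnswerText, AnswerCounter, AssignmentId) "
            "VALUES (?, 1, ?)", (answer_text, assignment_id))
    conn.commit()

def top_answers(conn, limit=5):
    """Answers sorted by counter, highest first, as used for display."""
    rows = conn.execute(
        "SELECT AnswerText FROM tbl_Answer "
        "ORDER BY AnswerCounter DESC LIMIT ?", (limit,))
    return [text for (text,) in rows]
```

Checking the UPDATE's affected-row count before inserting keeps the insert-or-increment step to two simple statements without requiring a uniqueness constraint on AnswerText.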



Figure 10. Structure of tbl_Answer

3.2.2.2 Entity Relationship Diagram for the Database

The following E-R diagram (see Figure 11) illustrates the relationship between the four

tables in the database (tbl_Location, tbl_HIT, tbl_Assignment, tbl_Answer):

Figure 11. E-R Diagram for the database


Illustrating the relationships between the tables; in this data model, all of the tables have one-to-many relationships.
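One possible SQLite rendering of the four tables and their one-to-many relationships is sketched below. The column sets are abbreviated to those described in the schema section, and the foreign-key clauses express the relationships shown in the E-R diagram; this is an assumption about the DDL, not the project's actual schema script.

```python
import sqlite3

# Hypothetical DDL for the four tables; REFERENCES clauses encode the
# one-to-many chain Location -> HIT -> Assignment -> Answer.
SCHEMA = """
CREATE TABLE tbl_Location (
    LocationId TEXT PRIMARY KEY,
    LocationName TEXT,
    LocationType TEXT,
    CreatedDateTime TEXT);
CREATE TABLE tbl_HIT (
    HITId TEXT PRIMARY KEY,
    CreatedDate TEXT,
    LocationId TEXT REFERENCES tbl_Location(LocationId));
CREATE TABLE tbl_Assignment (
    AssignmentId TEXT PRIMARY KEY,
    ApprovedDate TEXT,
    SubmittedDate TEXT,
    HITId TEXT REFERENCES tbl_HIT(HITId),
    CreatedDateTime TEXT);
CREATE TABLE tbl_Answer (
    AnswerId INTEGER PRIMARY KEY,
    AnswerText TEXT,
    AnswerCounter INTEGER,
    AssignmentId TEXT REFERENCES tbl_Assignment(AssignmentId),
    CreatedDateTime TEXT);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```

Each child table references its parent's primary key, so one location can own many HITs, one HIT many assignments, and one assignment many answers.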

3.3 SpeakAhead Interface

This section provides the functional and technical specifications of the SpeakAhead

prototype, followed by a brief description of its limitations.



3.3.1 Software and Technologies

The SpeakAhead software is developed in C# using the ASP.NET application framework. The database is maintained with the SQLite database engine (see Section 3.2.2 for details on the database implementation). The user interface is developed using HTML5, CSS3, JavaScript, and jQuery Mobile. The Google Places API is used to obtain contextual information about locations near the user, and the MTurk SDK is used to retrieve responses from workers on the Mechanical Turk website.

3.3.2 User Interface

Similar to many other AAC systems, SpeakAhead's purpose is to enable users to store,

organize, browse, and speak stored words and phrases. The current prototype enables

users to browse, add and sort items based on the current location context.

3.3.3 Starting the Application

The user first navigates to the application URL and then clicks the start button on the

home page to start the application (see figure 14). If location-sensing is not enabled in the

browser, the user is asked to enable it before the application is started. Currently, the

web-based version of the prototype does not support user profiles. However, current mobile devices (e.g., the iPhone 5s and Samsung Galaxy S5), which come pre-installed with fingerprint authentication and biometric security systems for personal identification, could be used effectively with future versions of the prototype. These fingerprint-reading and scanning techniques may also be a good alternative for people with language disorders compared with typical text-based log-in approaches.

Figure 14. SpeakAhead (Home Page)

3.3.4 Browsing Items

​SpeakAhead's main interface comprises 2 scrollable lists. First, a list of prompts

showcasing nearby locations and second, a textual list of words and phrase prompts for

that location. Touching an on-screen item speaks the associated text using the .NET Speech Synthesizer assembly (http://msdn.microsoft.com/en-us/library/system.speech.synthesis.speechsynthesizer.aspx). While many AAC systems organize words by topic (e.g.,

food, vehicles, parts of the body), SpeakAhead organizes words and phrases by the location of the user. In its most basic form, the interface is similar to commercially available AAC software such as MyVoice, in the sense that both applications use location awareness, but SpeakAhead adds the ability to recommend new phrases and words collected from crowd workers. SpeakAhead also stores phrases selected by the user in the local database.

Figure 15. Location Selection Page of the context-aware speech recommendation tool

The location selection page provides a list of nearby locations along with their attributes, such as name, image, website, icon, and type. To make the interface user friendly and easy to understand, the default setting only displays the place name and place image. To select a location from the nearby places list, the user either taps on the place image or inside the column displaying the place name. (The screenshot was taken in a web browser using the Ripple mobile emulator to test the look and feel of the interface on an iPad.)

​Figure 16. Location Image showing Place Icons using column toggle button

A column toggle button is provided in the top right corner. When the user clicks this button, a dialog box is shown with checkboxes for choosing the location attributes to be displayed. Figure 16 displays the location page showing the list of nearby locations, taken near a recreational spot in Baltimore. (The screenshot was taken in a web browser using the Ripple mobile emulator to test the look and feel of the interface on an iPad.)

10 http://emulate.phonegap.com/

Figure 17. Column toggle button showing layout with different columns

3.3.5 Adding New Items

Users may add new words or phrases to SpeakAhead's database by themselves or with

the assistance of a companion. Adding a new item requires the user to select the text to be spoken from the response list, which is generated by combining responses from the local database and from crowd workers. The current web-based prototype uses a single

database to gather the responses for all users. A fully functional mobile application would

allow a personalized database to be stored in each user's device.



Figure 18. Response page showing phrases likely to be spoken at Subway.

The above response page provides a list of words and phrases for a Subway restaurant. When the user selects a location on the location page, the Vocabulary Module sorts and integrates the pre-stored responses in the database with new responses recently submitted by crowd workers. The module then displays the recommended location-specific prompts on the interface.

3.3.6 User interface limitations



One limitation of the prototype is that it does not associate images with the phrases and prompts on the response page. Although adding images may increase the time needed to answer the human intelligence tasks and the algorithmic complexity of tagging images with responses, it could greatly improve the recognizability of responses. A second limitation is the use of a single color scheme for all word structures, such as nouns and verbs. Introducing different color schemes to distinguish words, phrases, and nouns could improve ease of use. Another limitation of the UI is that it does not support a per-user login feature. This could be addressed by maintaining user sessions and using touch-based login techniques.

3.3.7 Current status of the prototype

One major limitation of the SpeakAhead prototype is that the design strategies used for building the UI have not been tested with people with aphasia or other related language disorders. Kane et al. (2012) identified several design activities that are especially helpful for designing context-aware communication devices with people with aphasia. These design activities can be used to solicit information and feedback on how the target users might want to use the application.

By combining the existing interactive prototype with low-level prototypes and storyboards, we can collaborate more effectively with people with aphasia on future design improvements.

CHAPTER 4

CONCLUSION

This chapter summarizes the contributions and findings of this project and describes its relation to previous work. It explains the project's current limitations and identifies opportunities for future work.

4.1​ ​Discussion

Previous research on AAC systems (Epp et al., 2011; Kane et al., 2011; McGrenere et al., 2008) has shown how context-aware computing can improve the usability of traditional AAC systems. Existing communication aid technologies consist only of vocabulary catalogs that are either built in or manually pre-stored by the user. In this project, we set out to support the earlier research by introducing the design of a context-aware AAC framework that leverages contextual information from the user's location. We further extend the related research by utilizing a method that outsources contextual information associated with the user's current location to internet workers. Our system produces Human Intelligence Tasks (HITs) on Amazon's MTurk website at the user's request to spontaneously generate location-specific phrase prompts, and the interface continuously updates the vocabulary lists with new responses for that location.

4.2 Future Work



This project demonstrates the design and implementation of a crowdsourced speech recommendation system, SpeakAhead. Future studies could focus on field evaluations with real users. The current version of SpeakAhead leverages place name, type, website, image, and icon to supply contextual information to the interface and to crowd workers. Future research could also explore the usability of the responses in a broader context by carefully incorporating contextual information such as place reviews and place menus into the HIT designs as well. It may also be beneficial to include filtering algorithms to remove responses that do not serve the communication intent of the user (e.g., using word patterns or keywords to filter out responses not related to food in a restaurant). The worker tasks could also be broken down into easily verifiable steps: for example, one worker's task could be to find related images on the web, and another's could be to verify that a response is not the same as one already submitted. Currently, a local database is used to reduce latency in feedback times so that the user is not left waiting. However, future studies focusing on algorithms like the one introduced in Bigham et al. (2010) (quikTurkit) could further improve feedback mechanisms on the worker end.
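The keyword-filtering idea suggested above can be sketched as follows. This is a minimal illustrative sketch: the keyword set is invented for the example, and a real filter would draw its vocabulary from the place type.

```python
# Hypothetical keyword set for food-related locations.
FOOD_KEYWORDS = {"eat", "order", "menu", "table", "food", "drink"}

def filter_responses(responses, keywords):
    """Keep only responses that mention at least one expected keyword."""
    kept = []
    for text in responses:
        # Normalize each word: strip punctuation and lowercase.
        words = {w.strip(".,!?").lower() for w in text.split()}
        if words & keywords:
            kept.append(text)
    return kept
```

A filter like this would run between the MTurk retrieval step and the response list, discarding off-topic submissions before the user sees them.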

4.3 Conclusion

In this project, we have explored how location-aware computing and crowdsourcing may improve the usability of current AAC systems. We have presented the design of a framework that outsources contextual information (the name, image, type, icon, and website associated with the user's current location) to create HITs on MTurk and spontaneously generate daily conversational phrases for that location in the user's vocabulary lists. Lastly, we described the implementation details of the SpeakAhead prototype, which tracks the current location of the user and generates response lists suggesting location-specific words and phrases.

-End-

REFERENCES

[1] Ahn L.V, & Dabbish, L. (2004). Labeling images with a computer game. Proceedings
of the 2004 Conference on Human Factors in Computing Systems - ​CHI ’04​,
319–326. doi:10.1145/985692.985733

[2] Ahn, L.V, and Dabbish, L. Designing games with a purpose. ACM 51, 8 (2008), 58–
67.

[3] Ahn, L.V., Liu, R., and Blum, M. Peekaboom: a game for locating objects in images. In CHI '06: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2006), ACM, pp. 55–64.

[4] Ahn, L.V., Ginosar, S., Kedia, M., Liu, R., and Blum, M. Improving accessibility of the web with a computer game. In Conference on Human Factors in Computing Systems (CHI '06), pages 79–82, 2006.

[5] Ahn, L.V., Kedia, M., and Blum, M. Verbosity: a game for collecting common-sense facts. In Conference on Human Factors in Computing Systems (CHI '06).

[6] Al Mahmud, A. and Martens, J.-B. Re-connect: designing accessible email communication support for persons with aphasia. Proc. CHI EA '10, ACM Press (2010), 3505–3510.

[7] Allen, M., McGrenere, J., and Purves, B. The design and field evaluation of
PhotoTalk: a digital image communication application for people with aphasia.
Proc. ASSETS '07, ACM Press (2007), 187-194.

[8] Alonso, O., Rose, D.E., and Stewart, B. Crowdsourcing for relevance evaluation. SIGIR Forum 42, 2 (2008), 9–15.

[9] Amazon Mechanical Turk Guide for Social Scientists (updated 1-18-12),
Buhrmester M., Swan Lab, http://homepage.psy.utexas.edu/HomePage/
Students/Buhrmester/MTurk%20Guide.htm

[11] Banajee, M., DiCarlo, C. & B-Stricklin S, (2003). Core Vocabulary Determination
For Toddlers, Augmentative and Alternative Communication​,​ 2, 67 - 73.

[12] Bernstein, M.S., Little, G., Miller, R.C, Hartmann, B., Ackerman, M.S, Karger,
D.R, Crowell D., Panovich K. Soylent: A Word Processor with a Crowd Inside,
UIST’10.

[13] Beukelman, D.R. and Mirenda, P. Augmentative & Alternative Communication:


Supporting Children & Adults with Complex Communication Needs​.​ Paul H
Brookes Pub Co, 2006

[14] Bigham, J.P., Jayant, C., Ji, H., Little, G., Miller, A., Miller, R.C., Miller, R., Tatarowicz, A., White, B., White, S., and Yeh, T. VizWiz: nearly real-time answers to visual questions. UIST 2010.

[15]​ ​Bigham​ ​J.P​, ​Lasecki W. S., Interactive Crowdsourcing,​ ​P. Michelucci (ed.),
Handbook of Human Computation, 509, Springer Science+Business Media New
York 2013

[16] Brabham, D.C. Moving the crowd at iStockphoto: the composition of the crowd and motivations for participation in a crowdsourcing application. First Monday.

[17] Brabham, D. C. (2008). Crowdsourcing as a Model for Problem Solving: An


Introduction and Cases. Convergence: The International Journal of Research into
New Media Technologies, 14(1), 75–90.

​ aid Crowdsourcing Current State & Progress toward Mainstream


[18] Breint, F.,​ P
​ ersion 1.00.00, SmartSheet.com
Business Use​, V

[19] Callison-Burch, C.Fast, cheap, and creative: Evaluating translation quality using
Amazon's Mechanical Turk. Proceedings of the 2009 Conference on Empirical
Methods in Natural Language Processing: Volume 1-Volume 1, Association for
Computational Linguistics (2009), 286-295

[20]​ ​Communication aids and computer-based therapy after stroke, Stroke


Association’s Information Service, 2013.

[21] Clark, H. H. (1996). Structure of Conversation (1984), 820–823.

[22] Clarke, M., Wilkinson, R. (December 2007). Interaction between children with cerebral palsy and their peers. Augmentative and Alternative Communication 23(4): 336–348.
[23] Crais, E. (1991). “Moving from “parent involvement” to family centered services”.
American Journal of Language Pathology 1:5-8

[24] Dawe, M. Desperately Seeking Simplicity: How Young Adults with Cognitive
​ HI 2006,
Disabilities and their Families Adopt Assistive Technologies,​ C
Montreal, Canada, pp. 1143-1152, 2006.
[25] Dempster, M., & Alm, N. (2010). Automatic generation of conversational
Utterances and narrative for Augmentative and Alternative Communication : a
Prototype system, (June), 10–18.

[26] Eaton, E., & Wagstaff, K. (2005). A context-sensitive and user-centric approach to
developing personal assistants.

[27] Eidelman, V., Huang, Z., & Harper, M. (2010). Lessons Learned in Part-of-Speech
Tagging of Conversational Speech, (2007).

​ ontext-Sensitive
[28] Epp, CD., Campigotto R, Levy A, Baecker R​,​ Marco Polo:​ C
Mobile Communication Support. Proc. FICCDAT: RESNA/ICTA, 2011

[29] Erikson, T., Some Thoughts on a Framework for Crowdsourcing​,​ Position


Paper for the CHI 2011 Workshop on Crowdsourcing and Human Computation,
IBM T. J. Watson Research Center

[30] Erickson, T. (2011). Some Thoughts on a Framework for Crowdsourcing, 1–4.

[32] Evans, B.M. and Chi, E.H. Towards a model of understanding social search. Proc.
CSCW 2008, ACM (2008), 485-494.

[33] Experimenting on Mechanical Turk: How Tos, Jacobson M.


http://blogs.parc.com/blog/2009/07/experimenting-on-mechanical-turk-5-how-tos/
Experiments using Mechanical Turk. http://www.playgraph.com/2010/tutorials/
Experiments-using-mechanical-turk-part-1

[34] Fenwick, K., Massimi, M., Baecker, R., Black, S., Tonon, K., Munteanu, C.,
Rochon, E., and Ryan, D. Cell phone software aiding name recall.
Proc. CHI '09 EA, ACM Press (2009), 4279-4284.

[35] Gena, C. (2005). Methods and techniques for the evaluation of user-adaptive
systems.

[36] Geolocation API http://dev.w3.org/geo/api/spec-source.html



[37] ​Golashesky, C. Technology Applications at the Adler Aphasia Center, Top


Stroke Rehabil 2008; 15(6):580–585

[38] Google Places Library https://developers.google.com/places/documentation/

[39] Google Distance Matrix Library


https://developers.google.com/maps/documentation/distancematrix/

[40] Hacker, S., and Ahn L.V. Matching: eliciting user preferences with an online
Game. CHI ’09: Proceedings of the 27th international conference on Human
Factors in computing systems (New York, NY, USA, 2009), ACM, pp.1207–1216

[41] Hansen, D. L., and Golbeck, J. mixing it up: recommending collections of items.
CHI ’09: Proceedings of the 27th international conference on Human factors in
Computing systems (New York, NY, USA, 2009), ACM, pp. 1217–1226.
Howe, J. The rise of crowdsourcing. Wired Magazine 14, 6 (2006).

[42] Ipeirotis, P. Mechanical Turk: The Demographics. A Computer Scientist in a


Business School, 2008. http://behind-the-enemy-lines.blogspot.com/2008/03/
Mechanical-turk-demographics.html.

[43] Irani, L. and Silberman, S. Turkopticon: The Sourced Crowd is Made of People.
Presentation given at Dolores Labs, 10 June 2009.

[44] Kane S. K., Linam-C, B., Althoff, K., & McCall, D. (2012). What We Talk About :
Designing a Context-Aware Communication Tool for People with Aphasia, 1-8

[45] Kane, S.K., Bigham, J.P., and Wobbrock, J.O. Slide Rule: making mobile touch screens accessible to blind people using multi-touch interaction techniques. ASSETS 2008, 73–80, 2008.

[46] Kane S. K., Jayant C., Wobbrock J. O., and Ladner R. E. Freedom to roam: a
Study of mobile device adoption and accessibility for people with visual and
Motor disabilities. ASSETS 2009

[47] Kittur A., Chi E.H., Suh B. Crowdsourcing for Usability: Using Micro-
Task Markets for Rapid, Remote, and Low-Cost User Measurements, Palo Alto
Research Center
[48] Kittur A., Chi E.H., Suh B. Crowdsourcing User Studies with Mechanical Turk,
Palo Alto Research Center, CHI 2008

[49] Law, E., Ahn, L.V, R. Dannenberg, and Crawford M. Tagatune: A game for
Music and sound annotation. International Conference on Music Information
Retrieval (ISMIR’07), pages 361–364, 2007

[50] Law E., and Ahn, L.V. Input-agreement: a new mechanism for collecting data
Using human computation games. In Proceedings of CHI ’09 (New York, NY,
USA, 2009), ACM, pp. 1197–1206.

[51] Levelt, W. J. M. (1999). Models of word production, ​3​(6), 223–232

[52] Luger, G. F., & Chakraborty, C. (1984). Knowledge-Based Probabilistic


Reasoning From Expert Systems to Graphical Models

[53] Mason W., Suri, S. Conducting behavioral research on Amazon’s MTurk,


Behavior Research Methods (2012)

[54] Matikainen, P., Sukthankar, R., Hebert, M. Model recommendation for action recognition. The Robotics Institute, Carnegie Mellon University, and Google Research.

[55] Mcquiggan, S. W., Lee S., & Lester, J. C. Predicting User Physiological
Response For Interactive Environments : An Inductive Approach, 60–65.

[56] McGrenere, J., Davies R., Findlater, L., Graf P., Klawe, M., Moffatt, K., Purves,
B., and Yang, S. Insights from the aphasia project: designing technology for and
With people who have aphasia. Proc. CUU '03, ACM Press (2003), 112-118.

[57] Microsoft MSDN Library http://msdn.microsoft.com/library/

[58] MTurk Blog ​http://www.behind-the-enemy-lines.com/

[59] Moffat, A. & Tan, R Language Representation on Dynamic AAC Devices: How
Do you choose? Independent Living Center Curtin University, Bentley, July 2013
Nilsson, M. (2002). Speech Recognition using Hidden Markov Model performance
Evaluation in a noisy environment.

[60] NAA National Aphasia Association www.aphasia.org

[61] Notes from the amazon mechanical Turk tutorial, Brad Lab, Kesebir S., University
of Virginia,
http://www.darden.virginia.edu/web/uploadedFiles/Darden/BRAD/BRAD%20Lab%
20-%20Amazon%20Mechanical%20Turk%20Guidelines.pdf

[62] Nottale, M., & Baillie, J. Talking Robots : grounding a shared lexicon in an
Unconstrained environment, in Proceedings of the Seventh International
Conference on Epigenetic Robotics, 2007

[63] Olleros, F. Learning to Trust the Crowd: Some Lessons from Wikipedia. 2008
International MCETECH Conference on e-Technologies, (2008), 212-216.

[64] Oni, A., Lucas P., Druzdel M.J Comparison of Rule-Based and Bayesian Network
Approaches in Medical Diagnostic, 283–292. Springer-Verlag Berlin Heidelberg
2001

[65] Parent, G., Maxine Eskenazi. Toward Better Crowdsourced Transcription :


Transcription of a year of the let's go bus information system data, IEEE 2010

[66] Pat, M.,​ ​Toward Functional Augmentative and Alternative Communication


For Students with Autism: Manual Signs, Graphic Symbols, and Voice Output
Communication Aids, Vol. 34 • 203–216 • July 2003, American Speech-
Language-Hearing Association

[67] Paul Visvader AAC Basics and Implementation: How to Teach Students who
“Talk with Technology” MA CCC-SLP © 2013 Assistive Technology Team,
Boulder Valley School District, Boulder, CO

[68] Pentland, A., & Liu, A. (1999). Modeling and prediction of human behavior.
​ omputation, 11(1), 229–42. http://www.ncbix.nlm.nih.gov/pubmed
Neural​ C
/9950731

[69] Putze, F., Meyer, J., Borné J., Schultz T., Holt D.V, & Funke J. (2006).
Combining Cognitive modeling and EEG to predict user behavior in a search task,
303–304.

[70] Ross J., Zaldivar, A., Irani, L., & Tomlinson B. (2010). Who are the Turkers ?
Worker Demographics in Amazon Mechanical Turk. ​ACM CHI Conference 2010,​
1–5.

[71] Running studies on MTurk:


http://blogs.parc.com/blog/2009/07/experimenting-on-mechanical-turk-5-how-tos/

[72] Rzeszotarski M. Jeffrey, Kittur, Aniket, Instrumenting the Crowd: Using Implicit
Behavior measures to predict task performance UIST’11, October 16–19, 2011,
Santa Barbara, CA, USA. Copyright © 2011 ACM

[73] Sarukkai, R. R. Real-time User Modeling and Prediction : Examples from


YouTube, 1600, Google Inc.

[74] Schilit, B., Adams N., and Want, R. Context-aware computing applications. IEEE
Workshop on Mobile Computing Systems and Applications, IEEE (1994), 85-90.

[75] Schweer, A., Zealand, N., & Hinze, A. (2008). Combining Context-Awareness and
Semantics to Augment Memory, (April).

[76] Small Talk: www.aphasia.com

[77] Snow, R., O’Connor, B., Jurafsky, D., and Ng, A.Y. Cheap and Fast—But is it
Good?

[78] Evaluating Non- Expert Annotations for Natural Language Tasks. Proc. EMNLP
2008, ACL (2008), 254-263.

[79] Sorokin, A., Forsyth, D., & Goodwin, N. (2008). Utility data annotation with
Amazon Mechanical Turk, (c), 1–8.

[80] Tentori, M. And Hayes, G., Designing for Interaction immediacy to enhance social
Skills of children with autism​, ​Ubicomp, Copenhagen, Denmark, pp 51-60, 2010.

[81] Vojnovic, M., Cruise, J., Gunawardena, D., and Marbach, P. Ranking and suggesting tags in collaborative tagging applications. Technical Report MSR-TR-2007-06, Microsoft Research, February 2007.

[82] Voice4u: www.voice4uaac.com

[83] Voice4U Flyer: http://static.voice4uaac.com/voice4u-flyer.pdf

[84] Wais, P., Lingamneni S., Cook, D., Fennell J., Goldenberg B., Lubarov D., Simons
H. towards Building a High-Quality Workforce with Mechanical Turk, 1–5.

[85] Walsh, G., & Golbeck, J. (2010). Curator : A Game with a Purpose for Collection
Recommendation, 2079–2082.

[86] Weber I., & Robertson S. (2008). Rethinking the ESP Game, 11. Retrieved from
http://research.microsoft.com/pubs/70638/tr-2008-132.pdf

[87] Williams, J. D. (2003). A Probabilistic Model of Human / Computer Dialogue with


Application to a Partially Observable Markov Decision Process, (August).

[88] Willkomm, T. (2011). Apps that benefit individuals with disabilities. Retrieved from
http://www.auburn.edu/outreach/opce/alatec/documents/2013presentations/iPad%20
apps%20Listing%20ALATEC.pdf

[89] Wisenburn, B. and Higginbotham, D. An AAC application using speaking partner


speech recognition to automatically produce contextually relevant utterances:
objective results. Augmentative and Alternative Communication 24, 2, (2008),
100-109.

[90] Wobbrock, J.O. (2010). Research contribution types in human-computer interaction, 23.

[91] Yeh, T., Lee J. J., and Darrell T. Photo-based question answering. MM 2008,
389–398, 2008.

[92] Yamagishi, J. An Introduction to HMM-Based Speech Synthesis, (October, 2006).

[93] Yavuzer G. 2010. Aphasia. In: JH Stone, M Blouin, editors. International

Encyclopedia of Rehabilitation.

APPENDICES

Appendix 1: HIT Designs (Within-turk experiment)

Appendix 2: Other MTurk Terminologies

Appendix 3: API Glossary

Appendix 4: SpeakAhead Code



Appendix 1:​ HIT Designs



Appendix 2:​ ​Other MTurk Terminologies

The following terminologies and descriptions have been derived from the Amazon Mechanical Turk website:

1. Operation - ​Name of the operation being performed.

2. Title - ​The title of the HIT. A title is short and descriptive and describes the kind

​of tasks the HIT contains. On the Amazon Mechanical Turk website, the HIT title

appears in search results, and everywhere the HIT is mentioned.

3. Description - ​A general description of the HIT. A description includes detailed

​information about the kind of task the HIT contains. On the Amazon Mechanical

Turk web site, the HIT description appears in the expanded view of search results,

and in the HIT and assignment screens. A good description gives the user enough

information to evaluate the HIT before accepting it​.

4. HITLayoutID ​- The HITLayoutId allows us to use a pre-existing HIT design

​with placeholder values and create an additional HIT by providing those values

as 'HITLayoutParameters'.

5. HITLayoutParameter - If the HITLayoutId is provided, any placeholder values

must be filled in with values using the 'HITLayoutParameter' structure.

6. Reward - ​The amount of money the Requester will pay a worker for successfully

completing the HIT.

7. AssignmentDurationInSeconds - The amount of time, in seconds, that a

worker has to complete the HIT after accepting it. If a Worker does not complete

the assignment within the specified duration, the assignment is considered

abandoned. If the HIT is still active, that is, its lifetime has not elapsed, the

assignment becomes available for other users to find and accept.

8. LifetimeInSeconds (HIT Expiry Date) - ​The​ ​amount of time, in seconds, after

which the HIT is no longer available for users to accept. After the lifetime of the

HIT elapses, the HIT no longer appears in HIT searches, even if some

assignments for the HIT have not yet been accepted.

9. Keywords - ​One or more words or phrases that describe the HIT, separated by

commas. These words are used in searches to find HITs.

10. MaxAssignments - ​The number of times the HIT can be accepted and completed

before the HIT becomes unavailable.

11. AutoApprovalDelayInSeconds - The number of seconds after an assignment for

the HIT has been submitted, after which the assignment is

considered approved automatically unless the Requester explicitly rejects it.

12. QualificationRequirement - ​A condition that a Worker's Qualifications must

meet before the Worker is allowed to accept and complete the HIT.
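As an illustration, the properties above could be bundled into a single HIT definition. The field names mirror the Mechanical Turk CreateHIT parameters described in this list; the values here are placeholders, not SpeakAhead's actual settings:

```javascript
// Illustrative HIT definition. Field names follow the MTurk CreateHIT
// parameters described above; the values are placeholders only.
const hitProperties = {
  Title: "Suggest phrases for a nearby place",
  Description: "Suggest phrases a visitor might want to say at this place.",
  Reward: 0.05,                       // USD per completed assignment
  AssignmentDurationInSeconds: 600,   // time a worker has after accepting
  LifetimeInSeconds: 3600,            // HIT expires after this interval
  Keywords: "speech, phrases, location",
  MaxAssignments: 3,                  // completions before the HIT closes
  AutoApprovalDelayInSeconds: 86400   // auto-approve after one day
};
```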

Appendix 3: API Glossary

Geolocation API

The HTML5 Geolocation API is used to determine the user's current geographical

position. To address privacy concerns, the position is not

made available to the system unless the user approves it. The geolocation

API is supported by most modern browsers on desktop and mobile devices. The latitude

and longitude of the user are made available using JavaScript on the page. These

coordinates are then used to create requests to Google Places API.
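A minimal sketch of the page-side JavaScript: the position object shape follows the W3C Geolocation API, while the helper name `extractCoords` is ours for illustration.

```javascript
// Pull latitude and longitude out of a Geolocation API position object.
// In the browser this would be wired up as:
//   navigator.geolocation.getCurrentPosition(pos => sendToServer(extractCoords(pos)));
function extractCoords(position) {
  return {
    lat: position.coords.latitude,
    lng: position.coords.longitude
  };
}
```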

Google Places API

The Google Places API (https://developers.google.com/places/documentation/) is a
service that returns information about Places defined within the API. The Google
Places API provides access to a set of place requests. We use the Place Search
Request of the Google Places API to identify locations near the user. Following
is the list of the place requests available to the application.

● Place Search Request - Place Search request returns a list of places based on the

user's location. The Place Search Request gives access to a set of operations. We

use the Nearby Search operation and Place Details operation to identify the

nearby locations of the user and gather the information for the location context -

i. Nearby Search Requests - Nearby Search Request searches for places

within a specified area. This request is used to find twenty nearby

locations near the current geographical position. To ​use the Nearby Search

request, the location module supplies the geographical location of the

place whose information is to be retrieved. This must be specified as

latitude and longitude.

● Place Details ​Request ​- Place Details request returns more detailed information

about a specific Place. This request is used by the location module to provide

users with the contextual information related to the locations suggested. The place

attributes are also used by task modules while posting tasks on MTurk to supply

place information to crowd-workers. Following is the list of place attributes

retrieved by the application.

(a) Place ID - A textual identifier that uniquely identifies a place,
returned from place search.

(b) Place types - An array indicating the type of the address. This is used
to help users and workers identify the type of place, e.g., Subway is a
type of Restaurant.

(c) Place icon - The URL of the suggested icon. The place icon is used to
provide users with an image prompt based on the type of place.

(d) Place name - The human-readable name for the returned result.

(e) Place photo - A photo object containing a reference to an image of the
place. The place photo is used to provide users with an image prompt of
the location.

(f) Place website - The URL of the website of the place. This link is used
to provide further reference to crowd-workers to suggest likely phrases
for any given location.
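As a sketch, a Nearby Search request URL could be assembled from the user's coordinates as follows. The parameter names (`location`, `radius`, `key`) follow the public Google Places web service; the key value in the test is a placeholder.

```javascript
// Build a Google Places Nearby Search request URL.
// 'radius' is in meters; 'key' is the requester's API key.
function buildNearbySearchUrl(lat, lng, radius, key) {
  const base = "https://maps.googleapis.com/maps/api/place/nearbysearch/json";
  return base + "?location=" + lat + "," + lng +
         "&radius=" + radius + "&key=" + key;
}
```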

Distance Matrix Service

The Google Distance Matrix service is used to compute the travel distance between the

original location and nearby destinations. The information returned is based on the

recommended route between start and end points, as calculated by the Google Maps

API. The distance values are used by the location module to sort the nearby locations

by proximity, so that the nearest location is listed first.
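The proximity sort can be sketched as follows. Each entry carries the numeric distance in meters that the Distance Matrix service returns alongside its human-readable text; the field name `distanceMeters` is ours for illustration.

```javascript
// Sort nearby places so the closest is listed first.
function sortByProximity(places) {
  return places.slice().sort((a, b) => a.distanceMeters - b.distanceMeters);
}
```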

.NET SDK for MTurk

The Amazon Mechanical Turk SDK for .NET
(http://aws.amazon.com/code/Amazon-Mechanical-Turk/923) is an open source project containing a set

of libraries and tools designed to build solutions leveraging Amazon Mechanical Turk in

.NET. When the user selects a location from the list of Nearby Locations, available

contextual details for that location are passed to Task Module to create an HIT. The Task

Module uses Amazon Mechanical Turk to post tasks on MTurk worker platform, where

workers can find and accept HITs to submit their answers. Task Module also uses the

requester account on MTurk to monitor incoming responses.

Following is the list of operations used by Task Module for creating and monitoring

Human Intelligence tasks on MTurk.

(a) CreateHIT - The CreateHIT operation creates a new Human Intelligence Task

(HIT). The new HIT is made available for workers to find and accept on the

Amazon Mechanical Turk Website.

(b)​ GetAssignment - ​The​ GetAssignment operation retrieves an assignment with

an Assignment Status value of ​Submitted,​ ​Approved,​ or ​Rejected​, using the

assignment's ID. Task module uses this operation to retrieve answers to the


assignments for the created HITs. This operation is also used by the task module for

monitoring purposes. When the response page is active, periodic calls are made

to this operation to check for incoming responses.
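The monitoring step can be sketched as a bounded polling loop. Here `fetchAssignments` stands in for the SDK's GetAssignment call and is injected so the loop stays testable; in the application the calls would be spaced out on a timer while the response page is active.

```javascript
// Poll for worker answers until some arrive or the attempt budget runs out.
// 'fetchAssignments' stands in for the MTurk GetAssignment call.
function pollForAnswers(fetchAssignments, maxTries) {
  for (let i = 0; i < maxTries; i++) {
    const answers = fetchAssignments();
    if (answers.length > 0) return answers;
  }
  return [];
}
```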



Appendix 4:​ ​SpeakAhead Code

● MTurk.aspx.cs

using​ System;
using​ System.Data;
using​ System.Collections.Generic;
using​ System.Linq;
using​ System.Web;
using​ System.Text;
using​ System.Xml;
using​ System.Xml.Linq;
using​ Amazon.WebServices.MechanicalTurk;
using​ Amazon.WebServices.MechanicalTurk.Domain;
using​ System.IO;
using​ System.Globalization;
using​ System.Net;
using​ System.Net.Http;
using​ System.Web.UI.WebControls;
using​ System.Web.UI.HtmlControls;
using​ Newtonsoft.Json;
using​ HtmlAgilityPack;
using​ System.Threading.Tasks;
using​ System.ComponentModel;
using​ System.Configuration;
using​ System.Web.UI;
using​ System.Data.SQLite;

namespace​ SpeakAhead
{
​public​ ​partial​ ​class​ ​Mturk​ : ​Page
{
​SimpleClient​ client;
​WebClient​ wc;
​string​ jsonStr;
​string​ url = ​string​.Empty;
​Dictionary​<​int​, ​string​> apiKeys;
​int​ keyCount = 0;
​private​ ​string​ apiKey;

​private​ ​const​ ​string​ bingApiKey =


"Apbp5fLLgGjE94d00aDTlId6b29b6DysYbNv9njE2sSfbMeyTevJR0SH8r-tMFYc"​;

​// string hitId;


​protected​ ​void​ Page_Load(​object​ sender, ​EventArgs​ e)
{
apiKeys = ​new​ ​Dictionary​<​int​, ​string​>()
{

{1, ​"AIzaSyCXxIEUeFJj-Be_n2ZNL2Bzo9nSsh4FgF4"​},
{2, ​"AIzaSyDih5BeqMBVUyf3G05TPUVMJKkIdccH6Q4"​},
{3, ​"AIzaSyC65i3XExV_RzsXFJoOc6YnuI--92Q6Q7U"​},
{4, ​"AIzaSyAc37mce4zHKNtwHTEYnEh5F4pwnrSV7aE"​},
{5, ​"AIzaSyABYqJFn6ShFuQ4VENDKH2CPE1jHdv3JjY"​},
{6, ​"AIzaSyD73Hhsw7lzs_myXsqNP9er_ccXPATbG6k"​}
};

​HtmlAnchor​ btnBack = (​HtmlAnchor​)​this​.Master.FindControl(​"btnBack"​);


btnBack.ServerClick += btnBack_ServerClick;

            if (!string.IsNullOrEmpty(Request.QueryString[QueryStringParams.Latitude])
                && !string.IsNullOrEmpty(Request.QueryString[QueryStringParams.Longitude]))
            {
hdnLat.Value = Request.QueryString[​QueryStringParams​.Latitude];
hdnLong.Value = Request.QueryString[​QueryStringParams​.Longitude];

​if​ (!IsPostBack)
{
​var​ i = 1;
GetCurrentLoc(i);
}

client = ​new​ ​SimpleClient​();


​/* if (HasEnoughFunds())
{
// HIT h = new HIT();
//CreateHIT();
}*/
​//if (!string.IsNullOrEmpty(Session["ShowTable"] as string))
​// GetCurrentLoc();
            }
        }

​///​ ​<summary>
​///​ Client used to create single operations for MTurk
​///​ Check if there are enough funds in your account in order to
​///​ create the HIT on Mechanical Turk.
​///​ ​</summary>
​///​ ​<returns>​ True if there are sufficient funds. False if not. ​</returns>

​public​ ​bool​ HasEnoughFunds()


{
​return​ (client.GetAvailableAccountBalance() > 0);
}

        /// <summary>
        /// Gets the twenty nearby places for the current location.
        /// </summary>
        private void GetCurrentLoc(int KeyCounter)


{
​// Gets 20 nearby places.
​//apiKey = apiKeys[keyCount];

wc = ​new​ ​WebClient​();
jsonStr = wc.DownloadString(​Utilities​.CreatePlaceUrl(hdnLat.Value,
hdnLong.Value, 1000, ​string​.Empty, ​false​, apiKeys[KeyCounter]));
​GooglePlacesResponse​ gpr =
(​GooglePlacesResponse​)​JsonConvert​.DeserializeObject<​GooglePlacesResponse​>(jsonStr)
;
​if​ (gpr.status.Equals(​"OK"​, ​StringComparison​.OrdinalIgnoreCase ))
{
apiKey = apiKeys[KeyCounter];
​if​ (gpr.results.Count() > 0)
CreateTable(gpr);
}
​else
{
​if​ (!​string​.IsNullOrEmpty(gpr.error_message))
{
​if​ (KeyCounter <= apiKeys.Keys.Count())
{
KeyCounter = KeyCounter + 1;
apiKey = apiKeys[KeyCounter];
GetCurrentLoc(KeyCounter);

}
​else
{

((​Label​)​this​.Master.FindControl(​"lblErrorMessage"​)).Visible = ​true​;
((​Label​)​this​.Master.FindControl(​"lblErrorMessage"​)).Text =
gpr.error_message;
}
}
}
}

​private​ ​void​ CreateTable(​GooglePlacesResponse​ gpr)


{
​DataTable​ dt = ​new​ ​DataTable​();
dt.Columns.Add(​"Location"​, ​typeof​(​string​));
dt.Columns.Add(​"Distance"​, ​typeof​(​string​));
dt.Columns.Add(​"IconUrl"​, ​typeof​(​string​));
dt.Columns.Add(​"Types"​, ​typeof​(​string​));
dt.Columns.Add(​"Website"​, ​typeof​(​string​));
dt.Columns.Add(​"ImageUrl"​, ​typeof​(​string​));
dt.Columns.Add(​"LocationId"​, ​typeof​(​string​));
​string​ distanceJsonstr;

​foreach​ (​var​ result ​in​ gpr.results)


{
​string​ strLoc = ​string​.Empty;
​string​ strDistance = ​string​.Empty;
​string​ strIconUrl = ​string​.Empty;
​string​ strTypes = ​string​.Empty;
​string​ strWebsite = ​string​.Empty;
​string​ strImageUrl = ​string​.Empty;
​string​ strLocId = ​string​.Empty;
wc = ​new​ ​WebClient​();

jsonStr =
wc.DownloadString(​Utilities​.CreatePlaceDetailUrl(result.reference, ​false​,
apiKey));
​GooglePlaceDetailResult​ gpdr =
(​GooglePlaceDetailResult​)​JsonConvert​.DeserializeObject<​GooglePlaceDetailResult​>(js
onStr);

wc = ​new​ ​WebClient​();
distanceJsonstr =
wc.DownloadString(​Utilities​.CreateDistanceUrl(hdnLat.Value, hdnLong.Value,
result.geometry.location.lat, result.geometry.location.lng, ​false​));
​GooglePlaceDistance​ gpd =
(​GooglePlaceDistance​)​JsonConvert​.DeserializeObject<​GooglePlaceDistance​>(distanceJs
onstr);

​if​ (gpdr.status.Equals(​"OK"​, ​StringComparison​.OrdinalIgnoreCase))


{
strLoc = result.name;
strLocId = result.id;

​if​ (gpd != ​null​ && gpd.rows != ​null​ && gpd.rows.Count() > 0


&& gpd.rows[0].elements != ​null​ &&
gpd.rows[0].elements.Count() > 0)
strDistance = gpd.rows[0].elements[0].distance.text;

strIconUrl = result.icon;

strTypes = ​String​.Join(​", "​,


result.types.ToArray()).ToUpper();
                    // just get the first page for now
​if​ (gpdr.result != ​null​ &&
!​string​.IsNullOrEmpty(gpdr.result.website))
strWebsite = gpdr.result.website.ToLower();

​List​<​address_components​> addressComps =
gpdr.result.address_components;

​string​ cityName = ​string​.Empty;


​string​ stateName = ​string​.Empty;
​bool​ isLocality = ​false​;
​bool​ getLogo = ​false​;
​HtmlWeb​ web = ​new​ ​HtmlWeb​();
​HtmlDocument​ doc = ​null​;
​HtmlNode​ body = ​null​;
​string​ src = ​string​.Empty;

​//Check if the location is Locality


​if​ (strTypes.Contains(​"LOCALITY"​))
isLocality = ​true​;
​if​ (strTypes.Contains(​"RESTAURANT"​) ||
strTypes.Contains(​"MEAL_TAKEAWAY"​) || strTypes.Contains(​"FOOD"​))
getLogo = ​true​;

​foreach​ (​var​ comp ​in​ addressComps)


{

​if​ (comp.types.Count() > 0)


{
​// Get name of the city from Address Component
​if​ (comp.types[0].ToString().Equals(​"locality"​,
StringComparison​.OrdinalIgnoreCase))
cityName = comp.long_name;

​// Get name of the state from Address Component


​if
(comp.types[0].ToString().Equals(​"administrative_area_level_1"​,
StringComparison​.OrdinalIgnoreCase))
stateName = comp.long_name;
}
}

​// if GooglePlaceDetailResult has images


​if​ (gpdr.result.photos != ​null​ && gpdr.result.photos.Count >
0)
strImageUrl =
Utilities​.CreatePhotoUrl(gpdr.result.photos[0].photo_reference, ​false​, 200, 100,
apiKey);

​else
{
                        // if the location is a Locality (city), pass locality + state name to http://{}.jpg.to
                        if (isLocality)
                            strImageUrl = "http://" + result.name.Trim().Replace(" ", string.Empty) + stateName + ".jpg.to";

                        // Else, use the website name to get the image url from http://{}.jpg.to
​else​ ​if​ (!​string​.IsNullOrEmpty(strWebsite))
{

​string​ newUrl = strWebsite.Replace(​"http://"​,


""​).Replace(​"https://"​, ​""​).Replace(​"www."​, ​""​);
​int​ index = newUrl.IndexOf(​".com"​);
​if​ (index > 0)
newUrl = newUrl.Substring(0, index);
​if​ (newUrl.Trim().EndsWith(​@"/"​))
newUrl = newUrl.Remove(newUrl.Length - 1, 1) + ​""​;

​if​ (getLogo)
{
​HttpWebRequest​ request =
(​HttpWebRequest​)​HttpWebRequest​.Create(​"http://"​ + newUrl + ​"logo"​ + ​".jpg.to"​);
                            request.AllowAutoRedirect = false; // find out if this site is up and don't follow a redirect
request.Method = ​"HEAD"​;
​var​ response = request.GetResponse();
​var​ str = response.Headers[​"Location"​];
​if​ (str == ​null​ || (!​string​.IsNullOrEmpty(str) &&
!str.Trim().Equals(​"http://jpg.to/image404.php"​,
StringComparison​.OrdinalIgnoreCase)))
newUrl = newUrl + ​"logo"​;

}
                            strImageUrl = "http://" + newUrl + ".jpg.to";
                        }
                        else
{
                            strImageUrl = "http://" + result.name.Trim().Replace(" ", string.Empty) + cityName + ".jpg.to";
                        }
                    }

​try
{
​// To get the image src from .jpg.to link
doc = web.Load(strImageUrl);
​if​ (doc != ​null​)
body = doc.DocumentNode.SelectSingleNode(​"./img"​);

​if​ (body != ​null​)


{
src = body.Attributes[​"src"​].Value;
strImageUrl = src;
}
}
​catch
{
}

dt.Rows.Add( strLoc, strDistance, strIconUrl, strTypes,


strWebsite, strImageUrl, strLocId);
}
​else
{
​if​ (!​string​.IsNullOrEmpty(gpdr.error_message))
{

((​Label​)​this​.Master.FindControl(​"lblErrorMessage"​)).Visible = ​true​;
((​Label​)​this​.Master.FindControl(​"lblErrorMessage"​)).Text =
gpdr.error_message;
}
}
}

dt.DefaultView.Sort = ​"Distance Asc"​;


dt = dt.DefaultView.ToTable();
Session[​"LocationTable"​] = dt;
gvLocations.DataSource = dt;
gvLocations.DataBind();
HideColumns();
}

​protected​ ​void​ HideColumns()


{
gvLocations.Columns[3].Visible = ​true​;

gvLocations.Columns[4].Visible = ​true​;
gvLocations.Columns[5].Visible = t​ rue​;
gvLocations.Columns[6].Visible = f ​ alse​;
}

​protected​ ​void​ gvLocations_Sorting(​object​ sender, ​GridViewSortEventArgs​ e)


{
​DataTable​ dt = ((​DataTable​)Session[​"LocationTable"​]);
dt.DefaultView.Sort = e.SortExpression + ​" "​ +
SortDir(e.SortExpression);
gvLocations.DataSource = dt;
gvLocations.DataBind();
HideColumns();
}

​protected​ ​void​ gvLocations_OnRowCreated(​Object​ sender,


GridViewRowEventArgs​ e)
{
​if​ (e.Row.RowType == ​DataControlRowType​.DataRow)
{
e.Row.Attributes.Add(​"OnMouseOver"​, ​"this.style.cursor =
'hand';"​);

​var​ LinkLocationName =
(​LinkButton​)e.Row.FindControl(​"LocationName"​);
LinkLocationName.CommandArgument = e.Row.RowIndex.ToString();

​var​ LinkimgUrl = (​ImageButton​)e.Row.FindControl(​"imgUrl"​);


LinkimgUrl.CommandArgument = e.Row.RowIndex.ToString();

​var​ LinkIconUrl = (​ImageButton​)e.Row.FindControl(​"iconUrl"​);


LinkIconUrl.CommandArgument = e.Row.RowIndex.ToString();
            }
        }

        protected void gvLocations_OnDataBound(object sender, EventArgs e)
        {

​if​ (gvLocations.Rows.Count > 0)


{
gvLocations.UseAccessibleHeader = ​true​;

gvLocations.HeaderRow.TableSection = ​TableRowSection​.TableHeader;
                gvLocations.HeaderRow.CssClass = "ui-bar-d"; // table header row
gvLocations.Attributes.Add(​"data-role"​, ​"table"​);
​//gvLocations.CssClass = "ui-body-e ui-bar-e u-bar-hover-f
ui-shadow ui-responsive"; // table background (silver theme)

gvLocations.Attributes.Add(​"data-mode"​, ​"columntoggle"​);
// toggle icon column in too small view

​var​ headerCells = gvLocations.HeaderRow.Cells;


headerCells[1].CssClass = ​"ui-li-heading "​;
headerCells[2].CssClass = ​"ui-li-heading "​;
headerCells[3].CssClass = ​"ui-li-heading "​;

headerCells[4].CssClass = ​"ui-li-heading "​;


headerCells[5].CssClass = "​ ui-li-heading "​;

                headerCells[2].Attributes.Add("data-priority", "2"); // icon
                headerCells[3].Attributes.Add("data-priority", "3"); // types
                headerCells[4].Attributes.Add("data-priority", "4"); // website
                headerCells[5].Attributes.Add("data-priority", "5"); // distance

headerCells[1].Attributes.Add(​"data-class"​, ​"expand"​);

​/* Note: The responsive table feature is built with a core table
plugin (table.js)
* that initializes when the data-role="table" attribute is added
to the markup. This plugin is very lightweight
* and adds ui-table class,
* parses the table headers and generates information on the
columns of data, and fires a tablecreate event. Both the table modes
*/
}
}

​protected​ ​void​ gvLocations_RowCommand(​object​ sender,


GridViewCommandEventArgs​ e)
{
((​Label​)​this​.Master.FindControl(​"lblErrorMessage"​)).Text = "​ "​;
​HIT​ h = ​new​ ​HIT​();
​string​ locName = ​string​.Empty;
​if​ (e.CommandName == ​"SelectName"​ || e.CommandName == ​"SelectImage"​ ||
e.CommandName == ​"SelectIcon"​ )
{
​// Retrieve the row index stored in the
​// CommandArgument property.
​int​ index = ​Convert​.ToInt32(e.CommandArgument);

​// Retrieve the row that contains the button


​// from the Rows collection.
​GridViewRow​ row = gvLocations.Rows[index];

Session.Remove(​"HitId"​);

​SimpleClient​ client = ​new​ ​SimpleClient​(); ​// Client


​// HIT
                HTMLQuestion que = new HTMLQuestion(); // HTML Question
                string[] responseGroup = null;
​// Add more qualifications.
​List​<​QualificationRequirement​> qualList = ​new
List​<​QualificationRequirement​>();
​QualificationRequirement​ qualTypeId = ​new
QualificationRequirement​();
​QualificationRequirement​ qualApprovalRate = ​new
QualificationRequirement​();
​QualificationRequirement​ qualNumHits = ​new
QualificationRequirement​();

​// QualificationRequirement.

​//// Setup the qualification for approval rate.


​//qualApprovalRate.QualificationTypeId =
MTurkSystemQualificationTypes.ApprovalRateQualification;
​//qualApprovalRate.Comparator = Comparator.GreaterThan;
​//qualApprovalRate.IntegerValue = 0;
​//qualApprovalRate.IntegerValueSpecified = true;
​//qualList.Add(qualApprovalRate);

                //// setup the qualification for number of HITs approved

​//qualNumHits.Comparator = Comparator.GreaterThan;
​//qualNumHits.QualificationTypeId =
Constants.Worker_NumberHITsApproved;
​//qualNumHits.IntegerValue = 0;
​//qualNumHits.IntegerValueSpecified = true;
​//qualList.Add(qualNumHits);

​// register the HIT Type, so that it can be used in later calls to
CreateHIT
​string​ hitTypeId = client.RegisterHITType(​Constants​.Title,
​Constants​.Description, ​null​, ​Constants​.AssignmentDuration,
Constants​.Reward, ​Constants​.Keywords, qualList);

                Dictionary<string, string> layoutParams = new Dictionary<string, string>();

​ImageButton​ img = (​ImageButton​)row.Cells[0].FindControl(​"imgUrl"​);


​string​ ImageUrl = img.ImageUrl;
​if​ (!​string​.IsNullOrEmpty(ImageUrl))
ImageUrl = ImageUrl.Replace(​"&"​, ​"&amp;"​);

​ImageButton​ icon =
(​ImageButton​)row.Cells[2].FindControl(​"iconUrl"​);

​string​ iconUrl = icon.ImageUrl;


​if​ (!​string​.IsNullOrEmpty(iconUrl))
iconUrl = iconUrl.Replace(​"&"​, ​"&amp;"​);

​LinkButton​ loc =
(​LinkButton​)row.Cells[1].FindControl(​"LocationName"​);

locName = loc.Text;
​//Utilities.SpeakIt(locName);
​string​ locType = row.Cells[4].Text;
​string​ locId = row.Cells[6].Text;

​string​ websiteUrl = row.Cells[5].Text;


​if​ (!​string​.IsNullOrEmpty(websiteUrl))
websiteUrl = websiteUrl.Replace(​"&"​, ​"&amp;"​);

​//image_url, place_name, place_type, place_website


layoutParams.Add(​"image_url"​, ImageUrl);
layoutParams.Add(​"link"​, websiteUrl);

layoutParams.Add(​"name"​, locName);
​ /
/ layoutParams.Add("phrase_1", "test 1");
​// layoutParams.Add("phrase_2", "test 2");
​// layoutParams.Add("phrase_3", "test 3");
layoutParams.Add(​"type"​, locType);

Session.Remove(​"LocationDetail"​);
​Dictionary​<​string​, ​string​> dictLoc = ​new​ ​Dictionary​<​string​,
string​>();
dictLoc.Add(​"LocationName"​, locName);
dictLoc.Add(​"LocationId"​, locId);
dictLoc.Add(​"LocationType"​, locType);
Session[​"LocationDetail"​] = dictLoc;
​if​ (HasEnoughFunds())
h = client.CreateHIT(hitTypeId, ​Constants​.Title,
Constants​.Description,
​Constants​.Keywords, ​Constants​.LayoutId, layoutParams,
Constants​.Reward, ​Constants​.AssignmentDuration,
​null​, ​Constants​.LifeTime, ​Constants​.MaxAssignment,
Constants​.RequesterAnnotation, qualList, responseGroup);
​else
{
​// write error message here
}

Session.Remove(​"HitDetail"​);
​Dictionary​<​string​, ​string​> dictHit = ​new​ ​Dictionary​<​string​,
string​>();
dictHit.Add(​"HitId"​, h.HITId);
dictHit.Add(​"HitCreatedDateTime"​, ​DateTime​.Now.ToString());
dictHit.Add(​"HitStatus"​, h.HITStatus.ToString());
Session[​"HitDetail"​] = dictHit;
Session[​"HitId"​] = h.HITId;
}
​if​ (!​string​.IsNullOrEmpty(h.HITId) && !​string​.IsNullOrEmpty(locName))
​//Response.Redirect("Categories.aspx?" +
QueryStringParams.Latitude + "=" + hdnLat.Value + "&" +
QueryStringParams.Longitude + "=" + hdnLong.Value, true);
                Response.Redirect("ResponseList.aspx?" +
                    QueryStringParams.Latitude + "=" + hdnLat.Value + "&" +
                    QueryStringParams.Longitude + "=" + hdnLong.Value, true);
        }

​private​ ​string​ SortDir(​string​ sField)


{
​string​ sDir = ​"asc"​;
​string​ sPrevField = (ViewState[​"SortField"​] != ​null​ ?
ViewState[​"SortField"​].ToString() : ​""​);
​if​ (sPrevField == sField)
​ asc"​ ? ​"desc"​ :
sDir = (ViewState[​"SortDir"​].ToString() == "
"asc"​);
​else
ViewState[​"SortField"​] = sField;

ViewState[​"SortDir"​] = sDir;
​return​ sDir;
}

​private​ ​void​ btnBack_ServerClick(​object​ sender, ​EventArgs​ e)


{
Response.Redirect(​"StartApplication.aspx"​, t ​ rue​);
}

}
}

● StartApplication.aspx.cs

using​ System;
using​ System.Drawing;
using​ System.Collections.Generic;
using​ System.Linq;
using​ System.Web;
using​ System.Web.UI;
using​ System.Net;
using​ System.Net.Http;
using​ System.Web.UI.WebControls;
using​ Amazon.WebServices.MechanicalTurk;
using​ Amazon.WebServices.MechanicalTurk.Domain;
using​ HtmlAgilityPack;
using​ System.IO;
using​ System.Speech.Synthesis;

namespace​ SpeakAhead
{
​public​ ​partial​ ​class​ ​StartApplication​ : System.Web.UI.​Page
{
​string​ _lat;
​string​ _long;
​protected​ ​void​ Page_Load(​object​ sender, ​EventArgs​ e)
{
btnStart.Click += btnStart_Click; ​// Go to
btnStart_Click event on 'Start' Button click
lblTitle.BackColor = ​Color​.Transparent;
}
​// Button start gets lat-long value.
​void​ btnStart_Click(​object​ sender, ​ImageClickEventArgs​ e)
{
_lat = hdnLat.Value;
_long = hdnLong.Value;
​//Response.Redirect("TestPage.aspx", true);
​if​ (!​string​.IsNullOrEmpty(_lat) && !​string​.IsNullOrEmpty(_long))

Response.Redirect(​"Mturk.aspx"​ + ​"?"​ + ​QueryStringParams​.Latitude +


"="​ + _lat + ​"&"​ + ​QueryStringParams​.Longitude + ​"="​ + _long, ​true​);

}
}
}

● ResponseList.aspx.cs

using​ System;
using​ System.Collections.Generic;
using​ System.Linq;
using​ System.Web;
using​ System.Xml;
using​ System.Xml.Linq;
using​ System.Web.UI;
using​ System.Data;
using​ System.Globalization;
using​ System.Web.UI.WebControls;
using​ System.Web.UI.HtmlControls;
using​ Amazon.WebServices.MechanicalTurk;
using​ Amazon.WebServices.MechanicalTurk.Domain;

namespace​ SpeakAhead
{
​public​ ​partial​ ​class​ ​ResponseList​ : System.Web.UI.​Page
{
​SimpleClient​ client;
​private​ ​string​ _lat;
​private​ ​string​ _long;
​private​ ​string​ _queryStringLatLong;
​string​ HitId = ​string​.Empty;
​string​ HitCreatedDateTime = ​string​.Empty;
​string​ HitStatus = ​string​.Empty;
​string​ AssignmentStatus = ​string​.Empty;
​string​ LocationName = ​string​.Empty;
​string​ LocationId = ​string​.Empty;
​string​ LocationType = ​string​.Empty;
​string​ AssignmentId = ​string​.Empty;
​string​ AssignmentApprovedDate = ​string​.Empty;
​string​ AssignmentSubmittedDate = ​string​.Empty;

​protected​ ​void​ Page_Load(​object​ sender, ​EventArgs​ e)


{
​HtmlAnchor​ btnBack = (​HtmlAnchor​)​this​.Master.FindControl(​"btnBack"​);
btnBack.ServerClick += btnBack_ServerClick;
​if
((!​string​.IsNullOrEmpty(Request.QueryString[​QueryStringParams​.Latitude]))
&&
!​string​.IsNullOrEmpty(Request.QueryString[​QueryStringParams​.Longitude]))
{
_lat = Request.QueryString[​QueryStringParams​.Latitude];
_long = Request.QueryString[​QueryStringParams​.Longitude];

_queryStringLatLong = ​"?"​ + ​QueryStringParams​.Latitude + ​"="​ +


_lat + ​"&"​ + ​QueryStringParams​.Longitude + ​"="​ + _long;
}

​Dictionary​<​string​, ​string​> dictHit = (​Dictionary​<​string​,


string​>)Session[​"HitDetail"​];
            Dictionary<string, string> dictLocation = (Dictionary<string, string>)Session["LocationDetail"];

​if​ (dictHit != ​null​)


{
dictHit.TryGetValue(​"HitId"​, ​out​ HitId);
dictHit.TryGetValue(​"HitCreatedDateTime"​, ​out​ HitCreatedDateTime);
dictHit.TryGetValue(​"HitStatus"​, ​out​ HitStatus);
}

​if​ (dictLocation != ​null​)


{
dictLocation.TryGetValue(​"LocationName"​, ​out​ LocationName);
​ ut​ LocationId);
dictLocation.TryGetValue(​"LocationId"​, o
dictLocation.TryGetValue(​"LocationType"​, ​out​ LocationType);
}

​if​ (!IsPostBack)
{
GetResultsFromDb(LocationId, LocationName);
GetResult();
            }
        }

​protected​ ​void​ gvResults_OnDataBound(​object​ sender, ​EventArgs​ e)


{
​if​ (gvResults.Rows.Count > 0)
{
gvResults.UseAccessibleHeader = ​true​;

gvResults.HeaderRow.TableSection = ​TableRowSection​.TableHeader;
                gvResults.HeaderRow.CssClass = "ui-bar-d"; // table header row (blue theme)
gvResults.CssClass = ​"ui-shadow ui-responsive"​; ​// table
background (silver theme)
gvResults.Attributes.Add(​"data-role"​, ​"table"​);

gvResults.Attributes.Add(​"data-mode"​, ​"columntoggle"​);
// toggle icon column in too small view

​var​ headerCells = gvResults.HeaderRow.Cells;


headerCells[1].CssClass = ​"ui-li-heading "​;
​//headerCells[2].CssClass = "ui-li-heading ";
​//headerCells[2].Attributes.Add("data-priority", "2");
headerCells[1].Attributes.Add(​"data-class"​, ​"expand"​);
}
​else

                ((Label)this.Master.FindControl("lblErrorMessage")).Text =
                    "No suggestions found. The page will try to find suggestions in 3 minutes. Please wait...";
}

​protected​ ​void​ gvResults_OnRowCreated(​Object​ sender, ​GridViewRowEventArgs


e)
{

​if​ (e.Row.RowType == ​DataControlRowType​.DataRow)


{
​//var selectLinkButton = e.Row.FindControl("SelectRow") as
LinkButton;
​// e.Row.Cells[0].Style["display"] = "none";
e.Row.Attributes.Add(​"OnMouseOver"​, ​"this.style.cursor =
'hand';"​);

​var​ LinkLocationName = (​LinkButton​)e.Row.FindControl(​"Result"​);


LinkLocationName.CommandArgument = e.Row.RowIndex.ToString();
}
}

​private​ ​void​ GetResultsFromDb(​string​ LocationId, ​string​ LocationName)


{
            Dictionary<string, string> _results = new Dictionary<string, string>();
Session.Remove(​"ResultFromDb"​);

​string​ sql = ​" SELECT asgn.AssignmentId, ans.AnswerText From


tbl_Assignment asgn "
+ ​" JOIN tbl_Answer ans ON ans.AssignmentId =
asgn.AssignmentId "
+ ​" JOIN tbl_Hit hit ON hit.HitId = asgn.HitId "
+ ​" JOIN tbl_Location loc ON loc.LocationId =
hit.LocationId "
+ ​" Where loc.LocationId = '"​ + LocationId + ​"'"
                + " ORDER BY AnswerCounter DESC, ans.CreatedDateTime DESC "
+ ​" LIMIT 5"​;
​DataTable​ dt = ​Utilities​.SelectFromTable(sql);

​if​ (dt == n
​ ull​ || (dt != ​null​ && dt.Rows.Count == 0))
{
sql = "​ SELECT asgn.AssignmentId, ans.AnswerText From
tbl_Assignment asgn "
+ ​" JOIN tbl_Answer ans ON ans.AssignmentId =
asgn.AssignmentId "
+ ​" JOIN tbl_Hit hit ON hit.HitId = asgn.HitId "
+ ​" JOIN tbl_Location loc ON loc.LocationId =
hit.LocationId "
+ ​" Where loc.LocationId = "
+ ​" (SELECT LocationId FROM tbl_Location WHERE
LocationName = '"​ + LocationName + ​"'"​ + ​" ORDER BY LocationCounter DESC,
CreatedDateTime DESC LIMIT 1) "
+ ​" ORDER BY AnswerCounter DESC, ans.CreatedDateTime DESC
"
+ ​"LIMIT 5 "​;

dt = ​Utilities​.SelectFromTable(sql);
}

​if​ (dt != ​null​ && dt.Rows.Count > 0)


{
​foreach​ (​DataRow​ row ​in​ dt.Rows)
{
_results.Add(row[​"AssignmentId"​].ToString(),
row[​"AnswerText"​].ToString());
}
gvResults.DataSource = _results;
gvResults.DataBind();
gvResults.Columns[1].Visible = ​true​;
gvResults.Columns[2].Visible = ​false​;
}

Session[​"ResultFromDb"​] = _results;
}

​protected​ ​void​ GetResult()

{
​Dictionary​<​string​, ​string​> results = ​new​ ​Dictionary​<​string​, ​string​>();
​Dictionary​<​string​, ​string​> resultsFromTurk = TurkResults();
​Dictionary​<​string​, ​string​> resultsFromDb = (​Dictionary​<​string​,
string​>)Session[​"ResultFromDb"​];

​if​ (resultsFromDb != ​null​ && resultsFromDb.Keys.Count() > 0)


{
​foreach​ (​var​ resultFromDb ​in​ resultsFromDb)
results.Add(resultFromDb.Key, resultFromDb.Value);
}

​if​ (resultsFromTurk != ​null​ && resultsFromTurk.Keys.Count() > 0)


{
​foreach​ (​var​ resultFromTurk ​in​ resultsFromTurk)
​try
{
results.Add(resultFromTurk.Key, resultFromTurk.Value);
}
​catch
{

}
}

​if​ (results.Count() > 0)


{
gvResults.DataSource = results;
gvResults.DataBind();
gvResults.Columns[1].Visible = ​true​;
gvResults.Columns[2].Visible = ​false​;
gvResults.Columns[0].Visible = ​false​;
gvResults.Columns[3].Visible = ​false​;

}
​else

{
​DataTable​ dt = ​new​ ​DataTable​();
gvResults.DataSource = dt;
gvResults.DataBind();
}
}

​private​ ​Dictionary​<​string​, ​string​> TurkResults()


{
​IList​<​Assignment​> assignments = ​new​ ​List​<​Assignment​>();
client = ​new​ ​SimpleClient​();
​// Pass the created HITID here.
​if​ (Session[​"HitId"​] != ​null​)
assignments =
client.GetAllAssignmentsForHIT((​string​)Session[​"HitId"​]);

​Dictionary​<​string​, ​string​> _hitResults = ​new​ ​Dictionary​<​string​,


string​>();
((​Label​)​this​.Master.FindControl(​"lblErrorMessage"​)).Text = " ​ "​;
​if​ (assignments != ​null​ && assignments.Count > 0)
{
​foreach​ (​var​ assignment ​in​ assignments)
{
​XmlDocument​ answer = ​new​ ​XmlDocument​();
                    // assignment.Answer is XML in string format.
answer.LoadXml(assignment.Answer);
// Register the MTurk answer namespace so the XPath query below resolves.
XmlNamespaceManager ns = new XmlNamespaceManager(answer.NameTable);
ns.AddNamespace("mturk", "http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionFormAnswers.xsd");
XmlNode xnode = answer.SelectSingleNode("//mturk:FreeText", ns);
​if​ (xnode != ​null​ && !​string​.IsNullOrEmpty(xnode.InnerText))
_hitResults.Add(assignment.AssignmentId, xnode.InnerText);
}
}
//else
//    ((Label)this.Master.FindControl("lblErrorMessage")).Text = "No response has been received yet.";

​return​ _hitResults;
}
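The XPath query `//mturk:FreeText` in TurkResults assumes each assignment's Answer string is a QuestionFormAnswers document in the namespace registered above. A representative payload might look like the following (the question identifier and answer text are illustrative, not taken from the project):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<QuestionFormAnswers xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionFormAnswers.xsd">
  <Answer>
    <QuestionIdentifier>speechSuggestion</QuestionIdentifier>
    <FreeText>Could I see the dessert menu, please?</FreeText>
  </Answer>
</QuestionFormAnswers>
```

Because the query selects a single FreeText node, only the first answer field of each assignment is stored in _hitResults.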

protected void gvResults_RowCommand(object sender, GridViewCommandEventArgs e)
{
​try
{
​if​ (e.CommandName == ​"Select"​)
{
​// Retrieve the row index stored in the
​// CommandArgument property.
​int​ index = ​Convert​.ToInt32(e.CommandArgument);
​// Retrieve the row that contains the button
​// from the Rows collection.
GridViewRow row = gvResults.Rows[index];

LinkButton result = (LinkButton)row.Cells[1].FindControl("Result");

​//Utilities.SpeakIt(result.Text);

​string​[] responseGroup = ​new​ ​string​[] { };

client = ​new​ ​SimpleClient​();


​ FormatProvider​ enUsDateFormat = ​new
I
CultureInfo​(​"en-US"​).DateTimeFormat;

​GetAssignmentResult​ asgn =
client.GetAssignment(row.Cells[3].Text, responseGroup);
​if​ (asgn != ​null​)
{
AssignmentId = asgn.Assignment.AssignmentId;
AssignmentApprovedDate =
asgn.Assignment.ApprovalTime.ToString(​"yyyy-MM-dd HH:mm:ss"​);
AssignmentSubmittedDate =
asgn.Assignment.SubmitTime.ToString(​"yyyy-MM-dd HH:mm:ss"​);
AssignmentStatus =
asgn.Assignment.AssignmentStatus.ToString();
}

​if​ (​string​.IsNullOrEmpty(HitId) ||
string​.IsNullOrEmpty(AssignmentId) || ​string​.IsNullOrEmpty(LocationId))
​return​;
​else
{
​bool​ valuesInserted = ​false​;
string sql = "INSERT OR REPLACE INTO tbl_Location (LocationId, LocationName, LocationType, LocationCounter) "
+ " SELECT X.LocationId, X.LocationName, X.LocationType, X.LocationCounter + COALESCE(L.LocationCounter, 0) FROM "
+ " (SELECT "
+ "'" + LocationId + "' AS LocationId, '" + LocationName + "' AS LocationName, '" + LocationType + "' AS LocationType, " + "1 AS LocationCounter" + ") X "
+ " LEFT JOIN tbl_Location L ON L.LocationId = X.LocationId ";

valuesInserted = ​Utilities​.InsertIntoTable(sql);

sql = " INSERT INTO tbl_Hit "
+ " (HitId, CreatedDate, LocationId) "
+ " VALUES "
+ "( '" + HitId + "', '" + HitCreatedDateTime + "', '" + LocationId + "') ";

​if​ (valuesInserted)
valuesInserted = ​Utilities​.InsertIntoTable(sql);

sql = "INSERT OR REPLACE INTO tbl_Assignment (AssignmentId, ApprovedDate, SubmittedDate, HitId) "
+ " SELECT X.AssignmentId, X.ApprovedDate, X.SubmittedDate, X.HitId FROM "
+ " (SELECT "
+ "'" + AssignmentId + "' AS AssignmentId, '" + AssignmentApprovedDate + "' AS ApprovedDate, '" + AssignmentSubmittedDate + "' AS SubmittedDate, '" + HitId + "' AS HitId) X "
+ " LEFT JOIN tbl_Assignment A ON A.AssignmentId = X.AssignmentId ";

​if​ (valuesInserted)
valuesInserted = ​Utilities​.InsertIntoTable(sql);

sql = "INSERT OR REPLACE INTO tbl_Answer (AnswerText, AssignmentId, AnswerCounter) "
+ " SELECT X.AnswerText, X.AssignmentId, X.AssignmentCounter + COALESCE(A.AssignmentCounter, 0) FROM "
+ " (SELECT '"
+ result.Text + "' AS AnswerText, " + "'" + AssignmentId + "' AS AssignmentId, " + "1 AS AssignmentCounter" + ") X "
+ " LEFT JOIN tbl_Assignment A ON A.AssignmentId = X.AssignmentId ";

​if​ (valuesInserted)
valuesInserted = ​Utilities​.InsertIntoTable(sql);
}
}
}
catch (Exception ex)
{
((Label)this.Master.FindControl("lblErrorMessage")).Text = "Error occurred. Please try again.";
}
}

​private​ ​void​ btnBack_ServerClick(​object​ sender, ​EventArgs​ e)


{
Response.Redirect(​"Categories.aspx"​ + _queryStringLatLong, ​true​);
}
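The three INSERT OR REPLACE … LEFT JOIN statements in gvResults_RowCommand share one pattern: insert a row if its key is new, otherwise replace it with an incremented counter. A minimal sketch of that pattern against an in-memory SQLite database (table and column names mirror tbl_Location from the listing; the values are placeholders, not project data):

```python
import sqlite3

# Sketch of the counter-accumulating upsert used in the listing.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tbl_Location ("
    " LocationId TEXT PRIMARY KEY,"
    " LocationName TEXT,"
    " LocationType TEXT,"
    " LocationCounter INTEGER)"
)

UPSERT = """
INSERT OR REPLACE INTO tbl_Location
    (LocationId, LocationName, LocationType, LocationCounter)
SELECT X.LocationId, X.LocationName, X.LocationType,
       X.LocationCounter + COALESCE(L.LocationCounter, 0)
FROM (SELECT ? AS LocationId, ? AS LocationName,
             ? AS LocationType, 1 AS LocationCounter) X
LEFT JOIN tbl_Location L ON L.LocationId = X.LocationId
"""

# Recording the same location twice bumps its counter instead of
# creating a duplicate row.
for _ in range(2):
    conn.execute(UPSERT, ("loc-1", "Corner Cafe", "restaurant"))
conn.commit()

counter = conn.execute(
    "SELECT LocationCounter FROM tbl_Location WHERE LocationId = ?",
    ("loc-1",),
).fetchone()[0]
print(counter)  # 2: one row, counter accumulated across both inserts
```

Note that the sketch uses parameterized `?` placeholders; the listing concatenates values directly into the SQL string, which fails (and is open to SQL injection) whenever a value such as a place name contains a single quote.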

● Web.Config

<?​xml​ ​version​=​"​1.0​"​ ​encoding​=​"​utf-8​"​?>


<!--
For more information on how to configure your ASP.NET application, please visit
http://go.microsoft.com/fwlink/?LinkId=169433
​-->
<​configuration​>
<configSections>
<!-- For more information on Entity Framework configuration, visit http://go.microsoft.com/fwlink/?LinkID=237468 -->
<section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
</configSections>
<​system.web​>
<​compilation​ ​debug​=​"​true​"​ ​targetFramework​=​"​4.5​"​ />
<​httpRuntime​ ​targetFramework​=​"​4.5​"​ />
</​system.web​>
<​appSettings​>
<!--
You can find your access keys by going to aws.amazon.com,
hovering your mouse over "Your Web Services Account" in the top-right
corner and selecting View Access Key Identifiers. Be sure to
log-in with the same username and password you registered with your
Mechanical Turk Requester account.

If you don't yet have a Mechanical Turk Requester account, you
can create one by visiting http://requester.mturk.com/
-->
<!--
<add key="MechanicalTurk.ServiceEndpoint"
value="https://mechanicalturk.amazonaws.com?Service=AWSMechanicalTurkRequester"/>
<add key="MechanicalTurk.AccessKeyId" value="AKIAIEPAFME6QD5OPLNQ"/>
<add key="MechanicalTurk.SecretAccessKey"
value="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"/>
​-->
<add key="MechanicalTurk.ServiceEndpoint" value="https://mechanicalturk.sandbox.amazonaws.com?Service=AWSMechanicalTurkRequester" />
<add key="MechanicalTurk.AccessKeyId" value="AKIAJ6YJ3KGQWMYOXADA" />
<add key="MechanicalTurk.SecretAccessKey" value="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" />

<!-- Keys used for running the unit tests -->
<add key="MechanicalTurk.Test.WorkerID" value="[Worker ID here]" />
<!--​ By default the unit tests will refuse to talk to a non-sandbox endpoint.
If you *really* want to run the unit tests with real HITs and real money then you
can set this value to 'Use real money'. You probably don't want this. ​-->
<​add​ ​key​=​"​MechanicalTurk.Test.UseNonSandboxEndpoint​"​ ​value​=​"​No​"​ />
<!--​ Setting this value to 'yes' will do a client-side validation of your
QuestionXML. Client-side validation can be helpful when authoring your
QuestionXML but is unnecessary when running your application in production.
This should be set to 'yes' if you run the SDK tests. ​-->
<​add​ ​key​=​"​MechanicalTurk.MiscOptions.EnsureQuestionValidity​"​ ​value​=​"​no​"​ />
</​appSettings​>
<​system.data​>
<​DbProviderFactories​>
<​remove​ ​invariant​=​"​System.Data.SQLite​"​ />
<​add​ ​name​=​"​SQLite Data Provider​"​ ​invariant​=​"​System.Data.SQLite​"
description​=​"​.Net Framework Data Provider for SQLite​"
type​=​"​System.Data.SQLite.SQLiteFactory, System.Data.SQLite​"​ />
<​remove​ ​invariant​=​"​System.Data.SQLite.EF6​"​ />
<add name="SQLite Data Provider (Entity Framework 6)" invariant="System.Data.SQLite.EF6" description=".Net Framework Data Provider for SQLite (Entity Framework 6)" type="System.Data.SQLite.EF6.SQLiteProviderFactory, System.Data.SQLite.EF6" />
</​DbProviderFactories​>
</​system.data​>
<​entityFramework​>
<​defaultConnectionFactory
type​=​"​System.Data.Entity.Infrastructure.LocalDbConnectionFactory,
EntityFramework​"​>
<​parameters​>
<​parameter​ ​value​=​"​v11.0​"​ />
</​parameters​>
</​defaultConnectionFactory​>
<​providers​>
<​provider​ ​invariantName​=​"​System.Data.SqlClient​"
type​=​"​System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer​"
/>
<​provider​ ​invariantName​=​"​System.Data.SQLite.EF6​"
type​=​"​System.Data.SQLite.EF6.SQLiteProviderServices, System.Data.SQLite.EF6​"​ />
</​providers​>
</​entityFramework​>

</​configuration​>
