APPROVAL SHEET
Marie desJardins
Professor
Department of Computer Science and
Electrical Engineering
University of Maryland Baltimore County
ABSTRACT
Traditional communication aid technologies allow users to retrieve words and phrases
using built-in vocabulary lists. Although some pre-set vocabulary lists allow users to add
conversation topics, they require the user to manually input and pre-program the speech
options in advance. This especially raises concern for people with aphasia, who face
difficulty with vocabulary access and speech output, as it entails increased dependence on
others. Despite previous research indicating context awareness as an important factor for
improving the usability of traditional vocabulary lists, less consideration has been given to
generating new vocabulary on demand. Using Amazon's Mechanical Turk (MTurk)
crowdsourcing platform and the Google Places API, this project presents a
context-aware speech recommendation tool that generates speech suggestions for people
with aphasia by soliciting human contributions directly into the vocabulary lists.
A Crowdsourced Speech
Recommendation System for People with Aphasia
by
Ankita
Table of Contents
Dedication
Table of Contents
List of Tables
List of Figures
List of Abbreviations and Acronyms
CHAPTER 1: INTRODUCTION
4.1 Discussion
4.2 Conclusion
4.3 Future Work
References
Appendices
List of Tables
Table 1: Differences between Static and Dynamic Types of AAC devices
List of Figures
Figure 2: A high-tech AAC like the iPad application Proloquo2Go with dynamic display gives the user access to multiple pages of vocabulary in order to form sentences
Figure 12: Home Page of the context-aware speech recommendation tool
Figure 13: Location Page of the context-aware speech recommendation tool
Figure 14: Location Image showing Place Icons using column toggle button
Figure 15: Column toggle button showing layout with different columns
Chapter 1
INTRODUCTION
This chapter begins by defining aphasia and its impact on everyday life, and explains why it
is important to understand the role and challenges of providing technological support in this
project work. Aphasia is "an impairment of
language, affecting the production and comprehension of speech and the ability to read
and write." While aphasia is often seen in older people and is commonly caused by
stroke, it can occur in people of all ages, races, nationalities and genders.
More than 200,000 Americans are diagnosed with the disorder every year.
Two forms of aphasia that most directly stand to benefit from the work done in
my project are (1) anomic aphasia and (2) Broca's aphasia. In both conditions,
individuals can generally read efficiently but require assistance with
vocabulary access and speech output. Anomic or nominal aphasia affects a person’s
ability to produce words in context of speech and causes frustration. Those with anomic
aphasia can usually comprehend speech and written language but cannot produce clear,
coherent thoughts. Broca's aphasia causes speech-production difficulties, and those with
Broca's aphasia often understand speech but have significant difficulty with writing. The
Stroke Association (2013) asserts that there is no single solution to make information
accessible to people with aphasia. Because aphasia affects everyone
differently, it is important to identify how and where technology can fill the gap to
meet communication needs. Dawe (2006) asserts that even with many forms of
assistive technologies that are available to people with aphasia, 35% of devices are
abandoned shortly after their introduction as communication aids. Tentori and Hayes
(2010) suggest many reasons for lack of use of communication aids such as high cost of
purchase, lack of portability and issues with usability. This project addresses one such usability issue.
Vocabulary lists in traditional communication aid technologies allow the user to retrieve
words and phrases using a pre-stored vocabulary catalog. These pre-stored catalogs
contain words and phrases that come built in; e.g., a 'Food' category may always contain
fixed words like 'coffee', 'tea', 'burger' and 'sandwich'. While customizable vocabulary
lists that allow users to add words or phrases to the existing lists exist, they impose an
increased burden on users. This especially raises concern for people with aphasia who
face difficulty in vocabulary access and speech output as it entails dependence on others
and increases user effort to recall words and phrases and construct sentences. Moreover,
pre-programmed conversational topics do not accommodate users' spontaneous need to
request new sets of phrases anywhere and anytime. One possible solution is the use of an
on-demand human workforce.
Since previous research by Kane, Linam, Althoff and McCall (2012), and by Epp,
Campigotto, Levy and Baecker (2011) used contextual information only to retrieve
previously-added phrases and words, this project advances by introducing a novel method
that identifies the user's location-related contextual information such as name, image, icon,
and website. Human contributions by crowdworkers are integrated directly into the user
interface to generate and display new location-relevant words and phrases to help people
with aphasia communicate.
Chapter 2
RELATED WORK
This chapter reviews prior research on providing communication support for people with
aphasia or other related disorders. The chapter introduces Augmentative and Alternative
Communication (AAC) systems, discusses previous research on adaptive AAC systems,
and describes how location awareness and crowdsourcing techniques can improve the
resources available to AAC users.
People with aphasia rely on AAC because they struggle with language difficulties
ranging from word retrieval and sentence formation to vocal articulation, and often cannot
communicate effectively without support. AAC encompasses anything that makes
communication quicker and easier, and can include communication books and
speech-generating devices that are used to transmit and receive messages
(Supporting People who use AAC Strategies: In the Home, School, and Community,
2008). People with aphasia often store and organize a huge list of vocabulary words and
maps to aid their communication (Accessing AAC through Medicaid and EPSDT).
According to Visvader (2013), low-tech AAC aids that do not need batteries, electricity
or electronics can include books or vocabulary banks through which the person can
communicate a thought or idea (see Figure 1). On the other hand, high-tech AAC aids
that utilize power through batteries or electricity have the ability to store and produce
electronic messages, allowing the user to communicate using speech output.
The following table (see Table 1) distinguishes the types of AAC devices based on language display:
Table 1: Differences between Static and Dynamic Types of AAC devices

Static                                   | Dynamic
Low-tech                                 | High-tech
Limited customizable word bank           | Memory-based customizable word bank
Functional-based                         | Language generation-based
Devices include: Picture Exchange        | Devices include: DynaVox Maestro, a Prentke
Communication System (PECS), Big Mack,   | Romich Accent, a Saltillo Nova-Chat or an
Rocking Plate Talker and Cheaptalk       | iPad (with an appropriate AAC app)
Traditional high-tech electronic aids use a static display to display a limited number of
pre-assigned words and letters. On the other hand, dynamic display aids use a
memory-based word bank that allows the user to electronically access pages of
vocabulary and provide for custom display options to be stored by the user. Dynamic
displays are also considered better than static displays for individuals who have difficulty
with fixed vocabulary layouts.
Figure 1: A low-tech AAC like the Picture Exchange Communication System (PECS)
provides the user access to a limited number of printed words to create simple requests.
(http://www.iocresco.it/images/stories/Articoli/PECS/pecs.jpg)
Figure 2: A high-tech AAC like the iPad application Proloquo2Go with dynamic display
gives the user access to multiple pages of vocabulary in order to form sentences.
(http://www.assistiveware.com/sites/default/files/ipad-mini-proloquo2go-3-core-home-6x
6-landscape-small.png)
Technology can play an important role in supporting communication (Stroke Association, 2013). Software used for language formation helps
re-engage individuals with aphasia in social and community activities and make them feel
more confident about their overall communication skills (Golashesky, 2008). Recent
research on AAC devices and field studies collaborating with people with aphasia or
other related language disorders have helped to explore many avenues in which the
usability of AAC systems can be improved further. For example, Allen, McGrenere and
Purves (2007) introduced PhotoTalk, a mobile device application that allows people with
aphasia to capture and manage digital photographs to support face-to-face communication.
Other applications provide a customized communication aid to people with aphasia through
the use of customizable icons, images, photos and audio clips (Willkomm, 2011).
Applications designed to assist
individuals with aphasia are moving from image-based support to context-based support.
Two applications that are currently available or in development include (1) MyVoice and
(2) MarcoPolo, an adaptation for people with aphasia that suggests conversation topics
(Epp, Campigotto, Levy & Baecker, 2011). Going a step further, TalkAbout would also provide users
with a word list that is adapted to their current location as well as suggest the topics of
conversation previously discussed with the same conversation partner (Kane, Jayant,
Wobbrock, & Ladner, 2012). Inspired by these evolving applications and resources, my
current research project designed an application that provides users with spontaneously
generated, location-relevant phrases.
While the use of location awareness is not new, the use of location awareness in AAC
systems is a relatively new area in Human Computer Interaction, which focuses on using
mobile technologies to anticipate the needs of the user and suggest relevant conversation
topics. Kane et al. (2012) pointed out that “growing ubiquity of mobile devices and
applications has resulted in the introduction and widespread adoption of AAC software in
mainstream mobile devices, tablets and PCs.” Recent work on context awareness in AAC
devices (Kane et al., 2012; Epp et al., 2011; Wisenburn & Higginbotham, 2008) has shown how
people with aphasia can benefit from context awareness in AAC devices. These
approaches were better than traditional AAC approaches in a variety of domains: (1)
These methods provide support for sorting through words and symbols within a given
context; and (2) these methods track the user's geographic context through GPS sensors
and use that context to suggest relevant conversation topics. Those with aphasia are most
excited about applications that provide context-based conversation prompts, because these
applications narrow down the scope of vocabulary needed in the current situation.
Wisenburn and Higginbotham (2008) introduced a new AAC device for those with
speech disorders, VIVOCA. VIVOCA uses automatic speech recognition (ASR)
techniques, which utilize statistical models based on hidden Markov models (HMMs),
in order to generate coherent speech from disordered attempts (Wisenburn &
Higginbotham, 2008). The software MarcoPolo by Epp et al. (2011) provides a tailored
vocabulary list that can be arranged by the user.
Since the approaches above suggest conversation topics that are pre-programmed by
users in the word lists, my project focuses on designing an application that provides
AAC users with spontaneously generated new conversational phrases, and thus
reduces the burden placed on people with aphasia when using traditional vocabulary lists.
2.4 Crowdsourcing
Crowdsourcing solicits labor from an online community: small sub-tasks that can be
easily solved by human workers are outsourced to internet workers, and responses are
available on-demand from one of the many people on a given platform. This is an
especially important resource when the required responses are specialized and cannot
be produced automatically.
Crowdsourcing applications can be used for a number of different needs and purposes.
Luis von Ahn introduced crowdsourcing in a game format: his "Games with a Purpose"
rely on human players to complete a task that would be extremely complex to solve with
nonhuman intelligence (von Ahn & Dabbish, 2004). Google applied a similar model of
crowdsourcing to refine search results for online images. Soylent, a crowdsourcing
interface for word processing, uses a find-fix-verify pattern: an initial worker locates
errors in a document, a second worker edits the errors that were identified, and a third
worker verifies the edits (Bernstein et al., 2010).

A central challenge in crowdsourcing is ensuring the reliability and quality of the
responses. ESP games used an agreement technique to validate answers, while other
approaches analyze delay time, accuracy and other features of the responses in order to
predict quality.
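As a toy illustration of the agreement idea (the function, its normalization, and the vote threshold are illustrative, not taken from the systems cited above):

```python
from collections import Counter

def agreement_filter(answers, min_votes=2):
    """Keep only answers that at least `min_votes` independent workers
    agreed on (after trivial normalization) -- a toy version of the
    ESP-style agreement check described above."""
    counts = Counter(a.strip().lower() for a in answers)
    return [text for text, votes in counts.items() if votes >= min_votes]

votes = ["Coffee", "coffee ", "tea", "COFFEE", "sandwich"]
validated = agreement_filter(votes)
```

In practice an AAC system would apply such a filter before showing crowd responses to the user, trading some recall for higher-confidence phrases.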
While crowdsourcing platforms can be used for a variety of purposes like image labeling,
writing, surveys and categorization, crowdsourcing technology has more recently been
applied to accessibility. One crowdsourcing application that improves accessibility for its
users is VizWiz, which provides blind users with near real-time answers to their specific
image-based questions. The quikTurkit algorithm employed by VizWiz notifies workers
in advance that a user is about to ask a question, decreasing the latency between when the
question is posed and when an answer is received. Similarly, this project posts
location-related contextual information like place names, images, website links, icons and
types on Amazon Mechanical Turk in order to create tasks that require crowd-workers to
suggest specific words and phrases likely to be used in the given location. The responses
generated from this approach are then used to build vocabulary lists on our SpeakAhead
interface.
Amazon's Mechanical Turk (MTurk) and other crowdsourcing markets source online
human workers to complete tasks that computers cannot reliably finish accurately. MTurk
allows a Requester, an individual or organization, to submit a task that human workers
need to complete. The Requester uses the website to post a task and to check its status.
Workers complete and submit the Human Intelligence Task (HIT) within the designated
amount of time and are rewarded for their submissions (Amazon Web Services 2014,
Requester UI Guide (API)).
CHAPTER 3
This chapter describes the design of our crowdsourced speech recommendation system,
SpeakAhead, and its system components. This section also describes the data flow
diagrams and the major design of the database and algorithms. Lastly, we discuss the
technical and functional specifications of the prototype.
3.1 SpeakAhead
SpeakAhead is a web-based application that helps people with aphasia to generate and
manage common conversational words and phrases to support their daily needs for
communication. SpeakAhead proceeds in three steps: it identifies the user's current
location, creates Human Intelligence Tasks (HITs) asking crowd workers to suggest
likely phrases for that location, and combines the workers' responses with previously
stored phrases into a vocabulary list. The system consists of four components:
(1) a local database that holds a user-specific history of locations previously visited and
phrases previously selected;
(2) a location identification service that identifies the current location of the user and
retrieves its contextual attributes;
(3) a task module that publishes HITs on MTurk for the selected location; and
(4) a vocabulary-integration module that integrates and sorts words and phrases derived
from the database with responses generated by web workers to produce speech
recommendations.
SpeakAhead retrieves the user's location-related contextual information from the
Google Places API (name, image, type, website) and uses this information to ask
crowd-workers to suggest common conversational phrases for that specific location.
For instance, Adam is a person with aphasia who has trouble recalling words to use in
conversation. If Adam is present inside Subway Restaurant and wants to communicate
with others, he first navigates to the website on his personal device (e.g. mobile phone,
tablet, laptop) to start the application. The Location Module of the application then traces
his nearby locations and services. SpeakAhead suggests nearby places based on their
proximity and Adam sees location prompts (Image, Name, Icon) for Subway restaurant
on top of his list of suggested places.
Once Adam taps on Subway to select it as his current location, SpeakAhead’s Task
Module submits a question to crowd-workers to recommend likely phrases usable at that
location (e.g., ordering a specific food item or asking for a refill). Once this question is
successfully posted on the Mechanical Turk website, Adam is redirected to a waiting
screen prompting him to wait while responses are generated. In the meantime, the
Vocabulary Integration Module of SpeakAhead collects related phrases previously
answered for that location from the database and displays them to Adam. As the new
location-specific cues arrive, they are appended to the response list to further improve
speech recommendations. As new phrases are added to Adam's response list, he
clicks on “Can I get a refill for my drink?” from the list of phrases to make it audible to
his conversation partner.
This section describes the components of SpeakAhead; the table-structure subsection describes the database schema and implementation details.
3.2.1 Design
a) Location Module – This module uses different APIs to gather information about the
location.
b) Task Module – This module creates tasks for the crowd workers for a given location.
c) Vocabulary Module – This module interacts with the database and stores new
responses from crowd workers.
a) Location Module

In this module, the HTML5 Geolocation API (see Appendix 7: API Glossary;
http://www.w3schools.com/html/html5_geolocation.asp) is used to track the longitude
and latitude of the user's current location. The coordinates of the user are then used to
create an HTTP request to the Google Places API
(https://developers.google.com/places/documentation/) to search for 20 nearby places.
The place name, image, icon, website and type are requested for these locations.
Image URLs are requested from Google Places; if Google Places does not have an
image, jpg.to (http://jpg.to/) is used to search for an image by passing the name of the
location.

The Location Module uses the Distance Matrix API
(https://developers.google.com/maps/documentation/distancematrix/) to sort the list of
locations displayed to the user. The Distance Matrix service computes the distance
between the current location of the user and the 20 nearby locations. The list of
locations is then sorted in ascending order of distance so that the nearest locations are
listed first in the interface. Once the user selects an appropriate location, the details are
passed to the Task Module for further processing. The following flow-chart (see figure 12)
displays the sequence of steps performed by the Location Module:
The Location Module takes the coordinates of the current location as input and traces the
locations nearest to the user. The Location Module then retrieves the contextual attributes
of these places from the Google Places API, viz. name, type, image, and icon. The Module
then displays these locations to the user, showcasing the nearest location first. Upon a user
click, the Location Module sends the name, image and type of the place to the Task Module.
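The Location Module's two main jobs, requesting nearby places and ordering them by distance, can be sketched as follows. This is an illustrative sketch only: the request URL follows Google's public Nearby Search documentation (the API key is a placeholder), plain haversine distance stands in for the Distance Matrix service, and the place names and coordinates are hypothetical.

```python
import math
from urllib.parse import urlencode

def nearby_search_url(lat, lng, api_key, radius_m=500):
    """Build a Google Places Nearby Search request for the given coordinates."""
    base = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
    return base + "?" + urlencode(
        {"location": "%f,%f" % (lat, lng), "radius": radius_m, "key": api_key}
    )

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def sort_by_proximity(user, places):
    """Return places sorted nearest-first relative to the user's coordinates."""
    return sorted(
        places, key=lambda p: haversine_km(user[0], user[1], p["lat"], p["lng"])
    )

user = (39.2556, -76.7110)
url = nearby_search_url(user[0], user[1], "PLACEHOLDER_KEY")
places = [  # stand-in for the parsed Places response
    {"name": "Cafe", "lat": 39.2600, "lng": -76.7200},
    {"name": "Subway", "lat": 39.2557, "lng": -76.7112},
    {"name": "Library", "lat": 39.2564, "lng": -76.7105},
]
nearest_first = sort_by_proximity(user, places)
```

Sorting ascending by distance reproduces the nearest-first ordering shown on the location page.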
b) Task Module

In this module, the attributes of the location selected by the user (place id, name, type,
website, image and icon) are supplied as HITLayoutParameters to the 'CreateHIT'
method available from the MTurk SDK (see
http://docs.aws.amazon.com/AWSMechTurk/latest/AWSMturkAPI/ApiReference_HITLayoutParameterArticle.html)
to create a new HIT. The Task Module uses a pre-designed HIT layout managed on the
requester's MTurk website account. MTurk provides an interface to design a HIT layout
on the requester's website, and the layout can be accessed through methods provided by
the MTurk SDK using its unique HITLayoutId.

The 'CreateHIT' method first assigns the location attribute values (location name,
website, and type of place) to the designated placeholders positioned inside the selected
HIT layout and then publishes the HIT on the Mechanical Turk website.
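As a rough sketch of this step using the modern boto3 MTurk client (the thesis used the earlier MTurk SDK; the layout ID, parameter names, title, and reward values below are placeholders, not the project's actual configuration):

```python
# Sketch of HIT creation against a pre-designed MTurk layout.
# The helper maps a place's attributes to HITLayoutParameter entries;
# the place attributes and layout field names here are hypothetical.
def build_layout_params(place):
    """Convert place attributes into MTurk HITLayoutParameter entries."""
    return [{"Name": key, "Value": str(value)} for key, value in place.items()]

place = {
    "LocationName": "Subway",
    "LocationType": "restaurant",
    "Website": "http://www.subway.com",
}
params = build_layout_params(place)

# The actual publish call would look roughly like this with boto3:
# client = boto3.client("mturk")
# client.create_hit(
#     HITLayoutId="PLACEHOLDER_LAYOUT_ID",
#     HITLayoutParameters=params,
#     Title="Suggest phrases a visitor might say at this place",
#     Description="Write short conversational phrases for the given location.",
#     Reward="0.05",
#     MaxAssignments=5,
#     LifetimeInSeconds=600,
#     AssignmentDurationInSeconds=300,
# )
```

The layout placeholders referenced by HITLayoutParameters are substituted into the HIT form that workers see.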
These HITs are then used to ask workers to write words or phrases for a specific location.
While SpeakAhead waits for the responses from MTurk, the Task Module calls the
Vocabulary Integration Module to retrieve newly submitted responses. Once the
Vocabulary Integration Module returns the list of responses, the incoming responses
from MTurk are appended to the response list and displayed to the user. The response
list consists of a textual list of location-relevant phrases and words. The user can select
a specific response from the list; the worker who wrote that response is then approved
for payment on MTurk, and the selected response is saved in the database for future use.
The following flow-chart (see figure 13) displays the sequence of steps performed by the
Task Module and Vocabulary Module of SpeakAhead:
The Task Module (1. creating the response-writing task; 2. searching for responses in the
database) takes place-related information as input from the Location Module and creates
human intelligence tasks (HITs) on MTurk to gather responses submitted by turkers.
The Vocabulary Module takes location attributes from the Task Module as input and
searches for location-relevant responses in the database. The Vocabulary Module
(3. integrating pre-stored and new responses) then integrates existing responses retrieved
from the database with responses newly submitted by workers on MTurk.
c) Vocabulary Module
The Vocabulary Module queries the database engine to search for responses for a specific
location. The Vocabulary Module first queries the database using the unique location
identifier (LocationId). Each location has a unique Id given by Google Places API. The
Vocabulary Module then uses ‘LocationName’ to search for responses. If the database
search using ‘LocationId’ and ‘LocationName’ returns no results, then the Vocabulary
Module queries the database to find words and phrases specified for similar locations
(using LocationType). Meanwhile, a request is made to MTurk to get responses for the
HIT created for the particular location. The pre-stored responses and new responses
coming from MTurk are then sorted in an order where pre-stored responses come first
and then the new responses follow in the response list. If one or more responses are
retrieved from the local database, responses are sorted in the order of highest answer
counter value first. Answer counter is a count of the number of times the user has
selected that phrase from the response list. If two or more responses have the same
counter value, then the most recently selected response is listed first. Once all the
pre-stored responses are added to the response list, new responses coming from MTurk
are appended to the end of this list based on their arrival time. The user clicks on a
response to speak the intended words or phrases, and the answer counter value for the
selected response is updated in the database. Selected responses are made audible to
others using .NET's built-in speech synthesizer.
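The ordering rules above can be sketched as a small function. The field names (counter, last_selected, arrived) and the sample phrases are illustrative, not the actual schema:

```python
def build_response_list(stored, new_responses):
    """Order pre-stored responses by answer counter (highest first),
    breaking ties by most recent selection, then append new MTurk
    responses in arrival order."""
    ordered = sorted(
        stored,
        key=lambda r: (r["counter"], r["last_selected"]),
        reverse=True,  # highest counter first; ties -> most recent first
    )
    fresh = sorted(new_responses, key=lambda r: r["arrived"])
    return [r["text"] for r in ordered] + [r["text"] for r in fresh]

stored = [
    {"text": "Can I get a refill for my drink?", "counter": 3, "last_selected": "2014-05-02"},
    {"text": "A six-inch turkey sub, please.", "counter": 3, "last_selected": "2014-05-04"},
    {"text": "Do you have any combo deals?", "counter": 1, "last_selected": "2014-05-01"},
]
new_responses = [
    {"text": "Extra pickles, please.", "arrived": 2},
    {"text": "Is there free Wi-Fi here?", "arrived": 1},
]
response_list = build_response_list(stored, new_responses)
```

Placing frequently selected phrases first keeps the user's habitual phrases near the top while still surfacing fresh crowd suggestions below them.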
3.2.2 Database

As words and phrases are generated from Mechanical Turk, a local database is used to
store this data and to retrieve previously collected data from user selections on the
SpeakAhead interface. For this purpose, SpeakAhead uses the SQLite database engine
(http://www.sqlite.org/). The database contains four tables, which are used to store and
retrieve location and response data:
a. tbl_Location
b. tbl_HIT
c. tbl_Assignment
d. tbl_Answer
The following chart provides a summary of the tables with sample data.
a. tbl_Location – This table contains the list of locations for which responses are
generated. The primary key for the table is 'LocationId'. The column details stored in
tbl_Location are as follows:
a) LocationId – (Primary key). Stores the unique place ID given by the Google Places API.
b) LocationName – Stores the name of the place selected by the user.
c) LocationType – Stores the place type listed by the Google Places API
(https://developers.google.com/places/documentation/supported_types). When multiple
values are available, all place types are stored. A new entry is made every time the user
selects a new response from the response list.
b. tbl_HIT – This table gives the relationship between HITs created and locations
selected. The table contains the list of HITs submitted by the application on MTurk
for a particular location. The primary key for the table is HITId, and LocationId
references tbl_Location.
c. tbl_Assignment – This table contains the assignments submitted on MTurk,
including the response written by the worker.
d. tbl_Answer – This table contains the list of answer texts from the worker assignments
as selected by the user from the response page, and references the corresponding
assignment in tbl_Assignment. When the user selects a response which doesn't exist in
the local database, a new entry is inserted into the table and its answer counter is set to 1.
When the user taps a response that already exists, its answer counter is incremented by
one and the table is updated. The AnswerCounter value (see Figure 10) keeps track of the
number of times a phrase was selected for a given location: a new answer is initialized to
1, and the counter increments by one each time the user selects that phrase again. This
value is later used by the Vocabulary Module, which sorts answers for a location based on
their AnswerCounter values.
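The insert-or-increment behavior described above can be sketched with SQLite. The schema below is illustrative; the actual table includes additional columns such as the assignment reference:

```python
import sqlite3

# In-memory database standing in for SpeakAhead's local store.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE tbl_Answer (
           AnswerText    TEXT,
           LocationId    TEXT,
           AnswerCounter INTEGER,
           PRIMARY KEY (AnswerText, LocationId)
       )"""
)

def record_selection(conn, location_id, text):
    """Increment the counter for an existing phrase at this location,
    or insert a new row with the counter initialized to 1."""
    cur = conn.execute(
        "UPDATE tbl_Answer SET AnswerCounter = AnswerCounter + 1 "
        "WHERE LocationId = ? AND AnswerText = ?",
        (location_id, text),
    )
    if cur.rowcount == 0:  # phrase not seen before at this location
        conn.execute(
            "INSERT INTO tbl_Answer (AnswerText, LocationId, AnswerCounter) "
            "VALUES (?, ?, 1)",
            (text, location_id),
        )
    conn.commit()

record_selection(conn, "subway-1", "Can I get a refill for my drink?")
record_selection(conn, "subway-1", "Can I get a refill for my drink?")
count = conn.execute(
    "SELECT AnswerCounter FROM tbl_Answer WHERE LocationId = ?", ("subway-1",)
).fetchone()[0]
```

After two selections of the same phrase, the stored counter reflects both, which is exactly the signal the Vocabulary Module uses for ranking.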
The following E-R diagram (see figure 12) illustrates the relationships among the four tables.
This section provides the functional and technical specifications of the SpeakAhead
prototype. The database is maintained in the SQLite database engine (see section 3.2.2
for details on the schema), and the interface is built with JavaScript and jQuery Mobile.
The Google Places API is used to obtain contextual information regarding locations near
the user, and the MTurk SDK is used for retrieving responses submitted by workers.
Similar to many other AAC systems, SpeakAhead's purpose is to enable users to store,
organize, browse, and speak stored words and phrases. The current prototype enables
users to browse, add and sort items based on the current location context.
The user first navigates to the application URL and then clicks the start button on the
home page to start the application (see figure 14). If location-sensing is not enabled in the
browser, the user is asked to enable it before the application is started. Currently, the
web-based version of the prototype does not support user profiles. However, current
mobile devices (e.g., iPhone 5s, Samsung Galaxy phone S5) which come pre-installed
with fingerprint authentication and biometric security systems for personal identification
can be used effectively with future versions of the prototype. These fingerprint-reading
and scanner techniques may also be a good alternative for people with language disorders.
The main interface consists of two parts: first, a list showcasing nearby locations, and
second, a textual list of word and phrase prompts for the selected location. Touching an
on-screen item speaks the associated text using the .NET Speech Synthesizer assembly
(http://msdn.microsoft.com/en-us/library/system.speech.synthesis.speechsynthesizer.aspx).
While many AAC systems organize words by topic (e.g.,
food, vehicles, parts of the body), SpeakAhead organizes words and phrases by the
location of the user. In its most basic form, the interface is similar to commercially
available AAC software such as MyVoice in the sense that both applications use location
awareness, but SpeakAhead additionally features the ability to recommend new phrases
and words collected from crowdworkers. SpeakAhead also stores phrases selected by the
user in the local database.
Figure 15. Location Selection Page of the context-aware speech recommendation tool
The location selection provides a list of nearby locations along with their attributes like
name, image, website, icon and type. To make the interface user friendly and easy to
understand, the default setting only displays place name and place image. To select a
location from the nearby places list, the user either taps on the place image or inside the
column displaying the place name. (The screenshot was taken in a web browser using the
Ripple mobile emulator (http://emulate.phonegap.com/) to test the look and feel of the
interface on an iPad.)
Figure 16. Location Image showing Place Icons using column toggle button
A column toggle button is provided on the top right corner. When the user clicks this
button, a dialog box displaying checkboxes to choose the location attributes to be
displayed is shown. Figure 16 displays the location page showing the list of nearby
locations taken near a recreational spot in Baltimore. (The screenshot was taken in a
web browser using the Ripple mobile emulator to test the look and feel of the interface
on an iPad.)
Figure 17. Column toggle button showing layout with different columns
Users may add new words or phrases to SpeakAhead's database by themselves or with
the assistance of a companion. Adding a new item requires the user to select the text to be
spoken from the response list, which combines responses from the local database and
from crowdworkers. The current web-based prototype uses a single database to gather the
responses for all users; a fully functional mobile application would maintain per-user data.
The above response page provides a list of words and phrases for Subway restaurants.
When the user selects a location on the location page, the Vocabulary Module sorts and
integrates the pre-stored responses in the database with new responses recently submitted
by crowd-workers, and displays the combined list on the interface.
One limitation of the prototype is that images are not associated with the phrases and
prompts on the response page. Although adding images may increase the time taken to
answer the human intelligence tasks and the algorithmic complexity of tagging images
with responses, it could greatly improve the recognizability of responses. A second
limitation is the use of the same color scheme for all word types, such as nouns and
verbs; introducing different color schemes to distinguish words, phrases, and nouns could
make the interface easier to use. Another limitation of the UI is that it does not support a
per-user login feature. This could be addressed by maintaining user sessions and adding
per-user authentication.
One of the major limitations with the SpeakAhead prototype testing is that current design
strategies used for building the UI have not been tested with people with aphasia or other
related language disorders. Kane et al. (2012) identified several design activities that are
especially helpful for designing context-aware communication devices with people with
aphasia. These design activities can be used to solicit information and feedback on how
the design can be improved. By combining the existing interactive prototype with
low-fidelity prototypes and storyboards, we can collaborate more effectively with people
with aphasia on future design improvements.
CHAPTER 4
CONCLUSION
This chapter summarizes the contributions and findings of this project and describes
its relation to previous work. It explains the current limitations and identifies
opportunities for future work.
4.1 Discussion
Previous research on AAC systems (Epp et al., 2011; Kane et al., 2012; McGrenere et
al., 2008) has shown how context-aware computing can improve the usability of
vocabulary catalogs that are either inbuilt or manually pre-stored by the user. In this
project work, we set out to support the earlier research by introducing the design of a
context-aware AAC framework that also explores the use of contextual information
leveraged from the location context. We further extend the related research by utilizing a
method that outsources contextual information associated with the user's current location
to internet workers. Our system interface produces Human Intelligence Tasks (HITs) on
phrase prompts. The interface continuously updates vocabulary lists with new responses
This project work demonstrates the design and implementation of a crowdsourced speech recommendation system; evaluating it by collaborating with real users remains future work. The current version of SpeakAhead leverages place name, type, website, image, and icon to supply contextual information to the interface and to crowd workers. Future research could also explore the usability of the responses in a broader context by carefully incorporating contextual information, such as place reviews and place menus, into the HIT designs as well. It may also be beneficial to include filtering algorithms to further weed out responses that do not serve the communication intent of the user (e.g., using word patterns or keywords to filter out responses not related to food in a restaurant). The worker tasks could also be broken down into easily verifiable steps. For example, one worker's task could be to find related images on the web, and another's could be to verify that a response is not the same as one already submitted.
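The keyword-based filtering suggested above could look like the following sketch; the keyword sets per place type are invented for illustration and are not part of the prototype.

```python
# Sketch of keyword-based response filtering by place type.
# The keyword lists are illustrative assumptions.
PLACE_KEYWORDS = {
    "restaurant": {"food", "menu", "table", "order", "eat", "drink", "check"},
    "pharmacy": {"prescription", "medicine", "refill", "pickup"},
}

def filter_responses(responses, place_type):
    """Keep only responses that mention at least one on-topic keyword."""
    keywords = PLACE_KEYWORDS.get(place_type, set())
    kept = []
    for text in responses:
        words = {w.strip(".,!?").lower() for w in text.split()}
        if words & keywords:
            kept.append(text)
    return kept

responses = ["Can I see the menu?", "Nice weather today", "I'd like to order soup"]
print(filter_responses(responses, "restaurant"))
```

A real filter would likely need stemming or a larger lexicon; exact word matching is only the simplest starting point.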
Currently, a local database is utilized to reduce the latency in feedback times so that the
user is not left waiting. However, future studies focusing on algorithms like one
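A minimal sketch of that cache-first strategy, with the SQLite lookup and the MTurk fetch replaced by plain callables so only the control flow is shown:

```python
# Sketch of a cache-first lookup: serve stored phrases immediately,
# fall back to (slower) crowd results only when the cache is empty.
def get_suggestions(location_id, fetch_from_db, fetch_from_mturk):
    cached = fetch_from_db(location_id)
    if cached:          # local hit: no waiting on the crowd
        return cached, "db"
    return fetch_from_mturk(location_id), "mturk"

# Stub backends standing in for the local database and the task module.
db = {"loc-1": ["Table for two, please"]}
suggestions, source = get_suggestions("loc-1", db.get,
                                      lambda loc: ["(crowd phrase)"])
print(source)   # db

empty_db = {}
suggestions2, source2 = get_suggestions("loc-2",
                                        lambda loc: empty_db.get(loc, []),
                                        lambda loc: ["(crowd phrase)"])
print(source2)  # mturk
```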
4.3 Conclusion
In this project, we have explored how location-aware computing and crowdsourcing may improve the usability of current AAC systems. We have presented the design of a framework that utilizes a method to outsource contextual information, such as the name, image, type, icon, and website associated with the user's current location, and to create HITs on MTurk that spontaneously generate daily conversational phrases for that location in the SpeakAhead prototype, which tracks the current location of the user and generates response suggestions.
-End-
REFERENCES
[1] Ahn, L.V., & Dabbish, L. (2004). Labeling images with a computer game. Proceedings of the 2004 Conference on Human Factors in Computing Systems - CHI '04, 319–326. doi:10.1145/985692.985733
[2] Ahn, L.V., & Dabbish, L. Designing games with a purpose. Communications of the ACM 51, 8 (2008), 58–67.
[3] Ahn, L.V., Liu, R., & Blum, M. Peekaboom: a game for locating objects in images. In CHI '06: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2006), ACM, pp. 55–64.
[4] Ahn, L.V., Ginosar, S., Kedia, M., Liu, R., & Blum, M. Improving accessibility of the web with a computer game. In Conference on Human Factors in Computing Systems (CHI '06), pages 79–82, 2006.
[5] Ahn, L.V., Kedia, M., & Blum, M. Verbosity: a game for collecting common-sense facts. In Conference on Human Factors in Computing Systems (CHI '06).
[7] Allen, M., McGrenere, J., & Purves, B. The design and field evaluation of PhotoTalk: a digital image communication application for people with aphasia. Proc. ASSETS '07, ACM Press (2007), 187-194.
[8] Alonso, O., Rose, D.E., & Stewart, B. Crowdsourcing for relevance evaluation.
[9] Amazon Mechanical Turk Guide for Social Scientists (updated 1-18-12), Buhrmester, M., Swan Lab, http://homepage.psy.utexas.edu/HomePage/Students/Buhrmester/MTurk%20Guide.htm
[11] Banajee, M., DiCarlo, C., & B-Stricklin, S. (2003). Core Vocabulary Determination for Toddlers. Augmentative and Alternative Communication, 2, 67-73.
[12] Bernstein, M.S., Little, G., Miller, R.C., Hartmann, B., Ackerman, M.S., Karger, D.R., Crowell, D., & Panovich, K. Soylent: A Word Processor with a Crowd Inside. UIST '10.
[14] Bigham, J.P., Jayant, C., Ji, H., Little, G., Miller, A., Miller, R.C., Miller, R., Tatarowicz, A., White, B., White, S., & Yeh, T. VizWiz: Nearly Real-time Answers to Visual Questions. CHI 2013.
[15] Bigham, J.P., & Lasecki, W.S. Interactive Crowdsourcing. In P. Michelucci (ed.), Handbook of Human Computation, 509. Springer Science+Business Media New York, 2013.
[16] Brabham, D.C. Moving the crowd at iStockphoto: The composition of the crowd and motivations for participation in a crowdsourcing application. First Monday.
[19] Callison-Burch, C. Fast, cheap, and creative: Evaluating translation quality using Amazon's Mechanical Turk. Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1, Association for Computational Linguistics (2009), 286-295.
[22] Clarke, M., & Wilkinson, R. (December 2007). "Interaction between children with cerebral palsy and their peers". Augmentative and Alternative Communication 23(4): 336–348.
[23] Crais, E. (1991). "Moving from 'parent involvement' to family centered services". American Journal of Language Pathology 1:5-8.
[24] Dawe, M. Desperately Seeking Simplicity: How Young Adults with Cognitive Disabilities and their Families Adopt Assistive Technologies. CHI 2006, Montreal, Canada, pp. 1143-1152, 2006.
[25] Dempster, M., & Alm, N. (2010). Automatic generation of conversational utterances and narrative for Augmentative and Alternative Communication: a prototype system, (June), 10–18.
[26] Eaton, E., & Wagstaff, K. (2005). A context-sensitive and user-centric approach to developing personal assistants.
[27] Eidelman, V., Huang, Z., & Harper, M. (2010). Lessons Learned in Part-of-Speech Tagging of Conversational Speech.
[28] Epp, C.D., Campigotto, R., Levy, A., & Baecker, R. Marco Polo: Context-Sensitive Mobile Communication Support. Proc. FICCDAT: RESNA/ICTA, 2011.
[32] Evans, B.M., & Chi, E.H. Towards a model of understanding social search. Proc. CSCW 2008, ACM (2008), 485-494.
[34] Fenwick, K., Massimi, M., Baecker, R., Black, S., Tonon, K., Munteanu, C., Rochon, E., & Ryan, D. Cell phone software aiding name recall. Proc. CHI '09 EA, ACM Press (2009), 4279-4284.
[35] Gena, C. (2005). Methods and techniques for the evaluation of user-adaptive systems.
[40] Hacker, S., & Ahn, L.V. Matchin: eliciting user preferences with an online game. CHI '09: Proceedings of the 27th International Conference on Human Factors in Computing Systems (New York, NY, USA, 2009), ACM, pp. 1207–1216.
[41] Hansen, D.L., & Golbeck, J. Mixing it up: recommending collections of items. CHI '09: Proceedings of the 27th International Conference on Human Factors in Computing Systems (New York, NY, USA, 2009), ACM, pp. 1217–1226.
[42] Howe, J. The rise of crowdsourcing. Wired Magazine 14, 6 (2006).
[43] Irani, L., & Silberman, S. Turkopticon: The Sourced Crowd is Made of People. Presentation given at Dolores Labs, 10 June 2009.
[44] Kane, S.K., Linam-Church, B., Althoff, K., & McCall, D. (2012). What We Talk About: Designing a Context-Aware Communication Tool for People with Aphasia, 1-8.
[45] Kane, S.K., Bigham, J.P., & Wobbrock, J.O. Slide Rule: making mobile touch screens accessible to blind people using multi-touch interaction techniques.
[46] Kane, S.K., Jayant, C., Wobbrock, J.O., & Ladner, R.E. Freedom to Roam: a study of mobile device adoption and accessibility for people with visual and motor disabilities. ASSETS 2009.
[47] Kittur, A., Chi, E.H., & Suh, B. Crowdsourcing for Usability: Using Micro-Task Markets for Rapid, Remote, and Low-Cost User Measurements. Palo Alto Research Center.
[48] Kittur, A., Chi, E.H., & Suh, B. Crowdsourcing User Studies with Mechanical Turk. Palo Alto Research Center, CHI 2008.
[49] Law, E., Ahn, L.V., Dannenberg, R., & Crawford, M. TagATune: A game for music and sound annotation. International Conference on Music Information Retrieval (ISMIR '07), pages 361–364, 2007.
[50] Law, E., & Ahn, L.V. Input-agreement: a new mechanism for collecting data using human computation games. In Proceedings of CHI '09 (New York, NY, USA, 2009), ACM, pp. 1197–1206.
[54] Matikainen, P., Sukthankar, R., & Hebert, M. Model Recommendation for Action Recognition. The Robotics Institute, Carnegie Mellon University; Google Research.
[55] McQuiggan, S.W., Lee, S., & Lester, J.C. Predicting User Physiological Response for Interactive Environments: An Inductive Approach, 60–65.
[56] McGrenere, J., Davies, R., Findlater, L., Graf, P., Klawe, M., Moffatt, K., Purves, B., & Yang, S. Insights from the aphasia project: designing technology for and with people who have aphasia. Proc. CUU '03, ACM Press (2003), 112-118.
[59] Moffat, A., & Tan, R. Language Representation on Dynamic AAC Devices: How Do You Choose? Independent Living Centre, Curtin University, Bentley, July 2013.
[60] Nilsson, M. (2002). Speech Recognition using Hidden Markov Model: performance evaluation in a noisy environment.
[61] Notes from the Amazon Mechanical Turk tutorial, BRAD Lab, Kesebir, S., University of Virginia, http://www.darden.virginia.edu/web/uploadedFiles/Darden/BRAD/BRAD%20Lab%20-%20Amazon%20Mechanical%20Turk%20Guidelines.pdf
[62] Nottale, M., & Baillie, J. Talking Robots: grounding a shared lexicon in an unconstrained environment. In Proceedings of the Seventh International Conference on Epigenetic Robotics, 2007.
[63] Olleros, F. Learning to Trust the Crowd: Some Lessons from Wikipedia. 2008 International MCETECH Conference on e-Technologies, (2008), 212-216.
[64] Oni, A., Lucas, P., & Druzdzel, M.J. Comparison of Rule-Based and Bayesian Network Approaches in Medical Diagnostic, 283–292. Springer-Verlag Berlin Heidelberg, 2001.
[67] Visvader, P., MA CCC-SLP. AAC Basics and Implementation: How to Teach Students who "Talk with Technology". © 2013 Assistive Technology Team, Boulder Valley School District, Boulder, CO.
[68] Pentland, A., & Liu, A. (1999). Modeling and prediction of human behavior. Neural Computation, 11(1), 229–42. http://www.ncbi.nlm.nih.gov/pubmed/9950731
[69] Putze, F., Meyer, J., Borné, J., Schultz, T., Holt, D.V., & Funke, J. (2006). Combining cognitive modeling and EEG to predict user behavior in a search task, 303–304.
[70] Ross, J., Zaldivar, A., Irani, L., & Tomlinson, B. (2010). Who are the Turkers? Worker Demographics in Amazon Mechanical Turk. ACM CHI Conference 2010, 1–5.
[72] Rzeszotarski, J.M., & Kittur, A. Instrumenting the Crowd: Using Implicit Behavior Measures to Predict Task Performance. UIST '11, October 16–19, 2011, Santa Barbara, CA, USA. ACM.
[74] Schilit, B., Adams, N., & Want, R. Context-aware computing applications. IEEE Workshop on Mobile Computing Systems and Applications, IEEE (1994), 85-90.
[75] Schweer, A., & Hinze, A. (2008). Combining Context-Awareness and Semantics to Augment Memory, (April).
[77] Snow, R., O'Connor, B., Jurafsky, D., & Ng, A.Y. Cheap and Fast - But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks. Proc. EMNLP 2008, ACL (2008), 254-263.
[79] Sorokin, A., Forsyth, D., & Goodwin, N. (2008). Utility data annotation with Amazon Mechanical Turk, (c), 1–8.
[80] Tentori, M., & Hayes, G. Designing for Interaction Immediacy to enhance social skills of children with autism. Ubicomp, Copenhagen, Denmark, pp. 51-60, 2010.
[81] Vojnovic, M., Cruise, J., Gunawardena, D., & Marbach, P. Ranking and
[84] Wais, P., Lingamneni, S., Cook, D., Fennell, J., Goldenberg, B., Lubarov, D., & Simons, H. Towards Building a High-Quality Workforce with Mechanical Turk, 1–5.
[85] Walsh, G., & Golbeck, J. (2010). Curator: A Game with a Purpose for Collection Recommendation, 2079–2082.
[86] Weber, I., & Robertson, S. (2008). Rethinking the ESP Game, 11. Retrieved from http://research.microsoft.com/pubs/70638/tr-2008-132.pdf
[88] Willkomm, T. (2011). Apps that benefit individuals with disabilities. Retrieved from http://www.auburn.edu/outreach/opce/alatec/documents/2013presentations/iPad%20apps%20Listing%20ALATEC.pdf
[90] Wobbrock, J.O. (2010). Research Contribution Types in Human-Computer Interaction, 23.
[91] Yeh, T., Lee, J.J., & Darrell, T. Photo-based question answering. MM 2008, 389–398, 2008.
Encyclopedia of Rehabilitation.
APPENDICES
2. Title - The title of the HIT. A title is short and descriptive and conveys the kind of tasks the HIT contains. On the Amazon Mechanical Turk website, the HIT title appears in search results and on the HIT and assignment screens.
Description - A general description that gives more detailed information about the kind of task the HIT contains. On the Amazon Mechanical Turk website, the HIT description appears in the expanded view of search results, and in the HIT and assignment screens. A good description gives the user enough information to evaluate the HIT before accepting it.
HITLayoutId - Identifies a pre-existing HIT design with placeholder values; an additional HIT can be created by providing those values as 'HITLayoutParameters'.
http://docs.aws.amazon.com/AWSMechTurk/latest/AWSMechanicalTurkRequester/Concepts_HITsArticle.html
6. Reward - The amount of money the Requester will pay a Worker for successfully completing the HIT.
AssignmentDuration - The amount of time a Worker has to complete the HIT after accepting it. If a Worker does not complete the assignment within this duration, the assignment is considered abandoned. If the HIT is still active, that is, its lifetime has not elapsed, the assignment becomes available for other Workers to accept.
Lifetime - The amount of time after which the HIT is no longer available for users to accept. After the lifetime of the HIT elapses, the HIT no longer appears in HIT searches, even if some assignments have not been completed.
9. Keywords - One or more words or phrases that describe the HIT, separated by commas.
10. MaxAssignments - The number of times the HIT can be accepted and completed.
QualificationRequirement - A condition that a Worker must meet before the Worker is allowed to accept and complete the HIT.
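Taken together, these properties form the parameter set supplied when a HIT is created. A sketch of assembling them follows; the values are illustrative assumptions, not the prototype's actual settings.

```python
# Sketch of assembling HIT parameters before posting (illustrative values).
def build_hit_params(place_name, place_type):
    return {
        "Title": "Suggest phrases someone might say at %s" % place_name,
        "Description": "Write short, everyday phrases a visitor could "
                       "use at a %s named %s." % (place_type, place_name),
        "Keywords": "phrases, writing, quick",
        "Reward": "0.05",                   # USD per assignment
        "AssignmentDurationInSeconds": 600,  # time to finish after accepting
        "LifetimeInSeconds": 3600,           # how long the HIT stays listed
        "MaxAssignments": 3,                 # distinct workers per HIT
    }

params = build_hit_params("Joe's Diner", "restaurant")
print(params["Title"])
```

In the prototype these values come from a `Constants` class on the C# side; the dictionary above only mirrors that structure.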
Geolocation API
The HTML5 Geolocation API is used to obtain the geographical position of the user and thereby identify his or her current location. To address the privacy concerns of the user, the position is not made available to the system unless the user approves it. The Geolocation API is supported by most modern browsers on desktop and mobile devices. The latitude and longitude of the user are made available to the page using JavaScript. These coordinates are then passed to the location module.
The Google Places API is a service that returns information about places defined within the API. The service provides access to a set of place requests. We use the Place Search request of the Google Places API to identify the locations near the user. The following place requests are available to the application.
https://developers.google.com/places/documentation/
● Place Search Request - A Place Search request returns a list of places based on the user's location. The Place Search request gives access to a set of operations; we use the Nearby Search operation and the Place Details operation to identify the locations near the user and to gather the information for the location context, i.e., locations near the current geographical position. To use the Nearby Search operation, the request must specify the user's latitude and longitude and a search radius.
● Place Details Request - A Place Details request returns more detailed information about a specific place. This request is used by the location module to provide users with the contextual information related to the suggested locations. The place attributes are also used by the task modules while posting tasks on MTurk to supply context to crowd workers.
(b) Place types - An array indicating the type of the address.
(c) Place icon - The URL of the suggested icon. The place icon is used to visually indicate the type of place.
(d) Place name - The human-readable name for the returned result.
(f) Place website - The URL of the website of the place. This link is used to provide additional information about the location.
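Extracting these attributes from a Place Details result can be sketched as below. The sample payload is a made-up illustration of the JSON shape, not real API output.

```python
# Sketch: pull the attributes used by the task module out of a
# Place Details result (sample payload is illustrative, not real data).
sample_result = {
    "name": "Joe's Diner",
    "types": ["restaurant", "food", "establishment"],
    "icon": "https://example.com/restaurant.png",
    "website": "https://joesdiner.example.com",
}

def place_context(result):
    """Flatten the attributes into the fields the interface uses."""
    return {
        "name": result.get("name", ""),
        "type": ", ".join(result.get("types", [])),
        "icon": result.get("icon", ""),
        "website": result.get("website", ""),
    }

ctx = place_context(sample_result)
print(ctx["type"])
```

Using `.get` with defaults matters in practice, since the API omits attributes such as `website` for many places.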
The Google Distance Matrix service is used to compute the travel distance between the original location and nearby destinations. The information returned is based on the recommended route between start and end points, as calculated by the Google Maps API. The distance values are used by the location module to sort the nearby locations by proximity, so that the nearest location is listed first.
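The proximity sort can be sketched as follows, with illustrative distance values in meters:

```python
# Sketch: order nearby locations by Distance Matrix results so the
# closest place is listed first (names and distances are illustrative).
places = [
    {"name": "City Library", "distance_m": 820},
    {"name": "Joe's Diner", "distance_m": 150},
    {"name": "Corner Pharmacy", "distance_m": 430},
]

nearest_first = sorted(places, key=lambda p: p["distance_m"])
print([p["name"] for p in nearest_first])
```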
The Amazon Mechanical Turk SDK for .NET is an open-source project containing a set of libraries and tools designed to build solutions leveraging Amazon Mechanical Turk in .NET. When the user selects a location from the list of nearby locations, the available contextual details for that location are passed to the Task Module to create a HIT. The Task Module uses the SDK to post tasks on the MTurk worker platform, where workers can find and accept HITs and submit their answers. The following operations are used by the Task Module for creating and monitoring HITs.
(a) CreateHIT - The CreateHIT operation creates a new Human Intelligence Task (HIT). The new HIT is made available for workers to find and accept on the Amazon Mechanical Turk website.
(b) GetAssignment - The GetAssignment operation retrieves a submitted assignment using the assignment's ID. The task module uses this operation to retrieve answers to the
http://aws.amazon.com/code/Amazon-Mechanical-Turk/923
http://docs.aws.amazon.com/AWSMechTurk/latest/AWSMechanicalTurkGettingStartedGuide/CreatingAHIT.html
assignments for the created HITs. This operation is also used by the task module for monitoring purposes. When the response page is active, periodic calls are made to retrieve newly submitted assignments.
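The periodic polling for assignments can be sketched with the MTurk client stubbed out as a plain callable, so the control flow is visible without any network access:

```python
# Sketch of the response page's polling loop (MTurk client stubbed out).
# A real implementation would call the service; here `get_assignments`
# is any callable returning the answers submitted so far.
def poll_for_answers(hit_id, get_assignments, attempts=3):
    """Return the first non-empty batch of answers, or [] after N tries."""
    for _ in range(attempts):
        answers = get_assignments(hit_id)
        if answers:
            return answers
        # A real loop would sleep between calls (the prototype refreshes
        # the response page every few minutes instead).
    return []

# Stub that has no answers on the first call and two on the second.
batches = iter([[], ["Table for two, please", "Can I see the menu?"]])
answers = poll_for_answers("HIT123", lambda hit_id: next(batches))
print(len(answers))
```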
● MTurk.aspx.cs
using System;
using System.Data;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Text;
using System.Xml;
using System.Xml.Linq;
using Amazon.WebServices.MechanicalTurk;
using Amazon.WebServices.MechanicalTurk.Domain;
using System.IO;
using System.Globalization;
using System.Net;
using System.Net.Http;
using System.Web.UI.WebControls;
using System.Web.UI.HtmlControls;
using Newtonsoft.Json;
using HtmlAgilityPack;
using System.Threading.Tasks;
using System.ComponentModel;
using System.Configuration;
using System.Web.UI;
using System.Data.SQLite;
namespace SpeakAhead
{
public partial class Mturk : Page
{
SimpleClient client;
WebClient wc;
string jsonStr;
string url = string.Empty;
Dictionary<int, string> apiKeys;
int keyCount = 0;
private string apiKey;
apiKeys = new Dictionary<int, string>()
{
{1, "AIzaSyCXxIEUeFJj-Be_n2ZNL2Bzo9nSsh4FgF4"},
{2, "AIzaSyDih5BeqMBVUyf3G05TPUVMJKkIdccH6Q4"},
{3, "AIzaSyC65i3XExV_RzsXFJoOc6YnuI--92Q6Q7U"},
{4, "AIzaSyAc37mce4zHKNtwHTEYnEh5F4pwnrSV7aE"},
{5, "AIzaSyABYqJFn6ShFuQ4VENDKH2CPE1jHdv3JjY"},
{6, "AIzaSyD73Hhsw7lzs_myXsqNP9er_ccXPATbG6k"}
};
if (!IsPostBack)
{
var i = 1;
GetCurrentLoc(i);
}
/// <summary>
/// Client used to create single operations for MTurk
/// Check if there are enough funds in your account in order to
/// create the HIT on Mechanical Turk.
/// </summary>
/// <returns> True if there are sufficient funds. False if not. </returns>
wc = new WebClient();
jsonStr = wc.DownloadString(Utilities.CreatePlaceUrl(hdnLat.Value,
hdnLong.Value, 1000, string.Empty, false, apiKeys[KeyCounter]));
GooglePlacesResponse gpr =
(GooglePlacesResponse)JsonConvert.DeserializeObject<GooglePlacesResponse>(jsonStr);
if (gpr.status.Equals("OK", StringComparison.OrdinalIgnoreCase ))
{
apiKey = apiKeys[KeyCounter];
if (gpr.results.Count() > 0)
CreateTable(gpr);
}
else
{
if (!string.IsNullOrEmpty(gpr.error_message))
{
if (KeyCounter < apiKeys.Keys.Count())
{
KeyCounter = KeyCounter + 1;
apiKey = apiKeys[KeyCounter];
GetCurrentLoc(KeyCounter);
}
else
{
((Label)this.Master.FindControl("lblErrorMessage")).Visible = true;
((Label)this.Master.FindControl("lblErrorMessage")).Text =
gpr.error_message;
}
}
}
}
jsonStr =
wc.DownloadString(Utilities.CreatePlaceDetailUrl(result.reference, false,
apiKey));
GooglePlaceDetailResult gpdr =
(GooglePlaceDetailResult)JsonConvert.DeserializeObject<GooglePlaceDetailResult>(jsonStr);
wc = new WebClient();
distanceJsonstr =
wc.DownloadString(Utilities.CreateDistanceUrl(hdnLat.Value, hdnLong.Value,
result.geometry.location.lat, result.geometry.location.lng, false));
GooglePlaceDistance gpd =
(GooglePlaceDistance)JsonConvert.DeserializeObject<GooglePlaceDistance>(distanceJsonstr);
strIconUrl = result.icon;
List<address_components> addressComps =
gpdr.result.address_components;
else
{
// if the location is Locality (city), then pass locality
// + Statename to the url http://{}.jpg.to
if (isLocality)
strImageUrl = "http://" + result.name.Trim().Replace(" ", string.Empty) + stateName + ".jpg.to";
if (getLogo)
{
HttpWebRequest request =
(HttpWebRequest)HttpWebRequest.Create("http://" + newUrl + "logo" + ".jpg.to");
request.AllowAutoRedirect = false; // find out if this site is up and don't follow a redirect
request.Method = "HEAD";
var response = request.GetResponse();
var str = response.Headers["Location"];
if (str == null || (!string.IsNullOrEmpty(str) &&
!str.Trim().Equals("http://jpg.to/image404.php",
StringComparison.OrdinalIgnoreCase)))
newUrl = newUrl + "logo";
}
strImageUrl = "http://" + newUrl + ".jpg.to";
else
{
strImageUrl = "http://" + result.name.Trim().Replace(" ", string.Empty) + cityName + ".jpg.to";
}
try
{
// To get the image src from .jpg.to link
doc = web.Load(strImageUrl);
if (doc != null)
body = doc.DocumentNode.SelectSingleNode("./img");
((Label)this.Master.FindControl("lblErrorMessage")).Visible = true;
((Label)this.Master.FindControl("lblErrorMessage")).Text =
gpdr.error_message;
}
}
}
gvLocations.Columns[4].Visible = true;
gvLocations.Columns[5].Visible = true;
gvLocations.Columns[6].Visible = false;
}
var LinkLocationName =
(LinkButton)e.Row.FindControl("LocationName");
LinkLocationName.CommandArgument = e.Row.RowIndex.ToString();
gvLocations.HeaderRow.TableSection = TableRowSection.TableHeader;
gvLocations.HeaderRow.CssClass = "ui-bar-d"; // table header row
gvLocations.Attributes.Add("data-role", "table");
//gvLocations.CssClass = "ui-body-e ui-bar-e ui-bar-hover-f ui-shadow ui-responsive"; // table background (silver theme)
gvLocations.Attributes.Add("data-mode", "columntoggle");
// toggle icon column in too small view
headerCells[2].Attributes.Add("data-priority", "2"); // icon
headerCells[3].Attributes.Add("data-priority", "3"); // types
headerCells[4].Attributes.Add("data-priority", "4"); // website
headerCells[5].Attributes.Add("data-priority", "5"); // distance
headerCells[1].Attributes.Add("data-class", "expand");
/* Note: The responsive table feature is built with a core table
plugin (table.js)
* that initializes when the data-role="table" attribute is added
to the markup. This plugin is very lightweight
* and adds ui-table class,
* parses the table headers and generates information on the
columns of data, and fires a tablecreate event. Both the table modes
*/
}
}
Session.Remove("HitId");
// QualificationRequirement.
//qualNumHits.Comparator = Comparator.GreaterThan;
//qualNumHits.QualificationTypeId =
Constants.Worker_NumberHITsApproved;
//qualNumHits.IntegerValue = 0;
//qualNumHits.IntegerValueSpecified = true;
//qualList.Add(qualNumHits);
// register the HIT Type, so that it can be used in later calls to CreateHIT
string hitTypeId = client.RegisterHITType(Constants.Title,
Constants.Description, null, Constants.AssignmentDuration,
Constants.Reward, Constants.Keywords, qualList);
ImageButton icon =
(ImageButton)row.Cells[2].FindControl("iconUrl");
LinkButton loc =
(LinkButton)row.Cells[1].FindControl("LocationName");
locName = loc.Text;
//Utilities.SpeakIt(locName);
string locType = row.Cells[4].Text;
string locId = row.Cells[6].Text;
layoutParams.Add("name", locName);
// layoutParams.Add("phrase_1", "test 1");
// layoutParams.Add("phrase_2", "test 2");
// layoutParams.Add("phrase_3", "test 3");
layoutParams.Add("type", locType);
Session.Remove("LocationDetail");
Dictionary<string, string> dictLoc = new Dictionary<string,
string>();
dictLoc.Add("LocationName", locName);
dictLoc.Add("LocationId", locId);
dictLoc.Add("LocationType", locType);
Session["LocationDetail"] = dictLoc;
if (HasEnoughFunds())
h = client.CreateHIT(hitTypeId, Constants.Title,
Constants.Description,
Constants.Keywords, Constants.LayoutId, layoutParams,
Constants.Reward, Constants.AssignmentDuration,
null, Constants.LifeTime, Constants.MaxAssignment,
Constants.RequesterAnnotation, qualList, responseGroup);
else
{
// write error message here
}
Session.Remove("HitDetail");
Dictionary<string, string> dictHit = new Dictionary<string,
string>();
dictHit.Add("HitId", h.HITId);
dictHit.Add("HitCreatedDateTime", DateTime.Now.ToString());
dictHit.Add("HitStatus", h.HITStatus.ToString());
Session["HitDetail"] = dictHit;
Session["HitId"] = h.HITId;
}
if (!string.IsNullOrEmpty(h.HITId) && !string.IsNullOrEmpty(locName))
//Response.Redirect("Categories.aspx?" + QueryStringParams.Latitude + "=" + hdnLat.Value + "&" + QueryStringParams.Longitude + "=" + hdnLong.Value, true);
Response.Redirect("ResponseList.aspx?" +
QueryStringParams.Latitude + "=" + hdnLat.Value + "&" +
QueryStringParams.Longitude + "=" + hdnLong.Value, true);
ViewState["SortDir"] = sDir;
return sDir;
}
}
}
● StartApplication.aspx.cs
using System;
using System.Drawing;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Net;
using System.Net.Http;
using System.Web.UI.WebControls;
using Amazon.WebServices.MechanicalTurk;
using Amazon.WebServices.MechanicalTurk.Domain;
using HtmlAgilityPack;
using System.IO;
using System.Speech.Synthesis;
namespace SpeakAhead
{
public partial class StartApplication : System.Web.UI.Page
{
string _lat;
string _long;
protected void Page_Load(object sender, EventArgs e)
{
btnStart.Click += btnStart_Click; // go to btnStart_Click event on 'Start' button click
lblTitle.BackColor = Color.Transparent;
}
// Button start gets lat-long value.
void btnStart_Click(object sender, ImageClickEventArgs e)
{
_lat = hdnLat.Value;
_long = hdnLong.Value;
//Response.Redirect("TestPage.aspx", true);
if (!string.IsNullOrEmpty(_lat) && !string.IsNullOrEmpty(_long))
}
}
}
● ResponseList.aspx.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Xml;
using System.Xml.Linq;
using System.Web.UI;
using System.Data;
using System.Globalization;
using System.Web.UI.WebControls;
using System.Web.UI.HtmlControls;
using Amazon.WebServices.MechanicalTurk;
using Amazon.WebServices.MechanicalTurk.Domain;
namespace SpeakAhead
{
public partial class ResponseList : System.Web.UI.Page
{
SimpleClient client;
private string _lat;
private string _long;
private string _queryStringLatLong;
string HitId = string.Empty;
string HitCreatedDateTime = string.Empty;
string HitStatus = string.Empty;
string AssignmentStatus = string.Empty;
string LocationName = string.Empty;
string LocationId = string.Empty;
string LocationType = string.Empty;
string AssignmentId = string.Empty;
string AssignmentApprovedDate = string.Empty;
string AssignmentSubmittedDate = string.Empty;
if (!IsPostBack)
{
GetResultsFromDb(LocationId, LocationName);
GetResult();
}
gvResults.HeaderRow.TableSection = TableRowSection.TableHeader;
gvResults.HeaderRow.CssClass = "ui-bar-d"; // table header row (blue theme)
gvResults.CssClass = "ui-shadow ui-responsive"; // table
background (silver theme)
gvResults.Attributes.Add("data-role", "table");
gvResults.Attributes.Add("data-mode", "columntoggle");
// toggle icon column in too small view
((Label)this.Master.FindControl("lblErrorMessage")).Text = "No suggestions found. The page will try to find suggestions in 3 minutes. Please wait..";
}
}
if (dt == null || dt.Rows.Count == 0)
{
sql = " SELECT asgn.AssignmentId, ans.AnswerText FROM tbl_Assignment asgn "
+ " JOIN tbl_Answer ans ON ans.AssignmentId = asgn.AssignmentId "
+ " JOIN tbl_Hit hit ON hit.HitId = asgn.HitId "
+ " JOIN tbl_Location loc ON loc.LocationId = hit.LocationId "
+ " WHERE loc.LocationId = "
+ " (SELECT LocationId FROM tbl_Location WHERE LocationName = '" + LocationName + "'"
+ " ORDER BY LocationCounter DESC, CreatedDateTime DESC LIMIT 1) "
+ " ORDER BY AnswerCounter DESC, ans.CreatedDateTime DESC "
+ " LIMIT 5 ";
dt = Utilities.SelectFromTable(sql);
}
Session["ResultFromDb"] = _results;
}
{
Dictionary<string, string> results = new Dictionary<string, string>();
Dictionary<string, string> resultsFromTurk = TurkResults();
Dictionary<string, string> resultsFromDb = (Dictionary<string,
string>)Session["ResultFromDb"];
}
}
}
else
{
DataTable dt = new DataTable();
gvResults.DataSource = dt;
gvResults.DataBind();
}
}
return _hitResults;
}
//Utilities.SpeakIt(result.Text);
GetAssignmentResult asgn =
client.GetAssignment(row.Cells[3].Text, responseGroup);
if (asgn != null)
{
AssignmentId = asgn.Assignment.AssignmentId;
AssignmentApprovedDate =
asgn.Assignment.ApprovalTime.ToString("yyyy-MM-dd HH:mm:ss");
AssignmentSubmittedDate =
asgn.Assignment.SubmitTime.ToString("yyyy-MM-dd HH:mm:ss");
AssignmentStatus =
asgn.Assignment.AssignmentStatus.ToString();
}
if (string.IsNullOrEmpty(HitId) ||
string.IsNullOrEmpty(AssignmentId) || string.IsNullOrEmpty(LocationId))
return;
else
{
bool valuesInserted = false;
string sql = "INSERT OR REPLACE INTO tbl_Location
(LocationId, LocationName, LocationType, LocationCounter) "
+ " SELECT X.LocationId, X.LocationName,
X.LocationType, X.LocationCounter + COALESCE(l.LocationCounter, 0) FROM "
+ " (SELECT "
+ "'" + LocationId + "' AS LocationId, '" +
LocationName + "' AS LocationName, '" + LocationType + "' AS LocationType, " + "1
AS LocationCounter" + ") X "
+ " LEFT join tbl_Location L ON L.LocationId =
X.LocationId ";
valuesInserted = Utilities.InsertIntoTable(sql);
}
}
}
catch (Exception ex)
{
((Label)this.Master.FindControl("lblErrorMessage")).Text = "Error occurred. Please try again.";
}
● Web.Config
</configuration>
-- End of Document --