
USER-OPERATED AUDIOMETRY

MOBILE APP
MICHELLE COTTRELL
DNIEL GR
ZOLTAN FISCHER
ARTŪRS GRAUMANIS

LILLEBAELT ACADEMY OF PROFESSIONAL HIGHER EDUCATION

BACKEND AND API FILES: HTTPS://GOO.GL/8ALPVG
BACKEND LIVE: HTTP://GOO.GL/E7S8LP
FRONTEND FILES: HTTPS://GOO.GL/DSDXNF
FRONTEND DEMO VIDEO: HTTPS://GOO.GL/MJL6D0

ABSTRACT
The purpose of this paper is to explain the development life cycle process used by the BsQUARE group to create a working prototype web application that a patient can use to take an accurate measurement of their hearing capacity in their own time and at a place of their convenience. Our aim is to combine a simple, intuitive and visually pleasing design with solid functionality that can be readily used by users irrespective of age. The app is built with Appcelerator Titanium, which enables it to run on multiple device platforms.

INTRODUCTION
The challenge was to develop a user-operated audiometry test that could be conducted alone by the patient via a mobile application. The intended platform was a mobile telephone or a tablet. At a minimum it had to be just as efficient and comparable an alternative to existing traditional audiometry methods. The main advantages would be that it is time saving, portable and cost effective. From the outset we decided that good design and simplicity were just as important as functionality. The client group would range from the very young to the elderly, and each of these groups would present a different set of challenges involving user design and functionality. Our initial criteria for a functional solution were that it gave a portable way of presenting the test and offered better time efficiency for both the hospital and the patients. By saving time it would allow better use of the often limited resources of the hospital, and would therefore be cost effective compared to traditional methods. Our initial challenges included coming up with a login system that was easy to use. We decided that a natural choice would be the Danish national registration number (CPR), which is already widespread and well known. We are aware that this could present some legal issues with the storage of confidential information, but at this point the application is still a prototype, and the CPR login could eventually be replaced by the encrypted NemID. It was our intention to incorporate an automated hearing test algorithm that would determine which sound to play, but due to the relatively short development time given for the project, we decided upon a random sound being played for the demo purposes of the prototype application.

DEVELOPMENT

Since the audiometry device has a wide target audience, we could not tailor our design to a specific age group or specific behaviour. We had to provide a generally understandable user experience that works for both younger and older audiences. To achieve this, we tried to minimise the interface elements, follow recommended design guidelines and add meaningful descriptions in the app to help the users as much as we can.

Information Architecture & Flow Diagram


Our first step was to assemble the minimum required functions that the app had to provide: conducting a hearing test, sending the results to an external service and retrieving them from there, and finally some kind of authentication. After assembling and agreeing on this list, we could create our flow diagram to examine how the user would accomplish the required steps and how and where the app and the external services would communicate and synchronise data with each other. For drawing the flow diagram we used Lucidchart. With the help of the flow diagram we could separate our features and steps into screens, and based on these we could continue to create our first wireframes.

Figure 1 The overall flow diagram of the app

Wireframes

We drew our first wireframes on paper. There are many good wireframing tools on the internet, but the real benefit of drawing our wireframes was that we could change them quickly and add notes and additional sketches, which improved the overall speed of the wireframing. After agreeing on the paper wireframes, we created our high-fidelity wireframes. These wireframes still lacked colours, icons and other graphics, but they represented our concept more accurately, and we could also test them as interactive prototypes. For the high-fidelity wireframes we used Sketch.
We applied the wireframes to our flow diagram so we could test whether the flow would be appropriate for our users. To create clickable, interactive prototypes based on the wireframes we used MarvelApp. After testing and applying changes based on the team's feedback, we could move on to the design phase.

Figure 2 Two example wireframes from the app

Design

We planned to develop our prototype and the first version of the app on the Android platform, since it was easier for us to test on our own devices. Deploying to iOS would also be possible using Appcelerator, but considering the time and the deadlines we decided to target the Android platform first and optimise the app for iOS later. Since we went with Android, we examined the design guidelines currently available and recommended for the platform. Google recently released its new design guideline, Material Design. Material Design is a collection of design (colours, shapes, animations, textures, icons) and user-experience guidelines and recommendations. By following this guideline we can provide a user experience that is consistent with the Android OS and probably with other applications as well.

AMT Semester Project 2015

Based on Material Design's recommendations we assembled the colours, textures and effects we would apply in the app. One of the hardest parts of choosing the appropriate colours was ensuring we did not use colours that could lead users to give false answers. For example, using a red colour for the "didn't hear anything" answer could suggest to the user that the answer is an error or wrong, since red is often associated with warnings, deleting or stopping something (stop signs, red lights, etc.). Therefore, we decided to use a different, neutral colour for the third answer to avoid confusing the users. We also used icons to help the users understand the task and what to press in order to give an appropriate answer. For example, we indicate the left and right ears with separate icons pointing in different directions, or indicate with an icon that the user needs to move to a quieter area in order to conduct a proper test. By supporting the text with icons we could improve the overall user experience of the application.

Figure 3 Typography, colours and iconography used in the app

FUNCTIONALITY

The person starts up the application and is presented with an opening screen that contains a LOGIN and a HELP button along with a field to enter a valid CPR number. The HELP button leads to an FAQ (not yet implemented) containing the basic terminology of the device. Once the CPR number has been successfully entered and the LOGIN button pressed, a welcome screen is shown with the choice of START TEST, TEST HISTORY or LOGOUT. The history screen gives the user a general overview of previous tests, such as dates and test results. On pressing START TEST, a brief introduction about what is to come is shown, and the user can finally start the actual test flow. The next steps involve the application measuring the sound levels of the current environment for noise that would distort the test results; if the level is found to be too high, the user is instructed to find another location. Once the background noise is measured to be at a suitable level, the user can proceed to the test. This first displays itself as TAKE TUTORIAL or SKIP TUTORIAL. The tutorial is aimed at the first-time user and gives a series of static images explaining the use of the application. Once completed, the user can BEGIN TEST or retake the tutorial. The test itself consists of a screen split into three areas. At the top of the screen is a visual image of an ear that vibrates when a sound is played (fake or real). The user can then respond by choosing the LEFT EAR, RIGHT EAR or DIDN'T HEAR button, or they can choose to PAUSE the test. The sound is played for 20 seconds and in our prototype is a random tone; for statistical purposes we have also included a fake tone. The random sound would eventually be replaced by an algorithm that determines the tone played according to the user's previous input. If the test is PAUSED, the user is given the screen option of either CONTINUE or CANCEL. Once the test is completed, the user can send their results to an external data source or, if in doubt, restart the test. The user data is stored in an external database, allowing it to be viewed for further analysis by the relevant hospital department.
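The screen flow above can be summarised as a small state machine. The sketch below is an illustration distilled from the description, not code from the project; the screen and action names follow the text, while the transition table itself is our assumption.

```javascript
// Hypothetical model of the app's screen flow as a transition table.
// Keys are screens, nested keys are button presses / events.
var flow = {
  login:          { LOGIN: 'welcome', HELP: 'faq' },
  welcome:        { 'START TEST': 'noiseCheck', 'TEST HISTORY': 'history', LOGOUT: 'login' },
  noiseCheck:     { OK: 'tutorialChoice', TOO_LOUD: 'noiseCheck' },
  tutorialChoice: { 'TAKE TUTORIAL': 'tutorial', 'SKIP TUTORIAL': 'test' },
  tutorial:       { 'BEGIN TEST': 'test' },
  test:           { PAUSE: 'paused', DONE: 'finish' },
  paused:         { CONTINUE: 'test', CANCEL: 'welcome' },
  finish:         { SEND: 'welcome', RESTART: 'test' }
};

// Return the next screen for a given screen and action; unknown
// actions leave the user on the current screen.
function next(screen, action) {
  return (flow[screen] && flow[screen][action]) || screen;
}
```

Modelling the flow this way makes it easy to check that every button on every screen leads somewhere sensible before any UI code is written.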
TECHNICAL SPECIFICATIONS

Figure 4 Two example screen designs from the app

The Android-based application prototype was created with the Appcelerator Titanium framework1. Appcelerator is a mobile app framework that comes with an integrated MVC framework, its own IDE and a special framework designed for building APIs. It allows the creation of cross-platform mobile applications for Android, iOS, Windows and HTML5. For the audiometry application our build target was Android, since it was easier for us to test and install it on the devices directly. The Appcelerator platform works as a native application built with Alloy2 and the Titanium SDK3. The Android application is built with Java and the

1 Appcelerator Studio - http://www.appcelerator.com/

Android APIs. The Appcelerator framework adopts the model-view-controller (MVC) architecture pattern. The user interface is made with XML markup located in views. This allows us to create the display that will be visible to the user, while the backend logic is handled behind the scenes. That behind-the-scenes logic lives in the controllers folder, where JavaScript provides the functionality.
As an example, the login page is done in JavaScript. In the index.xml file we define the GUI, giving the layout a <TextField> and some <Button>s (see Figure 1). At the core of index.js we define the function of these fields and buttons, which is where the user interaction is designed. After defining the buttons and fields, we add validation by checking whether the string given in the field is a valid CPR number (see Figure 2); if it is, we run a few more validations to be sure that it is indeed a CPR number (see Figure 3), with a legitimate number of digits and so on.

Figure 3. index.js validation code snippet
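The kind of pre-validation described above can be sketched as follows. This is an illustrative reconstruction in plain JavaScript, not the project's actual index.js code; the function name and the exact checks are our assumptions, based only on the fact that a Danish CPR number is ten digits (DDMMYY plus a four-digit sequence number).

```javascript
// Hypothetical sketch of CPR pre-validation: format and basic date
// sanity checks before the number is sent to an external validator.
function isValidCprFormat(cpr) {
  // Strip an optional hyphen between the birth date and the sequence number.
  var digits = cpr.replace('-', '');
  // Must be exactly ten digits.
  if (!/^\d{10}$/.test(digits)) {
    return false;
  }
  // Basic sanity check on the DDMMYY part.
  var day = parseInt(digits.substring(0, 2), 10);
  var month = parseInt(digits.substring(2, 4), 10);
  return day >= 1 && day <= 31 && month >= 1 && month <= 12;
}
```

Checks like these catch obvious typos locally, so the external CPR validator is only contacted for plausible input.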

After the pre-validation we send the CPR number to an external CPR validator, which validates it. After the user logs in, we save their session so they do not have to log in next time. This is convenient, because it would be irritating to face the same login popup every time the person opens the app. If the application is closed permanently and launched again, the user will still be logged in (see Figure 4).

Figure 4. index.js if statement to see if the user is logged
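The session behaviour described above can be sketched as follows. In the real app the session would be persisted with Titanium's property store; here a plain object stands in for that store, so this is only an illustration of the logic, not the project's code.

```javascript
// Hypothetical sketch of the login-session logic: a plain object
// stands in for the device's persistent property store.
var store = {};

// Persist the session after a successful login.
function saveSession(cpr) {
  store.loggedInCpr = cpr;
}

// On app launch: skip the login screen if a session exists.
function isLoggedIn() {
  return typeof store.loggedInCpr === 'string';
}

// Clear the session on a confirmed logout.
function logout() {
  delete store.loggedInCpr;
}
```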

When it comes to operating the app after login, the principle is simple. We give the buttons different global Alloy functions that open the different pages (see Figure 5).
Figure 1. code snippet of index.xml file

Figure 2. index.js cpr number validation code snippet

Figure 5. main.js file, switching between pages
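The page-opening pattern can be sketched as below. In Alloy the real call would be along the lines of opening the view returned by a controller; here a stub controller factory stands in for Alloy, so the sketch only illustrates the pattern and is not the project's main.js.

```javascript
// Hypothetical stand-in for Alloy's controller factory: each page
// name yields a view object that can be opened.
function createController(name) {
  return {
    getView: function () {
      return {
        name: name,
        opened: false,
        open: function () { this.opened = true; }
      };
    }
  };
}

var openedPages = [];

// Each button's click handler simply calls openPage with its target.
function openPage(name) {
  var view = createController(name).getView();
  view.open();
  openedPages.push(view.name);
  return view;
}
```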

The same happens if the user intends to log out: the JS shows a confirmation on the button press.
2 Alloy - http://docs.appcelerator.com/platform/latest/#!/guide/Alloy_Framework

This navigational structure is fairly basic and is laid out for every button that requires switching between pages. From the main.xml page, starting the test will
3 Titanium SDK http://docs.appcelerator.com/platform/latest/#!/guide/Titanium_SDK


redirect the user to the instruction.xml file, from which the user can continue to the test.
For the hearing test, the JS fetches the sound files from the assets/android/images folder and loops through them in random order. When a sound should be played, the JavaScript checks whether it is actually playing: if it is, we animate the sound-indicator icon to show that a sound is playing; otherwise another image is shown to represent that no sound is playing.
Afterwards it is a matter of tying the sounds and animations together. We set up the sound function and the variables that define whether the sound is indeed playing; if so, we show a frame animation flickering every 500 milliseconds to represent the sound, and define the buttons to work as intended (a choice between left ear, right ear or not hearing the sound at all) (see Figure 6).

Figure 6. test.js file, code snippet of if statement for the sound and
icons.
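The random tone selection, including the fake (silent) tone mentioned earlier, can be sketched like this. The file names and the fake-tone probability are illustrative assumptions; the random input is injected as a parameter so the logic can be checked deterministically.

```javascript
// Hypothetical sketch of the prototype's random tone selection,
// including a "fake" silent tone used to catch guessing.
var sounds = ['tone_250hz.wav', 'tone_1khz.wav', 'tone_4khz.wav'];

// rand is a number in [0, 1), e.g. Math.random(), injected for testability.
function nextSound(rand) {
  if (rand < 0.2) {
    // Pretend to play: animate the indicator but play no audio.
    return { file: null, fake: true };
  }
  var index = Math.floor(rand * sounds.length) % sounds.length;
  return { file: sounds[index], fake: false };
}
```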

An else statement then simply shows the icon representing the sound as not playing and hides the test buttons for each ear and the "I can't hear the sound" option.

Figure 7. test.js file, code snippet of window on open/close.

For the buttons' functionality, it is a matter of storing the last_test_sound's answer ("L" or "R") and starting and updating the content if the test is not finished. To cancel or stop the test, an alert dialog asks the user to confirm with yes or no; this is done by creating an object for the alert box instance and pausing the test while it is shown.
After finishing the test, the user is redirected to finish.xml, where a string of results is displayed and the user is prompted to choose between several options. In finish.js we first set the finish text as a string so it is available to finish.xml. After this, if the doSend() function is called, we send the result string and the user's CPR number to the database through an Ajax call. We display a prompt to the user if the app sent the data successfully, and an error text if the app could not send it (for example, if there is no internet connection).
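The success/error prompting around the send can be sketched as follows. The transport object is injected so both outcomes can be exercised without Titanium's HTTP client; the payload field names and the messages are illustrative assumptions, not the project's actual API contract.

```javascript
// Hypothetical sketch of the doSend() flow: post the result and the
// CPR number, then notify the user of success or failure.
function doSend(cpr, resultString, transport, notify) {
  transport.post(
    { cpr: cpr, result: resultString },
    function onload() {
      notify('Results sent successfully.');
    },
    function onerror() {
      notify('Could not send the results. Please check your internet connection.');
    }
  );
}
```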

A window on-load event listener is added to organise the things that should happen when the test window opens (see Figure 7): the sound to be retrieved is randomised, the test text is updated, and so on. When the window is closed, the test-start flag is set to false, meaning the test will not start.

Figure 8. Finish.js file, Creating the Ajax call and setting up the json
file through Alloy.CFG

Android's built-in back function (software or hardware navigation buttons) will trigger an alert box, where the user is prevented from leaving the page unless the data has been sent. This works on the same principle as pausing mid-test.
Another quite important part of the MVC architecture is the styles/[PageGoesHere].tss files, which dictate the visual representation of the app. A .tss file is Appcelerator's dynamic style sheet format. Styling a page means creating a corresponding style sheet, for example one applied to the views/index.xml file. The layout of these files is quite similar to regular CSS, where the .tss attributes are properties for objects within Titanium (see Figure 9).

Figure 9. index.tss file, code snippet of index.xml styling.
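As a rough illustration of the .tss syntax: a CSS-like selector maps an id from the XML view to Titanium view properties. The `#loginButton` id and the concrete values below are assumptions for illustration, not taken from the project's actual index.tss.

```
// Hypothetical index.tss fragment: styles the button declared as
// <Button id="loginButton"/> in index.xml.
"#loginButton": {
  backgroundColor: "#3F51B5",
  color: "#FFFFFF",
  width: 250,
  height: 48,
  borderRadius: 4
}
```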

EVALUATION

First of all, we could not finish all of the functionality in the first version of the app. A real audiometry application uses a formula that tries to find the user's hearing threshold by playing sounds at different pitches; as the user goes through the sounds, the application can determine the threshold of the user's hearing and in the end calculate a result for the doctors to investigate further. In our app, however, due to the limited timeframe, we decided to go with a simple solution and not implement the formula. Instead we play predefined sounds at different pitches and test whether the user is able to hear them or not. We calculate the results based on how many times the user answered correctly in the sound test. We also trick the user by not playing any sound while indicating that a sound should be playing; with this we can check whether the user is actually trying to give correct answers or just thinks they heard something. Although we are not using the scientific method for calculating the results, we still send them to the middleware DB, where they are stored and made available for further investigation.
Another thing we could not implement perfectly is the design. The app is usable and all of the functions are accessible; however, not all of the icons, logos and proper fonts have been implemented. Instead we rely on native Android elements such as stock fonts and navigational elements, and there are no extra effects such as shadows or animations. Despite this, the app is still usable and runs on various phones and on tablets. In the future it would be a good idea to optimise the application for tablets, since there we could provide an even better user interface, and to optimise the app for iOS devices, since a large number of users have iOS devices at home and are already familiar with the OS.
RESULT
Most of our initial objectives for the design and development of the mobile application were reached, whilst others would have been met had we had a longer development time. We aimed for a visually pleasing design that appealed to both young and elderly users, something relatively easy to use without too much external supervision, with basic functionality. The project itself could be split into two equally important aspects: the UX design, and the calculation of an advanced algorithm to determine which sound to play depending on the user's response. In order to deliver a good prototype, we put the greater part of our energy into the design and thereafter into the actual programming. Ideally a better result would have been forthcoming had we had the development time to calculate the sound algorithm, but we settled for a random sound including fake tones. We were able to make good use of the Titanium Appcelerator MVC model and utilised many of the available online tools for the development and design of the mobile application. Our project management structure was based on Trello, GitHub and Google Drive. We held most of our project meetings online via Facebook and discussed the main aspects of the project in short face-to-face stand-up meetings at UCL. By preparing and researching what we intended to discuss before we actually met up, we were able to keep the meetings structured and focused.
DISCUSSION
We have covered most of the design and development lifecycle of a mobile application whilst working on our prototype, and it should provide a viable foundation for further research. Our main ethos was a simple and visually pleasing design combined with solid functionality and good reporting tools. What is needed now is a well-designed automated hearing test algorithm to accurately home in on the user's hearing loss; the importance of this warrants an independent project in its own right. The application we have designed at the moment runs on Android, but with very little further programming it can be compiled on the Titanium Appcelerator platform to run equally well on an iPad or iPhone. Another of our original objectives was to change the appearance of the UI depending on the age of the user: if a child logged in, more colour would have been used; likewise, for an elderly user, perhaps larger fonts. There is still a lot of scope for improvement, but the end result will save both time and expense within the health sector.
