
Accepted Manuscript

Title: Usability Testing of Existing Type 2 Diabetes Mellitus Websites

Authors: Dorian Davis, Steven Jiang

PII: S1386-5056(16)30069-7
DOI: http://dx.doi.org/10.1016/j.ijmedinf.2016.04.012
Reference: IJB 3331

To appear in: International Journal of Medical Informatics

Received date: 18-12-2015


Revised date: 25-3-2016
Accepted date: 23-4-2016

Please cite this article as: Dorian Davis, Steven Jiang, Usability Testing of Existing
Type 2 Diabetes Mellitus Websites, International Journal of Medical Informatics
http://dx.doi.org/10.1016/j.ijmedinf.2016.04.012

Usability Testing of Existing Type 2 Diabetes Mellitus Websites

Dorian Davis, Steven Jiang

Department of Industrial and Systems Engineering, North Carolina A&T State University
Greensboro, NC 27411, USA

ABSTRACT
Background: Despite the significant increase in the use of the Internet as an educational tool for diabetes, very little research has been published on the usability of healthcare websites, even though usability is a determining factor for user satisfaction.

Objective: The aim of this study is to evaluate and critique the interfaces of existing diabetes websites for usability

concerns and provide design solutions for improvement. Emphasis is placed on Type 2 Diabetes Mellitus since it is

the most common and life-threatening form of diabetes.

Method: A usability test was performed on the interfaces of three existing diabetes websites, American Diabetes

Association (www.diabetes.org), WebMD (www.webmd.com) and the National Diabetes Education Program

(ndep.nih.gov). The goal was to collect qualitative and quantitative data to determine: (1) if participants are able to complete specified tasks successfully; (2) the length of time it takes participants to complete the specified tasks; and (3) participants’ satisfaction with the three websites. Twenty adults, 18 years of age and older, participated in the study.

Results: The results from the MANOVA test revealed a significant difference between the three websites for number

of clicks, number of errors and completion time when analyzed simultaneously. The ANOVA tests revealed a

significant difference for all three variables. The Student-Newman-Keuls (SNK) test shows a significant difference

for completion time between American Diabetes Association and WebMD. A significant difference was found for

the number of clicks for the National Diabetes Education Program compared to the American Diabetes Association

and WebMD. However, no significant difference was found for the number of clicks between the American Diabetes Association and WebMD. Lastly, a significant difference was found between each interface for the number of errors.

Discussion: Although the American Diabetes Association web interface was the most favorable, there were many positive design elements for each interface. On the other hand, the significant amount of information overload experienced on each website left participants feeling perplexed. Thus, innovative solutions are needed to reduce
information overload and ensure users are engaged and empowered to make informed decisions about their

healthcare.

Keywords: Usability, Usability Testing, Healthcare websites, Type 2 Diabetes Mellitus

1. Introduction

Diabetes is a chronic condition characterized by high levels of blood glucose caused by insulin resistance and/or

beta-cell dysfunction (Kahn, 2003). Since 2010, diabetes has remained the 7th leading cause of death in the United

States (American Diabetes Association, 2014). Type 2 Diabetes Mellitus (T2DM), the most common form of diabetes, has been the leading cause of many health complications including kidney failure and poor circulation,

resulting in limb amputation for some patients (Centers for Disease Control and Prevention, 2014). Studies show

that the diagnosis of T2DM has been increasing in the United States population over the past few decades (Geiss,

Wang, Cheng, et al., 2014). A study released in 2013 estimated that the cost associated with T2DM had risen from $174 billion in 2007 to $245 billion in 2012, when the cost was last examined (American Diabetes Association, 2013). If the trend continues in the United States, it is estimated that one in three adults will have T2DM by 2050, causing healthcare costs to continue to soar (Boyle, Thompson, Gregg, Barker, & Williamson, 2010). However, T2DM is preventable and, if discovered early, reversible.

Currently, with the prevalence of chronic conditions in the United States, the healthcare needs of the American population have been shifting from acute episodic care to chronic care. Unfortunately, the current United States healthcare system still focuses on treating individual diseases. In order to address the needs of people who have chronic conditions, or who are at high risk of developing them, the healthcare system must transform from a disease-oriented, reactive care system to a preventive care system. Therefore, it is important to empower people with tools and information that will assist them in making informed decisions concerning their health. Effective education is very important in mitigating T2DM.

With the rapid growth of the World Wide Web, there has been a major shift from paper to information

technology in the healthcare sector. With healthcare information, the web offers a sense of anonymity, guidance and

support. A study by Fox and Duggan (2013) indicates that approximately 72% of adults who have Internet access browse the web seeking health information. The outcome of a systematic review on information technology for diabetes self-management confirms that the use of technology is very promising in improving self-care (El-Gayar, Timsina, Nawar, & Eid, 2013). Even so, users must be able to trust the site. Measuring user satisfaction helps to gauge the overall quality of a website. Yet, many designers neglect to conduct usability evaluations, which are proven methods for assessing the effectiveness and efficiency of interactive systems.

Usability is a term used in human computer interaction (HCI) in which the intent is to eliminate any conceivable

frustration the user may experience when interacting with the interface (e.g. navigation of menus). The ISO9241-

11: Guidance on Usability defines the term as “the extent to which a product can be used by specified users to

achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (ISO9241-11,

1998). This requirement answers the question: Are people able to use the product effectively? To answer this

question appropriately, there are several attributes that can be considered, but the five that are most important for

web usability are outlined in Table 1. The benefit of considering usability in the design process is that it generates creative solutions to concerns that users may have with the product. As a result, the product becomes more efficient, effective and safe to use.

To guarantee that the product works well for its intended purpose without causing confusion or frustration for

the user, usability evaluations must be performed. Usability evaluations test how well users can learn the product

and apply the design to accomplish their goals. Although there are several methods to test usability, heuristic

evaluation and usability testing have been the most common and useful methods in previous studies.

A heuristic evaluation is usually performed by several evaluators who are experts in the field of human factors.

Collectively, the evaluators discuss their findings, determine the most common and severe problems, and make recommendations for their resolution. The evaluations are based on a set of usability heuristics defined by Nielsen (1995, 2001) for user interface design.

On the other hand, usability testing is the most widely used method for usability evaluation. To perform a

usability test, participants from the targeted population are recruited to complete a set of tasks within the system or

with the product. Their performance is assessed through several methods such as observation, video recording, the

think aloud method and questionnaires to identify any concerns. Results are then analyzed based on the usability

attributes (Table 1) and their methods of evaluation such as completion time to perform tasks; number of mouse

clicks; success or error rate; and survey instruments.


Considering the Internet has the capability to reach a larger audience within seconds, it is surprising that very

little literature in recent years was found regarding usability assessments on current healthcare website designs. In

2004, Bedell, Agrawal and Petersen (2004) critiqued 47 public diabetes websites to establish criteria for excellence when assessing health information. Their evaluation measured quality in three areas: content, reliability and usability. For usability, their findings identified five websites (American Diabetes Association, Joslin Diabetes Center, Diabetes UK, International Diabetes Federation and The Diabetes Mall) considered to have the best usability. Overall, the American Diabetes Association and the Joslin Diabetes Center websites best met each

criterion. Recently, a heuristic evaluation study (Davis & Jiang, 2015) using Nielsen’s (1995) 10 usability heuristics

was conducted to identify usability problems with T2DM websites. The three existing T2DM websites evaluated

were American Diabetes Association, WebMD and National Diabetes Education Program. The results revealed 12

usability issues for ADA, 11 for WebMD and 15 for NDEP. Although the evaluators collectively agreed that the usability issues were easy to somewhat easy to fix, they all experienced some level of information overload when

interacting with each website. To confirm these findings, the intent of this study was to conduct a usability test with

participants from the targeted population and provide design recommendations that will assure users have a great

user experience when interacting with T2DM websites. The goal was to collect qualitative and quantitative data to

determine: (1) if participants are able to complete specified tasks successfully; (2) the length of time it takes

participants to complete the specified tasks; and (3) participant satisfaction with the three websites.

2. Methodology
Following the same methodology from the heuristic evaluation, the identification of the user-interfaces

evaluated in this study was determined using Alexa (www.alexa.com) and the Google search engine (www.google.com). Alexa is a commercial web traffic analytics tool owned by Amazon.com that measures how a website is performing relative to other websites over a three-month period. The ranking is based on an average of daily visits to the website and its estimated number of pages viewed. Alexa reported Google as the most frequently used search engine. Therefore, using Google, a web search for type 2 diabetes and diabetes education was conducted to determine the websites that appear on the first page. A study conducted by Chitika (Lee, 2013), an online advertising network, shows that websites appearing on the first page of Google search results receive an average of 33% of the traffic. The traffic data of the websites that appeared on the first page of Google search results were then checked using Alexa. It was also critical that the websites presented some information related to T2DM preventive care. Thus, the three websites selected were ADA, WebMD and NDEP.

2.1 Experimental Design

A within-subject experimental design was used with the order of the websites counterbalanced in an attempt to

cancel out any order effects. All possible combinations of the websites were presented in a randomized order.

However, learning effects were not a concern because the three interface designs are distinctly different. Participants were allowed to browse the websites prior to the study. In particular, differences were measured based on task completion time, number of errors, ease of use, ease of learning and user satisfaction.
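The counterbalancing described above can be sketched in code. The sketch below is illustrative only, assuming the six full permutations of the three sites are cycled across the twenty participants; the study does not specify the exact assignment scheme:

```python
from itertools import permutations

# The three websites under evaluation
sites = ["ADA", "WebMD", "NDEP"]

# All 3! = 6 possible presentation orders for full counterbalancing
orders = list(permutations(sites))

# Cycle through the orders so the 20 participants are spread
# as evenly as possible across the six sequences
assignments = [orders[i % len(orders)] for i in range(20)]

for participant, order in enumerate(assignments, start=1):
    print(f"Participant {participant:2d}: {' -> '.join(order)}")
```

With 20 participants and 6 orders, two sequences are necessarily used four times while the rest are used three times; randomizing which sequences receive the extra participants would avoid a systematic bias.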

2.2 Participants

Twenty adults were recruited through email and flyer solicitations.

2.3 Stimulus Material and Equipment

A pre-test questionnaire was developed to collect participants’ demographics and information regarding their

experience with computers and the Internet. A post-study survey was also developed to measure the usability goals.

To conduct the study, a portable usability laboratory which consisted of a DELL Latitude E6420 laptop

equipped with the Morae 3.3 usability software (Recorder, Observer and Manager) was utilized to record

participants’ performance data (completion time, number of clicks and number of errors). The Morae Recorder

captured the participant’s on-screen activity and was used to administer the pre-test and post-test questionnaires. The facilitator used a Lenovo Helix tablet equipped with Morae 3.3 Observer to observe the participant in real time, capture their experience, take notes, and flag errors during the study. The data from Morae Recorder and Observer were merged and uploaded into Morae Manager to view and analyze the recordings.

2.4 Tasks

In order to assess the usability of the interfaces, participants were asked to perform a number of tasks using the

websites. The tasks were representative of those that the participants would carry out in a real-life context, to help guarantee the accuracy of the data collected. The goal of the usability testing was to verify the results from the heuristic evaluation. Therefore, the tasks were written to target predicted difficulties and specific features of each website.

For ADA, WebMD and NDEP, participants were asked to complete a set of 10, 8 and 9 tasks, respectively. The

number of tasks is different due to the websites’ content. A partial list of tasks for each website is recorded in Table
2. Although most of the content contained on the websites was duplicate information, some websites included

additional content. Therefore, it was important to include the additional content that users may encounter during the

evaluation.

2.5 Procedure

An application for this study was submitted to and approved by the Institutional Review Board (IRB) at North

Carolina A&T State University. Once approved, the study was conducted as follows:

1. Upon arrival, each participant was briefed on the purpose of the test and asked to read and sign an informed

consent form that detailed the tasks and potential benefits of the study and their rights. There were no

foreseen risks for this study. The pre-test questionnaire was also administered to each participant.

2. Participants were asked to perform the specified tasks within the respective website. Participants were

allowed to skip tasks if they became frustrated and abort the study at any given time.

3. Participants were given a three minute break between interfaces.

4. A post-test questionnaire was administered upon completion of the study.

2.6 Data Collection

The usability goals identified for this study were learnability, efficiency, errors and satisfaction. The data

collected to measure these goals were completion time, number of clicks, number of errors and the post-test

questionnaire.

3. Data Analysis and Results


The data were analyzed using both quantitative and qualitative measures. The qualitative measures were assessed based on the responses collected from the pre-test questionnaire (e.g. skill level of experience with computers and the Internet). The quantitative measures collected were completion time, number of clicks, number of errors and user satisfaction data from the post-test questionnaire. Both descriptive and inferential statistics were used for the

analysis.

3.1 Participants

Participants were grouped by age range: nine participants were between the ages of 18 and 34, eight between 35 and 50, and three between 51 and 69. The participants consisted of ten men and ten women. Experience using computer-based technology was not required.

3.2 Pre-Test Questionnaire Results


A five-point Likert-type scale (1 = Poor; 5 = Excellent) was used to rate the participants’ technology experience.

The results show that 90% of the participants have used computer technology for more than ten years and browse

the internet very frequently. Fifty percent of the participants ranked their computer skill level as excellent; 45%

gave a ranking of good; and 5% ranked their skill level as fair.

Participants were also asked to rank their preference of importance for aesthetics, functionality, ease of use and

satisfaction when interacting with computer interfaces using a five point Likert type scale (1=not important; 5=very

important). The results in Figure 1 show that the participants considered functionality to be the most important

feature.

3.3 Descriptive Statistics

The basic descriptive statistics for number of mouse clicks and completion time are recorded in Table 3. As shown in the table, NDEP had the highest mean number of clicks and the longest mean completion time (in seconds), while WebMD had the lowest of both, with similar standard deviations.

3.4 Inferential Statistics

Further analyses were done using SAS® 9.3 to investigate whether the interfaces differ statistically for the

dependent variables (number of errors, completion time and number of clicks). A Pearson r correlation analysis was

performed to investigate the relationship between the dependent variables. The correlation results are presented in

Table 4. A moderate positive correlation was found among some of the variables; however, correlation does not imply causation. Therefore, MANOVA was performed to investigate the effect of the interface on the dependent variables when they were studied simultaneously.
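As a rough illustration of the Pearson correlation between two of the dependent variables, a plain-Python sketch is shown below. The numbers are hypothetical, invented for the example; the study itself performed this analysis in SAS 9.3:

```python
from math import sqrt

# Hypothetical per-participant measurements -- not the study's actual data
completion_time = [45, 60, 52, 80, 95, 70, 55, 88]   # seconds
num_clicks = [5, 7, 6, 10, 12, 9, 6, 11]

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(completion_time, num_clicks)
print(f"Pearson r = {r:.2f}")
```

A full analysis would also report a p-value for the correlation (e.g. via `scipy.stats.pearsonr`), as in Table 4.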

Model adequacy for MANOVA was checked and no major violations were found. The results show that there

is a significant difference between the three interfaces when completion time, number of clicks and number of errors were analyzed simultaneously (Wilks’ Lambda = 0.479; F(6, 110) = 8.16; p < 0.0001). Following the MANOVA test, three individual ANOVAs were performed since statistical significance was found. The results revealed a significant difference between the three interfaces on completion time (F(2, 57) = 5.44; p < 0.01), number of clicks (F(2, 57) = 7.11; p < 0.01) and number of errors (F(2, 57) = 15.58; p < 0.0001).
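For readers without SAS, the one-way ANOVA F statistic can be computed directly in plain Python. The completion times below are fabricated for illustration, not the study's measurements:

```python
import statistics

def one_way_anova_F(*groups):
    """One-way ANOVA F statistic: between-group MS / within-group MS."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares (weighted by group size)
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical completion times (seconds) for the three sites
ada = [50, 55, 48, 60, 52]
webmd = [53, 58, 49, 62, 55]
ndep = [75, 80, 70, 85, 78]

print(f"F = {one_way_anova_F(ada, webmd, ndep):.2f}")
```

The resulting F would then be compared against the F distribution with (k − 1, N − k) degrees of freedom to obtain a p-value, as SAS does.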


The Student-Newman-Keuls (SNK) test was used to perform the post hoc analysis to determine which

interfaces differ from each other for each measure. To interpret the results, if the letters are identical, there is no significant difference between the interfaces; if the letters are different, there is a significant difference between

the interfaces. The results show that it took a significantly longer time to complete the tasks using NDEP’s interface

compared to ADA and WebMD. However, no significant difference was found between ADA and WebMD for

completion time. For the number of clicks, results show a significant increase in the number of mouse clicks for

NDEP compared to ADA and WebMD. On the other hand, no significant difference was found between ADA and

WebMD for number of clicks. Lastly, a significant difference was found between each interface for number of

errors. Table 5 displays the results from the SNK test.
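The SNK procedure is not available in most common Python statistics libraries; as a hedged analogue, the closely related Tukey HSD post hoc test can be run with SciPy (version 1.8 or later). The completion-time data below are fabricated for illustration, not the study's measurements:

```python
from scipy import stats

# Hypothetical completion times (seconds) -- invented for the example
ada = [50, 55, 48, 60, 52]
webmd = [53, 58, 49, 62, 55]
ndep = [75, 80, 70, 85, 78]

# Pairwise comparisons; res.pvalue[i][j] is the p-value for groups i and j
res = stats.tukey_hsd(ada, webmd, ndep)
print(f"ADA vs WebMD: p = {res.pvalue[0][1]:.3f}")
print(f"ADA vs NDEP:  p = {res.pvalue[0][2]:.3f}")
```

With this fabricated data, NDEP differs significantly from the other two groups while ADA and WebMD do not differ, mirroring the pattern the SNK letter codes express.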

3.5 Post-Test Questionnaire Results

The participants were administered a post-test questionnaire to rate their experience with the existing T2DM

interfaces. The questionnaire statements reflected the usability goals and other parameters of usability, such as

information overload and satisfaction with aesthetics. Located in Table 6 are the results for the usability goals

previously discussed and the participants’ responses to the post survey questions for all other parameters of

usability.

The post-study survey was analyzed statistically for the following: (1) Easy to Learn, (2) Easy to Use, (3)

Navigation, and (4) User Satisfaction. User satisfaction was based on the following categories: (a) Look and Feel, (b) Layout and Organization, (c) Clarity of Labels and Links, and (d) Graphics. These variables were evaluated on a

5-point Likert type scale. The results are as follows:

3.5.1 Easy to Learn

First, a normality test was conducted. The results revealed that easy to learn is not normally distributed (Shapiro-Wilk test W = 0.7918, p < 0.0001), as shown by the histogram in Figure 2.4. Since the variable did not follow a normal distribution, the Kruskal-Wallis Test, a non-parametric analysis, was conducted. The results revealed no significant difference (χ2(2) = 2.43, p = 0.30) for the variable.
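The Kruskal-Wallis statistic used throughout these post-test analyses can be computed by hand, which makes its rank-based logic explicit. The sketch below uses fabricated Likert ratings (not the study's responses), averages ranks over ties, and omits the tie-correction divisor for brevity:

```python
def average_ranks(values):
    """Assign 1-based ranks to values, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # Extend j over the run of tied values
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def kruskal_wallis_H(*groups):
    """Kruskal-Wallis H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)."""
    pooled = [x for g in groups for x in g]
    r = average_ranks(pooled)
    n = len(pooled)
    total, start = 0.0, 0
    for g in groups:
        rank_sum = sum(r[start:start + len(g)])
        total += rank_sum ** 2 / len(g)
        start += len(g)
    return 12 / (n * (n + 1)) * total - 3 * (n + 1)

# Hypothetical 5-point Likert "ease of use" ratings, one per participant per site
ada = [5, 4, 5, 4, 3, 5, 4, 4]
webmd = [4, 3, 4, 5, 3, 4, 4, 3]
ndep = [2, 3, 2, 1, 3, 2, 2, 3]

print(f"H = {kruskal_wallis_H(ada, webmd, ndep):.2f}")
```

The resulting H is compared against a chi-square distribution with k − 1 degrees of freedom (critical value 5.99 at α = 0.05 for k = 3 groups).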

3.5.2 Easy to Use

A normality test was conducted and the results revealed that easy to use is not normally distributed (Shapiro-

Wilk test W = 0.8072, p < 0.0001) as shown by the histogram in Figure 2.5. The variable did not follow a normal

distribution; therefore, the Kruskal-Wallis Test was conducted. The results revealed a significant difference (χ2(2) = 10.5, p < 0.05). The Dwass-Steel-Critchlow-Fligner procedure was conducted and the results (Figure 2.6) revealed

that there is no significant difference between ADA and WebMD; no significant difference between WebMD and

NDEP; and a significant difference between ADA and NDEP.

3.5.3 Navigation

A normality test was conducted and the results revealed that navigation is not normally distributed (Shapiro-

Wilk test W = 0.8973, p < 0.0001) as shown by the histogram in Figure 2.7. The Kruskal-Wallis Test was

conducted, since Navigation did not follow a normal distribution. The results revealed a significant difference (χ2(2) = 19.55, p < 0.0001) for the variable. The Dwass-Steel-Critchlow-Fligner procedure was performed and the

results (Figure 2.8) revealed that there is no significant difference between ADA and WebMD; a significant difference

between ADA and NDEP; and a significant difference between WebMD and NDEP.

3.5.4 Look and Feel

A normality test was conducted and the results revealed that look and feel is not normally distributed (Shapiro-

Wilk test W = 0.8284, p < 0.0001) as shown by the histogram in Figure 2.9. The Kruskal-Wallis Test was

conducted, since Look and Feel did not follow a normal distribution. The results revealed a significant difference

(χ2(2) = 6.48, p < 0.05) for the variable. Next, the Dwass-Steel-Critchlow-Fligner procedure was conducted and the

results (Figure 2.10) revealed that there is no significant difference between ADA and WebMD; no significant

difference between WebMD and NDEP; and a significant difference between ADA and NDEP.

3.5.5 Layout and Organization

A normality test was conducted and the results revealed that layout and organization is not normally

distributed (Shapiro-Wilk test W = 0.8432, p < 0.0001) as shown by the histogram in Figure 2.11. The variable did

not follow a normal distribution; thus, the Kruskal-Wallis Test was conducted. The results revealed a significant difference (χ2(2) = 6.25, p < 0.05). The Dwass-Steel-Critchlow-Fligner procedure was performed and the results

(Figure 2.12) indicate no significant difference between ADA and WebMD; no significant difference between

WebMD and NDEP; and a significant difference between ADA and NDEP.

3.5.6 Clarity of Labels and Links

A normality test was conducted and the results revealed that clarity of labels and links is not normally

distributed (Shapiro-Wilk test W = 0.7763, p < 0.0001) as shown by the histogram in Figure 2.13. The Kruskal-Wallis Test was conducted, since the variable did not follow a normal distribution. The results revealed a significant difference (χ2(2) = 13.15, p < 0.05) for the variable. Next, the Dwass-Steel-Critchlow-Fligner procedure was

conducted. The results (Figure 2.14) revealed a significant difference between ADA and WebMD; a significant

difference between ADA and NDEP; and no significant difference between WebMD and NDEP.

3.5.7 Graphics

A normality test was conducted and the results revealed that graphics is not normally distributed (Shapiro-

Wilk test W = 0.7868, p < 0.0001) as shown by the histogram in Figure 2.15. The Kruskal-Wallis Test was

conducted, since the variable did not follow a normal distribution. The results revealed a significant difference (χ2(2) = 8.03, p < 0.05). Next, the Dwass-Steel-Critchlow-Fligner procedure was performed and the results (Figure

2.16) indicate that there is a significant difference between ADA and WebMD; a significant difference between

ADA and NDEP; and no significant difference between WebMD and NDEP.

4. Discussion and Conclusion

It is well documented in the literature that the Internet is a good resource to find information and is used

frequently to browse for health or medical information (Fox, Rainie, & Horrigan, 2006). As evident in this study,

there are numerous websites that provide a wealth of T2DM knowledge. However, poorly designed webpages can

limit learning. Therefore, the first objective of this study was to validate the usability concerns found with ADA,

WebMD and NDEP websites during the heuristic evaluation (Davis & Jiang, 2015). A usability test was conducted

to evaluate the three websites from the users’ perspective. As expected, the findings supported the results found by

Davis and Jiang (2015).

Resulting from both the quantitative and qualitative data collected, the findings show that ADA met most of the criteria for the usability goals, though there were many positive design elements for WebMD and NDEP as well. For example, the look and feel of the interfaces and the websites’ ability to convey the overall message were rated very highly. In addition, for WebMD, participants found the interface very easy to navigate. Yet, for all three

websites, the momentum of the design fell short when users had to use the sites for reasons other than to browse.

Although all participants acknowledged that the ADA website had a clear purpose, 70% of the participants

agreed that the website contained too much information and 25% found the website unnecessarily complex.

Overall, participants found the menu labels to be very intuitive. However, many expressed their dislike for the inconsistency of the “breadcrumb” trail. In addition, the majority of the participants agreed that the content was well organized without overpowering graphics, although some of the graphics were found to be a distraction because they had no relevance to diabetes.

Fifty percent of the participants agreed that WebMD contained too much information. Most participants used

the search engine due to the amount of information. Some found the search engine helpful, while others found it to be ineffective. Based on observation, participants became comfortable navigating the interface after they learned how information was organized throughout the website. Although participants agreed that the look and feel (aesthetics) was satisfactory, a few expressed that the website was cluttered, which diminished its overall appeal.

NDEP was the least preferred among the participants. Ninety percent of the participants acknowledged that the

website’s message of diabetes management was clear, but 50% agreed that the home page was not engaging, which caused a lack of motivation to explore the website further. The participants unanimously agreed that NDEP contained too much information and appeared disorganized, which made it difficult to navigate and locate information. For example, many participants communicated that there were too many links on several pages (i.e. Partners & Community Organization) that were not relevant to their headings. Some participants became frustrated and aborted the tasks.

Overall, 65% and 45% of participants found the overall ease of use and ease of learning of NDEP somewhat difficult, respectively. As discovered with the heuristic evaluation, several links were not accessible. Other dislikes included

the search engine. Participants who used the search engine found it to be ineffective. Information such as diabetes

symptoms was difficult to locate.

The remaining results for ADA, WebMD and NDEP are found in Table 6.

The second objective was to provide design solutions according to the usability problems identified. One limitation of this study is the dynamic structure of the Internet: some of the findings may change over time. However, since information overload was consistent among the three most frequently visited websites, it is likely the most important usability concern that needs to be addressed, especially with healthcare information. Information overload can make learning more complex and can cause potential users to make uninformed decisions in error. Since healthcare is an individual necessity, users’ differing needs require personal decisions.

To address the concern of information overload, recommender systems seem to be the most promising solution. Recommender systems originated in the area of e-commerce (e.g. Amazon.com), where meaningful items are suggested to the user based on their interests (Schafer, Konstan, & Riedl, 2001). These systems provide their users with personalized recommendations about services, products and information that may be of interest. A recommender system can be defined as an information filtering system that uses search engines to enhance the user experience by finding, personalizing and recommending items based on the user’s profile (Ricci, Rokach, & Shapira, 2011). The motivation behind recommender systems is that they have been effective in reducing information overload, shortening search time and retaining users (Perugini, Gonçalves, & Fox, 2004).
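To make the idea concrete, a minimal user-based collaborative filtering sketch is shown below. The users, article names, ratings and similarity choice are all hypothetical illustrations, not a system from the literature cited above:

```python
from math import sqrt

# Hypothetical user ratings (1-5) of T2DM educational articles
ratings = {
    "alice": {"diet": 5, "exercise": 4, "medication": 1},
    "bob":   {"diet": 4, "exercise": 5, "monitoring": 4},
    "carol": {"medication": 5, "monitoring": 4, "diet": 1},
}

def cosine_sim(u, v):
    """Cosine similarity; dot product taken over co-rated articles only
    (a common simplification for sparse rating data)."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[a] * v[a] for a in shared)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user, k=1):
    """Suggest up to k unread articles, weighted by similarity to other users."""
    scores = {}
    for other, other_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_sim(ratings[user], other_ratings)
        for article, rating in other_ratings.items():
            if article not in ratings[user]:
                scores[article] = scores.get(article, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # alice's only unread article is "monitoring"
```

A production system for a T2DM website would of course need far richer profiles (reading history, health goals, literacy level) and attention to the cold-start problem, but the filtering principle is the same.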

Recommender systems have also been studied for health educational systems on the internet (Fernandez-Luque,

Karlsen, & Vognild, 2009). Computer-tailoring health education systems (CTHES) are considered expert systems that simulate tasks performed by health educators and adapt to personal needs (Fernandez-Luque et al., 2009). They are used to tackle specific health concerns monitored by health professionals. Electronic and personal health records are the most common type of CTHES.

Another design solution that could be useful for healthcare websites is persuasive systems design. Persuasive

systems are interactive information technologies that use the computer as a channel for persuasion. Persuasive systems

are designed with the intent to modify attitudes or behavior without coercion or deception (Fogg, 2002; Oinas-

Kukkonen & Harjumaa, 2008). This type of system uses computer-human persuasion or computer-mediated

persuasion approach. With computer-human persuasion, the computer utilizes some patterns of interaction similar to

human communication to affect one’s behavior (Nass & Moon, 2000). On the other hand, computer-mediated

persuasion investigates how people are influencing others through different communications mediums such as blogs,

social media or emails (Guadagno & Cialdini, 2005). Multiple techniques (i.e. tailoring, personalization,

trustworthiness and suggestions) have been used to support persuasive system design. With T2DM websites, persuasive systems can be very useful for modifying a person’s behavior so that they become more aware of their health status and make informed decisions to prevent the onset of T2DM.

5. Future Work

The next phase of this research will investigate the use of recommender systems to reduce information overload by personalizing and adapting T2DM preventive care knowledge to individual differences. This process will be guided by the Human-Centered Design methodology, an iterative lifecycle that involves the targeted population throughout the design process, grounds the design rationale in identified user needs, and applies evaluation methods throughout the lifecycle to validate usability.


Extending this research provides a unique opportunity to design a health education website that accommodates a diverse community and is tailored to the diverse needs of its users. In future studies, usability evaluations will be critical to an effective interface design. It is hoped that such a system will not only decrease information overload but also motivate users to participate in improving their health and to make informed decisions that reduce the risk of T2DM and discourage other unhealthy behaviors.

Acknowledgements

The authors would like to thank Title III at North Carolina A&T State University for funding this research.

References

American Diabetes Association. (2013). Economic costs of diabetes in the US in 2012. Diabetes Care, 36(4), 1033-
1046.
American Diabetes Association. (2014). Statistics About Diabetes. Retrieved from http://www.diabetes.org/diabetes-basics/statistics/
Bedell, S. E., Agrawal, A., & Petersen, L. E. (2004). A systematic critique of diabetes on the world wide web for
patients and their physicians. International journal of medical informatics, 73(9), 687-694.
Boyle, J. P., Thompson, T. J., Gregg, E. W., Barker, L. E., & Williamson, D. F. (2010). Projection of the year 2050
burden of diabetes in the US adult population: dynamic modeling of incidence, mortality, and prediabetes
prevalence. Popul Health Metr, 8(1), 29.
Centers for Disease Control and Prevention. (2014). National Diabetes Statistics Report: Estimates of Diabetes and Its Burden in the United States. Atlanta, GA: U.S. Department of Health and Human Services.
Davis, D., & Jiang, S. (2015, March). Usability evaluation of web-based interfaces for Type 2 Diabetes Mellitus. Paper presented at the 2015 International Conference on Industrial Engineering and Operations Management (IEOM), Dubai, United Arab Emirates.
El-Gayar, O., Timsina, P., Nawar, N., & Eid, W. (2013). A systematic review of IT for diabetes self-management:
are we there yet? International journal of medical informatics, 82(8), 637-652.
Fernandez-Luque, L., Karlsen, R., & Vognild, L. K. (2009). Challenges and opportunities of using recommender
systems for personalized health education. Paper presented at the MIE.
Fogg, B. J. (2002). Persuasive technology: using computers to change what we think and do. Ubiquity,
2002(December), 5.
Fox, S., & Duggan, M. (2013). Pew Internet and American Life Project. Washington, DC: Pew Research Center.
Fox, S., Rainie, L., & Horrigan, J. (2006). The online health care revolution: How the Web helps Americans take better care of themselves. Pew Internet & American Life Project.
Geiss, L. S., Wang, J., Cheng, Y. J., et al. (2014). Prevalence and incidence trends for diagnosed diabetes among adults aged 20 to 79 years, United States, 1980-2012. JAMA, 312(12), 1218-1226. doi:10.1001/jama.2014.11494
Guadagno, R. E., & Cialdini, R. B. (2005). Online persuasion and compliance: Social influence on the Internet and
beyond.
ISO 9241-11. (1998). Ergonomic requirements for office work with visual display terminals (VDTs) - Part 11: Guidance on usability. Geneva: International Organization for Standardization. Also available from the British Standards Institute, London.
Kahn, S. (2003). The relative contributions of insulin resistance and beta-cell dysfunction to the pathophysiology of
type 2 diabetes. Diabetologia, 46(1), 3-19.
Lee, J. (2013). No. 1 Position in Google Gets 33% of Search Traffic [Study]. Retrieved March 3, 2016, from
https://searchenginewatch.com/sew/study/2276184/no-1-position-in-google-gets-33-of-search-traffic-study
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of social issues,
56(1), 81-103.
Nielsen, J. (1995). 10 Usability Heuristics for User Interface Design. Retrieved September 13, 2014, from
http://www.nngroup.com/articles/ten-usability-heuristics/
Nielsen, J. (2001). How to conduct a heuristic evaluation. Retrieved September 13, 2014, from http://www.gerrystahl.net/hci/he2.htm
Nielsen, J. (2012). Usability 101: Introduction to Usability. Retrieved September 13, 2014, from
http://www.nngroup.com/articles/usability-101-introduction-to-usability/
Oinas-Kukkonen, H., & Harjumaa, M. (2008). Towards deeper understanding of persuasion in software and information systems. Paper presented at the First International Conference on Advances in Computer-Human Interaction, 2008.
Perugini, S., Gonçalves, M. A., & Fox, E. A. (2004). Recommender systems research: A connection-centric survey.
Journal of Intelligent Information Systems, 23(2), 107-143.
Ricci, F., Rokach, L., & Shapira, B. (2011). Introduction to recommender systems handbook. Springer.
Schafer, J. B., Konstan, J. A., & Riedl, J. (2001). E-commerce recommendation applications. In Applications of Data Mining to Electronic Commerce (pp. 115-153). Springer.

Fig. 1. Participants' preference for key elements in website design.

Fig. 2. Illustration of a non-normal distribution for easy to learn.

Fig. 3. Illustration of a non-normal distribution for easy to use.

Fig. 4. Dwass-Steel-Critchlow-Fligner method results for the variable easy to use.

Fig. 5. Illustration of a non-normal distribution for navigation.

Fig. 6. Table of Dwass-Steel-Critchlow-Fligner method results for the variable navigation.

Fig. 7. Illustration of a non-normal distribution for look and feel.

Fig. 8. Table of Dwass-Steel-Critchlow-Fligner method results for the variable look and feel.

Fig. 9. Histogram for layout and organization illustrates a non-normal distribution.

Fig. 10. Table of Dwass-Steel-Critchlow-Fligner method results for the variable layout and organization.

Fig. 11. Histogram for clarity of labels and buttons illustrates a non-normal distribution.

Fig. 12. Histogram for graphics illustrates a non-normal distribution.

Fig. 13. Table of Dwass-Steel-Critchlow-Fligner method results for the variable graphics.

Fig. 14. Table of Dwass-Steel-Critchlow-Fligner method results for the variable clarity of labels and buttons.


Table 1 Description of usability components, adapted from (Nielsen, 2012).

Component      Description
Learnability   How easy is it for users to accomplish basic tasks the first time they encounter the design?
Efficiency     Are users having difficulty completing tasks that should be rather simple?
Memorability   When users return to the design after a period of not using it, how easily can they reestablish proficiency?
Errors         How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
Satisfaction   How pleasant is it to use the design?

Table 2 Partial list of the tasks performed per website.

Interface   Tasks
ADA         1. What are the risk factors for developing T2DM?
            2. Locate an online support group for patients recently diagnosed.
            3. What is required to prevent T2DM?
WebMD       1. What are the risk factors for developing T2DM?
            2. What is required to prevent T2DM?
            3. How does exercise affect blood sugar?
NDEP        1. What are the symptoms of T2DM?
            2. Locate the 50 ways to prevent T2DM.
            3. Determine if you are at risk for T2DM.

Table 3 Descriptive statistics for number of mouse clicks, completion time (secs), and number of errors.

Interface   Variable   Mean      Std Dev   Min      Max
ADA         Clicks     79.1      38.25     22       191
            Time       885.2     276.88    517.77   1330.6
            Errors     2.8       2.82      0        9
WebMD       Clicks     65.95     15.47     37       92
            Time       880.44    258.38    330      1407.6
            Errors     1.3       1.95      0        7
NDEP        Clicks     115.85    63.80     33       242
            Time       1208.1    508.57    480      2140.2
            Errors     5         1.26      3        8

Table 4 Pearson r correlation analysis results (N = 60).

                   Errors   Time                     Clicks
Errors             -        r(58) = 0.21, p = 0.11   r(58) = 0.09, p = 0.51
Completion Time             -                        r(58) = 0.62, p < .0001
Clicks                                               -

Table 5 SNK test results for completion time, number of clicks, and number of errors.

Variable           SNK Grouping   Means     N    Interface
Completion Time    A              1208.10   20   NDEP
                   B              885.20    20   ADA
                   B              880.44    20   WebMD
Number of Clicks   A              115.85    20   NDEP
                   B              79.10     20   ADA
                   B              65.95     20   WebMD
Number of Errors   A              5         20   NDEP
                   B              2.8       20   ADA
                   C              1.3       20   WebMD

Means with the same letter are not significantly different.
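The Pearson product-moment correlations reported in Table 4 can be reproduced with a few lines of code. The sketch below implements the standard formula in pure Python; the helper name `pearson_r` and the sample arrays are illustrative placeholders, not the study's raw session data.

```python
# Pearson product-moment correlation (pure Python, no external dependencies).
# The arrays below are illustrative placeholders, NOT the study's raw data.
from math import sqrt

def pearson_r(x, y):
    """Return the Pearson correlation coefficient between x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))     # co-deviation
    sx = sqrt(sum((a - mx) ** 2 for a in x))                 # sqrt of SS_x
    sy = sqrt(sum((b - my) ** 2 for b in y))                 # sqrt of SS_y
    return cov / (sx * sy)

# Example: completion time vs. number of clicks for five hypothetical sessions
time = [520, 880, 910, 1200, 2100]
clicks = [22, 65, 80, 115, 240]
print(round(pearson_r(time, clicks), 2))
```

In practice a statistical package would also report the p-value (as Table 4 does); the coefficient itself is exactly this ratio of the co-deviation to the product of the two standard-deviation terms.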

Table 6 Post-test questionnaire results.

Question                                      Scale               ADA    WebMD   NDEP
I found the system unnecessarily complex      Strongly Disagree   0%     15%     0%
                                              Disagree            20%    25%     15%
                                              Neutral             45%    35%     45%
                                              Agree               25%    15%     35%
                                              Strongly Agree      0%     10%     5%
I thought the system was easy to use          Strongly Disagree   0%     5%      0%
                                              Disagree            5%     15%     40%
                                              Neutral             20%    25%     45%
                                              Agree               40%    65%     10%
                                              Strongly Agree      35%    15%     5%
I was able to navigate the website            Strongly Disagree   0%     10%     5%
features                                      Disagree            5%     0%      35%
                                              Neutral             10%    30%     50%
                                              Agree               50%    50%     15%
                                              Strongly Agree      35%    10%     5%
I would imagine that it would be easy         Strongly Disagree   5%     15%     10%
for most people to learn the interface        Disagree            5%     5%      30%
                                              Neutral             25%    25%     40%
                                              Agree               40%    40%     10%
                                              Strongly Agree      25%    15%     10%
I thought there was too much                  Strongly Disagree   30%    30%     5%
inconsistency in the website                  Disagree            35%    30%     35%
                                              Neutral             25%    20%     15%
                                              Agree               5%     15%     35%
                                              Strongly Agree      5%     5%      10%
I found the website very difficult to use     Strongly Disagree   30%    20%     30%
                                              Disagree            50%    30%     25%
                                              Neutral             10%    40%     0%
                                              Agree               10%    5%      25%
                                              Strongly Agree      0%     5%      20%
I need technical assistance to be able        Strongly Disagree   60%    35%     0%
to use this website                           Disagree            35%    50%     25%
                                              Neutral             5%     15%     0%
                                              Agree               0%     0%      60%
                                              Strongly Agree      0%     0%      15%
I thought the various functions in this       Strongly Disagree   0%     5%      0%
website were well integrated                  Disagree            10%    15%     40%
                                              Neutral             15%    20%     35%
                                              Agree               50%    40%     0%
                                              Strongly Agree      25%    20%     10%
I felt very confident using the website       Strongly Disagree   0%     10%     0%
                                              Disagree            5%     10%     65%
                                              Neutral             15%    20%     0%
                                              Agree               35%    35%     35%
                                              Strongly Agree      45%    25%     0%
