
Assessing the Reliability and Validity of an Instrument for Measuring Patient Use of Telemedicine in Light of the Technology Acceptance Model
Arwa Althumairi (aalthumairi@iau.edu.sa)
Imam Abdulrahman Bin Faisal University, College of Public Health, Health Information Management and Technology Department

Research article

Keywords: Telemedicine, Acceptance, Survey, Validity, Patient

DOI: https://doi.org/10.21203/rs.3.rs-618524/v1

License: This work is licensed under a Creative Commons Attribution 4.0 International License.

Abstract
Background:

Telemedicine has been used as a supportive service to assure that quality healthcare services are
provided in a timely manner. The growing use of telemedicine, especially for e-consultations, is increasing
patient access to care, especially during pandemics or emergencies. However, a valid and reliable tool is
needed to measure its use. This study aimed to evaluate the reliability and validity of an instrument measuring patient acceptance of telemedicine, based on the dimensions of the Technology Acceptance Model (TAM).

Methods:

The researcher designed an instrument based on TAM dimensions/principles. The instrument was
distributed to 244 participants via an online link shared on social media, such as Twitter and
WhatsApp, from August 2020 to December 2020. Content validity and exploratory and confirmatory
factor analysis were used to examine the construct validity, and Cronbach’s alpha test was employed to
examine the reliability of the instrument.

Results:

A total of 244 participants took part in the study, and 22 items had a factor loading greater than 0.60. The assessment suggested 6 factors to explain the variation in the participant responses, with 75% of the variation being explained by these factors. The confirmatory analysis agreed with the exploratory factors, suggesting that the number of items and latent factors had a CMIN/DF equal to 3.021 and an RMSEA of 0.095, with a 90% CI of 0.086–0.105.

Conclusion:

The results of the study suggest that the measure shows acceptable validity and reliability.

Background
Telemedicine can be defined as any means of communication between patients and healthcare providers
to receive healthcare services at a distance, by text, audio or video; telemedicine can be used for
diagnosis, treatment or health monitoring (Kato-Lin & Thelen, 2020; Wootton, 2001). There has been an
increase in the use of telemedicine during crises such as the COVID-19 pandemic (Mann et al., 2020). A
number of healthcare services have been dedicated to COVID-19 survival, diagnosis and treatment; hence,
services have been reduced for healthcare needs for other conditions (Mehrotra et al., 2020).

Healthcare management and care delivery have undergone a transformational change to reduce crowding and hospital visits for conditions that can be diagnosed and treated at a distance. Healthcare providers must attain the essential skills to use technology for distance diagnosis, treatment and further clinical intervention (Garcia-Huidobro et al., 2020). More importantly, user satisfaction and willingness to use the services are among the essential elements for the success of healthcare policies that aim to incorporate technology into healthcare services (Mair & Whitten, 2000; MoH, 2019).

Therefore, a validated and reliable tool to assess patient use of telemedicine is needed, especially in Saudi Arabia, where the Ministry of Health (MOH) 2030 goals are to adopt technology in unified health records and to implement distance care to ensure that services are accessible to the whole population (MoH2, 2019). The
available validated tools used to assess telemedicine have mainly been for healthcare providers (Keely et
al., 2013; Vidal-Alaball et al., 2020; Yen et al., 2014). This study aimed to create and validate a survey to
assess end users’ intention to use telemedicine.

Methods
Instrument design
The researcher performed a substantial literature review to identify the different dimensions used in
assessing the use of telemedicine in different countries (Barsom et al., 2020; Chau & Hu, 2002; Dhukaram
et al., 2011; Di Cerbo et al., 2015; Peng et al., 2020). The instrument items were adapted from these
findings. The survey is attached in the supplementary file.

The questionnaire was generated in two main steps. The first step was to select questions related to assessing intention to use e-consultations. This was performed by reviewing the literature; the majority of the literature followed the Technology Acceptance Model (TAM) in generating questionnaires with specific factors: perceived usefulness, which is explained by motivation, attitudes toward use, trust, and perceived ease of use (Chau & Hu, 2002; Gurupur et al., 2017; Vidal-Alaball et al., 2020) (Figure 1). Some elements, such as social factors, were added as important elements to address when assessing patients’ behavioural intention to use e-consultations (Dhukaram et al., 2011; Vidal-Alaball et al., 2020). The study model was built as presented in Figure 1.

The second step was to select the response scale. Based on the literature, a commonly used scale is the 5-point Likert scale: 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree and 5 = strongly agree (Bakken et al., 2006; Bergmo et al., 2005; Gurupur et al., 2017; Keely et al., 2013).

Face and content validity


A face and content validity review was performed to ensure that the items matched the study objectives. The reviewers were academic experts in the field of healthcare quality. The questions underwent some grammatical modifications, and two study items were rephrased: “I would characterize e-consultations treated providers as honest” was rephrased to “The treating physicians in the e-consultation services are honest”, and “My GP does not offer e-consultation” was rephrased to “The physician/hospital does not offer e-consultation”.
The questions were translated between English and Arabic using forward and backward translation by an accredited translation agency. The questions were then reviewed again by the academic reviewers, and a sample of patients was surveyed to ensure that the questions were clear, after which the questionnaire was approved and ready for use.

The questionnaire was distributed to patients through social media, such as Twitter and WhatsApp, using QuestionPro software.

Steps of construct validity assessment using factor analysis

1. Exploratory factor analysis (EFA): general factor loadings to decide which items relate to each factor, using principal component analysis with varimax rotation (a minimal code sketch follows this list).
2. Confirmatory factor analysis (CFA): to assess the stability of the model suggested in step 1. The confirmatory analysis was conducted with structural equation modelling (SEM) using AMOS. The SEM was estimated using the maximum likelihood (ML) robust estimation method.
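
As a rough illustration of step 1, the sketch below shows how a comparable exploratory factor analysis could be run in Python with the open-source factor_analyzer package; the study itself used SPSS, so the file name, column layout and extraction method here are illustrative assumptions only.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical matrix of Likert responses (1-5), one column per questionnaire item.
responses = pd.read_csv("telemedicine_responses.csv")

# Six factors with varimax rotation, mirroring the solution retained in this study.
# factor_analyzer's "principal" extraction is used here as a stand-in for the
# principal component extraction reported for SPSS.
fa = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
fa.fit(responses)

# Inspect the rotated loadings; items loading below 0.60 on every factor
# would be candidates for removal.
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
print(loadings.round(3))
```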

The model fit criteria were assessed based on the following indices:

Normed chi-square (χ²/df): recommended to be less than 3 to be considered a good fit.

RMSEA: the root mean-square error of approximation; a value between 0.05 and 0.08 suggests reasonable fit, a value greater than 0.10 suggests poor fit, and values up to 0.08 (Browne & Cudeck, 1992; Whittaker, 2016) or 0.10 (Whittaker, 2016) are considered acceptable.

The comparative fit index (CFI) and Tucker–Lewis index (TLI) compare the research model with the baseline model; a value greater than 0.9 indicates reasonably good fit (Bentler, 1990).
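
For reference, these indices have the following standard definitions, where χ²_M and df_M refer to the hypothesized model, χ²_B and df_B to the baseline (independence) model, and N to the sample size:

$$\mathrm{RMSEA}=\sqrt{\frac{\max(\chi^2_M-df_M,\,0)}{df_M\,(N-1)}},\qquad \mathrm{CFI}=1-\frac{\max(\chi^2_M-df_M,\,0)}{\max(\chi^2_M-df_M,\;\chi^2_B-df_B,\;0)},\qquad \mathrm{TLI}=\frac{\chi^2_B/df_B-\chi^2_M/df_M}{\chi^2_B/df_B-1}$$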
Reliability: The determination of reliability was undertaken using Cronbach’s alpha coefficient to test
the internal consistency of the responses for each dimension and the entire instrument.
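
A minimal sketch of this reliability computation is given below, assuming a response matrix with one row per participant and one column per item; the function and variable names are illustrative and not part of the study’s SPSS workflow.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    scale_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / scale_variance)

# Example usage: alpha for the full 22-item scale, then per dimension, e.g.
# cronbach_alpha(responses.values) or cronbach_alpha(responses[trust_items].values)
```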

Statistical analysis
The sample size was estimated to be 384 based on the following formula:

$$n = \frac{t^{2}\,p(1-p)}{e^{2}}$$

where p is the population proportion, conservatively set to 0.5 for a target population estimated at more than 1 million; e is the margin of error, 0.05; and t is the critical value, 1.96 (Taherdoost, 2017). For factor analysis studies, it is acceptable to have a sample of more than 50 participants, which can be estimated as the number of study items/independent variables (22) times 10 participants, so a total of 220 participants is sufficient to run the factor analysis statistics (Kotrlik & Higgins, 2001). In this study, 244 participants were included. Data are reported with

95% confidence intervals. All analyses were performed using SPSS IBM software, version 27, and AMOS
software for confirmatory analysis.
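
As a check, substituting the conventional values (p = 0.5, e = 0.05, t = 1.96) into the formula above reproduces the target of 384, and the items-based rule gives the 220-participant minimum:

$$n=\frac{1.96^{2}\times 0.5\times(1-0.5)}{0.05^{2}}=\frac{0.9604}{0.0025}\approx 384,\qquad 22\ \text{items}\times 10 = 220 \le 244.$$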

Results
Patient characteristics
A total of 244 participants took part in this study; the majority were female (84%) and lived in the Eastern Province (Table 1). The most common educational level was a bachelor’s degree, and a majority of those who were employed worked in the government sector. Only 40% of the participants had used e-consultations.

Exploratory factor analysis


A total of 31 questions were drawn from the literature and included in this study. After running the exploratory factor analysis, only 22 items were retained; the list of items appears in Table 2.

The construct validity was assessed using factor analysis with varimax rotation to describe the total
variation explaining the intention to utilize telemedicine. The Kaiser criterion (Kaiser, 1960) was used for
31 items, with the eigenvalue set to be greater than 1; six factor components were found to have
eigenvalues greater than 1 (Table 3, Figure 2).
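
The eigenvalue-based retention step can be illustrated with the same kind of hypothetical response matrix used in the Methods sketch (here a 31-column matrix of the candidate items, an assumption for illustration); eigenvalues below 1 correspond to the factors discarded in the scree plot (Figure 2).

```python
from factor_analyzer import FactorAnalyzer

# Unrotated solution over all 31 candidate items, used only to inspect eigenvalues.
fa_all = FactorAnalyzer(rotation=None)
fa_all.fit(responses_31_items)

eigenvalues, _ = fa_all.get_eigenvalues()
n_retained = int((eigenvalues > 1).sum())  # Kaiser criterion: keep eigenvalues > 1
print(n_retained, eigenvalues.round(2))
```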

Confirmatory factor analysis


The CFA assessing the fit of the model suggested by the EFA showed that the model had a χ² of 513.5 and 170 degrees of freedom (df) with a sample of 252.

The model fit criteria were met for some of the indices.

1. The CMIN/DF was equal to 3.021, which was within the acceptable model fit criteria (Yen et al., 2014); a worked check follows this list.
2. The RMSEA (0.095), with a 90% CI of 0.086–0.105, although not considered a good fit, was considered acceptable (Whittaker, 2016).
3. The CFI (0.858) and TLI (0.825) were very close to the reasonable fit criteria (Bentler, 1990).
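
As a check on item 1, the reported χ² and df reproduce the CMIN/DF value directly, and inserting them into the RMSEA formula given in the Methods yields a value in the same range (the small difference from the reported 0.095 presumably reflects rounding of the inputs):

$$\frac{\chi^{2}}{df}=\frac{513.5}{170}\approx 3.02,\qquad \mathrm{RMSEA}=\sqrt{\frac{513.5-170}{170\,(252-1)}}\approx 0.09.$$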

Additionally, regarding the loading of variables, all of the study items had a factor loading greater than 0.60 and a correlation level below 0.2 (Figure 3).

Reliability
The Cronbach’s alpha of the 22 items was 0.844, with a value greater than 0.7 for each component factor.

Discussion
To the best of our knowledge, this study is the first to assess tools for measuring patient acceptance of
telemedicine. The TAM model has been used differently in the literature, and standardized tools are
essential for comparability purposes, so that policies can be devised to improve health services. The assessment of the tool showed that the number of items and the number of factors were within the acceptable criteria; all of the items that remained in the model had a factor loading greater than 0.60, and the six retained factors explained 75% of the variation in the items. Furthermore, the confirmatory analysis showed that the suggested model fulfils the majority of the model fit criteria.

Motivations, attitude towards use, trust, ease of use, social factors, and intention to use were acceptable
criteria for assessing patient acceptance of telemedicine. These factors were similarly supported by earlier studies (Bakken et al., 2006; Vidal-Alaball et al., 2020; Yen et al., 2014).

A limitation of the study is that the model fit was close to the values suggested in the literature but was not ideal. In addition, the sample was not randomly selected; it was based on nonprobability sampling. Further research could be performed to assess the validity of the questionnaire using random sampling methods.

Conclusions
In this study, the author combined the principles of TAM to assess telemedicine use among patients and
to propose an instrument for measuring and evaluating intention and actual telemedicine use among
patients. This new instrument can be used to assess patients’ intention to use telemedicine, especially
during epidemiological crises. The results of the study suggest that the measure shows acceptable
validity and reliability using both EFA and CFA.

Abbreviations
TAM = Technology Acceptance Model

EFA = Exploratory Factor Analysis

CFA = Confirmatory Factor Analysis

RMSEA = Root Mean Square Error of Approximation

CI = Confidence Interval

CFI = Comparative Fit Index

TLI = Tucker–Lewis Index

SEM = Structural Equation Modelling


ML = Maximum Likelihood

Declarations
Ethics approval and consent to participate
IRB approval was attained by the office of the Vice President for Research and Higher Studies, Imam
Abdulrahman Bin Faisal University. The Institutional Review Board number is IRB-2020-03-335, approved on 29/10/2020. At the beginning of the questionnaire, the participants were asked for their consent to
participate in the study and use their data for research purposes. Additionally, the data were aggregated,
anonymized and saved on a secure server. Any cells that constituted fewer than five cases were hidden to avoid breaching case identity. The data are available upon request.

Declaration of interest

None. I declare that this paper has not been submitted for publication previously.

Funding

This research did not receive any specific grants from funding agencies in the public, commercial, or not-
for-profit sectors.

Authors' contributions

The author was responsible for designing the tools, collecting the data, the analysis and the paper
manuscript.

Acknowledgements

I am thankful for the English language editing of the manuscript to meet the publication requirements.

References
1. Bakken, S., Grullon-Figueroa, L., Izquierdo, R., Lee, N.-J., Morin, P., Palmas, W., Teresi, J., Weinstock, R.
S., Shea, S., Starren, J., & for the IDEATel Consortium. (2006). Development, Validation, and Use of
English and Spanish Versions of the Telemedicine Satisfaction and Usefulness Questionnaire.
Journal of the American Medical Informatics Association, 13(6), 660–667.
https://doi.org/10.1197/jamia.M2146
2. Barsom, E. Z., Jansen, M., Tanis, P. J., van de Ven, A. W. H., Blussé van Oud-Alblas, M., Buskens, C. J.,
Bemelman, W. A., & Schijven, M. P. (2020). Video consultation during follow up care: Effect on quality
of care and patient- and provider attitude in patients with colorectal cancer. Surgical Endoscopy.
https://doi.org/10.1007/s00464-020-07499-3

3. Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107(2),
238.
4. Bergmo, T. S., Kummervold, P. E., Gammon, D., & Dahl, L. B. (2005). Electronic patient–provider
communication: Will it offset office visits and telephone consultations in primary care? International
Journal of Medical Informatics, 74(9), 705–710. https://doi.org/10.1016/j.ijmedinf.2005.06.002
5. Browne, M. W., & Cudeck, R. (1992). Alternative Ways of Assessing Model Fit. Sociological Methods &
Research, 21(2), 230–258. https://doi.org/10.1177/0049124192021002005
6. Chau, P. Y. K., & Hu, P. J.-H. (2002). Investigating healthcare professionals’ decisions to accept
telemedicine technology: An empirical test of competing theories. Information & Management, 39(4),
297–311. https://doi.org/10.1016/S0378-7206(01)00098-2
7. Dhukaram, A. V., Baber, C., Elloumi, L., Beijnum, B. van, & Stefanis, P. D. (2011). End-User perception
towards pervasive cardiac healthcare services: Benefits, acceptance, adoption, risks, security, privacy
and trust. 2011 5th International Conference on Pervasive Computing Technologies for Healthcare
(PervasiveHealth) and Workshops, 478–484.
https://doi.org/10.4108/icst.pervasivehealth.2011.246116
8. Di Cerbo, A., Morales-Medina, J. C., Palmieri, B., & Iannitti, T. (2015). Narrative review of telemedicine
consultation in medical practice. Patient Preference and Adherence, 9, 65–75.
https://doi.org/10.2147/PPA.S61617
9. Garcia-Huidobro, D., Rivera, S., Chang, S. V., Bravo, P., & Capurro, D. (2020). System-Wide Accelerated
Implementation of Telemedicine in Response to COVID-19: Mixed Methods Evaluation. Journal of
Medical Internet Research, 22(10), e22146. https://doi.org/10.2196/22146
10. Gurupur, V., Shettian, K., Xu, P., Hines, S., Desselles, M., Dhawan, M., Wan, T. T., Raffenaud, A., &
Anderson, L. (2017). Identifying the readiness of patients in implementing telemedicine in northern
Louisiana for an oncology practice. Health Informatics Journal, 23(3), 181–196.
https://doi.org/10.1177/1460458216639740
11. Kaiser, H. F. (1960). The application of electronic computers to factor analysis. Educational and
Psychological Measurement, 20(1), 141–151.
12. Kato-Lin, Y.-C., & Thelen, S. T. (2020). Telemedicine for Acute Conditions During COVID-19: A
Nationwide Survey Using Crowdsourcing. Telemedicine and E-Health.
https://doi.org/10.1089/tmj.2020.0351
13. Keely, E., Liddy, C., & Afkham, A. (2013). Utilization, benefits, and impact of an e-consultation service
across diverse specialties and primary care providers. Telemedicine Journal and E-Health: The
Official Journal of the American Telemedicine Association, 19(10), 733–738.
https://doi.org/10.1089/tmj.2013.0007
14. Kotrlik, J., & Higgins, C. (2001). Organizational research: Determining appropriate sample size in survey research. Information Technology, Learning, and Performance Journal, 19(1), 43.

15. Mair, F., & Whitten, P. (2000). Systematic review of studies of patient satisfaction with telemedicine.
BMJ, 320(7248), 1517–1520. https://doi.org/10.1136/bmj.320.7248.1517
16. Mann, D. M., Chen, J., Chunara, R., Testa, P. A., & Nov, O. (2020). COVID-19 transforms health care
through telemedicine: Evidence from the field. Journal of the American Medical Informatics
Association: JAMIA, 27(7), 1132–1135. https://doi.org/10.1093/jamia/ocaa072
17. Mehrotra, A., Ray, K., Brockmeyer, D. M., Barnett, M. L., & Bender, J. A. (2020). Rapidly converting to
“virtual practices”: Outpatient care in the era of Covid-19. NEJM Catalyst Innovations in Care Delivery,
1(2).
18. MoH. (2019). National E-Health Strategy—Ministry Vision “e-Health.”
https://www.moh.gov.sa/en/Ministry/nehs/Pages/default.aspx
19. MoH2. (2019). Healthcare Transformation Strategy.
https://www.moh.gov.sa/en/Ministry/vro/Pages/Health-Transformation-Strategy.aspx
20. Peng, Y., Yin, P., Deng, Z., & Wang, R. (2020). Patient–Physician Interaction and Trust in Online Health
Community: The Role of Perceived Usefulness of Health Information and Services. International
Journal of Environmental Research and Public Health, 17(1), 139.
https://doi.org/10.3390/ijerph17010139
21. Taherdoost, H. (2017). Determining sample size; how to calculate survey sample size. International
Journal of Economics and Management Systems, 2.
22. Vidal-Alaball, J., Flores Mateo, G., Garcia Domingo, J. L., Marín Gomez, X., Sauch Valmaña, G., Ruiz-
Comellas, A., López Seguí, F., & García Cuyàs, F. (2020). Validation of a Short Questionnaire to
Assess Healthcare Professionals’ Perceptions of Asynchronous Telemedicine Services: The Catalan
Version of the Health Optimum Telemedicine Acceptance Questionnaire. International Journal of
Environmental Research and Public Health, 17(7), 2202. https://doi.org/10.3390/ijerph17072202
23. Whittaker, T. (2016). Structural equation modeling. Applied Multivariate Statistics for the Social
Sciences, 639–733.
24. Wootton, R. (2001). Telemedicine. BMJ, 323(7312), 557–560.
https://doi.org/10.1136/bmj.323.7312.557
25. Yen, P.-Y., Sousa, K. H., & Bakken, S. (2014). Examining construct and predictive validity of the Health-
IT Usability Evaluation Scale: Confirmatory factor analysis and structural equation modeling results.
Journal of the American Medical Informatics Association, 21(e2), e241–e248.

Tables

Table 1: The distribution of patient characteristics

Characteristics N=224 %

Age Mean=37.4 SD=13.4

Gender

Male 35 15.6

Female 188 83.9

Region

Makkah 25 11.2

Riyadh 49 21.9

Eastern Province 145 64.7

Other 5 2.2

Educational level

Diploma and less 35 15.6

Bachelor’s 141 62.9

Postgraduate 48 21.4

Occupational level

Government 83 37.1

Semi-government 14 6.3

Private sector 34 15.2

Not employed 93 41.5

Monthly family income

Less than 5,000 SR 18 8.0

From 5,001 to 10,000 SR 43 19.2

From 10,001 to 15,000 SR 57 25.4

From 15,001 to 20,000 SR 33 14.7

More than 20,000 SR 73 32.6

Do you suffer from any chronic diseases

Yes 41 18.3

No 183 81.7

Have you used e-consultation

Yes 89 39.7

No 135 60.3

Table 2: Questions in the Intention of Telemedicine Utilization

Number Questions/Item

1a To prevent a visit to the physician during the COVID-19 pandemic

2a To be able to contact a physician about my health concerns at any time

3a To save on travelling time

4 To be able to ask questions that might arise after a visit to the physician

5 To seek a second opinion

6 To ask how I can best cope with my health problem

7a To ask questions about medication use (for example side effects)

8 To help relieve stresses and worries about my symptoms

9 To reduce my uncertainty

10 To decide whether a visit to the physician is necessary

11 To improve my wellbeing (motivation)

12 The treating physicians in the e-consultation services are honest

13 I believe that the health service provided by e-consultations platform is useful

14 Physicians on the e-consultations platform have medical qualifications.

15 The consultation or diagnosis provided by doctors on e-consultations platforms is reliable

16 In my opinion, physicians on the e-consultations platform are trustworthy.

17 Family and friends influence my decision of using the e-consultations platforms.

18 Social media influencers influence my decision of using the e-consultations platforms.

19 National policies and laws influence my decision of using the e-consultations platforms.

20 I intend to use e-consultations platforms to consult health issues when needed in the
future.

21 I plan to use e-consultations platforms to consult health issues when needed in the future.

22 I am willing to explore new applications/systems for online consultations

23 a The physician/hospital does not offer e-consultation

24 The use of the Internet and e-mail is difficult

25 a The use of e-consultation is not refunded by the insurer

26 a The preference of a face-to-face visit to the physician

27 a Doubting the reliability of information received through e-consultation

28 a The high risk of providing personal information to e-consultations providers

29 The application/software used for the online consultation is difficult to use

30 Not owning a smart phone or home internet network

31 Worried about smart phone and internet bills

a Nine items were deleted because their loading values were below the average.

Table 3: Varimax rotation of the six factor components for the Intention of Utilizing Telemedicine Questionnaire

Item number, followed by its loading on one of the six factors: Factor 1 Motivation; Factor 2 Attitude toward use; Factor 3 Trust; Factor 4 Social factors; Factor 5 Behavioural intention to use; Factor 6 Ease of use

1 0.754

2 0.789

3 0.756

4 0.829

5 0.821

6 0.796

7 0.776

8 0.836

9 0.557

10 0.852

11 0.734

12 0.815

13 0.759

14 0.716

15 0.825

16 0.744

17 0.842

18 0.810

19 0.825

20 0.842

21 0.789

22 0.663

Eigenvalue 5.8 3.3 2.4 1.6 1.5 1.2

Percentage of variance 15.0 13.5 12.5 12.1 9.7 8.9

Footnote: Extraction Method is Principal Component Analysis.

Rotation Method: Varimax with Kaiser Normalization with Eigenvalue above 1.

Nine items were excluded due to low loading values.

Figures

Figure 1

Instrument model design

Figure 2

Scree plot of the factors and their eigenvalues

Figure 3

CFA factor loadings and correlation values

