
Computers & Education 181 (2022) 104447


Patterns of cognitive returns to Information and Communication Technology (ICT) use of 15-year-olds: Global evidence from a Hierarchical Linear Modeling approach using PISA 2018
Aditi Bhutoria a,b,*, Nayyaf Aljabri c

a Indian Institute of Management Calcutta, Diamond Harbour Road, Kolkata, 700104, India
b Strategic Advisor, EdTech Hub, UK
c Education and Training Evaluation Commission, King Khalid Bin Abdulaziz Road, Al-Nakheel Al-Gharbi District, Riyadh, 11683, Saudi Arabia

A R T I C L E I N F O

Keywords:
Information and Communication Technology (ICT)
Programme for International Student Assessment (PISA)
Hierarchical Linear Modeling (HLM)
Concave returns
Organisation for Economic Co-operation and Development (OECD)

A B S T R A C T

Existing literature shows varying impacts of Information and Communication Technology (ICT) on learning outcomes depending on the type and quality of technology used. However, current research on the optimal level of ICT use for the cognitive development of students is scarce and has remained country-specific, primarily focusing on developed economies. This paper undertakes a cross-country comparison across 79 nations, investigating ICT use and its association with cognitive gain patterns as determined by the reading, mathematics, and science test scores of 15-year-olds. The Programme for International Student Assessment (PISA) 2018 dataset, collected by the Organisation for Economic Co-operation and Development (OECD), is analyzed using a two-stage regression approach. The first stage involves a three-level Hierarchical Linear Model (HLM) accounting for data nested at the student, school, and country levels. The second stage of the empirical model involves a heterogeneity analysis to evaluate variance in ICT use patterns across different groups of countries, clustered on the basis of their level of ICT development. The results show a positive impact of ICT engagement on students' test scores across all subjects. However, returns to ICT use tend to start diminishing after students' engagement crosses a medium threshold of using ICT several times within a week. Furthermore, the heterogeneity analysis reveals conspicuous diminishing patterns in ICT use irrespective of the economic status of the students. Cross-country comparisons show that diminishing returns to ICT use are more prominent in countries with well-developed ICT infrastructure than in less-developed ones. Where diminishing returns hold, excessive use of ICT in education is not an optimal choice, and significant cognitive gains can be achieved by exploiting the complementarity between traditional learning techniques and ICT-based learning in different blended settings.

1. Introduction

Information and Communication Technology (ICT) has occupied a key role in the education sector over the past few years, across both developed and developing countries. The COVID-19 pandemic has amplified policymakers' focus on technology, giving a much-needed push to the role of online learning tools in the education sector. ICTs are integrated into the teaching and learning processes in

* Corresponding author.
E-mail addresses: abhutoria@iimcal.ac.in (A. Bhutoria), nraljabri@etec.gov.sa (N. Aljabri).

https://doi.org/10.1016/j.compedu.2022.104447
Received 24 June 2021; Received in revised form 9 January 2022; Accepted 16 January 2022
Available online 20 January 2022
0360-1315/© 2022 Elsevier Ltd. All rights reserved.

List of abbreviations

CAL Computer-Assisted Learning


HLM Hierarchical Linear Modeling
IDI ICT Development Index
ICT Information and Communication Technology
ILSA International Large-Scale Assessments
LMIC Low and middle-income countries
OECD Organisation for Economic Co-operation and Development
PISA Programme for International Student Assessment
RCT Randomized Controlled Trial
TIMSS Trends in International Mathematics and Science Study

education in several ways. Educators and policymakers have persistently advocated ICT use for instruction, administration, and communication, with numerous implications for classroom management, teaching practices, pedagogical approaches, and time use. Students' engagement with ICT (both in school and at home) can affect what they learn, their cognitive processes, and their overall well-being. As a result, ICT competence is increasingly recognized as an important skill that students need to acquire if they are to flourish in the present digital age. The OECD's 2015 annual report highlights the importance of bolstering students' ability to navigate digital devices, showing a positive association between access to and use of computers in schools and academic achievement in terms of test scores (Avvisati, 2015). Economies with high GDP and internet proliferation levels have also been found to be associated with a greater number of innovations in educational technology, along with higher test scores in standardized international assessments (Kozma, 2005; Krieg, 2019).
Given the ever-increasing reach and importance of ICT, it becomes imperative to investigate the role of ICT, how it transforms into students' learning outcomes, and whether excessive ICT usage has any adverse effects. The EdTech Hub, in its Insights Report (Nicolai et al., 2020), has identified the need to dispel existing myths regarding the use of ICTs in education. For instance, it is widely believed that technology plus content amounts to learning, and that uninhibited use of ICT can continually reap benefits. However, recent studies contradict this notion and show that the gains from ICT use in education may not proliferate beyond a certain threshold. These studies show that increasing ICT use, in terms of engaging in computer-assisted learning in academia, does not provide additional returns (Bettinger et al., 2020; Chiu, 2020).
Determining the pattern of returns from ICT use is also pertinent from a cost-efficiency perspective. Education economists have often argued that the per-pupil cost should be a paramount factor in choosing the optimum level of ICT tools and exposure (Piper et al., 2016). The 2008 financial crisis and the current COVID-19 pandemic have only exacerbated the scarcity of resources and inequality of educational access (Lake & Makori, 2020). Developing countries, in particular, are grappling with a trade-off between minimizing costs and maximizing the efficiency of prescribed ICT tools as their availability, and students' exposure to them, increase. Hence, investigating ICT return patterns becomes imperative to understand whether returns are increasing, decreasing, or constant, so as to implement efficient education policies, especially for the benefit of marginalized groups. As a large number of such students are in low- and middle-income countries (LMICs), current research needs to consider the educational anxieties and apprehensions of such geographies, providing policy inputs for efficient ICT proliferation and use. A recent World Bank report aptly described the presently renewed focus on ICT in education, stating that the pandemic has shifted the objective of education technology from disrupting the status quo to fostering inclusion and reducing inequity (Wilichowski and Cobo, 2021).
The first step toward a digitally inclusive global education system is to broaden our existing knowledge base of ICT and computer-
based learning. Much of the current ICT-based literature uses international large-scale assessments (ILSA) because of their big data
characteristics, representative sampling, policy salience, and availability of specialized ICT questionnaires. Results from these as­
sessments are used as valid evidence to steer national policy reforms in participating countries (Breakspear, 2012). However, such
studies do not look at the heterogeneity or variability in the ICT returns, often leading to the oversimplification of ICT-based findings.
Furthermore, the unavailability of ICT-specific questionnaire responses for various countries leads to a systematic absence of certain
geographies in the ICT-research agenda. For example, in the 2018 round of the Programme for International Student Assessment
(PISA), students from around 29 countries did not respond to the ICT-specific questionnaire.
This study exploits the PISA 2018 round, which includes data on 79 countries spanning different continents with a varied range of national income levels and cultural settings. The novelty of the approach lies in quantifying the optimal level of ICT use using an alternate indicator of ICT activity, developed from the widely answered PISA student questionnaire instead of the ICT-specific questionnaire. The research questions explored in this paper are: Does a consistently higher use of ICT lead to higher cognitive gains in terms of improved test scores? What is the nature of the relationship between student-level ICT use and their performance on


the PISA test? In other words, does student-level ICT use have a linear or quadratic relationship with test scores; and do trends in
returns from ICT use vary by the socio-economic status of an individual as well as the level of ICT infrastructure available at the school
and country levels, respectively?
The remainder of this paper is organized into four sections. Section 2 provides a literature review. Section 3 discusses the data and methodology used for the study. Section 4 presents the results, and Section 5 concludes the paper with a discussion and policy implications.

2. Literature review

The existing literature on ICT use in education includes interventions such as supplemental computer-assisted interactive programs,
computer-managed learning systems, and educational software for improving teaching and learning alike (Cheung & Slavin, 2013).
These interventions are supported through a plethora of digital media across computers, mobile devices, internet access, learning
management systems, and interactive whiteboards. The increase in media types has compelled policymakers to decide how much, and to what extent, to invest in digital learning media. This suggests that to implement digitization policies in education effectively, the optimal interaction time with ICT devices needs to be thoroughly understood and quantified.
Studies exploring the association between the type and intensity of ICT interaction in education systems with cognitive outcomes of
students (e.g., math or science scores) have used ILSAs like the PISA and Trends in International Mathematics and Science Study
(TIMSS) datasets (e.g., for Canada: Bussière & Gluszynski, 2004; Eastern Europe: Sofianopoulou & Bountziouka, 2011; Italy: Ponzo,
2011; Turkey: Guzeller & Akin, 2014; Switzerland: Ramseier & Holzer, 2005). Some studies document gender-based differences in ICT use, with male students reporting higher confidence in working with computers (Tømte & Hatlevik, 2011; Zhang & Liu, 2016). Nonetheless, ICT exposure has been found to positively impact achievement scores for both male and female students
(Gumus & Atalmis, 2011; Luu & Freeman, 2011). Furthermore, the existing literature also examines variations in ICT use across
different socioeconomic backgrounds of students. The findings suggest a digital divide that is based on the social, cultural, and eco­
nomic status of the students (Luu & Freeman, 2011; Scherer & Siddiq, 2019). Several researchers have documented the important role
of ICT in determining the magnitude of cognitive benefits (Fernández-Gutiérrez et al., 2020; Spiezia, 2011). However, a majority of such studies focus primarily on developed countries, and consensus remains elusive regarding ICT use across LMICs.
Some studies investigate the limitations of ICT use for students. Excessive use of ICT devices may lead to pathological behavioral changes, as suggested by some review studies. Hinvest and Brosnan (2012) perform a meta-analysis theorizing that excessive ICT use in the classroom or at home leads some students to show addictive tendencies towards particular technologies like the internet and video games. Recent empirical studies examining country-specific PISA datasets report moderate use of ICT (interfacing with technology several times a week) at home and at school to be correlated with the highest reading scores, exhibiting a negative quadratic relationship (Gubbels et al., 2020; Naumann & Sälzer, 2017). Chiu (2020) investigates the returns to ICT use specifically for Taiwan using the PISA 2018 data. The author shows that all three categories of ICT use (leisure, educational, and school) elicit quadratic returns for subject scores, implying stagnating or diminishing returns after a certain threshold of ICT use. Additionally, Karlsson (2020) shows a negative association of daily computer use with test scores, although monthly and weekly computer use has a positive association with students' scores. Bettinger et al. (2020), in their Randomized Controlled Trial (RCT) involving approximately 6000 children from 343 schools in two provinces of Russia, show that the first dose of CAL, which involves 20–25 min of math and language classes per week, provides a better return than the second dose, which involves 40–45 min of classes per week. The results strongly suggest
the presence of concave returns positing that ICT-based returns follow an inverted U-shaped curve. Gómez-Fernández and Mediavilla
(2018) and Zhang and Liu (2016) evaluate the differential impact of different ICT-based tools on students’ learning outcomes. Though
the abovementioned studies provide some evidence of concave returns to ICT use, most of the results are either based on a particular country or do not investigate the patterns in detail. Various reasons could explain this, such as the absence of the ICT questionnaire for several countries in the PISA datasets, mainly developing economies. Furthermore, developing countries may still be working on their ICT infrastructure and may need more time to adapt. Therefore, cross-country analysis of ICT use patterns and
returns is imperative but is scarcely available in the existing literature.
This paper aims to address the aforementioned gaps in the ICT literature. The main objective of the paper is to analyze the un­
derlying patterns in the cognitive returns from ICT use globally and to estimate the optimal usage of ICT. First, the paper empirically
evinces diminishing marginal returns from ICT use by investigating the full PISA 2018 cross-country sample covering 79 countries,
instead of using a unique country or a homogenous group of countries. In order to include the countries that did not opt for the ICT
questionnaire, the study uses a proxy indicator of ICT use developed with the PISA 2018 student questionnaire data, which broadly
covers the student’s online activity including educational web browsing, online discussion, checking emails, and social media usage
frequency. The analysis is further extended by exploring the heterogeneity in the return patterns from ICT use given variations in
economic status of students or the geography of their residence. Second, the paper presents case studies for select groups of countries
based on their ICT Development Index (IDI) covering the top-, middle- and bottom-groups in the ICT index. This exercise depicts how


the ICT use patterns vary across countries with different levels of ICT development, and whether the diminishing returns theory for ICT
use can be applied consistently in all contexts irrespective of country- and school-level environmental factors. Finally, the paper also
develops a conceptual framework to present a rationale for the use of blended models of learning in the presence of diminishing
marginal returns from ICT use.

3. Methodology

3.1. Hierarchical Linear Model (HLM) approach

This paper employs a Hierarchical Linear Model (HLM) approach to answer the research questions. Given the nested structure of International Large-Scale Assessments (ILSAs), researchers advocate the use of HLM to avoid the aggregation bias and underestimated standard errors associated with Ordinary Least Squares (OLS) regression analysis (Lee, 2000). There are three levels in our empirical model: Level 1 corresponds to the student, Level 2 to the school, and Level 3 to the country. As we are interested in mean differences rather than variation in slopes across higher levels, we include the variables as fixed factors (Raudenbush & Bryk, 2002). However, the intercept is randomized at both the country and school levels.
The three-level HLM model can be represented as follows:

Level-1 (Student) model:

$$ Y_{ijk} = \beta_{0jk} + \beta_{ijk} X_{ijk} + e_{ijk}, \qquad e_{ijk} \sim N(0, \sigma^2) $$

Level-2 (School) model:

$$ \beta_{0jk} = \gamma_{00k} + \gamma_{0jk} Y_{0jk} + \mu_{0jk}, \qquad \mu_{0jk} \sim N(0, \mu_1^2) $$

Level-3 (Country) model:

$$ \gamma_{00k} = \alpha_{000} + \alpha_{00k} Z_{00k} + \pi_{00k}, \qquad \pi_{00k} \sim N(0, \pi_1^2) $$

Here X, Y, and Z are vectors of predictors, so each coefficient-covariate product is an inner product summed over the predictors at that level.

where,

Yijk denotes the mathematics, science, and reading scores for student i at school j in country k;
β0jk denotes the expected subject scores for student i at school j in country k; and
βijk denotes the estimated impact of Level 1 (student-level) predictors on the subject scores.
Xijk denotes the vector of Level 1 (student-level) independent variables, which includes background variables for students like the socio-economic-cultural index, gender, and meta-cognition strategy variables, alongside the main variables of interest: linear and quadratic terms for the online activity variable.
Y0jk denotes the vector of Level 2 (school-level) independent variables, including the school's ICT capacity, an index related to the school's perception of its educational resources, school location, and the school-averaged ESCS index.
Z00k denotes the vector of Level 3 (country-level) independent variables, including the log of the country's per capita gross domestic product (GDP).

These variables are not randomized at the school and country levels, so estimates are fixed at the student level. Therefore, β0jk = γ0jk for each j > 1. Similarly, the Level 3 model transforms into γ00k = α00k for each k > 1. In this manner, fixed effects are estimated in this model, with random intercepts at Levels 2 and 3. Further, PISA-provided weights have been used in the analysis at the student level to adjust for the non-inclusion and non-participation of students, and school-level weights are used to adjust for the oversampling of schools with more minority students (Rabe-Hesketh & Skrondal, 2006).
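A minimal sketch of this estimation strategy in Python with statsmodels, fit on synthetic data: all column names and data-generating numbers below are illustrative assumptions, not the actual PISA variables. Students are level 1, schools enter as a variance component, and countries are the grouping variable; slopes stay fixed while intercepts vary at levels 2 and 3.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in for the merged dataset (illustrative names, not PISA's).
rows = []
for country in range(4):
    c_eff = rng.normal(0, 15)              # country-level random intercept
    for school in range(3):
        s_eff = rng.normal(0, 10)          # school-level random intercept
        for _ in range(20):
            oa = rng.uniform(1, 4)         # Online Activity index (1 to 4)
            escs = rng.normal(0, 1)
            score = (400 + 35 * oa - 4.5 * oa ** 2 + 10 * escs
                     + c_eff + s_eff + rng.normal(0, 30))
            rows.append(dict(country=country, school=f"{country}-{school}",
                             oa=oa, escs=escs, score=score))
df = pd.DataFrame(rows)

# Three-level random-intercept HLM: students (level 1) nested in schools
# (level 2, modeled as a variance component) nested in countries (level 3,
# the grouping variable). Slopes are fixed; the linear plus quadratic OA
# terms allow the data to exhibit concave (diminishing) returns.
model = smf.mixedlm("score ~ oa + I(oa**2) + escs",
                    data=df, groups="country",
                    vc_formula={"school": "0 + C(school)"})
result = model.fit()
print(result.params[["oa", "I(oa ** 2)", "escs"]])
```

The paper's actual analysis additionally applies PISA's sampling weights, which this sketch omits for brevity.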
We first employ a three-level HLM model with country-level and school-level predictors and use a linear and quadratic effects term


for ICT usage to explore the diminishing returns from ICT usage. Thereafter, in order to undertake a heterogeneity analysis, an interaction term between the categorical ESCS index and the categorical "online activity" variable is included in the Level 1 HLM model. This helps us to understand how the returns from ICT use vary for students from various socio-economic backgrounds. Finally, in the third step, a sub-group analysis is undertaken and the aforementioned standard HLM model (with random intercepts and fixed slopes) is employed using student-, school-, and country-level factors for select groups of countries, varying on the basis of their existing ICT development. The country-specific analysis is important to investigate whether the findings from the first model including all 79 countries are similar or heterogeneous across different economic setups.

3.2. Description of the key variables

This study derives its individual-level data from the PISA 2018 survey. It is merged with country-level data retrieved from International Monetary Fund (IMF) statistics and the ICT Development Index from the United Nations International Telecommunication Union (ITU) to create a dataset with variables at the three levels. PISA, conducted by the Organisation for Economic Co-operation and Development (OECD), is a triennial international survey that aims to evaluate education systems worldwide by testing the skills and knowledge of 15–16 year old students who have either completed or are soon going to complete their compulsory schooling. Students are tested mainly in reading, mathematics, and science, which are considered critical indicators of student performance. The indicators generated allow national authorities to compare their students' performance with that of students in other countries and global economies, driving evidence-based policy making.1

3.2.1. PISA test scores


The dependent variables of interest are the PISA mathematics, science, and reading scores for students. We use one of the plausible values (pv2) for each test score as the dependent variable. Since PISA treats scores as missing data and infers them from the item responses of the cognitive questionnaire, it provides ten plausible values as likely proficiencies for each student (PISA, 2012). This complexity, together with computational constraints, makes it difficult to consider all the plausible values in the analysis; hence one of the plausible values is randomly selected (Rutkowski et al., 2010).2
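To illustrate the caveat in footnote 2, the sketch below simulates ten plausible values per student (with hypothetical column names; PISA's files name these variables differently) and contrasts fixing one plausible value with averaging all ten:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Simulated proficiencies and ten noisy plausible values per student
# (hypothetical column names, not the actual PISA 2018 variables).
n = 1_000
theta = rng.normal(470, 100, n)
pvs = pd.DataFrame({f"pv{i}": theta + rng.normal(0, 50, n)
                    for i in range(1, 11)})

y_single = pvs["pv2"]      # the paper's approach: fix one plausible value
y_avg = pvs.mean(axis=1)   # footnote-2 alternative: average the ten PVs

# Averaging cancels much of the measurement noise, compressing the score
# dispersion; this is the mechanism behind the understated standard errors
# that footnote 2 warns about.
print(y_single.std(), y_avg.std())
```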

3.2.2. Online activity index


PISA has implemented a specialized ICT questionnaire since the survey round of year 2000, albeit keeping it optional for participating countries in PISA 2018; therefore, many countries, and specific regions within particular countries, do not opt for this questionnaire. Out of 79 geographies in the PISA data, 29 did not opt for the ICT familiarity questionnaire in PISA 2018 (Lorenceau et al., 2019). Given multiple levels of missing data in the ICT familiarity questionnaire, using it restricts the scope for a cross-country analysis and inclusion of the complete PISA sample. This has motivated us to look for alternative ways to measure a student's ICT behavior. To this effect, we create an indicator of Online Activity (OA) using variables from the PISA student questionnaire, which is mandatory for all countries.
The current paper identifies six Likert-scale items from the student questionnaire to develop the OA Index, which measures an individual student's ICT use. Past evidence suggests that students use ICT primarily for communication, web surfing, entertainment, and schoolwork. While students use it frequently for chatting and writing emails to communicate, they also browse the web for school-related work like literature searches, presentations, and preparing assignments (Edmunds et al., 2012; Kennedy et al., 2008; Khan et al., 2011; Kvavik, 2005). The Online Activity Index developed for this paper takes the average of the following six items, covering the broad uses of ICT represented in the literature:

i. How often a student chats online with his/her friends and relatives.
ii. How often a student takes part in online discussions.
iii. How often a student browses online to get the news.
iv. How frequently a student checks his/her email.
v. How often he/she browses online in general.
vi. How often a student uses the Internet to schedule an event, etc.

The OA Index takes four values signifying negligible, low, medium, and high ICT usage. The "Negligible OA" category corresponds to "don't know or seldom use"; "Low OA" corresponds to low usage of "once or twice a month"; "Medium OA" signifies usage "several times a week"; and finally, the "High OA" category indicates usage "several times a day" in these activities. We also run a Cronbach's alpha test on the OA Index with all six items to examine the reliability of the scale. The value of the coefficient is around 0.81, which is considered fairly good (Bhatnagar et al., 2014). The empirical model includes both linear and quadratic effects of the OA

1
In the sample, plausible values for all the test scores for “Vietnam” are missing, and hence Vietnam is not included in the analysis. Multiple
imputation method has been used to reduce the incidence of missing values for the variables with >5% of the values missing. We use Multiple
Imputation by Chained Equations (MICE) to impute missing values in our analysis (Azur et al., 2011).
2
Our results remain robust and consistent even when we consider the average of the plausible values, which is another method of using them as
suggested in literature (Laukaitye and Wiberg, 2017). However, this method has been found to underestimate the standard errors generating issues
of statistical significance and we do not report this method in this paper.


Index. The quadratic effects are represented by the squared term of the OA Index.
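The construction just described, averaging six frequency items, mapping the average onto four usage levels, and checking internal consistency with Cronbach's alpha, can be sketched as follows. The simulated responses and the exact cut points are our illustrative assumptions; the paper does not publish its cut points.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of item total)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

# Simulated responses to six frequency items on the 1-4 scale
# (1 = don't know/seldom, 2 = once or twice a month,
#  3 = several times a week, 4 = several times a day).
rng = np.random.default_rng(1)
latent = rng.integers(1, 5, size=500)
items = pd.DataFrame({f"item{i}": np.clip(latent + rng.integers(-1, 2, 500), 1, 4)
                      for i in range(1, 7)})

# Average the six items, then bin the average into the four usage levels
# (the bin edges here are our illustrative choice).
oa_index = items.mean(axis=1)
oa_level = pd.cut(oa_index, bins=[0.5, 1.5, 2.5, 3.5, 4.0],
                  labels=["negligible", "low", "medium", "high"])

print(round(cronbach_alpha(items), 2))
```

Because the items share a common latent usage level, the computed alpha comes out high, mirroring the 0.81 the paper reports for its six items.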

3.2.3. Other student-level controls


Students' motivation to learn is controlled for by using PISA's predefined student motivation index. Intrinsic motivation has been found to influence the academic achievement of students (Ripley, 2017). Moreover, students' learning strategies are also considered when determining the effectiveness of ICT use on test scores. Learning strategies, or metacognition strategies, have increasingly been found to be important determinants of test scores, yet have been consistently overlooked in previous studies (Odell et al., 2020). In a comprehensive review, Wang et al. (1990) conclude that meta-cognition is one of the most influential predictors of achievement scores. This is supported by more recent studies on the subject (Cheung et al., 2014, 2016; Säälik et al., 2015). Pennequin et al. (2010) use a primary study to show that meta-cognition strategies could be helpful for low achievers, compensating for low intellectual capability. Accordingly, the current analysis accounts for the metacognition strategies employed by students, including a student's competency in assessing the credibility of online information (Appendix A1). Finally, the model controls for the economic, social, and cultural status (ESCS) as well as the gender of students, which are two well-established predictors of test scores in the educational literature (Chiu & McBride-Chang, 2006; Luu & Freeman, 2011). The ESCS index is a standardized continuous index provided by PISA and is derived from three indices: highest occupational status of the parents, highest educational level of the parents, and home possessions.3

3.2.4. School-level controls


At Level 2 (the school level) of the HLM model, the school's location, the type of school (whether the school is publicly or privately managed), and the school-averaged ESCS index are considered as covariates. In addition, the school's ICT index is also included, which reflects a school's exposure to and preparedness for the use of ICT. It is created from the school-related ICT item SC155 in the PISA 2018 school questionnaire, which enquires: "To what extent do you agree with the following statements about your school's capacity to enhance learning and teaching using digital devices?" and has multiple sub-items covering the number of digital devices, bandwidth, and technical staff. We also run a Cronbach's alpha test on this school ICT index to check for reliability. The value of the coefficient is around 0.92, which is considered fairly good (Bhatnagar et al., 2014). Finally, the model also includes the availability of educational resources at the school. This is proxied by the school principal's response to the question: "Does your school's educational resource quality hinder students' learning?", which has categorical responses ranging from "No" to "A lot".

3.2.5. Country-level controls


At Level 3 (the country level), the PISA 2018 data is combined with a country-level indicator from the IMF data catalog: the log of the country's per capita Gross Domestic Product (GDP) for the prior year, 2017, which helps to account for a country's economic potential.
Later, for the country-level case studies, the ICT Development Index (IDI) data is used, which is collected from the United Nations International Telecommunication Union (ITU) website. The IDI contains eleven indicators, broadly covering three parameters:

i. ICT infrastructure and access: It includes aspects like the proportion of telephone subscriptions, cellular telephone proliferation,
internet access, internet bandwidth, and the proportion of computer users
ii. ICT use: It incorporates aspects like percentage of internet users and proportion of broadband subscriptions
iii. ICT skills: It includes demand-side indicators such as mean years of schooling and the gross enrollment ratio

The index values help us to select five countries each lying in the top, middle and bottom quantile of the IDI, to conduct a deep-dive
investigation of whether our primary findings from the three-level HLM with 79 PISA countries hold for the selected countries at
different levels of the IDI distribution.
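This selection step can be sketched with pandas, using made-up IDI scores; the letters below stand in for country names, and the actual values come from the ITU's IDI tables:

```python
import pandas as pd

# Toy IDI scores for fifteen economies (values are illustrative only,
# not the actual ITU IDI figures).
idi = pd.Series({"A": 8.9, "B": 8.5, "C": 8.2, "D": 7.9, "E": 7.5,
                 "F": 6.1, "G": 5.8, "H": 5.5, "I": 5.2, "J": 4.9,
                 "K": 3.2, "L": 2.9, "M": 2.6, "N": 2.3, "O": 1.9})

# Split countries into bottom/middle/top IDI terciles, then take up to five
# countries within each tercile (highest IDI first) for the case studies.
tercile = pd.qcut(idi, q=3, labels=["bottom", "middle", "top"])
cases = idi.groupby(tercile, observed=True).apply(
    lambda s: list(s.sort_values(ascending=False).index[:5]))
print(cases["top"])
```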
Tables 1 and 2 show the descriptive statistics for the key variables used in the analysis. Student-level characteristics reveal that although an average female student has higher reading and science scores than an average male, girls lag in mathematics. Girls outperform boys in learning and meta-cognitive strategies like understanding and remembering and assessing credibility by a margin of approximately 0.20 standard deviations. The student motivation index, which records the student's motivation to learn and master tasks, is also higher by approximately 0.19 points for girls. Furthermore, seventy-nine percent of the schools in the selected sample are public schools. The ranges for the different school-level indices, such as ICT capacity and the school's socio-economic status, are also presented below. A higher value on the school location scale indicates a greater degree of urbanization in the areas surrounding the school. Finally, the range of the log GDP per capita for 2017 of the countries covered has also been provided, reflecting the large economic diversity across the countries included in this study.

3
However, we use the ESCS index as a categorical variable with values ranging from 1 to 3, where 1 represents "low ESCS", the lowest stratum of the socioeconomic groups, 2 represents "medium ESCS", and 3 represents the highest stratum. While distributing individuals into income groups, we ensure their compatibility with recent findings (Hamel & K, 2018, September 27), such that fifty percent of the data lies in the "low" and "high" ESCS categories combined and the remaining fifty percent lies in the "medium" ESCS category. We also run a Cronbach's alpha test on this categorical index to examine its reliability. The value of the coefficient is around 0.90, which is considered fairly good (Bhatnagar et al., 2014).
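One way to realize the split described in this footnote is sketched below, assuming the bottom and top quartiles form the "low" and "high" groups respectively; the paper states only the combined fifty percent, so the exact low/high shares are our assumption:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
escs = pd.Series(rng.normal(0, 1, 10_000))   # stand-in for PISA's continuous ESCS

# Cut at the 25th and 75th percentiles: the bottom quartile becomes "low",
# the middle 50% "medium", and the top quartile "high", so the extreme
# categories jointly hold half of the data, as in the footnote.
q25, q75 = escs.quantile([0.25, 0.75])
escs_cat = pd.cut(escs, bins=[-np.inf, q25, q75, np.inf],
                  labels=["low", "medium", "high"])

print(escs_cat.value_counts(normalize=True).sort_index())
```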


Table 1
Descriptive statistics for the key student-level variables by student gender.

                                        Male                            Female
Variable                        Mean      SD        Freq.      Mean      SD        Freq.
Mathematics (pv2)             463.303   107.301   291,298    459.657   100.904   289,625
Science (pv2)                 459.507   106.262   291,298    463.056    99.117   289,625
Reading (pv2)                 442.088   110.851   285,662    471.543   103.717   284,229
ESCS                           −0.276     1.116   285,817     −0.294     1.133   286,480
Understanding and remembering  −0.202     0.991   272,160      0.044     0.959   276,211
Assessing credibility          −0.217     0.960   271,285     −0.092     0.961   274,389
Summarizing                    −0.286     0.988   272,131      0.019     0.954   276,075
Online Activity                 3.535     0.783   273,246      3.584     0.665   276,466
Motivation Index                0.005     1.025   274,362      0.199     0.982   277,742
Table 2
Descriptive statistics for the key school- and country-level variables.

School-Level Variables
Variable                    Freq.     Mean     Std. Dev.  Min      Max
Public School               553,964   0.796    0.403      0        1
School ICT capacity         558,672   2.695    0.690      1        4
Shortage of resources       560,295   0.067    1.092      −1.932   3.523
School location             558,113   3.199    1.210      1        5
School ESCS (mean)          585,338   −0.284   0.777      −5.003   2.251

Country-Level Variables
Log (GDP per capita 2017)   586,303   9.838    0.962      7.525    11.603

Table 3
Hierarchical linear model (HLM) regression results for 79 countries participating in PISA 2018.

                                 Mathematics   Science     Reading
Student fixed effects
Female (dummy)                   −17.65***     −12.58***   11.37***
Understanding and remembering    7.363***      8.490***    9.132***
Assessing credibility            14.96***      18.22***    19.52***
Summarizing                      11.32***      13.00***    14.37***
ESCS index                       10.90***      9.963***    9.718***
Online Activity                  36.87***      35.20***    44.38***
Online Activity (quadratic)      −4.845***     −4.607***   −5.551***
Motivation index                 6.642***      6.994***    8.292***
School fixed effects
Public school (dummy)            0.289         0.567       0.753
School's ICT capacity            0.816         0.545       −0.203
Shortage of educ. resources      −0.884**      −0.637*     −0.824**
School's location                −0.013        −0.224      0.961***
School's ESCS (mean)             41.57***      39.18***    42.04***
Country fixed effects
Per capita GDP (log)             11.21**       9.017*      9.438*
_cons                            312.9***      335.2***    294.5***
Random effects (Country)
var(_cons)                       1199.8***     1066.0***   917.2***
Random effects (School)
var(_cons)                       884.9***      803.9***    825.8***
var(Residuals)                   4573.9***     4504.2***   4839.9***
Observations                     485,760       485,760     478,812

Note: *p < 0.05, **p < 0.01, ***p < 0.001.


Table 4
Heterogenous Effects of Engaging in Online Activity w.r.t. the ESCS Status of Students.

                                  Mathematics   Science    Reading
Student fixed effects
Female (dummy)                    −17.82***     −12.73***  11.23***
Understanding and remembering     7.423***      8.540***   9.184***
Assessing credibility             15.01***      18.26***   19.56***
Summarizing                       11.40***      13.06***   14.43***
Motivation index                  6.729***      7.073***   8.409***
Interaction between Online Activity and ESCS index
Negligible OA × Low ESCS          0             0          0
Low OA × Low ESCS                 10.79***      8.807***   13.61***
Medium OA × Low ESCS              17.71***      14.65***   23.14***
High OA × Low ESCS                16.49***      14.21***   23.00***
Negligible OA × Moderate ESCS     3.963**       0.268      0.395
Low OA × Moderate ESCS            5.828***      8.137***   7.818***
Medium OA × Moderate ESCS         7.700***      11.08***   10.64***
High OA × Moderate ESCS           6.023***      8.814***   9.235***
Negligible OA × High ESCS         12.56***      6.185***   7.665***
Low OA × High ESCS                18.12***      21.91***   19.14***
Medium OA × High ESCS             17.81***      22.85***   20.24***
High OA × High ESCS               14.37***      18.32***   16.94***
School fixed effects
Public school (dummy)             0.701         1.001      1.151
School's ICT capacity             0.790         0.521      −0.219
Shortage of educ. resources       −0.949***     −0.701**   −0.884**
School's location                 −0.001        −0.206     0.981***
School's ESCS (mean)              44.07***      41.37***   44.24***
Country fixed effects
Per capita GDP (log)              11.31**       9.107*     9.506*
_cons                             348.5***      371.6***   344.1***
Random effects (Country)
var(_cons)                        1207.7***     1065.9***  917.0***
Random effects (School)
var(_cons)                        860.6***      781.9***   806.2***
var(Residuals)                    4581.9***     4508.1***  4846.9***
Observations                      485,760       485,760    478,812

Note: *p < 0.05, **p < 0.01, ***p < 0.001.

4. Results

The results from the main HLM model are given in Table 3. The model comprises fixed effects at the individual, school, and country
level, with random intercepts at the school and country level, respectively. Following this, Table 4 presents the heterogeneity analysis
for different levels of PISA's socio-economic index (i.e., the ESCS index).

4.1. Cognitive return patterns from engaging in online activity

The estimated coefficient on the OA Index in Table 3 suggests a positive correlation with test scores, with the highest magnitude for
reading. The coefficient on the quadratic term is negative, suggesting that returns increase at a gradually declining rate (concave
returns), consistent with an inverted U-shaped returns curve.
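Since the fitted relationship between scores and the OA Index is quadratic, the implied turning point can be read directly off the Table 3 coefficients as −b/(2a). A small illustrative computation of our own (not part of the paper's analysis code):

```python
# Turning point of score = b*OA + a*OA^2 (other terms held fixed): OA* = -b/(2a).
# (b, a) are the linear and quadratic Online Activity coefficients from Table 3.
coefs = {
    "mathematics": (36.87, -4.845),
    "science":     (35.20, -4.607),
    "reading":     (44.38, -5.551),
}
for subject, (b, a) in coefs.items():
    print(f"{subject}: predicted peak at OA index = {-b / (2 * a):.2f}")
```

All three vertices fall near the upper-middle of the OA scale, consistent with the concave, inverted-U pattern described above.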
There is a pronounced gender gap in mathematics and science scores, where male students are found to perform better than female
students; however, females outperform males in reading. The study finds significant positive correlations of the meta-cognition indices
with achievement scores across subjects, consistent with previous studies (Kim et al., 2009; Ku & Ho, 2010). Furthermore, a student's
motivation level is also positively associated with cognitive achievement. At the school level, the school's socioeconomic status,
observed through the socio-economic status of its students, is highly significant and strongly associated with students' scores; a school
attended by high-status children correlates with better test outcomes. Expectedly, the coefficients on shortage of education resources
show that low-quality education materials adversely affect test scores. The school's location, which represents the degree of
urbanization of the surrounding area (a higher score represents a greater degree of urbanization), is not statistically significantly
related to mathematics and science scores; however, students from schools located in more urbanized areas are likely to report better
reading scores. At the country level, the log of per capita GDP shows a positive association with test scores across all subjects.


4.2. Heterogenous effects of engaging in online activity w.r.t the ESCS status of students

The second step in our analysis uses the interaction between the categorical Online Activity and ESCS indices to investigate
heterogeneity in the diminishing cognitive returns arising from students' varying economic and cultural status. The results are
presented in Table 4.
Overall, the sign and significance of all the controls remain consistent with the findings from Table 3, presented earlier.
Additionally, Table 4 shows that the interaction terms are statistically significant at a 95 percent confidence level. Keeping the ESCS
category fixed at "Low ESCS", the first finding from the interaction terms is that, compared to negligible online activity, every higher
level of online activity produces a higher score, irrespective of test subject. Similar results are observed for the other ESCS
categories. The "medium OA" category represents ICT usage multiple times within a week, while "high OA" signifies ICT activity
undertaken multiple times a day. In terms of the pattern of returns, the "medium" level of online activity produces the highest scores
for all test subjects in each of the ESCS categories; a further increase in the OA Index from the "medium" to the "high" level yields
either the same returns as "medium" online activity or diminished returns. In other words, active weekly online engagement is likely to
be more conducive to improving learning outcomes than daily engagement. Further, for a given OA category, returns are not homogenous
across the ESCS categories, which suggests a moderation effect of ESCS on OA. This result is represented graphically through the
predictive margin plots presented below. Brotman (2016) offers an explanation for this finding: students from the lower strata may lag
in effectively harnessing the potential of technology due to a lack of educated parents or the right guidance at home.
Predictive margin plots help establish the association between online activity and test scores for reading, mathematics, and science at
different ESCS levels. This visualization predicts concave returns for the 79 countries participating in PISA and included in this
paper's analysis. Fig. 1 validates that "medium" ICT use is optimal in terms of test score gains: as the frequency of OA increases,
scores rise with each category and peak at medium OA, with a further increase in OA leading to a decrease or stagnation in scores.
Overall, the regression results in Tables 3 and 4 as well as Fig. 1 show that, for the countries included in our HLM model, the
hypothesis of diminishing returns from ICT engagement holds irrespective of students' social status and test subject.
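The claim that returns peak at "medium" online activity can be verified directly from the interaction coefficients in Table 4. A quick check of our own using the science column (the other subjects can be inspected the same way):

```python
# Science-score interaction coefficients from Table 4 (relative to the
# "negligible OA, low ESCS" reference cell).
science = {
    "low ESCS":      {"negligible": 0.0,   "low": 8.807, "medium": 14.65, "high": 14.21},
    "moderate ESCS": {"negligible": 0.268, "low": 8.137, "medium": 11.08, "high": 8.814},
    "high ESCS":     {"negligible": 6.185, "low": 21.91, "medium": 22.85, "high": 18.32},
}
# For each ESCS category, find the OA level with the largest coefficient.
peaks = {escs: max(levels, key=levels.get) for escs, levels in science.items()}
print(peaks)  # every ESCS category peaks at "medium" online activity
```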

4.3. Country-specific sub-group analysis

There is a possibility that the hypothesis of positive yet diminishing marginal cognitive returns to higher online activity is valid for
the PISA sample of 79 countries, but that it may not hold true for individual geographies due to country-specific heterogeneities.

Fig. 1. Predictive margin plots for the effects of online activity of students across subjects and ESCS levels.


Therefore, we examine whether the findings hold for countries with different levels of existing ICT endowment, such as infrastructure,
access levels, and the fundamental and digital skills of the populace. To this effect, this paper studies nations falling in different
quantiles of the ICT Development Index (IDI) developed by the United Nations International Telecommunication Union (ITU), discussed
earlier in Section 3.
The IDI ranks 176 countries based on their existing ICT infrastructure. For the subset of IDI countries included in PISA 2018, IDI
values range from 0.96 to 8.98. Three sub-groups of five countries each are formed based on their IDI 2017 rankings, contingent on their
participation in PISA 2018: the top 5 group comprises the countries ranked highest in the IDI that participated in PISA (Iceland, Korea,
Switzerland, Denmark, and the United Kingdom); the middle 5 group includes the Kingdom of Saudi Arabia, Serbia, Chile, Romania, and
Moldova; and the bottom 5 group includes Peru, Morocco, the Philippines, the Dominican Republic, and Indonesia, which coincide with the
bottom PISA-participating countries in the IDI ranking. The sections below present the predictive margin plots developed from the
aforementioned three-level HLM models, including the interaction effect between the Online Activity and ESCS indices, run separately for
each country sub-group using the PISA 2018 data.
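The sub-group construction amounts to ranking the PISA-participating countries by IDI score and slicing off the top, middle, and bottom five. A schematic sketch of our own (country names and IDI values below are placeholders, not the ITU figures):

```python
# Rank countries by an ICT Development Index (IDI) score and take the top,
# middle, and bottom k. The scores below are illustrative placeholders.
def idi_subgroups(idi_scores, k=5):
    ranked = sorted(idi_scores, key=idi_scores.get, reverse=True)
    mid_start = (len(ranked) - k) // 2
    return {
        "top":    ranked[:k],
        "middle": ranked[mid_start:mid_start + k],
        "bottom": ranked[-k:],
    }

scores = {f"country_{i:02d}": 2.0 + 0.1 * i for i in range(15)}
groups = idi_subgroups(scores, k=5)
print(groups["top"])     # the five highest-IDI countries
print(groups["bottom"])  # the five lowest-IDI countries
```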

4.3.1. Top 5 group (Iceland, Korea, Switzerland, Denmark, and United Kingdom)
Along with some of the highest IDI values, these nations have above-average PISA scores and comparatively high per capita GDP. The IDI
values in this group range from 8.65 for the UK to 8.98 for Iceland. It can be inferred from Fig. 2 that as students' online activity
increases, test scores increase for the top 5 group up to medium ICT use, for all three subjects and for each ESCS category. Scores tend
to peak at medium online activity (OA), with stagnation or a drop once the OA index increases to "high" use. In general, the plots show
that test score returns are maximal at medium-level engagement. By ESCS category, we observe diminishing returns for all ESCS groups for
mathematics and reading. For science, however, while diminishing returns persist for the high socio-economic category (i.e., the green
line), for the "moderate" and "low" ESCS categories (the red and blue lines, respectively) a higher OA results in increased cognitive
returns. This finding shows a subject-wise variation in test score responses to ICT use.

4.3.2. Middle 5 group (Kingdom of Saudi Arabia, Serbia, Chile, Romania, Moldova)
These select countries fall in the middle of the IDI ranking order. Contextually, this group is less homogenous than the top one and
except for Chile, none of the countries are OECD members. The average IDI index value is 6.55 points. PISA performance is equally
diverse with the Kingdom of Saudi Arabia (KSA) being the poorest performer and Chile being the top performer from the list on the
selected parameters. Here, the graph shows that cognitive returns initially increase with an increase in OA, peak at medium OA, and
then either stagnate or start falling with further increase in OA for all levels of ESCS. There are slight variations to this general pattern
based on the subject under consideration.

Fig. 2. Predictive Margin Plots for the Effects of Online Activity of Students across Subjects and ESCS Levels for the Top-5 Group of PISA Countries
in terms of the IDI Rankings.


Fig. 3. Predictive Margin Plots for the Effects of Online Activity of Students across Subjects and ESCS Levels for the Middle-5 Group of PISA
Countries in terms of the IDI Rankings.

4.3.3. Bottom 5 group (Peru, Morocco, Philippines, Dominican Republic, Indonesia)


This list of countries is geographically diverse, with Morocco from Africa, Peru from South America, Indonesia and the Philippines from
Asia, and the Dominican Republic from the Caribbean. The IDI values are low, ranging from 4.33 to 4.85. In terms of PISA test scores,
the Dominican Republic fares the worst while Peru reports the highest scores in this group. None of these countries is an OECD member.
Here, we find that the cognitive returns from ICT use increase up to medium-level use, after which they remain stagnant or increase
slightly with higher levels of online activity, contrary to the top- and middle-ranked countries that showed diminishing returns in
most instances. Thus, low levels of existing ICT endowment do not corroborate the finding of diminishing returns from ICT use. In fact,
for these countries the returns continue to increase even with active daily use of technology by a student.
Innovations in the field of ICT have radically modified the traditional education system. However, these modifications should ideally be
in line with the aim of improving students' cognitive outcomes. Incidentally, the recent worldwide pandemic forced the education sector
into a new regime with a completely virtual approach, but the efficiency of this new regime is still not beyond skepticism. Put simply,
one might ask: how much involvement with ICT maximizes students' cognitive outcomes? In sum, based on Figs. 2 and 3, it can be observed
that in countries with relatively high and medium ICT development, which are also likely to be developed nations, an increase in
students' online activity tends to have a positive association with their test scores up to the point of medium ICT usage; with high ICT
use, diminishing test score returns are visible for almost all ESCS categories. Fig. 4 suggests that in countries with poor ICT
infrastructure, which are also likely to be developing nations, higher ICT usage does not show decreasing returns on test scores.
Overall, the pattern of cognitive returns from ICT use depends on the initial structural and technological infrastructure available to
students, including the type and quality of ICT resources. At a very nascent stage, introducing technological aids can improve the
learning experience dramatically; computers and the internet can be an extraordinary source of information and provide audiovisual
stimulus to enhance the learning process. For instance, a 50-page chapter may be understood better and more quickly with a 10-minute
video that provides visual aid. However, as curricula become more specialized and their demands more intricate, uses of ICT become more
complicated. Not only does this require more research on feasibility and implementation techniques, but pupils also require proper
guidance in navigating these newer learning tools. This is where an instructor becomes a facilitator, enabling learners to use a
technology-infused system to achieve optimal cognitive outcomes. This explains why relying too much on ICT alone can be
counterproductive and can result in a sharp decline in learning outcomes. It is evident that at the core of this discord is the
requirement to synchronize a technology-infused curriculum with subject-specific learning requirements, learning goals, and learning
abilities; mere access to technology is not sufficient.

Fig. 4. Predictive Margin Plots for the Effects of Online Activity of Students across Subjects and ESCS Levels for the Bottom-5 Group of
PISA Countries in terms of the IDI Rankings.

5. Discussion and conclusion

The COVID-19 pandemic has led to the rising significance of ICTs in the education sector. Governments all over the world have struggled
to rein in the negative impacts of school closures by deploying remote and online learning (World Bank, 2021). Yet, with billions of
dollars being invested in ICT infrastructure and digital devices, there is scant global evidence on the optimal levels of ICT use. The
few studies that explore this topic analyze either primary data on a specific country or a homogenous group of countries, which fails to
provide a holistic cross-country perspective on ICT use. Since there has been an exponential rise in ICT use in developing countries due
to the pandemic, research warrants the inclusion of diverse geographies encompassing varied economic status and cultural contexts. This
study examines PISA 2018 data to analyze the pattern of correlation between ICT use and test score returns using a Hierarchical Linear
Model (HLM), and furthers the analysis by investigating heterogeneities in the findings based on subject type and socio-economic status.
We control for various student-, school-, and country-level factors, such as meta-cognition strategies, motivation levels, school type,
school location, availability of resources at schools, and per capita GDP, amongst others. To analyze our primary research question, we
employ a reliable ICT-use indicator that measures students' online activity based on a series of parameters, including participation in
online discussions, sending emails, scheduling events, online chatting, and digital communication. This index relies on simple
indicators readily available in the PISA student questionnaire, which enables the inclusion of geographies that may not have responded
to the niche and optional PISA ICT familiarity questionnaire.
The paper concludes that although ICT use has a positive association with test scores (mathematics, reading, and science) irrespective
of students' socio-economic status, this association weakens above a certain threshold of ICT use. We classify these patterns as concave
returns, where the optimal level of association is established at not more than "several times a week", or medium-level ICT use. This
finding is similar to that of Lei and Zhao (2007), who used a pre- and post-analysis on a small data sample collected in the United
States to show that students who use a computer for more than 3 h a day lose in terms of grade point average (GPA). The authors also
found that almost half of the surveyed students used computers for gaming in addition to completing their academic work, suggesting that
increased computer use is highly correlated with non-academic activities, such as entertainment and unnecessary email communication,
that do not necessarily contribute towards academic achievement. A recent
empirical study also documented lower GPAs and fewer hours of studying time among Facebook users compared to non-users (Kirschner &
Karpinski, 2010). Similarly, an impact evaluation of a Bring-Your-Own-Device (BYOD) program showed that computer use resulted in some
distraction among students (Schellenberg, 2018). These findings suggest that one-to-one interaction with ICTs increases non-beneficial
use of ICTs in addition to academic use (Lei, 2010). Other literature also points towards adverse effects of ICT use: extensive use of
ICTs may alter a student's focus and attention capabilities and inhibit cognitive development (Goundar, 2014; Junco & Cotten, 2011).
Our findings contribute to the external validity of such findings by using a large representative dataset of 79 countries to show that
ICT use beyond a certain level has a negative correlation with cognitive returns and may not be optimal. While previous research on
ICT's concave returns has employed different definitions of ICT (for example, Bettinger et al. (2020) define ICT exposure as
computer-assisted learning; Zhang and Liu (2016) use indicators from the PISA ICT familiarity questionnaire), we find evidence of
concave returns to ICT use with a simpler and more readily available measure of online activity. Results from the heterogeneity analysis
indicate that the concave return patterns hold across socio-economic categories: using a categorization of the ESCS index into low,
medium, and high, this paper shows that excessive use of ICT does not provide extra returns beyond a certain level in any ESCS category.
Overall, our paper shows that higher online activity is positively associated with a country's per capita GDP. Developed economies are
also more likely to score high on the ICT Development Index, reflecting good ICT infrastructure and large bases of mobile and internet
users, and are hence better positioned to cope with ICT-enabled education. However, in such countries the challenge remains to regulate
and incentivize optimal ICT use by students so as to maximize their learning outcomes. Further, our study finds that concave returns
tend to exist only in countries with established ICT infrastructure, a novel finding given the lack of previous studies exploiting
cross-country data.
Overall, through our empirical findings, this study suggests that the focus of ICT use should be on quality and the extent of use rather
than quantity. This supports reliance on blended models of ICT use in education, especially for countries with robust pre-existing ICT
infrastructure. To validate and accentuate the empirical findings of this paper, a conceptual framework has been developed using a
utilitarian perspective. Mathematically, the finite amount of time that a learner spends on education can be allocated into two parts:
one spent learning in a traditional format, which includes classroom teaching, reading books, etc., and the other spent learning from
ICT-based resources, e.g., watching a prerecorded video lecture, engaging with online resources, or participating in digital educational
fora. Given this setup, a simple optimization exercise (explained in detail in Appendix A1), performed to maximize the learner's
utility, reveals that notwithstanding the positive effects of ICT in the education sector, a complete substitution of traditional
learning by ICT devices may not provide optimal cognitive returns; both methods need to coexist to achieve maximum cognitive returns. In
other words, this conceptually alludes to the use of blended models for teaching and learning in countries exhibiting diminishing
cognitive returns to higher ICT usage. The role of technology is envisioned to be crucial in the post-pandemic era, and blended learning
is gaining prominence as a remedy to overcome several challenges related to the loss of learning, mental health, and overall student
development due to school closures (Gaol & Hutagalung, 2020; López-Pérez et al., 2013; Sharma, 2021).
Using the ICT Development Index (IDI), the paper also finds that countries scoring low on the index do not experience stagnating returns
to higher ICT usage. A few hypotheses could support this finding. First, starting at a lower base level of ICT use, these countries may
enjoy increasing returns from additional ICT exposure compared to countries with high existing levels of ICT proliferation. Moreover, a
lack of ICT access and electricity in less-developed countries may limit ICT interaction, thereby constraining its non-academic use as
well (Ndambakuwa & Brand, 2020). Finally, cultural factors and unobserved heterogeneities across countries could influence the
perceptions and use of technology in education. Thus, the challenge of ICT access and use in education differs for LMICs with poor ICT
endowments from that of developed countries: it is critical there to first provide access to good-quality ICT for all, especially in a
post-pandemic world, without apprehensions about concave returns to online engagement. Digital education policy in countries with poor
ICT infrastructure must therefore prioritize ensuring access to and inclusiveness of ICT use, whereas optimizing the extent of ICT use
may currently be of greater interest for digital policy in developed nations. Nevertheless, this study has a few limitations. Although
PISA data allowed a cross-country analysis with a large sample size, the data remain observational and cannot support causal
conclusions.

Declarations of competing interest

None.

Funding sources

This research has been supported by funding received from the Education and Training Evaluation Commission (ETEC), Kingdom of
Saudi Arabia.

Author statement

Aditi Bhutoria: Conceptualization; Formal analysis; Investigation; Methodology; Project administration; Resources; Software;
Visualization; Writing - original draft; Nayyaf Aljabri: Conceptualization; Data curation; Funding acquisition; Methodology; Writing -
review & editing.


Acknowledgments

We thank Divesh Pandey for exceptional research assistance for this project. We are grateful to the editor, anonymous referees, and
seminar participants for their helpful comments. We are appreciative of the continuous support offered by Dr. Khaleel Harbi in
organizing and managing this research project. We thank the Education and Training Evaluation Commission (ETEC) Kingdom of
Saudi Arabia for funding support.

APPENDIX

A1. Conceptual Framework for Blended Models for ICT in Education

Given the limited time available to each individual, we can assume that the time a student can spend on education is constrained.
Suppose a student i divides this total time T between traditional learning and ICT-based learning to achieve optimal returns. Let T_1
and T_2 be the time units devoted to the traditional and ICT-based learning methods, respectively. Suppose the cognitive returns to the
traditional method of learning are concave, i.e., diminishing in nature. This can be assumed based on the findings of studies that
explore the non-linear relationship between educational investment and growth (Krueger & Lindahl, 1999; Sianesi & Reenen, 2003).
Similarly, we assume concave returns for ICT use in our model, based on our own findings and those observed in prior literature.
Normalizing total time to one, the student faces the constraint T_1 + T_2 = 1, which implies T_2 = 1 − T_1. We assume that the utility
functions for both types of learning are concave. Let u_1(T_1) be a continuous and differentiable utility function on the closed set
[0, 1], representing the utility derived from the traditional learning method. Similarly, let u_2(T_2) be another continuous and
differentiable utility function on the closed set [0, 1] for ICT-based learning. Since both of these functions are concave, they satisfy
Inada-type conditions, implying the following three properties:

\lim_{T_j \to 0} u'_j(T_j) = +\infty: the marginal return to increasing ICT usage from "negligible" to "low" is very high.

\lim_{T_j \to 1} u'_j(T_j) = 0: the diminishing role of ICT usage; the marginal return at a high frequency of ICT usage is low or
equivalent to null.

u''_j(T_j) < 0: the direct implication of the concave-returns assumption.

Let us assume that the total utility derived from education takes an additive form, so it can be represented as U(T_1, T_2), the sum of
the utilities from the traditional and ICT-based learning methods. The maximization problem can be mathematically stated as:

maximize U(T_1, T_2) = u_1(T_1) + u_2(T_2) subject to T_1 ≥ 0, T_2 ≥ 0, and T_1 + T_2 = 1.    (1)

Substituting T_2 = 1 − T_1 and differentiating with respect to T_1 yields the first-order condition

U'(T_1) = u'_1(T_1) − u'_2(1 − T_1).

Let us define F(T_1) = u'_1(T_1) − u'_2(1 − T_1).    (2)

For a blended approach to exist, we have to prove that a unique solution of F(T_1) = 0 exists and that it is not equal to T_1 = 0 or
T_1 = 1. Using the first and second of the three conditions above,

\lim_{T_1 \to 0} F(T_1) = +\infty and \lim_{T_1 \to 1} F(T_1) = −\infty.

The Intermediate Value Theorem then implies that there exists a solution T* ∈ (0, 1) of F(T_1) = 0 (Result 1). Differentiating F(T_1)
with respect to T_1, we obtain F'(T_1) < 0, implying the strict monotonicity of F (Result 2). Together, Results 1 and 2 imply that T* is
the unique solution of F(T_1) = 0 and lies in the open interval (0, 1). The solution T* ∈ (0, 1) supports the claim made in the paper
that a complete substitution of traditional learning by ICT devices may not provide optimal cognitive returns.
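The interior optimum guaranteed by Results 1 and 2 can be checked numerically for concrete concave utilities. As a sketch under our own illustrative assumptions, take u_1(T) = √T and u_2(T) = 2√T (our choice, not the paper's; their infinite marginal returns at zero are enough for F to change sign), for which the first-order condition has the closed-form interior solution T_1* = 0.2:

```python
# Bisection on F(T1) = u1'(T1) - u2'(1 - T1) for the illustrative concave
# utilities u1(T) = sqrt(T), u2(T) = 2*sqrt(T). Analytically, F(T1) = 0 gives
# 1/(2*sqrt(T1)) = 1/sqrt(1 - T1), i.e. T1* = 0.2.
from math import sqrt

def F(t1):
    return 1 / (2 * sqrt(t1)) - 2 / (2 * sqrt(1 - t1))

lo, hi = 1e-9, 1 - 1e-9          # F(lo) > 0, F(hi) < 0: a sign change exists
for _ in range(60):              # F is strictly decreasing, so bisection works
    mid = (lo + hi) / 2
    if F(mid) > 0:
        lo = mid
    else:
        hi = mid
t_star = (lo + hi) / 2
print(round(t_star, 4))          # 0.2 -- interior: neither 0 nor 1
```

The exact split depends entirely on the chosen utilities; the point is only that the optimum is interior, as Results 1 and 2 assert.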

A2. Meta-cognition strategies used as control variables

a) UNDREM is metacognitive awareness of the reading strategies of understanding and remembering. It is derived from students' reports
on the usefulness of a list of strategies for understanding and memorizing a text, e.g., I underline important parts of the
text. Six items ask for student opinions rated on a six-point Likert-type scale (OECD, 2010, p. 113).
b) METASUM is metacognitive awareness of the reading strategy of summarizing. It is derived from students' reports on the usefulness of
a list of strategies for writing the summary of a long and rather difficult text, e.g., I carefully check whether the most important
facts in the text are represented in the summary. Five items seek student opinions, rated on a six-point Likert-type scale (OECD, 2010,
p. 113).
c) METASPAM measures a student's competency in assessing the credibility of online information. It is derived from students' responses
on the reaction to a received email, e.g., I answer the mail and ask for more information about the smartphone. Five items seek
students' responses, rated on a six-point Likert-type scale. Assessing credibility is also associated with the intrinsic critical
thinking of an individual (Gamazo & Martinez-Abad, 2020).


A3. Correlation Matrix with the Independent Variables

A correlation matrix of the independent variables in our model is presented below. Multicollinearity is a situation where two or more
predictors/explanatory variables are highly linearly correlated with each other. In general, an absolute correlation coefficient greater
than 0.7 between predictors may indicate the presence of multicollinearity (Tabachnick et al., 2007). The correlation matrix below shows
no significant indications of multicollinearity between variables.

Table A1
Correlation Matrix for the independent variables used in the HLM Regression Model

Variables                              (1)     (2)     (3)     (4)     (5)     (6)     (7)     (8)     (9)     (10)    (11)    (12)
(1)  Female                           1.000
(2)  Understanding and remembering    0.123   1.000
(3)  Assessing credibility            0.064   0.302   1.000
(4)  Summarizing                      0.154   0.450   0.374   1.000
(5)  ESCS                            −0.008   0.138   0.198   0.169   1.000
(6)  Online Activity                  0.032   0.074   0.043   0.056   0.186   1.000
(7)  Motivation Index                 0.096   0.085   0.020   0.077   0.039   0.182   1.000
(8)  Public School                    0.002  −0.046  −0.060  −0.062  −0.198  −0.033  −0.034   1.000
(9)  School ICT capacity             −0.002   0.020   0.035   0.011   0.182   0.067   0.002  −0.178   1.000
(10) Education material (quality)    −0.008  −0.044  −0.069  −0.059  −0.218  −0.051  −0.007   0.230  −0.463   1.000
(11) School location                 −0.005   0.056   0.077   0.062   0.201   0.082   0.022  −0.228   0.126  −0.139   1.000
(12) Log GDP (’17)                   −0.001   0.065   0.155   0.122   0.361  −0.011  −0.055  −0.174   0.121  −0.244   0.066   1.000
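The 0.7 screening rule described above can be applied programmatically. The sketch below (Python with pandas; the variable names and data are illustrative stand-ins, not the PISA microdata) flags predictor pairs whose absolute Pearson correlation exceeds the threshold:

```python
import pandas as pd

def flag_multicollinearity(df: pd.DataFrame, threshold: float = 0.7):
    """Return predictor pairs whose absolute Pearson correlation exceeds
    `threshold` (the rule of thumb in Tabachnick et al., 2007)."""
    corr = df.corr().abs()
    cols = corr.columns
    flagged = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):  # upper triangle only
            if corr.iloc[i, j] > threshold:
                flagged.append((cols[i], cols[j], round(corr.iloc[i, j], 3)))
    return flagged

# Illustrative data: x2 is nearly a linear function of x1, so the
# (x1, x2) pair should be flagged; x3 is unrelated to both.
demo = pd.DataFrame({
    "x1": [1, 2, 3, 4, 5, 6],
    "x2": [2.1, 3.9, 6.2, 8.0, 9.9, 12.1],  # roughly 2 * x1
    "x3": [5, 1, 4, 2, 6, 3],
})
print(flag_multicollinearity(demo))  # only the (x1, x2) pair is flagged
```

In practice, the same screen would be run on the twelve predictors in Table A1 before fitting the HLM; with a maximum absolute off-diagonal correlation of 0.463, none would be flagged.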

A4. The Unconditional or Null Model

We first run a variance components model including only the random intercepts for each level and no explanatory variables at any level.
This shows the proportion of variance in the PISA reading test scores (using plausible value 2) attributable to variation across
schools and countries. Also termed the null model, this step helps to evaluate whether an HLM model is needed in the analysis. An
Intra-Cluster Correlation (ICC) coefficient of 0 (i.e. a null value) implies that there is no variation at the higher levels, such as
school or country, in which case an HLM model is not necessary. Table A2 below shows that a significant 22 percent of the variance is
explained at the country level and a further 27 percent of the variance is associated with school-level heterogeneities. Similar results were
found for the PISA mathematics and science scores. Hence, it is necessary to use HLM to obtain appropriate standard errors
and account for the nested data structure.

Table A2
Null Model results for justifying the use of HLM as a methodology

Level              ICC      Std. Err.   [95% Conf. Interval]
Country            .2176    .0281       .1674    .2778
School | Country   .4856    .0186       .4493    .5220
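The cumulative ICCs in Table A2 follow directly from the estimated variance components of the null model. The snippet below (Python; the variance values are hypothetical stand-ins chosen to mimic the pattern in Table A2, not our fitted estimates) shows the computation for a three-level model:

```python
def three_level_icc(var_country: float, var_school: float, var_student: float):
    """Cumulative ICCs for a three-level null (variance components) model.

    ICC(country)          = share of total variance between countries.
    ICC(school | country) = share of total variance at or above the school
                            level, i.e. country + school components combined.
    The school-level share alone is the difference between the two.
    """
    total = var_country + var_school + var_student
    return var_country / total, (var_country + var_school) / total

# Hypothetical variance components (not our fitted estimates):
icc_country, icc_school = three_level_icc(2200.0, 2700.0, 5100.0)
print(round(icc_country, 3), round(icc_school, 3))  # 0.22 0.49
```

With these stand-in values, 22 percent of the variance sits at the country level and a further 27 percent (0.49 − 0.22) at the school level, mirroring how the figures in the text are read off Table A2.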

References

Avvisati, F., & Organisation for Economic Co-operation and Development. (2015). Students, computers and learning: Making the connection. OECD Publishing. https://
www.oecd-ilibrary.org/education/students-computers-and-learning_9789264239555-en.
Azur, M. J., Stuart, E. A., Frangakis, C., & Leaf, P. J. (2011). Multiple imputation by chained equations: What is it and how does it work? International Journal of
Methods in Psychiatric Research, 20(1), 40–49. https://doi.org/10.1002/mpr.329
Bettinger, E., Fairlie, R. W., Kapuza, A., Kardanova, E., Loyalka, P. K., & Zakharov, A. (2020). Does EdTech substitute for traditional learning? Experimental estimates of the
educational production function. NBER Working Paper. https://doi.org/10.3386/w26967. w26967.
Bhatnagar, R., Kim, J., & Many, J. E. (2014). Candidate surveys on program evaluation: Examining Instrument reliability, validity and program effectiveness.
American Journal of Educational Research, 2(8), 683–690. https://doi.org/10.12691/education-2-8-18
Breakspear, S. (2012). The policy impact of PISA: An exploration of the normative effects of international benchmarking in school system performance. OECD Education Working Papers, No. 71. OECD Publishing.
Brotman, S. N. (2016). The real digital divide in educational technology. Brookings TechTank. https://www.brookings.edu/blog/techtank/2016/01/28/the-real-digital-
divide-in-educational-technology/.
Bussière, P., & Gluszynski, T. (2004). The impact of computer use on reading achievement of 15-year-olds. Learning Policy Directorate, Strategic Policy and Planning
Branch, Human Resources and Skills Development Canada.


Cheung, K. C., Mak, S. K., Sit, P. S., & Soh, K. C. (2016). A typology of student reading engagement: Preparing for response to intervention in the school curriculum.
Studies in Educational Evaluation, 48, 32–42. https://doi.org/10.1016/j.stueduc.2015.12.001
Cheung, K. C., Sit, P. S., Soh, K. C., Ieong, M. K., & Mak, S. K. (2014). Predicting academic resilience with reading engagement and demographic variables: Comparing
Shanghai, Hong Kong, Korea, and Singapore from the PISA perspective. The Asia-Pacific Education Researcher, 23(4), 895–909. https://doi.org/10.1007/s40299-
013-0143-4
Cheung, A. C., & Slavin, R. E. (2013). The effectiveness of educational technology applications for enhancing mathematics achievement in K-12 classrooms: A meta-
analysis. Educational Research Review, 9, 88–113. https://doi.org/10.1016/j.edurev.2013.01.001
Chiu, M. S. (2020). Linear or quadratic effects of ICT use on science and mathematics achievements moderated by SES: Conditioned ecological techno-process.
Research in Science & Technological Education, 1–22. https://doi.org/10.1080/02635143.2020.1830270
Chiu, M. M., & McBride-Chang, C. (2006). Gender, context, and reading: A comparison of students in 43 countries. Scientific Studies of Reading, 10(4), 331–362.
https://doi.org/10.1207/s1532799xssr1004_1
Edmunds, R., Thorpe, M., & Conole, G. (2012). Student attitudes towards and use of ICT in course study, work and social activity: A technology acceptance model
approach. British Journal of Educational Technology, 43(1), 71–84. https://doi.org/10.1111/j.1467-8535.2010.01142.x
Fernández-Gutiérrez, M., Gimenez, G., & Calero, J. (2020). Is the use of ICT in education leading to higher student outcomes? Analysis from the Spanish autonomous
communities. Computers & Education, 157, 103969. https://doi.org/10.1016/j.compedu.2020.103969
Gamazo, A., & Martínez-Abad, F. (2020). An exploration of factors linked to academic performance in PISA 2018 through data mining techniques. Frontiers in
Psychology, 11, 3365. https://doi.org/10.3389/fpsyg.2020.575167
Gaol, F. L., & Hutagalung, F. (2020). The trends of blended learning in South East Asia. Education and Information Technologies, 25, 659–663. https://doi.org/10.1007/
s10639-020-10140-4
Gómez-Fernández, N., & Mediavilla, M. (2018, November 26). Do information and communication technologies (ICT) improve educational outcomes? Evidence for
Spain in PISA 2015. IEB Working Paper N. 2018/20. https://doi.org/10.2139/ssrn.3290513. https://ssrn.com/abstract=3290513.
Goundar, S. (2014). The distraction of technology in the classroom. Journal of Education & Human Development, 3(1), 211–229.
Gubbels, J., Swart, N. M., & Groen, M. A. (2020). Everything in moderation: ICT and reading performance of Dutch 15-year-olds. Large-scale Assessments in Education, 8
(1), 1–17. https://doi.org/10.1186/s40536-020-0079-0
Gumus, S., & Atalmis, E. H. (2011). Exploring the relationship between purpose of computer usage and reading skills of Turkish students: Evidence from PISA 2006.
Turkish Online Journal Of Educational Technology-TOJET, 10(3), 129–140.
Guzeller, C. O., & Akin, A. (2014). Relationship between ICT variables and mathematics achievement based on PISA 2006 database: International evidence. Turkish
Online Journal of Educational Technology-TOJET, 13(1), 184–192.
Hamel, K., & Kharas, H. (2018, September 27). A global tipping point: Half the world is now middle class or wealthier. Brookings Future Development blog. https://www.brookings.edu/
blog/future-development/2018/09/27/a-global-tipping-point-half-the-world-is-now-middle-class-or-wealthier/.
Hinvest, N., & Brosnan, M. (2012). Identifying vulnerability markers for pathological internet use and pathological video-game playing within an educational context.
Journal of Educational Computing Research, 46(4), 357–376.
Junco, R., & Cotten, S. R. (2011, September 13). A decade of distraction? How multitasking affects student outcomes.
Karlsson, L. (2020). Computers in education: The association between computer use and test scores in primary school. Education Inquiry, 1–30. https://doi.org/
10.1080/20004508.2020.1831288
Kennedy, G. E., Judd, T. S., Churchward, A., Gray, K., & Krause, K. L. (2008). First year students’ experiences with technology: Are they really digital natives?
Australasian Journal of Educational Technology, 24(1). https://doi.org/10.14742/ajet.1233
Khan, S. A., Bhatti, R., & Khan, A. A. (2011). Use of ICT by students: A survey of faculty of education at IUB. Library Philosophy and Practice, 1.
Kim, B., Park, H., & Baek, Y. (2009). Not just fun, but serious strategies: Using meta-cognitive strategies in game-based learning. Computers & Education, 52(4),
800–810.
Kirschner, P. A., & Karpinski, A. C. (2010). Facebook® and academic performance. Computers in Human Behavior, 26(6), 1237–1245. https://doi.org/10.1016/j.
chb.2010.03.024
Kozma, R. B. (2005). National policies that connect ICT-based education reform to economic and social development. Human Technology: An Interdisciplinary Journal
on Humans in ICT Environments. https://jyx.jyu.fi/handle/123456789/20179.
Krieg, J. M. (2019, September 6). International student assessment: Performance and spending. Fraser Institute. https://www.fraserinstitute.org/studies/international-
student-assessment-performance-and-spending.
Krueger, A. B., & Lindahl, M. (1999). Education for growth in Sweden and the world. NBER Working Paper No. 7190. https://www.nber.org/papers/w7190.
Ku, K. Y., & Ho, I. T. (2010). Metacognitive strategies that enhance critical thinking. Metacognition and Learning, 5(3), 251–267. https://doi.org/10.1007/s11409-010-
9060-6
Kvavik, R. B. (2005). Convenience, communications, and control: How students use technology. Educating the Net Generation, 1(2005), 7-1.
Lake, R., & Makori, A. (2020, June 16). The digital divide among students during COVID-19: Who has access? Who doesn’t? The Lens, Center on Reinventing Public Education.
Lee, V. E. (2000). Using hierarchical linear modeling to study social contexts: The case of school effects. Educational Psychologist, 35(2), 125–141. https://doi.org/
10.1207/S15326985EP3502_6
Lei, J. (2010). Quantity versus quality: A new approach to examine the relationship between technology use and student outcomes. British Journal of Educational
Technology, 41(3), 455–472. https://doi.org/10.1111/j.1467-8535.2009.00961.x
Lei, J., & Zhao, Y. (2007). Technology uses and student achievement: A longitudinal study. Computers & Education, 49(2), 284–296. https://doi.org/10.1016/j.
compedu.2005.06.013
López-Pérez, M. V., Pérez-López, M. C., Rodríguez-Ariza, L., & Argente-Linares, E. (2013). The influence of the use of technology on student outcomes in a blended
learning context. Educational Technology Research and Development, 61(4), 625–638. https://doi.org/10.1007/s11423-013-9303-8
Lorenceau, A., Marec, C., & Mostafa, T. (2019). Upgrading the ICT questionnaire items in PISA 2021.
Luu, K., & Freeman, J. G. (2011). An analysis of the relationship between information and communication technology (ICT) and scientific literacy in Canada and
Australia. Computers & Education, 56(4), 1072–1082. https://doi.org/10.1016/j.compedu.2010.11.008
Naumann, J., & Sälzer, C. (2017). Digital reading proficiency in German 15-year olds: Evidence from PISA 2012. Zeitschrift für Erziehungswissenschaft, 20(4), 585–603.
https://doi.org/10.1007/s11618-017-0758-y
Ndambakuwa, S., & Brand, G. (2020). Commentary: Many students in developing countries cannot access education remotely. https://harris.uchicago.edu/news-events/
news/commentary-many-students-developing-countries-cannot-access-education-remotely.
Nicolai, S., Wilson, S., Jefferies, K., Proctor, J., Gichigi, T., & Gould, B. (2020). EdTech and COVID-19: 10 Things to Know. EdTech Hub. https://edtechhub.org/wp-
content/uploads/2020/09/EdTech_10_things_V11-1.pdf.
Odell, B., Cutumisu, M., & Gierl, M. (2020). A scoping review of the relationship between students’ ICT and performance in mathematics and science in the PISA data.
Social Psychology of Education, 1–33. https://doi.org/10.1007/s11218-020-09591-x
Organisation for Economic Co-operation and Development (OECD). (2010). PISA 2009 results: Learning to learn: Student engagement, strategies and practices (Vol. III). Paris,
France: OECD.
Pennequin, V., Sorel, O., Nanty, I., & Fontaine, R. (2010). Metacognition and low achievement in mathematics: The effect of training in the use of metacognitive skills
to solve mathematical word problems. Thinking & Reasoning, 16(3), 198–220. https://doi.org/10.1080/13546783.2010.509052
Piper, B., Zuilkowski, S. S., Kwayumba, D., & Strigel, C. (2016). Does technology improve reading outcomes? Comparing the effectiveness and cost-effectiveness of ICT
interventions for early grade reading in Kenya. International Journal of Educational Development, 49, 204–214. https://doi.org/10.1016/j.ijedudev.2016.03.006
Ponzo, M. (2011). Does the way in which students use computers affect their school performance? Journal of Economic and Social Research, 13(2), 1.


Rabe-Hesketh, S., & Skrondal, A. (2006). Multilevel modelling of complex survey data. Journal of the Royal Statistical Society: Series A (Statistics in Society), 169(4),
805–827. https://doi.org/10.1111/j.1467-985X.2006.00426.x
Ramseier, & Holzer. (2005). Vertrautheit mit Informations- und Kommunikationstechnologien (IKT). PISA 2003: 31 Kompetenzen für die Zukunft. Zweiter nationaler
Bericht. Neuchâtel: BFS und EDK.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (Vol. 1). Sage.
Ripley, A. (2017, September 21). Boys are not defective. The Atlantic. https://www.theatlantic.com/education/archive/2017/09/boys-are-not-defective/540204/.
Rutkowski, L., Gonzalez, E., Joncas, M., & von Davier, M. (2010). International large-scale assessment data: Issues in secondary analysis and reporting. Educational
Researcher, 39(2), 142–151. https://doi.org/10.3102/0013189X10363170
Säälik, Ü., Nissinen, K., & Malin, A. (2015). Learning strategies explaining differences in reading proficiency. Findings of Nordic and Baltic countries in PISA 2009.
Learning and Individual Differences, 42, 36–43. https://doi.org/10.1016/j.lindif.2015.08.025
Scaling PISA cognitive data. In PISA 2012 technical report (pp. 143–163). OECD.
Schellenberg, D. (2018). Analyzing the impact of BYOD in secondary school English classrooms (Doctoral dissertation). https://api.semanticscholar.org/CorpusID:
210682251.
Scherer, R., & Siddiq, F. (2019). The relation between students’ socioeconomic status and ICT literacy: Findings from a meta-analysis. Computers & Education, 138,
13–32. https://doi.org/10.1016/j.compedu.2019.04.011
Sharma, A. (2021). Blended mode of learning is the way forward in the post pandemic era. CSD working paper series: Towards a new Indian model of information and
communications technology-led growth and development. ICT India Working Paper #61. Center for Sustainable Development, Columbia University.
Sianesi, B., & Reenen, J. V. (2003). The returns to education: Macroeconomics. Journal of Economic Surveys, 17(2), 157–200. https://doi.org/10.1111/1467-
6419.00192
Sofianopoulou, C., & Bountziouka, V. (2011). Information and communication technology and cross-national analysis of educational performance. In ICERI2011
proceedings (pp. 800–806). IATED.
Spiezia, V. (2011). Does computer use increase educational achievements? Student-level evidence from PISA. OECD Journal: Economic Studies, 2010(1), 1–22. https://
doi.org/10.1787/19952856
Tabachnick, B. G., Fidell, L. S., & Ullman, J. B. (2007). Using multivariate statistics (Vol. 5, pp. 481–498). Boston, MA: Pearson.
Tømte, C., & Hatlevik, O. E. (2011). Gender-differences in self-efficacy ICT related to various ICT-user profiles in Finland and Norway. How do self-efficacy, gender
and ICT-user profiles relate to findings from PISA 2006. Computers & Education, 57(1), 1416–1424. https://doi.org/10.1016/j.compedu.2010.12.011
Wang, M. C., Haertel, G. D., & Walberg, H. J. (1990). What influences learning? A content analysis of review literature. The Journal of Educational Research, 84(1),
30–43. https://doi.org/10.1080/00220671.1990.10885988
Wilichowski, T., & Cobo, C. (2021). Considering an adaptive learning system? A roadmap for policymakers. World Bank Blogs. https://blogs.worldbank.org/education/
considering-adaptive-learning-system-roadmap-policymakers.
World Bank. (2021). Keeping Bangladesh’s students learning during the COVID-19 pandemic. April 18. World Bank https://www.worldbank.org/en/results/2021/04/18/
keeping-bangladesh-s-students-learning-during-the-covid-19-pandemic.
Zhang, D., & Liu, L. (2016). How does ICT use influence students’ achievements in math and science over time? Evidence from PISA 2000 to 2012. Eurasia Journal of
Mathematics, Science and Technology Education, 12(9), 2431–2449. https://doi.org/10.12973/eurasia.2016.1297a
