

Remote learning during COVID-19: Student satisfaction and performance

Daniel Loton1, Cameron Stein2, Philip Parker3 & Sally Gauci1

1 Connected Learning, Victoria University, Melbourne, Australia
2 Data Insights, Victoria University, Melbourne, Australia
3 Institute for Positive Psychology and Education, Australian Catholic University

Author Note

Daniel Loton https://orcid.org/0000-0003-4106-0555

Philip Parker https://orcid.org/0000-0002-4604-8566

We thank Ms. Leisa Franklin for extracting assessment design data, and the authors of all R packages utilised.

Statement of contributions: DL conceived the study, obtained ethical approval, undertook the analysis, and drafted the method and partially the introduction; CS navigated and helped interpret institutional data, and reviewed the draft manuscript; PP guided all aspects of the statistical analyses including use of the specific software applications utilised, and reviewed the draft manuscript; and SG completed the draft introduction, wrote the discussion, edited and reviewed the draft manuscript.

Three authors are employed by the intervention institution.

Correspondence concerning this article should be sent to Daniel.Loton@vu.edu.au.



Remote learning during COVID-19: Student satisfaction and performance

The rapid introduction of University remote learning due to COVID-19 raised concerns of poorer educational outcomes, especially for at-risk students. Comparing satisfaction (n=33,029) and marks (n=128,823) in the first fully online unit with the previous three years, multilevel models estimate the effect of remote learning with comprehensive controls, and test equity and curricula moderators. Results indicate a significant small decrement in satisfaction and a small increase in marks; both effects are so small as to be educationally insubstantial. No highly dissatisfied or poorly performing student sub-groups were identified. While not all aspects of education are measured, this high-level comparison indicates a successful initial transition to remote learning.

Keywords: Higher education, COVID-19, remote learning, student satisfaction, student performance

In response to the COVID-19 global pandemic, Universities rapidly introduced remote learning in early 2020 to limit virus transmission. This unprecedented initiative was also intended to preserve the capacity to deliver quality University education. However, given the rapid and emergency nature of the introduction, and a prior online learning literature that generally focusses on those who choose to study online, it is plausible that this newly implemented education ecosystem is less than ideal.
Distance, blended and online learning at Universities have been steadily growing over the past fifty years (Shachar & Neumann, 2010; Means et al., 2013). Reviews of ‘fully online’ learning broadly conclude that students can find this mode of learning satisfying and at least as effective as face-to-face learning, despite reported disadvantages such as feelings of isolation and technology gaps (for a review see Castro & Tumibay, 2019).
Similar factors that determine satisfaction and student performance during in-person
education seem to apply online, such as perceived relevance, self-efficacy, and the quantity and
quality of content, systems and student-instructor interactions (Mayer, 2019; Noetel et al., 2020; see
also Walker & Fraser, 2005 and citing literature). Some meta-analyses indicate blended learning
moderately outperforms in-person and fully online learning (Means et al. 2013).
While online learning may have the potential to deliver similar outcomes to other modes of learning when intentionally implemented or chosen, the capability of mass, impromptu remote learning to enable satisfaction and learning remains unknown. Furthermore, given that some at-risk students experience a greater performance gap when learning online (Xu & Jaggars, 2014), a rapid move online may exacerbate these gaps.
Victoria University, Melbourne, Australia (VU), is in an opportune position to investigate the impact of remote learning due to its recent transition of all undergraduate units (courses) to a blended, one-unit-at-a-time block model for its diverse student cohort. At the advent of COVID-19, the University quickly prepared for block units to be delivered remotely. The first fully online unit was delivered in April 2020, and the University was still delivering remote learning at the time of submission. Face-to-face classes were replaced with online classes, delivered via video conferencing software with capacity for smaller breakout rooms. A learning management system was already in place. Some students with limited technology access were granted special approval to use on-campus libraries to attend online classes.
Student performance (also termed achievement, assessment results, grades) has been the
subject of extensive study (Schneider & Preckel, 2017), as has student satisfaction, often measured
with what are commonly termed Student Evaluation of Teaching scales, abbreviated as SET scales
(Marsh et al., 2019). SET-type scales generally have good measurement characteristics and some
content validity, but are also subject to some biases (for a review see Spooren et al., 2013).
As University systems produce assessment results and undertake student satisfaction
measures regularly, they can be useful institutional indicators – a kind of pulse check – to examine
the effects of major changes such as the introduction of remote learning in light of the historical
levels of that indicator.
This study examines institutional student satisfaction and grade data for the first remote
learning unit delivered at one University in Australia, comparing it with three prior years and using
comprehensive controls for changing student cohorts and nested levels. Furthermore, the study tests student and curricula indicators as moderators of the effect of moving online, in an attempt to identify particularly dissatisfied or poorly performing student sub-groups.

Hypotheses
There are two hypotheses that were pre-registered on the Open Science Framework
(anonymous link) prior to accessing the relevant institutional data:

1. The transition to remote delivery will have a negative effect on student satisfaction and
performance.

2. The effect of remote delivery will be moderated by student demographics and pathways, and curricula factors including number of times delivered in block mode and assessment design change.

Method

The introduction of remote learning was treated as a natural experiment, and effects on
student satisfaction and marks were examined at a University located in Victoria, Australia. Ethical
approval was sought through the lead author’s institution (HRE17-192).

Sample and measures

The dataset spans from 2016, when the current satisfaction survey version was introduced, to the second block unit of 2020, when fully online learning began. Table 1 presents the item wordings in the institutional satisfaction survey, delivered via the learning management system and email near the conclusion of each unit. Similar SET scales have been the subject of thousands of studies
(Marsh, Dicke & Pfeiffer, 2019). Student performance (grades, marks) refers to the final weighted
summative assessments for a given unit, and is also the subject of extensive study including meta-
analytic reviews (Schneider & Preckel, 2017).
A predicted latent score for satisfaction with unit and with teaching was calculated using the lavaan predict function, in a confirmatory factor analysis with each set of six items loading on to a respective latent factor, and the two factors covaried. Fit indices were adequate and the latent factor correlation large: χ2(53) = 28213.18, CFI = .97, TLI = .97, RMSEA = .08, p < .001, [0.07, 0.08], SRMR = 0.02, Ψ = 0.74 (0.03), [0.73, 0.75]. Predicted factor scores rather than latent variables were used due
to the complexity of the nested structure in the primary models.
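
For illustration, a minimal sketch of this step in R, assuming hypothetical item names (seu1-seu6, set1-set6) and a data frame named ratings, neither of which are the institutional variable names:

```r
# Minimal sketch of the two-factor CFA and factor-score prediction described above.
# Item and data frame names are hypothetical; only the model structure follows the text.
library(lavaan)

cfa_model <- '
  seu =~ seu1 + seu2 + seu3 + seu4 + seu5 + seu6   # satisfaction with unit
  set =~ set1 + set2 + set3 + set4 + set5 + set6   # satisfaction with teaching
  seu ~~ set                                        # the two latent factors covary
'

fit <- cfa(cfa_model, data = ratings)
fitMeasures(fit, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))

# Predicted factor scores, used in place of full latent variables in the multilevel models.
fscores <- lavPredict(fit)
head(fscores)
```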

Insert table 1 around here

Institutional data were filtered to units with observations both in remote learning and before remote learning, resulting in 214 units with marks observations and 223 with satisfaction
observations. These two datasets comprised n=128,823 marks observations (9737 remote, 119,086
pre-remote) from n=29,429 unique students, and n=33,029 satisfaction observations (3055 remote,
29,974 pre-remote), from n=13,576 unique students, rating n=923 teachers. Satisfaction response
rates were fairly low at mean 25.72% (sd=8.44) per unit. This rate is common in SET-type surveys
(Marsh, Dicke & Pfeiffer, 2019), and simulation studies suggest it is low for making an inference
about a given teacher in a given unit and delivery period (He & Freeman, 2020); but the sample size
is large for estimating the central effect of interest – the effect of remote delivery.
As marks and satisfaction data are likely to exclude students who discontinue a unit early, partly due to a recently introduced program that automatically discontinues students with no engagement early in a unit, with the aim of minimising student debt, we examine pre-census attrition against historical levels. The census date is the date by which a student must withdraw or discontinue from a unit in order not to incur the associated debt.
We confirmed that pre-census discontinuations were not markedly different in remote learning. Pre-census attrition in the first block unit of 2020 (non-remote) was 18.32%, and 21.25% in the first remote learning block, an increase of 2.93 percentage points. In 2018 and 2019 a similar increase in pre-census attrition was evident from the first to the second block of the year (2.90 points in 2018, 1.62 points in 2019). This small increase may indicate some students chose not to study in remote learning, and/or
cancellation of some units which required in-person delivery. As a similar pattern of attrition was
seen prior to remote learning, the current sample is unlikely to have excluded a large subgroup of
students who chose to discontinue due to remote learning.
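
A minimal sketch of this attrition check, assuming a hypothetical enrolments table with one row per student-unit attempt and a logical pre-census withdrawal flag (all names are illustrative):

```r
# Sketch of the pre-census attrition comparison across delivery periods.
# 'enrolments', 'year', 'block' and 'withdrew_pre_census' are illustrative names.
library(dplyr)

attrition <- enrolments %>%
  group_by(year, block) %>%
  summarise(pre_census_attrition = 100 * mean(withdrew_pre_census), .groups = "drop")

# The quantity reported in the text is the block-to-block difference, e.g. for 2020:
# 21.25 - 18.32 = 2.93 percentage points.
```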

Student demographic, pathway and equity indicators are collected at enrolment and hence
had very low missingness except for two: the Australian Tertiary Admittance Rank (or ATAR, 41.7%
missing), and the Socio-Economic Indexes for Areas (SEIFA, 11.1% missing). All covariates are listed in
Table 2. ATAR is a score used to rank performance in the final year of secondary schooling in
Australia, and often used to grant entry to University courses; yet students are increasingly able to
enrol in University courses without an ATAR score, in line with increasing access to the sector.
Student entry pathways with a basis in ATAR are flagged in our data by a VTAC (Victorian Tertiary Admissions Centre) entry flag. SEIFA is an index of socio-economic status derived from
postcode, which can be missing if students do not report a postcode, or are living outside of
Australia at the time of enrolment.
Two unit-level curricular moderators were generated: the number of times a unit has been
delivered in block mode, as the introduction of a block model accompanied blended learning which
was thought to help prepare for fully online learning; and assessment design change, such as by
removing exams in fully online learning. An indicator of the number of times a unit was delivered in block mode prior to remote learning was calculated (range=0-14, m=3.80 (sd=3.41), mode=53 in the satisfaction dataset; range=0-14, m=3.89 (sd=3.44), mode=59 in the marks dataset), and a dichotomous assessment design change flag was created using the curriculum management system at the University, in which 47 units changed assessment design in remote delivery.
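
A minimal sketch of how these two unit-level moderators might be constructed, assuming hypothetical tables for unit deliveries and for assessment changes exported from the curriculum management system (all names are illustrative):

```r
# Sketch of deriving the unit-level curricular moderators. 'deliveries' (one row per unit
# delivery), 'units', 'assessment_changes' and 'remote_start_date' are illustrative names.
library(dplyr)

block_counts <- deliveries %>%
  filter(block_mode, delivery_start < remote_start_date) %>%  # pre-remote block deliveries
  count(unit_code, name = "n_block_pre_remote")

units <- units %>%
  left_join(block_counts, by = "unit_code") %>%
  mutate(
    n_block_pre_remote = coalesce(n_block_pre_remote, 0L),            # never delivered in block
    assessment_changed = unit_code %in% assessment_changes$unit_code  # dichotomous change flag
  )
```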
Hot deck imputation was undertaken using the R package hot.deck (Cranmer et al., 2020), to impute missing ATAR (42.7% missing) and SEIFA (11.1% missing) values. Partly because there is no central database recording a given student unit attempt against a given teacher or teaching team, missing satisfaction and marks values were not imputed, as no meaningful missing data model could be formulated. We also include a flag for missing ATAR and SEIFA scores as a dichotomous covariate in the models.
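
A minimal sketch of the imputation step, assuming a hypothetical students data frame; the hot.deck defaults shown here stand in for whatever settings were actually used:

```r
# Sketch of hot-deck imputation of missing ATAR and SEIFA values with the hot.deck
# package (Cranmer et al., 2020). 'students', 'atar' and 'seifa' are illustrative names.
library(hot.deck)

# Missingness flags are kept so they can enter the models as dichotomous covariates.
students$atar_missing  <- is.na(students$atar)
students$seifa_missing <- is.na(students$seifa)

# Multiple hot-deck imputation; here we assume ATAR and SEIFA are the only variables
# with missing values, so only they are filled in.
imp <- hot.deck(students)
students_imputed <- imp$data[[1]]   # e.g. take the first imputed data set
```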
Satisfaction was highly negatively skewed for both unit and teaching, meaning most students were satisfied, as in previous studies of similar student satisfaction scales: simple scale
means satisfaction with unit m=4.06(sd=0.89) overall, m=4.05(sd=0.88) pre-remote,
m=4.01(sd=0.97) in remote; satisfaction with teaching overall m=4.17(sd=0.92), pre-remote
m=4.16(sd=0.93), in-remote m=4.17(sd=0.88). Marks clustered at the lower boundary of a credit,
with an increase that is mostly attributable to the introduction of the block model: overall
m=61.88(sd=21.98), pre-remote m=61.36 (sd=22.07), in-remote m=68.34 (sd=19.47).

Analyses

Mixed-effect cross-classified models were fitted using the R package lme4 (Bates et al., 2015,
version 1.1-21), to compare student satisfaction and performance pre and post the introduction of
remote learning. Comprehensive control variables were included for different student cohorts,
comprised mainly of student demographic, pathway and equity indicators collected on enrolment
(see Table 2). As the University began a transition to block mode in 2018, which was associated with
higher marks (Author blinded, 2019), we include block mode as a covariate. We also include a linear
and quadratic time variable to account for trends over time and seasonal patterns. Moderation tests examined whether student or curricula factors moderated the effect of remote learning.
As the data contains multiple nested levels, random intercepts were included for unit,
student and teacher. However, for models predicting marks a teacher random intercept was not
included as the teacher identification relied on the voluntary completion of the satisfaction survey,
and including a teacher random intercept resulted in the loss of many observations in models. Intra-
class correlations indicated the nested levels explained substantial variance in satisfaction items
(unit 6-8%, teacher 11-23% and person 16-23%) and marks (unit 8%, person 65%), and that random
intercepts fit substantially better than fixed intercept models (Author blinded, 2019). Similarly,
including a random intercept for teacher in marks models made little difference (Author blinded,
2019).
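
A minimal sketch of these cross-classified models in lme4, with illustrative variable names and a truncated covariate set standing in for the full list in Table 2:

```r
# Sketch of the main-effect, cross-classified random-intercept models (lme4; Bates et al.,
# 2015). Variable names are illustrative and only a subset of Table 2 covariates is shown.
library(lme4)

# Satisfaction with unit: crossed random intercepts for unit, student and teacher.
m_sat <- lmer(
  sat_unit ~ remote + block_mode + time_linear + time_quadratic + age + atar + atar_missing +
    seifa + seifa_missing + discipline + gender +
    (1 | unit_code) + (1 | student_id) + (1 | teacher_id),
  data = satisfaction_data
)

# Marks: the teacher intercept is omitted because teacher IDs depend on survey completion.
m_marks <- lmer(
  mark ~ remote + block_mode + time_linear + time_quadratic + age + atar + atar_missing +
    seifa + seifa_missing + discipline + gender +
    (1 | unit_code) + (1 | student_id),
  data = marks_data
)

# A moderation test adds an interaction between remote delivery and a moderator, e.g.:
m_mod <- update(m_sat, . ~ . + remote:gender)
```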

Pseudo-R2 and variance components are calculated per Johnson (2014) and reported. As some methodologists advocate including a random slope for all variables where possible, due to the advantages of shrinking error estimates (see Harrison et al., 2018 for a review), we generate and compare R2 figures in models including a random slope for remote delivery across unit, student and teacher (again with the exception of teacher in the marks model). To ensure excluding a teacher random intercept
does not affect the estimation of marks in remote delivery greatly, we also estimate a main effect
model including the teacher random intercept, but not random slopes.
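
A minimal sketch of this random-slope comparison; the text does not name the package used for the Johnson (2014) calculation, so MuMIn::r.squaredGLMM, one implementation of the Nakagawa-Schielzeth/Johnson approach, is shown purely for illustration:

```r
# Sketch of the pseudo-R2 comparison between random-intercept and random-slope models.
# Variable names are illustrative; r.squaredGLMM is one implementation of Johnson (2014).
library(lme4)
library(MuMIn)

m_int <- lmer(
  sat_unit ~ remote + block_mode + time_linear + time_quadratic +
    (1 | unit_code) + (1 | student_id) + (1 | teacher_id),
  data = satisfaction_data
)

m_slopes <- lmer(
  sat_unit ~ remote + block_mode + time_linear + time_quadratic +
    (1 + remote | unit_code) + (1 + remote | student_id) + (1 + remote | teacher_id),
  data = satisfaction_data
)

# R2m: variance explained by fixed effects; R2c: fixed plus random effects.
r.squaredGLMM(m_int)
r.squaredGLMM(m_slopes)
```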

Place table 2 around here

Main effect models were fitted using lme4, with the deltaMethod function in the package
car (Fox & Weisberg, 2019, version 3.0-6) to estimate marginal effects with confidence intervals, and
the package interplot (Solt & Hu, 2019, version 0.2.2) to produce conditional significance plots for
continuous moderators. We test whether the inclusion of random slopes added any variance explained, and examine the variance components of the models using the method described in Johnson (2014).
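
A minimal sketch of the marginal-effect and conditional-plot steps, with illustrative interaction models and coefficient positions; the deltaMethod expression depends on where the relevant coefficients sit in a given model:

```r
# Sketch of marginal effects via car::deltaMethod (Fox & Weisberg, 2019) and conditional
# significance plots via interplot (Solt & Hu). Models and variable names are illustrative.
library(lme4)
library(car)
library(interplot)

# Hypothetical interaction models: remote delivery moderated by gender and by ATAR.
m_gender <- lmer(sat_unit ~ remote * gender + block_mode +
                   (1 | unit_code) + (1 | student_id) + (1 | teacher_id),
                 data = satisfaction_data)
m_atar   <- lmer(sat_unit ~ remote * atar + block_mode +
                   (1 | unit_code) + (1 | student_id) + (1 | teacher_id),
                 data = satisfaction_data)

# Marginal effect of remote delivery in the non-reference gender category: rename the fixed
# effects b0, b1, ... so the interaction coefficient can appear in the delta-method expression.
k <- length(fixef(m_gender))
deltaMethod(m_gender, "b1 + b4", parameterNames = paste0("b", 0:(k - 1)))
# b1 is assumed to be 'remote' and b4 the 'remote:gender' interaction coefficient.

# Conditional significance plot of the remote effect across the continuous moderator ATAR.
interplot(m = m_atar, var1 = "remote", var2 = "atar")
```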

Results

Table 3 reports the main effects and significant moderators of remote learning on
satisfaction with unit/course, satisfaction with teacher, and education achievement for the unit.
Figure 1 plots the continuous significant moderators. Where an interaction term is significant,
marginal effects are reported.
Hypothesis one was partly confirmed. Satisfaction with unit and teaching significantly
declined, but marks significantly improved. However, in all cases, while effects are significant in that their 95% confidence intervals do not include zero, they are so small in size as to be educationally insubstantial.
The inclusion of random slopes for unit, student and teacher did not explain substantial
additional variance in the pseudo-R2 indicator in any model. In random slope models, a total of
43.11% of variance in satisfaction with unit was explained, 0.02% from fixed effects, compared with
42.80% for the same model without random slopes. In satisfaction with teaching models, including random slopes resulted in 42.99% total variance explained, 0.02% from fixed effects, compared with 42.52%
in models without random slopes. Finally, 66.15% of total variance in marks was explained by the
random slope models, with 0.07% from fixed effects. As a test of robustness, including teacher as a
random intercept in marks models made little difference to the standardised effect on marks in
remote learning: b=0.03, se=0.01, 95% CI [0.01, 0.06].
Hypothesis two was also partly confirmed. A comprehensive range of demographic, pathway
and equity indicators, as well as two curricula moderators, were tested. Again, some interaction terms were significant, with some disciplines experiencing a greater decline in satisfaction, and some student groups a greater gain in marks – yet marginal effects show that the changes are very small. No profile of a highly disadvantaged student group who were much less satisfied and/or performed more poorly than their historical counterparts emerged. In terms of curricular moderators, the
number of times a unit was delivered in block mode, or whether a unit altered assessment design,
did not moderate the effect of remote learning on satisfaction or marks.

Place Table 3 around here

Place Figure 1 around here



Discussion

Overall, the rapid and mass introduction of fully online, remote learning at VU did not result in a major decrement in student satisfaction with units and teaching, or in student performance. In fact, student performance increased marginally, a very small effect consistent with meta-analytic effect estimations. Indeed, the standardised effect of remote delivery on marks in the present study is identical to the meta-analytically estimated effect of online learning in Schneider and Preckel (2017), ranked 88th in size (Cohen's d = 0.05, p. 15).
Student satisfaction declined by a similarly small amount, also consistent with reviews of
student satisfaction in distance or online learning (Castro & Tumibay, 2019). Given the rushed shift
to remote learning, the small differences found in student performance and satisfaction are
reassuring, and consistent with literature regarding well-planned online learning, which suggests
online learning can be as effective for student satisfaction and performance as other delivery modes;
at least in the short-term (Castro & Tumibay 2019, Mayer, 2019).
Furthermore, no highly dissatisfied or poorly performing student sub-groups were identified
through moderation testing of many comprehensive equity, pathway and demographic sub-groups.
A long line of literature establishes a link between higher educational progress and aspects of socio-economic status, but the effect is sometimes not very large, ranked 68th in the meta-effects compiled by Schneider and Preckel (2017). There were concerns that students already at risk of poor progress would decline further in remote learning, due in part to potentially greater reliance on self-directed learning, lower engagement via less face-to-face contact, and technological barriers, often termed the ‘digital divide’ (Thomas et al., 2017). Yet, at least in this first unit delivered remotely, moderators were either very small or non-significant.
Including random slopes made little difference to variance explained, suggesting the effect
of remote delivery was relatively homogenous across units, students and teachers; similar to the
moderation results. However, variance components indicate random effects captured far more
variance than fixed effects. In addition, the robustness test of including a teacher random intercept
made little difference to the standardised estimate of marks in remote learning.
The study has several limitations. First and foremost, these reassuring findings may not
generalise to other institutions. Key differences include block mode and the amount of synchronous
learning during COVID-19, which remained high at three classes per week as per the teaching and
learning policies and model at the institution. Some studies have found a positive effect on
assessment results for synchronous over asynchronous learning online modes (Ebner &
Gegenfurtner, 2019). Other institutions may report different results.
Regarding block mode, the potential to test remote learning in a block mode versus remote
learning in a semester mode was limited in the current dataset. The only semester-mode units in
progress in early 2020 were post-graduate, and initially delivered in-person with remote learning
commencing half-way through the unit. As such, we focussed on undergraduate units that enabled a
somewhat clean comparison of before and after the introduction of remote learning. As block mode
increased marks in the present institution (Author blinded et al., 2019), we controlled for this effect
by including block as a covariate.
Another limitation is that many aspects of higher education are not captured by the
institutional measures utilised. While the measures adopted are useful in providing a rich historical comparison group, aspects such as a sense of belonging and engagement, self-efficacy, and in-person communication skills, which may be particularly challenging to foster online, are not in the present data. Further, teachers may have changed their marking practices online. While we test for the moderator of explicit assessment design change using the University’s institutional curriculum tracking system, it is possible that, even with identical assessment design, teachers made allowances for remote learning.
Although we test comprehensive student equity indicators as moderators of the transition
to remote learning, we do not have specific indicators of, for example, access to technology or
suitability of the home environment. We would, however, expect many of these factors to co-vary
with the equity indicators utilised. In addition, the institution took steps to ensure technology access
was not a barrier, with loan computer schemes and on-campus arrangements to attend remote
classes. Further, Australia may not have as much inequity in technology access as other nations
(Thomas et al., 2017). Hence, results may differ in other national contexts.
A somewhat low response rate per unit for the satisfaction survey, and potential range restriction from a highly negatively skewed satisfaction score, were evident. However, this is fairly common in such measures (He & Freeman, 2020), and simulation studies indicate the response rate is only problematically low in the context of inferring a given level of satisfaction with an individual teacher and a given class (delivery period in our terminology). The sample is sufficiently large to make inferences
about remote learning. Furthermore, units with a dichotomous grade were excluded from this study;
future studies predicting pass rate with mixed effect logistic regression may address this gap.
In conclusion, COVID-19 and the advent of mass remote learning have changed the Higher
Education landscape for the foreseeable future. Broadly, initial results support the potential for
remote learning to produce similar outcomes in terms of satisfaction and assessment results, during
the global pandemic. Results may differ over longer time-periods, across nations with different
equity profiles, with less synchronous learning, and potentially in semester rather than block mode.
The current data does not capture many elements of University education that may potentially differ
online, and a phased return to on-campus education is desirable when safe.

References

Al-Fraihat, D., Joy, M., & Sinclair, J. (2020). Evaluating E-learning systems success: An
empirical study. Computers in Human Behavior, 102, 67-86.

Author manuscript under review – blinded.

Bates, D., Maechler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models
using lme4. Journal of Statistical Software, 67(1), 1-48. doi:10.18637/jss.v067.i01.

Castro, M. D. B., & Tumibay, G. M. (2019). A literature review: efficacy of online learning
courses for higher education institution using meta-analysis. Education and Information
Technologies, 1-19.

Cranmer, S., Gill, J., Jackson, R., Murr, A., & Armstrong, D. (2020). hot.deck: Multiple hot-
deck imputation. R package version 1.1-2. https://CRAN.R-project.org/package=hot.deck

Ebner, C., & Gegenfurtner, A. (2019). Learning and satisfaction in webinar, online, and face-
to-face instruction: a meta-analysis. Frontiers in Education, 4, 92-98.

Fox, J., & Weisberg, S. (2019). An {R} Companion to Applied Regression. Third Edition.
Thousand Oaks, CA: Sage. https://socialsciences.mcmaster.ca/jfox/Books/Companion/

He, J., & Freeman, L. (2020). Can we trust teaching evaluations when response rates are not
high? Implications from a Monte Carlo simulation. Studies in Higher Education.
https://doi.org/10.1080/03075079.2019.1711046

Johnson, P. C. (2014). Extension of Nakagawa & Schielzeth's R2GLMM to random slopes
models. Methods in Ecology and Evolution, 5(9), 944-946.

Marsh, H. W., Dicke, T., & Pfeiffer, M. (2019). A tale of two quests: The (almost) non-
overlapping research literatures on students' evaluations of secondary-school and university
teachers. Contemporary Educational Psychology, 58, 1-18.

Mayer, R. E. (2019). Thirty years of research on online learning. Applied Cognitive
Psychology, 33(2), 152-159.

Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online and
blended learning: A meta-analysis of the empirical literature. Teachers College Record, 115(3), 1-47.

Noetel, M., Griffith, S., Delaney, O., Sanders, T., Parker, P., del Pozo Cruz, B., & Lonsdale, C.
(2020, May 18). Are you better on YouTube? A systematic review of the effects of video on learning
in higher education. https://doi.org/10.31234/osf.io/kynez

Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher
education: A systematic review of meta-analyses. Psychological Bulletin, 143(6), 565.

Shachar, M., & Neumann, Y. (2010). Twenty years of research on the academic performance
differences between traditional and distance learning: Summative meta-analysis and trend
examination. MERLOT Journal of Online Learning and Teaching, 6(2).

Solt, F., & Hu, Y. (2018). Interplot: Plot the Effects of Variables in Interaction
Terms. https://cran.r-project.org/web/packages/interplot/vignettes/interplot-vignette.html

Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the validity of student evaluation of
teaching: The state of the art. Review of Educational Research, 83(4), 598-642.

Thomas, J., Barraket, J., Ewing, S., MacDonald, T., Mundell, M., & Tucker, J. (2017).
Measuring Australia's digital divide: the Australian digital inclusion index 2017.
https://apo.org.au/node/97751

Xu, D., & Jaggars, S. S. (2014). Performance gaps between online and face-to-face courses:
Differences across types of students and academic subject areas. The Journal of Higher
Education, 85(5), 633-659.
Table 1. Item wording for the satisfaction with unit and satisfaction with teaching scales

Item #  Student Evaluation of Unit (SEU)  /  Student Evaluation of Teacher (SET)

1. SEU: Overall, I am satisfied with the quality of this unit.
   SET: Overall, I am satisfied with the quality of teaching provided by the lecturer / tutor
2. SEU: The expectations were clear
   SET: This teacher / lecturer gave me helpful feedback
3. SEU: The activities helped me to learn
   SET: This teacher / lecturer helped make the subject interesting
4. SEU: The learning resources were relevant and up to date
   SET: This teacher / lecturer made an effort to understand any difficulties I might be having with my work
5. SEU: The assessment tasks clearly evaluated the learning outcomes
   SET: This teacher / lecturer motivated me to do my best work
6. SEU: The workload in this unit was reasonable
   SET: This teacher / lecturer was good at explaining things

Note. Anchor points: 1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree.
Table 2. List of covariates included in all models.

Covariate fixed effects Variable level


Linear time variable Individual
Quadratic time variable Individual
Block or traditional mode Unit
Age Individual
ATAR Individual
ATAR missing flag Individual
Discipline Unit
Count of pre-remote block deliveries Unit
Count of pre-remote semester deliveries Unit
Domestic or international student flag Individual
Highest prior educational attainment Individual
Disability flag Individual
Aboriginal or Torres Strait Islander flag Individual
Commencing or continuing student flag Individual
Full time or part time student flag Individual
Gender Individual
Multimodal student flag Individual
Non-English speaking background flag Individual
Assessment design change in remote learning flag Unit
Victorian Tertiary Admittance Rank pathway flag Individual
Resides in the Western Suburbs flag Individual
SEIFA percentile Individual
SEIFA missing flag Individual
Remote delivery* Unit

Covariate random effects


Student ID Individual
Unit code Unit
Teacher ID Individual
Note. Linear time variable is coded as 4 units for every semester period, and 1 unit for every block
period. Quadratic time variable captures fluctuation from delivery to delivery period. ATAR =
Australian Tertiary Admittance Rank. Multimodal = a flag to capture the effect of studying in both
semester and block mode units concurrently. * = Treatment effect, the key effect of interest in the
current study.
Table 3. Main, interaction and marginal effects of remote learning

Satisfaction with unit/course Satisfaction with teaching Marks


b(se) [95% CI] b(se) [95% CI] b(se) [95% CI]
Main effects of remote learning
standardised -0.06(0.03) [-0.11, -0.01] -0.08(0.03) [-0.13, -0.03] 0.05(0.01) [0.02, 0.07]
unstandardised -0.06(0.02) [-0.10, -0.01] -0.07(0.02) [-0.11, -0.02] 0.99(0.23) [0.55, 1.43]
Significant moderator interaction terms (unstandardised), and χ2Δ from main effect model for multi-categorical moderators
Entry pathway 0.11(0.03) [0.05, 0.17] 0.09(0.03) [0.03, 0.15] 0.72(0.35) [0.02, 1.41]
Gender -0.10(0.03) [-0.17, -0.04] -0.09(0.03) [-0.16, -0.03] -0.93(0.35) [-1.61, -0.24]
Language background -0.08(0.03) [-0.15, -0.02] -0.08(0.03) [-0.14, -0.02] -1.17(0.35) [-1.85, -0.48]
Discipline χ2Δ(5)=14.36, p<.01 χ2Δ(5)=18.46, p<.001 χ2Δ(5)=27.28, p<.001
Socio-economic status ns -0.001(0.00) [-0.002, -0.000] ns
Prior attainment ns ns χ2Δ(9)=18.01, p<.02
Country of birth -0.07(0.03) [-0.14, -0.01] -0.07(0.03) [-0.13, -0.01] -1.59(0.36) [-2.30, -0.87]
Western region ns ns 1.22(0.35) [0.54, 1.90]
International student -0.09(0.04) [-0.17, -0.00] ns -2.05(0.48) [-2.99, -1.10]
ATAR score ns ns 0.03(0.01) [0.01, 0.05]
Age ns ns ns
Study load ns ns 2.14(1.00) [0.18, 4.09]
Marginal effects (unstandardised) for significant moderators
Entry pathway Direct entry student -0.10(0.03) [-0.15, -0.05] -0.10(0.03) [-0.15, -0.05] 0.71(0.27) [0.19, 1.23]
VTAC student 0.01(0.03) [-0.05, 0.07] -0.02(0.03) [-0.07, 0.04] 1.42(0.31) [0.82, 2.03]
Gender female -0.02(0.03) [-0.07, 0.03] -0.03(0.03) [-0.08, 0.02] 1.42(0.28) [0.87, 1.96]
male -0.12(0.03) [-0.18, -0.06] -0.13(0.03) [-0.18, -0.07] 0.49(0.29) [-0.09, 1.07]

Satisfaction with unit/course Satisfaction with teaching Marks


b(se) [95% CI] b(se) [95% CI] b(se) [95% CI]
Language English-speaking -0.02(0.03) [-0.07, 0.03] -0.03(0.03) [-0.09, 0.02] 1.49(0.27) [0.96, 2.02]
Background Non-English speaking -0.10(0.03) [-0.16, -0.05] -0.11(0.03) [-0.17, -0.06] 0.32(0.30) [-0.27, 0.91]
Discipline Arts & Education+ -0.01(0.04) [-0.10, 0.07] 0.02(0.04) [-0.06, 0.10] 1.79(0.40) [1.02, 2.57]
Business -0.06(0.05) [-0.16, 0.03] -0.09(0.05) [-0.18, 0.00] 1.90(0.46) [1.00, 2.81]
Engineering & Science -0.10(0.05) [-0.20, 0.00] -0.11(0.05) [-0.21, -0.01] -0.10(0.51) [-1.11, 0.90]
Health & Biomedicine -0.02(0.03) [-0.09, 0.05] -0.05(0.03) [-0.11, 0.02] 1.09(0.40) [0.30, 1.87]
Law & Justice -0.23(0.06) [-0.34, -0.12] -0.11(0.05) [-0.21, -0.01] -0.80(0.51) [-1.81, 0.21]
Sport & Exercise 0.00(0.06) [-0.12, 0.12] -0.01(0.06) [-0.13, 0.10] 1.30(0.55) [0.22, 2.37]
Highest prior Bachelor ns ns -0.49(0.79) [-2.04, 1.06]
educational Postgraduate ns ns -0.46(1.14) [-2.74, 1.78]
attainment Sub-degree ns ns 0.44(0.67) [-0.87, 1.75]
Incomplete HE course ns ns 2.02(0.44) [1.16, 2.88]
Incomplete VET course ns ns 3.92(3.52) [-2.99, 10.82]
No information ns ns 3.58(1.41) [0.80, 6.35]
No prior education ns ns 0.21(1.80) [-3.33, 3.75]
Other ns ns 1.12(0.59) [-0.03, 2.27]
Secondary education ns ns 0.78(0.28) [0.24, 1.32]
VET course ns ns 1.86(1.07) [-2.04, 1.06]
Country of birth Born in Australia -0.03(0.03) [-0.08, 0.03] -0.04(0.03) [-0.09, 0.01] 1.56(0.26) [1.05, 2.07]
Born overseas -0.10(0.03) [-0.16, -0.04] -0.11(0.03) [-0.17, -0.05] -0.03(0.32) [-0.67, 0.61]
Western region Non-western region ns ns 0.45(0.27) [-0.09, 0.98]
Western region ns ns 1.66(0.30) [1.09, 2.24]
Funding group Domestic student -0.04(0.02) [-0.09, 0.01] ns 1.33(0.24) [0.86, 1.79]
International student -0.13(0.04) [-0.21, -0.05] ns -0.72(0.46) [-1.63, 0.18]
Study load Part time ns ns -1.08(0.99) [-3.02, 0.86]
Full time ns ns 1.06(0.23) [0.61, 1.50]
Note. ns = Non-significant, based on the 95% confidence interval for the estimate including zero. + = reference group, for multi-categorical moderators. χ2Δ = change in χ2 calculated using the lmerTest anova function, applied to the main effect model and the model including the moderator interaction term. As TAFE award course pathway only had ten students, this category was merged with VET award course. VET = Vocational Education and Training. HE = Higher Education.
Figure 1. Regional significance for continuous moderators of the effect of remote learning.

Note. SET = A predicted score on a latent factor reflecting satisfaction with teaching. SEIFA = Socio-Economic Indexes for Areas score. ATAR = Australian Tertiary Admission Rank.
