
Asia Pacific Education Review (2020) 21:211–225

https://doi.org/10.1007/s12564-019-09620-1

Modeling undergraduate STEM students' satisfaction with their programs in China: an empirical study

Tengteng Zhuang¹ · Alan C. K. Cheung¹ · Winnie Tam²

Received: 23 January 2019 / Revised: 6 September 2019 / Accepted: 2 November 2019 / Published online: 26 November 2019
© Education Research Institute, Seoul National University, Seoul, Korea 2019

Abstract
This study examines several major reform areas attempted by 'New Engineering Education' (NEE), China's most recent university-level engineering education reform initiative, for their direct and indirect impact on Chinese STEM-major students' satisfaction with their programs. With data collected from a sample of 619 Chinese undergraduate students, the measurement and structural models both display good model fit. The structural results indicate that course satisfaction fully mediates the impact of classroom instruction method on program satisfaction, while partially mediating the impacts of support from faculty members and of alternative assessment methods on program satisfaction. The impact of resource and service on program satisfaction, however, is direct, with no mediating effect in between. Multigroup analyses show that the impact of alternative assessment methods on course satisfaction is significantly stronger for first-tier university students than for non-first-tier university students. Furthermore, the impact of resource and service on program satisfaction is stronger for juniors and seniors than for freshmen and sophomores. Practical implications are discussed.

Keywords  Program satisfaction · Higher STEM education · Education reform · Chinese higher education · New engineering education

* Tengteng Zhuang, tengteng_zhuang@link.cuhk.edu.hk

1 Department of Educational Administration and Policy, The Chinese University of Hong Kong, Hong Kong SAR, China
2 Center for University School Partnership, The Chinese University of Hong Kong, Hong Kong SAR, China

Introduction

Higher STEM education is internationally recognized as a driver for technological breakthroughs and economic development, but STEM subjects are more often than not perceived as 'difficult to complete' (Baird et al. 2016) and cause tremendous attrition (Chen 2013). Higher education institutions worldwide have therefore launched initiatives to reform their STEM programs (Graham 2018; NVAO 2017) in order to address the attrition issue and improve overall quality. Among many specific mission goals, improving student satisfaction constitutes an important one for universities (Elliott and Shin 2002), as satisfaction is closely linked to students' retention and to recruitment in STEM fields after their graduation (Douglas et al. 2006; Sum et al. 2010).

Student satisfaction has been examined in different countries such as Australia (Grace et al. 2012), Britain (Douglas et al. 2006), China (Yin and Wang 2015), Pakistan (Butt and Rehman 2010), and the USA (Elhadary 2016), with numerous predictors identified, including teaching and learning quality (Douglas et al. 2006), instructors' expertise (Butt and Rehman 2010), course experience (Grace et al. 2012), faculty's relationship with students (Hill et al. 2003), assessment methods (O'Donovan 2017), and so forth.

It is worth noting that most of the extant literature on student satisfaction has focused on course satisfaction; far fewer studies have directly investigated antecedents of program satisfaction. There are, however, substantive differences between 'course' and 'program.' In most parts of the world, courses refer to classes that usually last for one semester or one academic term, while programs are degree-oriented and involve broader connotations. In 'Chapter Nine: The Nature of Programs' of the British Dearing Report (Dearing 1997), the broadness of programs is explicitly illustrated by the line 'employers are also concerned about the general capabilities and potential of those with higher education qualifications, not just about the subject they have studied,' indicating that



a program functions as much more than a course or a subject. This is substantiated by another line in the same report, 'higher education was constrained by a tradition of relatively narrow educational experiences,' which also signals that a program is supposed to involve broader educational experiences than merely providing courses.

In the Chinese context, the National Standards of Teaching Quality for Undergraduate Programs (National Standards hereafter), issued by the Ministry of Education in 2018, recognizes that a program is a higher-level unit than a course. A program as defined in the National Standards has exogenous and endogenous components: exogenous components include teaching goals, years of schooling, faculty–student ratio, institutional support of the program, and so forth, while endogenous components are more about courses. Courses of a program are categorized into gateway courses, prerequisite courses, core courses, special courses, laboratory courses, and so forth (MOE 2018a). For example, the National Standards require that public higher education institutions ensure that their telecommunication engineering programs, if provided, include the core courses 'fundamentals of circuit analysis,' 'analogue electronics technique,' 'digital electronic technique,' 'high-frequency electronic circuit,' and 12 other core courses (MOE 2018b, p. 311). As such, 'course' is conceptualized as only part of a 'program' in this study, with the understanding that programs involve various non-course elements such as skill acquisition, development of capabilities, and adaptability cultivation, whereas a course is mostly subject-related, with its major focus on knowledge acquisition or theory building. This study therefore attempts to directly investigate undergraduate STEM students' program satisfaction to fill this research gap. Our research is carried out in the context of China's New Engineering Education (NEE) initiative, in which improving STEM students' program satisfaction constitutes one of the major reform goals.

STEM higher education reform in the Chinese context: new engineering education (NEE)

China joined 14 other countries and regions as a formal signatory of the Washington Accord (WA) in 2016. After joining the other WA signatories, China's Ministry of Education launched the NEE initiative in 2017 to systematically upgrade the country's undergraduate-level engineering programs and cultivate STEM graduates in accordance with internationally substantially equivalent standards.

For a significant period, Chinese higher education has been criticized for the dominance of teacher-centered lecturing (Ye 2011), the scant interaction between faculty members and students (Leng 1996), and demotivating assessment and examination systems (Li and Hui 2007). To address these concerns, several areas are especially emphasized in NEE, including upgrading teaching methods, increasing faculty support to students outside classes, employing multiple methods to assess student learning, and further improving necessary hardware resources and educational services.

For example, in one of the NEE official documents, 'Tianda Action' (MOE 2017), a salient subhead in bold reads 'Studying students' interest to change instruction methods, in order to innovate teaching approaches and measures for engineering education' (Paragraph Six of 'Tianda Action'), followed by:

    (We should) enhance faculty–student interaction, reform teaching and assessment methods to form a learner-centered engineering education pattern. (We should) promote deeper integration of information technology and education, develop and publicize online learning, and employ technologies such as virtual reality to innovate teaching methods. (We should) also improve a system of "creativity-innovation-entrepreneurship" for new engineering education, and extensively build platforms for praxis education. (p. 1)

In NEE, China bases its engineering education reform on several understandings. Firstly, it matters to understand how students learn efficiently and to employ multiple teaching means to cater to students' learning needs in contemporary times, when students tend to rely on various sources for knowledge acquisition. Secondly, it is important to increase faculty–student interaction and provide more support to facilitate students' active learning. Thirdly, multiple assessment methods should be employed to complement the existing examination mechanism to effectively investigate learning outcomes. Fourthly, necessary resources such as praxis education platforms, information technology, and other hardware devices need to be guaranteed.

Furthermore, China's higher education sector has experienced dramatic quality stratification alongside quantitative expansion over the past two decades. Universities funded by the Ministry of Education's '985 Project' and '211 Project' are usually regarded as first-tier institutions. Their capability of attracting resources from various stakeholders such as the government, industry, society, and alumni is extremely strong, while non-first-tier institutions struggle for every bit of progress they can make (Zhuang and Xu 2018). With


respect to the curriculum system of engineering programs, Chinese higher education institutions normally offer basic theoretical courses in the first 2 years, while offering more specialized courses to third- and fourth-year students. Freshmen and sophomores tend to spend much time adapting to the learning environment and studying theories, while junior and senior students work intensively on their capstone projects and internships.

Against such a backdrop, we attempt to explore three specific research questions related to China's higher STEM program reform measures:

(1) Do variables representing major aspects of China's reform initiative (classroom instruction method, support from faculty members to students, alternative assessment methods, and institutions' resources and service) significantly impact STEM-major students' satisfaction with their programs?
(2) If so, to what extent do these variables explain program satisfaction?
(3) Are there any other factors mediating the impact of those variables on program satisfaction?

The aforementioned distinctive features of universities and students at different levels also lead the study to examine whether the impact of relevant antecedents on students' program satisfaction varies across tiers of universities and across academic years.

Literature review and proposed hypotheses

A wealth of research indicates that great instructors and instruction matter most in ensuring excellence in learning STEM subjects (Holdren et al. 2010; Ortiz and Sriraman 2015). In China, students were found less likely to quit what they were learning if they felt positive about the way they were taught (Xu 2015), but more likely to complain when the teaching method was primarily lecturing (Ye 2011). Chinese academia has called for replacing traditional teacher-centered pedagogy with diversified instruction means (Yin and Wang 2015) and employing a 'student-focused' approach in classroom instruction (Hallinger and Lu 2013) to enhance teaching effectiveness and student satisfaction. Internationally, Navarro et al. (2005) found that course administration and teaching methods were key elements determining Spanish students' satisfaction and their subsequent subject loyalty. In Australia, students' satisfaction with an electrical engineering program increased by as much as 32% when teaching laboratories were leveraged to complement normal teaching practices (Nikolic et al. 2015), as in laboratories instruction was no longer confined to lecturing and listening but involved tremendous student agency. In the USA, the use of information technology such as virtual reality to teach in university classrooms was also found to positively predict student satisfaction (Smith et al. 2007). Different means of instruction influence students' levels of engagement in learning (Newmann and Wehlage 1993).

Apart from instruction, the social–cultural perspective on student learning holds that learning is enhanced through socially supportive interactions (National Research Council 2003) and through substantive faculty–student connections beyond classrooms (Tam et al. 2009). Faculty members' encouragement was found to positively predict students' overall satisfaction (Yin et al. 2016), as was an emphasis on independent learning (Yin and Wang 2015). Showing genuine concern for students' learning (Husband 2013), having high expectations of students (Newmann and Wehlage 1993), and providing beyond-classroom support to students, such as timely feedback on assignments and guidance of undergraduate research experiences (National Research Council 2003), were all regarded as important yardsticks for measuring the quality of university education.

Scholars have also investigated the roles of assessment in student learning and satisfaction. Yin and Wang (2015) found a negative effect of appropriate assessment on academic efficacy and no significant effect on students' overall satisfaction, while other scholars concluded that students' approaches to learning were affected by their perceptions of how they would be assessed (Wilson et al. 1997). Usually, traditional summative assessment methods like exams are believed not to be conducive to cultivating students' competence and often lead to a surface learning approach (Kreber 2003; Webster et al. 2009), while process-focused formative or alternative assessment methods are deemed appropriate for gathering comprehensive evidence of student achievement, especially in application-focused STEM fields (Crawley et al. 2014). The driving effect of assessment on student learning and satisfaction was substantiated by other studies as well (Ramsden 2003; Rust et al. 2003).

Moreover, tangible hardware resources and intangible services provided by higher education institutions, such as infrastructure, technical assistance, welfare provision, accommodation, and physical and psychological care, have also been shown to influence student satisfaction (Knight 2008; Marginson 2011; Price et al. 2003). For instance, Price et al. (2003) revealed that availability of computers, quality of library facilities, availability of areas for self-study, and friendly staff and administrators were among the top reasons why students decided to enroll in a program. Similarly, Yin and Wang (2015) revealed that the role of IT facilities in promoting course experiences was positive.

The extant literature adequately suggests that each relevant factor reviewed above has an impact on student


satisfaction separately. Thus, we might reasonably establish the first hypothesis as follows:

H1: Method of instruction (H1a), support from faculty members (H1b), alternative assessment methods (H1c), and resources and service (H1d) separately impact STEM-major students' program satisfaction significantly and positively.

For students registered in a program, the course is the most important tangible product for which they pay tuition fees. On the one hand, courses work as a major source of students' judgement of the quality of an institution (Devinder and Datta 2003) and of their perceptions of educational effectiveness (DeChenne et al. 2012; Husband 2013). Improving course quality seems key to resolving many existing educational problems. On the other hand, course improvement also rests upon prerequisite factors related to faculty, assessment, and many others (Green et al. 2015; So and Brush 2008). As such, the course seems to play a mediating role between various influencing factors and students' ultimate satisfaction with their educational experiences. As course and program are two different concepts, distinguished in previous sections, this study considers the possible mediating role of the course and makes the following hypotheses:

H2: Course satisfaction significantly and positively impacts STEM-major students' program satisfaction.

H3: Course satisfaction mediates the impact of relevant antecedents such as method of instruction (H3a), support from faculty members (H3b), alternative assessment methods (H3c), and resource and service (H3d) on program satisfaction.

Methodology

Prior to data collection, the study design was reviewed by the Survey and Behavioral Research Ethics Committee of the three authors' affiliation, which granted ethical approval after approximately 1 month of review. In the 'Instruction' section on the first page of the questionnaire, we articulated that the data collected would be used only for academic purposes and that the questionnaire was to be completed anonymously. Furthermore, we stated the line 'Please start to answer questions only after you agree to participating in our study' at the end of the 'Instruction' section to ensure that every participant took part in our study on an entirely voluntary basis.

We administered our survey through a widely used Chinese online survey platform, 'Wenjuanxing (Questionnaire Star),' for the convenience of sample students' participation. As the online survey platform could bind with the most widely used cellphone app in China, WeChat, every participant could easily answer the survey questions on their smartphone. We took several measures to ensure the quality of submitted answers. For example, we configured the online platform so that each participant could select only one option from '1' (strongly disagree) to '6' (strongly agree) for each item, to prevent any outliers. Meanwhile, we set every submission to be possible only after a participant had finished answering all the questions. In addition, we deliberately allowed the survey to be submitted only once per device, to avoid duplicate submissions by the same participant and hence ensure the authenticity of the survey answers.

We first relied on four faculty members teaching at four Chinese universities to forward our online survey to WeChat groups comprising STEM-major students at their universities. The four faculty members, as instructors, asked the students in their WeChat groups to answer the survey questions and to further forward the survey to fellow students in and outside their institutions in a snowball manner. They emphasized that the survey should be forwarded only to STEM-major students, and one background question in the survey, 'What is your major,' allowed us to eliminate any submissions from non-STEM-major students later, when conducting data analysis. The survey was carried out between mid-January and the end of February 2018.

Participants

Among a total of 670 submissions, unengaged responses were eliminated according to three criteria: (1) obvious nonsense responses such as 'Let me take a look' or 'You Guess'; (2) responses from non-STEM-major students; and (3) responses with a standard deviation of zero, indicating that the questions were answered hastily and perfunctorily. After this process, 619 participants from 75 Chinese universities provided effective responses.

Among these, 80.3% of the participants (497) came from 7 universities (two located in Beijing, two in Jiangsu, and the other three in Guangdong, Hubei, and Hebei Provinces, respectively), and the remaining 19.7% (122) were scattered across the remaining 68 universities in 20 Chinese provinces. One hundred and forty-three students were from first-tier universities ('985' and '211' Project universities) and 476 were from non-first-tier institutions (neither '985' nor '211' Project universities). There were 155 freshmen, 175 sophomores, 209 juniors, and 80 seniors. Two hundred and fifteen (34.7%) students were enrolled in a science (S) or math (M) program, while 404 (65.3%) were studying in a technology (T) or engineering (E) program.


Instruments

Classroom instruction method (CIM)

Informed by literature on the application of the latest information technology to higher education instruction (Haller 2006; O'Flaherty and Phillips 2015), 6 items were used to measure how instructors generally teach students in the classroom. Do they generally use traditional lecturing as the main method, or does much teaching happen in labs or praxis spots? Are industry professionals invited to teach in class, and is the latest technology, like VR/AR, used to facilitate teaching and learning? Item examples include 'Instructors generally teach in laboratories or praxis bases' and 'Instructors generally adopt flipped classrooms to deliver course instruction.'

Support from faculty members (SFFM)

Students' perceptions of how much support they got from their instructors or faculty members for learning were measured on a 5-item scale adapted from the Teacher Support factor of Yin and Lu's (2014) validated 10-item University Mathematics Classroom Environment Questionnaire (UMCEQ). Adhering to the spirit of the original items, 5 of the items, after being paraphrased to better fit the vernacular context in China, formed this new dimension measuring the extent to which faculty members valued students' learning. Sample items include 'Instructors generally value the importance of students' academic performances and improvements' and 'Instructors generally encourage students to innovate ways of learning and overcome learning barriers.'

Alternative assessment methods (AAM)

Informed by literature on assessment methods in STEM fields (Crawley et al. 2014; National Academies of Sciences, Engineering, and Medicine 2017) and the Appropriate Assessment factor of the Course Experience Questionnaire (Wilson et al. 1997), 5 items were proposed to measure forms of assessment of students' learning other than the prevalent examinations and tests. Example items include 'Multiple assessment methods are used to assess students' learning outcomes besides exams (e.g., using oral exams, praxis capability tests, and so on)' and 'Group projects generally influence students' final course grades.'

Resources and service (RS)

Four items adapted from the Supportive Facilities factor of the University-level Environment Scale (Dorman 1998) were employed to examine students' access to hardware and personnel services such as learning devices, counseling, and medical care, which constitute an integral part of the educational environment (Yin et al. 2016; Zepke and Leach 2010). Sample items include 'I can get access to necessary hardware resources (e.g., facilities and devices) from the university when needed' and 'I benefit a great deal from library resources at the university.'

Course satisfaction (CS)

According to Grace et al. (2012), the Course Experience Questionnaire (CEQ) (Wilson et al. 1997) should be expanded in its satisfaction-with-courses dimension for better psychometric quality. Originally, the CEQ contained just one item to measure overall satisfaction, which was believed inadequate. Informed by this suggestion, and building on the original course satisfaction item in the CEQ, we developed two further items to investigate more aspects of satisfaction, such as whether students were satisfied with the roles of program courses in helping them build a solid theoretical and practical foundation.

Program satisfaction (PS)

The overall satisfaction item of Grace et al.'s (2012) scale was first adapted to measure program satisfaction in this study. Then we developed another item to investigate whether students would recommend the program to future students. Moreover, as retention and persistence are key to measuring the quality of STEM programs (Holdren et al. 2013; Ohland et al. 2008), another item, 'The program makes me willing to pursue further study in my major or field,' was also developed, thus forming a three-item scale to assess students' program satisfaction.

All the items underwent translation and back translation and were proofread by native speakers to ensure participants understood them accurately. Each item was scored on a 6-point Likert scale from '1' (strongly disagree) to '6' (strongly agree). The full-scale CFA results and the reliability and validity results of all six measures are shown in the following sections, indicating that all the dimensions employed in this study are well validated.

Kline (2011) argues that a typically adequate sample size for structural equation modeling (SEM) is 200 cases, and that it can be considered even more adequate if the ratio of cases (N) to the number of model parameters (q) reaches 10, though he maintains that this standard is not one that must be absolutely adhered to. Hair et al. (2014) hold that if models have seven or fewer constructs, 300 cases can be the minimum sample size. In this study, N (619) is far greater than the recommended 200 or 300, and the N/q ratio is very close to 10 (619:69). Moreover, a power analysis was also conducted to detect the minimum sample size needed in our case. Assuming power of .80, alpha of .05, degrees of freedom of 242, null RMSEA of .05, and


Table 1  Results of the measurement model

Construct                               Item    Std. FL   Composite reliability   AVE
Classroom instruction method (CIM)      CIM1    0.772     0.888                   0.574
                                        CIM2    0.828
                                        CIM3    0.830
                                        CIM4    0.823
                                        CIM5    0.630
                                        CIM6    0.631
Support from faculty members (SFFM) SFFM1 0.814 0.912 0.676
SFFM2 0.771
SFFM3 0.880
SFFM4 0.831
SFFM5 0.810
Alternative assessment method (AAM) AAM1 0.829 0.877 0.589
AAM2 0.821
AAM3 0.701
AAM4 0.682
AAM5 0.792
Resource and service (RS) RS1 0.784 0.907 0.663
RS2 0.879
RS3 0.858
RS4 0.844
RS5 0.691
Course satisfaction (CS) CS1 0.880 0.896 0.743
CS2 0.846
CS3 0.859
Program satisfaction (PS) PS1 0.897 0.891 0.732
PS2 0.885
PS3 0.779
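The composite reliability and AVE figures in Table 1 follow from the standardized loadings via the standard Fornell–Larcker formulas. A minimal sketch reproducing the CIM row (this is an illustration of the formulas, not the authors' code):

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error
    variances), where the error variance of each indicator is 1 - lambda^2."""
    s = sum(loadings) ** 2
    error = sum(1 - l ** 2 for l in loadings)
    return s / (s + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

cim = [0.772, 0.828, 0.830, 0.823, 0.630, 0.631]  # Std. FL for CIM1-CIM6
print(round(composite_reliability(cim), 3))        # 0.888, as in Table 1
print(round(average_variance_extracted(cim), 3))   # 0.574, as in Table 1
```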

setting alternative RMSEA of .08 (for the close-fit hypothesis) and .01 (for the non-close-fit hypothesis), respectively, the minimum sample size required for the model in our study was calculated as 74.8 and 107.6, respectively (https://quantpsy.org/rmsea/rmsea.htm). Our actual sample size (N = 619) far exceeded these minimums, thus supporting the feasibility of using SEM. All 27 items are listed in the Appendix.

Data analysis

SPSS 23.0 was used to calculate descriptive statistics, including means and standard deviations. Then, using AMOS 20.0, we tested the psychometric quality of the instruments and estimated the SEM models.

Results

Test of the measurement model

A confirmatory factor analysis (CFA) was first run on all factors (CIM, SFFM, AAM, RS, CS, and PS) used in this study, and the results showed a good fit for the CFA model (χ2 = 874.452, df = 309, χ2/df = 2.743, GFI = .908, TLI = .951, CFI = .956, RMSEA = .053 [.049, .057]). As shown in Table 1, all the items loaded well on their respective latent variables, with standardized factor loadings for each item ranging from .630 to .897. The reliability of each construct was proven good with


Table 2  Discriminant validity results

                                        AVE     1       2       3       4       5       6
1. Classroom instruction method (CIM)   .574   (.758)
2. Support from faculty members (SFFM)  .676    .547   (.822)
3. Alternative assessment method (AAM)  .589    .551    .746   (.767)
4. Resource and service (RS)            .663    .614    .726    .777   (.814)
5. Course satisfaction (CS)             .743    .709    .728    .711    .690   (.862)
6. Program satisfaction (PS)            .732    .601    .743    .776    .768    .756   (.856)
Mean                                     –     3.952   4.797   4.777   4.611   4.346   4.610
Std                                      –     1.152    .859    .856    .951   1.036   1.006

Values in parentheses on the diagonal are the square roots of the AVE. The lower triangle is the correlation matrix for all the factors.
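The Fornell–Larcker comparison applied to Table 2 can be verified numerically. A sketch using the AVE values and AAM's correlations as reported in the table:

```python
import math

# AVE values from Table 2
ave = {"CIM": .574, "SFFM": .676, "AAM": .589, "RS": .663, "CS": .743, "PS": .732}
# AAM's correlations with the other constructs (Table 2, lower triangle)
aam_corr = {"CIM": .551, "SFFM": .746, "RS": .777, "CS": .711, "PS": .776}

sqrt_ave_aam = math.sqrt(ave["AAM"])  # about .767, the diagonal entry for AAM
# Fornell-Larcker: discriminant validity holds for a pair of constructs when
# the square root of a construct's AVE exceeds its correlation with the other.
violations = [k for k, r in aam_corr.items() if r > sqrt_ave_aam]
print(violations)  # ['RS', 'PS'], the two marginal cases the text discusses
```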

composite reliability (CR) for each variable ranging from .877 to .912, all greater than the recommended .700 (Nunnally and Bernstein 1994). The average variance extracted (AVE) values for all constructs ranged from .574 to .743, which evidenced good convergent validity, as an AVE of .500 or higher suggests adequate convergence (Fornell and Larcker 1981). Therefore, no item needed to be deleted before investigating the structural models.

Discriminant validity was examined through the correlation of each construct with the remaining five constructs. According to Fornell and Larcker (1981), if a construct's AVE is greater than the square of the construct's correlations with other constructs, discriminant validity is established. The correlation matrix (Table 2) showed that for each construct except alternative assessment method (AAM), the square root of its AVE was greater than all the correlations between the construct itself and the other constructs, indicating that every such construct was discriminant from the other variables. Scrutiny of AAM revealed that its AVE square root was indeed greater than its correlations with most variables, including CIM, SFFM, and CS, and only slightly lower than its correlations with RS and PS. In fact, the RS items investigated hardware and service resources, while the AAM items examined how students were assessed on their academic performance; thus, the observed variables of these two constructs focused on distinct aspects of students' learning process. As for AAM and PS, PS was a key dependent variable rather than a predictor, and the observed variables of AAM and PS examined distinct aspects, so AAM was retained as a predictor as it was. Overall, the discriminant validity of all constructs was at an acceptable level.

Structural model results

SEM was conducted in AMOS 20.0. Model 1 (Fig. 1) was first examined to see the direct impacts of CIM, SFFM, AAM, and RS on PS. The model fit was good, with χ2 = 698.773, df = 242, χ2/df = 2.887, GFI = .915, TLI = .951, CFI = .957, RMSEA = .055. All predictors, namely CIM (β = .131, p < .001), SFFM (β = .241, p < .001), AAM (β = .318, p < .001), and RS (β = .266, p < .001), were found to positively and significantly impact PS (Table 3), thus supporting H1a, H1b, H1c, and H1d.

Fig. 1  Hypothesized SEM Model (Model 1): CIM, SFFM, AAM, and RS as direct predictors of PS

To test the other hypotheses, CS was introduced as a hypothesized mediator between CIM, SFFM, AAM, RS, and PS to form a new structural model (Fig. 2). Model 2 also exhibited a good fit to the data, with χ2 = 874.452, df = 309, χ2/df = 2.743, GFI = .908, TLI = .951, CFI = .956, RMSEA = .053. As seen in Table 4, CS was a significant predictor of PS (β = .258, p < .001), thus supporting H2, that course satisfaction significantly and positively impacts program satisfaction. CIM was found to be a non-significant predictor of PS (β = .033, p > .05) but a significant predictor of CS (β = .380, p < .001), illuminating that classroom instruction method's impact on program satisfaction operated entirely through course satisfaction, supporting H3a. As the variance of CS was significantly explained by SFFM (β = .312, p < .001), and PS was also predicted by SFFM (β = .161, p < .01), faculty support to students has both a direct and an indirect effect on program satisfaction. That is, course satisfaction partially mediates the impact of support from faculty members on program satisfaction, thus supporting H3b. The same


Table 3  Results for structural Model 1

                                                                   Unstd. coeff (B)   Std. coeff (β)   S.E.    Z value
Classroom instruction method (CIM) → Program satisfaction (PS)     0.119              0.131            0.034   3.462***
Support from faculty members (SFFM) → Program satisfaction (PS)    0.307              0.241            0.063   4.851***
Alternative assessment methods (AAM) → Program satisfaction (PS)   0.387              0.318            0.070   5.540***
Resources and service (RS) → Program satisfaction (PS)             0.276              0.266            0.058   4.732***

Fig. 2  Hypothesized SEM Model (Model 2)

Fig. 3  Overall SEM model results (Model 2). Significant paths: CIM → CS = .380, SFFM → CS = .312, AAM → CS = .230, CS → PS = .258, SFFM → PS = .161, AAM → PS = .259, RS → PS = .252; R² = .695 for CS and R² = .729 for PS; non-significant paths shown dashed.

The same applied to AAM, which significantly predicted both CS (β = .230, p < .001) and PS (β = .259, p < .001) in Model 2, illustrating that part of alternative assessment methods' influence on program satisfaction also operated through course satisfaction, hence supporting H3c. RS, however, did not significantly predict CS (β = .052, p > .05) but did significantly predict PS (β = .252, p < .001), revealing that course satisfaction did not mediate the impact of resources and service on program satisfaction at all, thus rejecting H3d. The model paths with statistical results are presented in Fig. 3. Overall, 72.9% of the variance of PS was explained by the other five components.

A bootstrap estimation procedure (bootstrap sample = 1000) was used to test the significance of the indirect effects as a triangulation, because it yields the most accurate effect estimates (MacKinnon et al. 2004). As shown in Table 5, CIM, SFFM, and AAM all have significant total and indirect effects on PS, and CS has a significant total effect, indicating that CS mediates the CIM–PS, SFFM–PS, and AAM–PS relationships. For RS, the total effect on PS is significant, but no indirect effect exists, meaning that CS does not mediate RS's impact on PS. The results of the bootstrap procedure fully substantiate the significant and non-significant paths in Fig. 3, again supporting H3a, H3b, and H3c and rejecting H3d.

Table 4  Results for structural Model 2

Path                                                                B      β      S.E.   Z value
Classroom instruction method (CIM) → Course satisfaction (CS)       .372   .380   .040   9.247***
Support from faculty members (SFFM) → Course satisfaction (CS)      .429   .312   .071   6.050***
Alternative assessment methods (AAM) → Course satisfaction (CS)     .300   .230   .076   3.920***
Resources and service (RS) → Course satisfaction (CS)               .058   .052   .064   0.906
Course satisfaction (CS) → Program satisfaction (PS)                .241   .258   .053   4.547***
Classroom instruction method (CIM) → Program satisfaction (PS)      .030   .033   .039   0.762
Support from faculty members (SFFM) → Program satisfaction (PS)     .207   .161   .066   3.134**
Alternative assessment methods (AAM) → Program satisfaction (PS)    .315   .259   .070   4.510***
Resources and service (RS) → Program satisfaction (PS)              .262   .252   .057   4.622***

B = unstandardized coefficient; β = standardized coefficient.

Modeling undergraduate STEM students’ satisfaction with their programs in China: an empirical… 219

Table 5  Standardized total and indirect effects of the hypothesized model (bootstrap sample = 1000)

                        Point estimate   SE      Z value   Bias-corrected percentile 95% CI   p
                                                           Lower      Upper
Std. total effects
  CIM–PS                0.131            0.042   3.119     0.048      0.206                   .003 (**)
  SFFM–PS               0.242            0.082   2.951     0.087      0.411                   .004 (**)
  AAM–PS                0.318            0.099   3.212     0.087      0.483                   .009 (**)
  RS–PS                 0.265            0.095   2.789     0.072      0.445                   .011 (*)
  CS–PS                 0.258            0.082   3.146     0.098      0.417                   .002 (**)
Std. indirect effects
  CIM–PS                0.098            0.034   2.882     0.038      0.169                   .002 (**)
  SFFM–PS               0.080            0.034   2.353     0.030      0.164                   .001 (***)
  AAM–PS                0.059            0.033   1.788     0.008      0.143                   .015 (**)
  RS–PS                 0.013            0.027   0.481     −0.034     0.077                   .525
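As a consistency check, each standardized indirect effect in Table 5 is (up to rounding) the product of the predictor → CS path and the CS → PS path from Table 4, and each total effect is the direct path plus this product:

```python
# Standardized paths reported in Table 4; indirect = a * b, total = direct
# + indirect, matching Table 5 up to rounding (e.g., SFFM total prints
# 0.241 vs. the reported 0.242 because the published paths are rounded).
b_cs_ps = 0.258                 # CS -> PS
paths = {                       # predictor: (path to CS, direct path to PS)
    "CIM":  (0.380, 0.033),
    "SFFM": (0.312, 0.161),
    "AAM":  (0.230, 0.259),
    "RS":   (0.052, 0.252),
}
for name, (a, direct) in paths.items():
    ind = a * b_cs_ps
    print(f"{name}: indirect = {ind:.3f}, total = {direct + ind:.3f}")
```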

Table 6  Comparison of students of first-tier (n = 143) and non-first-tier (n = 476) Chinese universities

Model                    Δdf   ΔCMIN (χ²)   p
Fully constrained        9     19.230       .023*
CIM–CS constrained       1     2.528        .112
SFFM–CS constrained      1     .576         .448
AAM–CS constrained       1     5.863        .015*
RS–CS constrained        1     2.046        .153
CS–PS constrained        1     1.858        .173
CIM–PS constrained       1     .140         .709
SFFM–PS constrained      1     .001         .974
AAM–PS constrained       1     2.447        .118
RS–PS constrained        1     1.641        .200

(Assuming the unconstrained model to be correct)

Table 7  Comparison of higher-academic-year students (n = 289) and lower-academic-year students (n = 330) of Chinese universities

Model                    Δdf   ΔCMIN (χ²)   p
Fully constrained        9     12.325       .196
CIM–CS constrained       1     .004         .951
SFFM–CS constrained      1     .222         .638
AAM–CS constrained       1     3.229        .072
RS–CS constrained        1     .956         .328
CS–PS constrained        1     1.971        .160
CIM–PS constrained       1     .420         .517
SFFM–PS constrained      1     .002         .966
AAM–PS constrained       1     .008         .930
RS–PS constrained        1     4.382        .036*

(Assuming the unconstrained model to be correct)
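The model comparisons in Tables 6 and 7 rest on a χ² difference test: the constrained model's χ² minus the unconstrained model's, evaluated against a χ² distribution with Δdf degrees of freedom. A sketch, with the Δχ² values taken from the tables (scipy assumed available):

```python
from scipy.stats import chi2

def delta_chi2_p(d_chi2, d_df):
    """Upper-tail p-value for a chi-square difference test."""
    return chi2.sf(d_chi2, d_df)  # survival function = 1 - CDF

print(f"tiers, fully constrained:  p = {delta_chi2_p(19.230, 9):.3f}")  # ~ .023
print(f"years, fully constrained:  p = {delta_chi2_p(12.325, 9):.3f}")  # ~ .196
print(f"tiers, AAM-CS constrained: p = {delta_chi2_p(5.863, 1):.3f}")   # ~ .015
```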

Comparing students’ program satisfaction more influenced by how varied their learning outcomes are
across different tiers of Chinese universities assessed.

Multigroup analysis was conducted across students of dif- Comparing students’ program satisfaction
ferent tiers of universities given the participants’ affiliation across academic years
backgrounds. Compared against the unconstrained model,
the model comparison results in Table 6 indicate that the For multiple group analysis across academic years, fresh-
fully constrained model is significantly different from the men and sophomores were labeled ‘lower-academic year
unconstrained model ( 𝜒diff2
(9) = 19.230, p = .023), indicating students’ while juniors and seniors were ‘higher-academic
that students of first-tier universities and non-first-tier uni- year students.’ The model comparison results in Table 7
versities are different at the model level. Scrutinizing it path shows that the fully constrained model is not significantly
by path, such difference is derived from a significant dif- different from the unconstrained model ( 𝜒diff2
(9) = 12.325,
ference in the path of AAM-CS ( 𝜒diff 2
(1) = 5.863, p = .015), p = .196), indicating that higher-academic year students
namely alternative assessment method’s impact on course and lower-academic year students are not different at the
satisfaction. There are no other significant path differences model level. It’s worth noting that in terms of the path of
found. As the path coefficient of AAM to CS for first-tier RS-PS, there exists significant difference between ‘juniors
university students (.389) is greater than that for non-first- and seniors’ and ‘freshmen and sophomore.’ As the path
tier university students (.158), it means for students of more coefficient of RS-PS for lower-academic year students (.186)
prestigious universities, their satisfaction with courses is is smaller than that for higher-academic year students (.366),


it indicates that for junior and senior STEM-major students, program satisfaction is more influenced by the diversity and quality of the hardware resources and service they can access.

Discussion

This study first investigated the impact of relevant variables, representing aspects of China's NEE reform measures, on undergraduate STEM students' program satisfaction, and then explored differences in such impact across student groups. The results generally compare favorably with those of previous research, yet show particularly interesting findings in the Chinese context.

Relevant influences of predictors of program satisfaction

China's attempt to upgrade the quality of STEM programs by diversifying means of instruction, as indicated in NEE documents, seems to make sense. Though Yin and Wang (2015) concluded that good teaching had no significant association with students' overall satisfaction, this study reveals that the method of instruction significantly and positively impacts program satisfaction for STEM students through course satisfaction. More importantly, CIM proves to be a solid predictor: its path to CS (β = .380) is the strongest single path in the model, and its total effect on PS is significant (Table 5). This finding accords with previous studies showing that effective teaching significantly predicts students' overall satisfaction (Grace et al. 2012) and that teachers' efforts to reinvigorate courses enhance student satisfaction (Carbone et al. 2015). It is also congruent with prior findings that student satisfaction rests heavily on students' perceptions of instructors' expertise (Bedggood and Donovan 2012; Ives and Rowley 2005). In fact, employing versatile teaching methods is an important yardstick for measuring the quality of STEM programs in some international frameworks, such as the European Accredited Engineer framework (EUR-ACE) and the ABET accreditation framework.

This study also reveals that students' satisfaction with their programs derives significantly from the support they get from faculty members, supporting previous studies (Newmann and Wehlage 1993; Yin and Wang 2015) that recognize the prominence of faculty scaffolding of student learning and of the socially supportive interactions that facilitate learning (National Research Council 2003). Chinese students have long complained that most university instructors seem indifferent, appearing in class and disappearing when the bell rings. Such carelessness has contributed to a prevalent passive learning climate in which students seldom discuss course contents outside class and are generally not independent learners (Zhang et al. 2013). Previous arguments that faculty support and a sound faculty–student relationship positively predict overall satisfaction (Husband 2013; Yin et al. 2016) are congruent with this study. As STEM fields demand much effort, energy, and commitment, students' need for more support outside the classroom to keep their confidence in learning is understandable.

While Grace et al. (2012) found no significant relationship between appropriate assessment and student satisfaction, our study reveals that alternative assessment methods significantly impact students' program satisfaction, consistent with Rust et al. (2003) and Ramsden (2003), who found that students favor being assessed by different methods. The inconsistency with Grace et al. (2012) could be explained by their own acknowledgment that their measures of appropriate assessment needed redevelopment. In fact, when assessed by multiple methods, students come to understand course contents deeply, including how the programs comprising these courses apply in real scenarios, by learning in a comprehensive way.

In line with some previous studies (Knight 2008; Price et al. 2003; Yin and Wang 2015), this study substantiates the direct impact of resources and service on program satisfaction, which indicates that China's heavy investment in equipping campuses with extensive information technology and educational infrastructure has largely paid off. The finding that RS does not significantly impact CS once again highlights the time-honored educational thought that the core of a university is not magnificent buildings but erudite scholars who teach courses well (Kerr 2001). If a university merely emphasizes hardware resources and non-academic service without having the best instructors to teach, students' program satisfaction could increase, but their course satisfaction may not, as indicated in this study.

The mediating effects of course satisfaction and group comparisons

The finding that some antecedents' impact on STEM students' program satisfaction is not direct but mediated by course satisfaction signals two messages. Firstly, a course does not equate with a program: the measures needed to improve the quality of programs differ from those needed for course improvement. Resources and service is a salient example, contributing to program satisfaction but not to course satisfaction. Secondly, courses occupy the most important position within a program: CS channels the effects of three of the four reform measures, and its total effect on PS is significant (Table 5). This substantiates Almarghani and Mijatovic (2017), who argued that students' expectations of higher education are all about good teaching and courses.

The group comparison results have also revealed interesting empirical findings from China. No significant differences


in the impact of CIM, SFFM, and RS on PS between students of different tiers of universities can be explained by three points. Firstly, the higher rankings and reputations of first-tier institutions mainly result from extensive research-related outputs such as publications, projects, and grants rather than from teaching. Faculty tend to spend minimal time on teaching and to meet only the lowest teaching requirements under the current evaluation system, irrespective of whether they work at first-tier or non-first-tier institutions (Huang et al. 2018). Secondly, such downplaying of teaching largely confines faculty's support to students to the classroom; in the current context of managerialist reforms prioritizing research and administrative power, few faculty members track student learning after class or provide extra support. Thirdly, as Chinese higher education overall has received heavy funding over the past two decades, institutions of various rankings have extensively built and renewed their infrastructure, such as networks, laboratories, facilities, and praxis bases, compared with their own past. As such, students across tiers of universities all feel the direct influence of growing infrastructure on their learning in an age of massive expansion of higher education. However, first-tier universities have more often than not used multiple methods such as group learning, oral examinations, and praxis feedback to assess student learning, owing to their deeper internationalization, while non-first-tier institutions mostly stick to traditional paper examinations; that could explain the significant difference in the impact of AAM on CS between students of different tiers of universities.

Regarding the comparison across academic years, the significant difference in the impact of RS on PS between higher- and lower-academic-year students is understandable. For freshmen and sophomores, the main tasks are basic and prerequisite theoretical courses such as advanced mathematics and linear algebra. These fundamental courses must be taught in classrooms to lay a solid academic foundation, and not many extra resources such as laboratory equipment or praxis bases are needed at this stage. Juniors and seniors, however, whether conducting experiments, working on capstone projects, or undertaking internships, need various resources outside classrooms to improve their application and practical capabilities; hence the difference in RS's impact on PS across academic years.

Conclusion

The present study reveals the impact of method of instruction, support from faculty members, alternative assessment methods, and resources and service on STEM students' program satisfaction, thus evidencing the soundness of China's attempt to reform its engineering education sector through these aspects. It also shows that course satisfaction mediates the relationship between the above variables and program satisfaction, as well as differences across university tiers and academic years, thus deepening our understanding of the invisible forces that contribute to program satisfaction in STEM fields at the undergraduate level.

We therefore propose several practical implications based on these empirical findings. Firstly, educational policymakers and institutional leadership should always bear in mind that the core of running a program or carrying out any educational reform lies in the provision of high-quality courses, the 'soul' of higher education (McCluskey and Winter 2012); effort to improve other aspects is supplementary to course quality improvement. Secondly, as instructors in STEM programs normally graduate from STEM fields without much training in pedagogical methods, professional development for faculty members should be emphasized in all types of universities, and institutional effort must be made to systematically train faculty members to develop and employ various pedagogical skills and means of instruction. Thirdly, universities should incorporate instructors' extracurricular guidance to students into the evaluation of faculty's overall performance and provide substantive rewards for it. Fourthly, Chinese universities, especially non-first-tier universities, must develop and employ multiple measures to assess student learning in complement to the prevalent final-term examination; knowing that they will be assessed periodically in various forms will incentivize students to overcome procrastination and take learning more seriously. Fifthly, institutions should continue to invest in facilities for learning, experiments, internships, and so forth.

Viewing the system from a larger perspective, the Chinese government should allocate more resources to non-first-tier universities, as they constitute the majority of the higher education sector and are in greater need of transformation. Though this study has found no significant difference in RS's impact on STEM undergraduates' program satisfaction across students of different tiers of universities, this could result from students' comparing their institutions' current infrastructure (i.e., library resources, internet resources, flipped classrooms, and services) with its own past rather than with that of other institutions when they filled in the questionnaire. However, a dramatic gap between different tiers of higher education institutions in attracting external investment and financial support for development has been identified in the literature (Zhuang and Xu 2018). The Matthew effect¹ not only leads to stratification across tiers

¹ The Matthew effect refers to the phenomenon that those who already have status are often placed in situations where they gain more, while those without status typically struggle to achieve more. (Source: https://study.com/academy/lesson/matthew-effect-definition-examples.html)


of institutions, but also undermines the possibility of the systematic, synergetic reform that NEE proposes. While top-tier universities nowadays easily recruit top students and gain financial support, whether lower-tier institutions can catch up with top ones, or at least make progress, should be a real concern for China's educational administrative bodies. For non-first-tier institutions to change, they need tangible resources rather than mere expectations.

One limitation of this study is that, as a cross-sectional study, it has explored only one specific direction of causality. Apart from the several important aspects of China's NEE reform that impact program satisfaction, other factors could exert influence on program satisfaction as well. In addition, as China's hundreds of higher education institutions produce 1.2 million STEM graduates at the undergraduate level each year, our sample size may limit the generalizability of the findings. Future studies may consider employing longitudinal designs and investigating other possible causal directions by collecting data from more participants.

Appendix: factors and items

Classroom Instruction Method 课程授课方式 (6 items).
(Composite Reliability = .888, M = 3.952, SD = 1.152).

1. Instructors generally teach in laboratories or praxis bases.
   授课主要在实验室、实习或实训场所进行.
2. Instructors generally teach via the Internet.
   授课主要在互联网进行.
3. Instructors generally adopt 'group learning' to facilitate teaching and learning in class.
   授课教师会普遍采用"小组学习"的方式开展课堂教学.
4. Instructors generally adopt simulation software, application programs, and virtual reality (VR) or augmented reality (AR) technology to deliver course instruction.
   授课教师会普遍利用模拟软件、应用程序、虚拟现实技术 (VR)、增强现实技术 (AR) 等信息技术开展课堂教学.
5. Instructors generally adopt the flipped classroom to deliver course instruction.
   授课教师会普遍采用"翻转课堂"模式开展教学.
6. Instructors generally invite industry or business experts to teach or lecture in class.
   授课教师会普遍聘请行业/企业专家到课堂授课或开设讲座.

Support from Faculty Members 教师对学生的学业支持情况 (7 items).
(Composite Reliability = .912, M = 4.797, SD = .859).

1. Instructors generally value the importance of students' academic performances and improvements.
   教师普遍重视帮助学生提高学习成绩和效果.
2. Instructors are generally willing to keep communicating with students in and outside class.
   教师普遍愿意与学生保持课堂及课外的持续交流.
3. Instructors generally encourage students to innovate their ways of learning and overcome learning barriers.
   教师普遍会鼓励学生创新学习方式、突破学业困难.
4. Instructors generally reward students for innovative ways of thinking and academic achievements.
   教师普遍会奖励学生的创造性思维和学习成就.
5. Instructors generally have high expectations of students' academic performances.
   教师普遍对学生有较高的学业期待.

Alternative Assessment Methods 其他学业评价方式 (5 items).
(Composite Reliability = .877, M = 4.777, SD = .856).

1. Multiple assessment methods are used to assess students' learning outcomes besides exams (e.g., paper exams, oral exams, theses, praxis capability tests, etc.).
   除笔试外, 学生专业课学习评价方式多元多样 (如包括笔试、口试、论文、实践能力测试等).
2. Group projects generally influence students' final course grades.
   小组作业对各门课程的学业成绩普遍具有影响.
3. Praxis bases or enterprises generally give feedback on students' professional skills and application abilities.
   实习基地或企业会对学生的专业应用能力提供较为专业的回馈.
4. Supplementary exams are as strict as first exams for a course.
   补考会像首次考试一样严格.
5. The assessment system motivates students' learning.
   评价体系确实有助于激励学生认真学习.

Resources and Service 学校资源与服务 (5 items).
(Composite Reliability = .907, M = 4.611, SD = .951).

1. I can get access to necessary hardware resources (e.g., facilities and devices) from the university when needed.
   我在需要时可以及时从学校获得必要的硬件资源 (设备).


2. I can get access to necessary soft resources (e.g., services) from the university when needed.
   我在需要时可以及时从学校获得相关的软件资源 (服务).
3. I'm satisfied with the courses and career counseling service provided by the university.
   我对学校开设的专业课程及就业指导建议感到满意.
4. I'm satisfied with the medical and psychological services and student welfare provided by the university.
   我对学校提供的医疗、福利及心理咨询服务感到满意.
5. I benefit a great deal from library resources at the university.
   学校图书馆的资源对我的专业课学习很有帮助.

Course Satisfaction 对课程的满意情况 (3 items).
(Composite Reliability = .896, M = 4.346, SD = 1.036).

1. After the courses, I generally miss the courses very much.
   课程结束后, 我总体上对课程流连忘返, 意犹未尽。
2. After the courses, I have mastered the theories and knowledge structure of most courses.
   课程结束后, 我掌握了大多数课程的知识体系和理论基础。
3. After the courses, I can proficiently apply what I have learned to practice or R&D.
   课程结束后, 我能够熟练地将所学应用到实践或者研发中。

Program Satisfaction 对专业的满意情况 (3 items).
(Composite Reliability = .891, M = 4.610, SD = 1.006).

1. I'm very satisfied with the program in which I'm enrolled at the university.
   本校本专业让我非常满意.
2. I'm willing to recommend the program in which I'm enrolled to prospective students.
   我愿意将本校本专业推荐给未来的求学者.
3. The program provided by the university makes me willing to pursue further studies in my major or field.
   本校本专业使我愿意在相关领域进一步学习深造.

References

Almarghani, E. M., & Mijatovic, I. (2017). Factors affecting student engagement in HEIs-it is all about good teaching. Teaching in Higher Education, 22(8), 940–956.
Baird, M., Buchinsky, M., & Sovero, V. (2016). Decomposing the racial gap in STEM major attrition: A course-level investigation.
Bedggood, R. E., & Donovan, J. D. (2012). University performance evaluations: What are we really measuring? Studies in Higher Education, 37(7), 825–842. https://doi.org/10.1080/03075079.2010.549221
Butt, B. Z., & Rehman, K. (2010). A study examining the students satisfaction in higher education. Procedia - Social and Behavioral Sciences, 2(2), 5446–5450. https://doi.org/10.1016/j.sbspro.2010.03.888
Carbone, A., Ross, B., Phelan, L., Lindsay, K., Drew, S., Stoney, S., et al. (2015). Course evaluation matters: Improving students' learning experiences with a peer-assisted teaching programme. Assessment & Evaluation in Higher Education, 40(2), 165–180.
Chen, X. (2013). STEM attrition: College students' paths into and out of STEM fields. Statistical analysis report. NCES 2014-001. National Center for Education Statistics.
Crawley, E. F., Malmqvist, J., Östlund, S., Brodeur, D. R., & Edström, K. (2014). Rethinking engineering education. Cham: Springer. https://doi.org/10.1007/978-3-319-05561-9
Dearing, R. (1997). The Dearing Report. The National Committee of Enquiry into Higher Education.
DeChenne, S. E., Enochs, L. G., & Needham, M. (2012). Science, technology, engineering, and mathematics graduate teaching assistants teaching self-efficacy. Journal of the Scholarship of Teaching and Learning, 12(4), 102–123.
Devinder, K., & Datta, B. (2003). A study of the effect of perceived lecture quality on post-lecture intentions. Work Study, 52(5), 234–243.
Dorman, J. P. (1998). The development and validation of an instrument to assess institutional-level environment in universities. Learning Environments Research, 1(3), 333–352.
Douglas, J., Douglas, A., & Barnes, B. (2006). Measuring student satisfaction at a UK university. Quality Assurance in Education, 14(3), 251–267. https://doi.org/10.1108/09684880610678568
Elhadary, O. (2016). Student satisfaction in STEM: An exploratory study. American Journal of Educational Research, 4(2), 195–199.
Elliott, K. M., & Shin, D. (2002). Student satisfaction: An alternative approach to assessing this important concept. Journal of Higher Education Policy and Management, 24(2), 197–209.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18, 39–50.
Grace, D., Weaven, S., Bodey, K., Ross, M., & Weaven, K. (2012). Putting student evaluations into perspective: The course experience quality and satisfaction model (CEQS). Studies in Educational Evaluation, 38(2), 35–43.
Graham, R. (2018). The global state of the art in engineering education. Cambridge, MA: Massachusetts Institute of Technology.
Green, H. J., Hood, M., & Neumann, D. L. (2015). Predictors of student satisfaction with university psychology courses: A review. Psychology Learning & Teaching, 14(2), 131–146. https://doi.org/10.1177/1475725715590959
Hair, J. F., Anderson, R. E., Tatham, R. L., & William, C. (Ed.). (2014). Multivariate data analysis (7th ed.). Harlow: Pearson.
Haller, M. (2006). Emerging technologies of augmented reality: Interfaces and design. IGI Global.
Hallinger, P., & Lu, J. (2013). Learner centered higher education in East Asia: Assessing the effects on student engagement. International Journal of Educational Management, 27(6), 594–612.
Hill, Y., Lomas, L., & MacGregor, J. (2003). Students' perceptions of quality in higher education. Quality Assurance in Education, 11(1), 15–20.
Holdren, J., Lander, E., & Varmus, H. (2010). Report to the president, Prepare and inspire: K-12 education in science, technology, engineering, and math (STEM) for America's future.

Holdren, J. P., Marrett, C., & Suresh, S. (2013). Federal science, technology, engineering, and mathematics (STEM) education 5-year strategic plan. National Science and Technology Council: Committee on STEM Education.
Huang, Y., Pang, S.-K., & Yu, S. (2018). Academic identities and university faculty responses to new managerialist reforms: Experiences from China. Studies in Higher Education, 43(1), 154–172. https://doi.org/10.1080/03075079.2016.1157860
Husband, T. (2013). Improving the quality of instruction through a service teaching framework. Journal of Effective Teaching, 13(2), 73–82.
Ives, G., & Rowley, G. (2005). Supervisor selection or allocation and continuity of supervision: Ph.D. students' progress and outcomes. Studies in Higher Education, 30(5), 535–555. https://doi.org/10.1080/03075070500249161
Kerr, C. (2001). The uses of the university. Cambridge: Harvard University Press.
Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). New York: Guilford Press.
Knight, J. (2008). Higher education in turmoil: The changing world of internationalization. Brill Sense.
Kreber, C. (2003). The relationship between students' course perception and their approaches to studying in undergraduate science courses: A Canadian experience. Higher Education Research & Development, 22(1), 57–75.
Leng, Y. (1996). Cong Jiangshou Weizhu Dao Zixue Weizhu [From the dependence on classroom lecture to self-study]. Journal of Higher Education, 17(2), 59–65.
Li, W. S., & Hui, K. F. S. (2007). Conceptions of assessment of mainland China college lecturers: A technical paper analyzing the Chinese version of CoA-III. The Asia-Pacific Education Researcher, 16(2), 185.
MacKinnon, D. P., Lockwood, C. M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39(1), 99–128.
Marginson, S. (2011). Imagining the global. In Handbook on globalization and higher education, 10–39.
McCluskey, F. B., & Winter, M. L. (2012). The idea of the digital university: Ancient traditions, disruptive technologies and the battle for the soul of higher education. Washington, DC: Westphalia Press.
MOE. (2017). Xingongke Jianshe 'Tianda' Xingdong ['Tianda Action' for 'New Engineering Education' development]. Retrieved December 31, 2018 from https://www.moe.edu.cn/s78/A08/moe_745/201704/t20170412_302427.html
MOE. (2018a). Jieshao Putong Gaodeng Xuexiao Benke Zhuanyelei Jiaoxue Zhiliang Guojia Biaozhun Youguan Qingkuang [Briefing on national standards of teaching quality for undergraduate programs]. Retrieved September 2, 2019 from https://www.moe.gov.cn/jyb_xwfb/xw_fbh/moe_2069/xwfbh_2018n/xwfb_20180130/201801/t20180130_325928.html
MOE. (2018b). Putong Gaodeng Xuexiao Benke Zhuanyelei Jiaoxue Zhiliang Guojia Biaozhun [National standards of teaching quality for undergraduate programs]. Beijing: Higher Education Press.
National Academies of Sciences, Engineering, and Medicine. (2017). Undergraduate research experiences for STEM students: Successes, challenges, and opportunities. Washington, DC: National Academies Press.
National Research Council. (2003). Evaluating and improving undergraduate teaching in science, technology, engineering, and mathematics. Washington, DC: National Academies Press.
Navarro, M., Pedraja I, M., & Rivera T, P. (2005). A new management element for universities: Satisfaction with the offered courses. International Journal of Educational Management, 19(6), 505–526.
Newmann, F. M., & Wehlage, G. G. (1993). Five standards of authentic instruction. Educational Leadership, 50(7), 8–12.
Nikolic, S., Ritz, C., Vial, P. J., Ros, M., & Stirling, D. (2015). Decoding student satisfaction: How to manage and improve the laboratory experience. IEEE Transactions on Education, 58(3), 151–158. https://doi.org/10.1109/TE.2014.2346474
Nunnally, J. C., & Bernstein, I. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.
NVAO. (2017). Delft University of Technology advisory report. The Hague, Netherlands: NVAO, Department The Netherlands, Institutional Audit.
O'Donovan, B. (2017). How student beliefs about knowledge and knowing influence their satisfaction with assessment and feedback. Higher Education, 74(4), 617–633. https://doi.org/10.1007/s10734-016-0068-y
O'Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. The Internet and Higher Education, 25, 85–95.
Ohland, M. W., Sheppard, S. D., Lichtenstein, G., Eris, O., Chachra, D., & Layton, R. A. (2008). Persistence, engagement, and migration in engineering programs. Journal of Engineering Education, 97(3), 259–278. https://doi.org/10.1002/j.2168-9830.2008.tb00978.x
Ortiz, A. M., & Sriraman, V. (2015). Exploring faculty insights into why undergraduate college students leave STEM fields of study: A three-part organizational self-study. American Journal of Engineering Education, 6(1), 43.
Price, I., Matzdorf, F., Smith, L., & Agahi, H. (2003). The impact of facilities on student choice of university. Facilities, 21(10), 212–222.
Ramsden, P. (2003). Learning to teach in higher education. Abingdon: Routledge.
Rust, C., Price, M., & O'Donovan, B. (2003). Improving students' learning by developing their understanding of assessment criteria and processes. Assessment & Evaluation in Higher Education, 28(2), 147–164.
Smith, S. S., Saunders, K. P., Antonenko, P., Green, T., Peterson, N., & Thompson, D. (2007). Experiences in using virtual reality in design and graphics classrooms. International Journal of Engineering Education, 23(6), 1192.
So, H.-J., & Brush, T. A. (2008). Student perceptions of collaborative learning, social presence and satisfaction in a blended learning environment: Relationships and critical factors. Computers & Education, 51(1), 318–336.
Sum, V., McCaskey, S. J., & Kyeyune, C. (2010). A survey research of satisfaction levels of graduate students enrolled in a nationally ranked top-10 program at a mid-western university. Research in Higher Education Journal, 7, 1.
Tam, K. Y., Heng, M. A., & Jiang, G. H. (2009). What undergraduate students in China say about their professors' teaching. Teaching in Higher Education, 14(2), 147–159.
Webster, B. J., Chan, W. S., Prosser, M. T., & Watkins, D. A. (2009). Undergraduates' learning experience and learning process: Quantitative evidence from the East. Higher Education, 58(3), 375–386.
Wilson, K. L., Lizzio, A., & Ramsden, P. (1997). The development, validation and application of the course experience questionnaire. Studies in Higher Education, 22(1), 33–53.
Xu, Y. J. (2015). Attention to retention: Exploring and addressing the needs of college students in STEM majors. Journal of Education and Training Studies. https://doi.org/10.11114/jets.v4i2.1147
Ye, X. (2011). Cong Meiguo Daxue Jiaoxue Tedian Kan Woguo
element for universities: Satisfaction with the offered courses. Daxue Jiaoxue Mangdian [Exploring the weaknesses of China’s

13
Modeling undergraduate STEM students’ satisfaction with their programs in China: an empirical… 225

University teaching from the characteristics of university teach- Zepke, N., & Leach, L. (2010). Improving student engagement: Ten
ing in the US]. Journal of Higher Education (Chinese), 32(11), proposals for action. Active Learning in Higher Education, 11(3),
68–75. 167–177.
Yin, H., & Lu, G. (2014). Development and validation of an instrument Zhang, J., Xue, L., & Lu, R. (2013). Jiyu Xuesheng Shijiao De Daxue
for assessing mathematics classroom environment in tertiary insti- Ketang Jiaoxue Zhuangtai Diaocha Yanjiu [Investigating the
tutions. The Asia-Pacific Education Researcher, 23(3), 655–669. University classroom teaching from the perspective of students].
https​://doi.org/10.1007/s4029​9-013-0138-1. Higher Education Research & Appraisal, 9, 15–17.
Yin, H., & Wang, W. (2015). Assessing and improving the quality of Zhuang, T., & Xu, X. (2018). ’New engineering education’ in Chinese
undergraduate teaching in China: The course experience ques- higher education: Prospects and challenges. Tuning Journal for
tionnaire. Assessment & Evaluation in Higher Education, 40(8), Higher Education, 6(1), 69–109.
1032–1049. https​://doi.org/10.1080/02602​938.2014.96383​7.
Yin, H., Wang, W., & Han, J. (2016). Chinese undergraduates’ percep- Publisher’s Note Springer Nature remains neutral with regard to
tions of teaching quality and the effects on approaches to studying jurisdictional claims in published maps and institutional affiliations.
and course satisfaction. Higher Education, 71(1), 39–57. https​://
doi.org/10.1007/s1073​4-015-9887-5.
