
Educ Inf Technol (2018) 23:741–755

DOI 10.1007/s10639-017-9633-y

Perceptions of technological, pedagogical and content knowledge (TPACK) among pre-service teachers in Estonia

Piret Luik 1 & Merle Taimalu 2 & Reelika Suviste 1

Received: 25 April 2017 / Accepted: 12 July 2017 / Published online: 20 July 2017
© Springer Science+Business Media, LLC 2017

Abstract Most countries stress that preparing quality teachers for twenty-first century students is an essential task for teacher training institutions. Besides the skills needed to teach subjects effectively, teachers should also know how to integrate digital technology into their teaching. Several studies have been based on the TPACK framework, some of them applying it to specific subject domains. In this study a generally applicable instrument for measuring TPACK was created. The aim of the paper was to validate the created instrument and to find out how pre-service teachers perceive their technological, pedagogical and content knowledge within the TPACK framework in Estonia, a technologically highly developed country where technology is broadly used in general education. In the factor analysis, all items involving technology merged into one factor, suggesting that the young generation perceives technology as already integrated with content and pedagogy. The results indicate that pre-service teachers lack pedagogical knowledge, but perceive themselves as good at integrating technology into their teaching. Differences in perceptions were also found according to gender, age and curriculum.

Keywords Pre-service teacher . TPACK . Perception . Validation

* Piret Luik
piret.luik@ut.ee

Merle Taimalu
merle.taimalu@ut.ee

Reelika Suviste
reelika.suviste@ut.ee

1 University of Tartu, Institute of Computer Sciences, J. Liivi 2, 50409 Tartu, Estonia
2 University of Tartu, Institute of Education, Salme 1a, 50103 Tartu, Estonia

1 Introduction

A variety of problems in relation to teacher education have been highlighted in Estonia and other European countries. One of the greatest concerns in Estonia is that the average age of our teachers is high and the next generation of teachers does not seem to be emerging. Only 60% of the graduates of teacher education programmes in Estonia enter the profession (Ministry of Education and Science 2015). The situation in which good teachers leave schools and choose another career path is similar in other countries (Howes and Goodman-Delahunty 2015). One explanation could be that teaching is a highly complex activity that draws on many kinds of knowledge, and the teaching job is quite demanding (Grossman et al. 2009; Watt and Richardson 2012). Nowadays, it is not sufficient for a teacher to be only a good specialist in a subject or a good pedagogue.
Effective use of digital technology as one important aspect of a teacher’s knowledge for
the twenty-first century is also highlighted in several policy papers (Groff 2013;
Estonian Lifelong Learning Strategy 2020 2014). Therefore, the question arises of
how our pre-service teachers perceive their knowledge about the subject content,
pedagogy and integration of digital technology in their teaching.

2 Literature review

2.1 Knowledge domains

Society needs competent teachers; therefore, preparing quality teachers is a concern for
most countries. Teacher knowledge is a complex phenomenon that is difficult to define.
There is no consensus about how teacher knowledge should be defined and what
competent teachers should know and be able to do (Goodwin and Kosnik 2013; Murray
2001), and this depends on the specific demands of the school (Berliner 2005).
Historically, the basis of teacher education started out focusing on the content
knowledge of the teacher, which means the knowledge of the subject to be taught
(Shulman 1987). Later, it shifted more towards pedagogy independent of subject matter
(Ball and McDiarmid 1990; Sung and Yang 2013). Teacher knowledge could thus be seen in terms of two areas – pedagogy and content – independent of each other.
Shulman (1987) proposed thinking about teacher knowledge as pedagogical content
knowledge, which represents content and pedagogy linked. So, his model consisted of
three areas: pedagogical knowledge (PK), content knowledge or domain-specific
knowledge (CK) and pedagogical content knowledge (PCK). PCK includes knowledge of student conceptions, preconceptions and misconceptions, knowledge of instructional strategies for teaching a subject in different contexts (e.g. Lee and Luft 2008), and knowledge about structures, analogies, illustrations and examples that help make content comprehensible (Shulman 1987). However, when analysing teacher preparation
models, PCK is not always separated, but viewed as one part of pedagogy (e.g. Boyd
et al. 2007).
PK was defined as knowledge of teaching and learning processes independent of
subject matter (Phillips et al. 2009) including the nature of student learning processes
and class management (Boyd et al. 2007), and so it could be practical, specific and
personal (Cain 2015). Voss et al. (2011) extended PK in Shulman’s (1987) model to
include pedagogical/psychological knowledge (PPK) adding psychological aspects relating to the classroom as a social group and the heterogeneity of individual students.
Their model of teacher knowledge also consisted of three broad areas (CK, PPK and
PCK), which encompass various sub-dimensions, where PPK includes knowledge
about classroom management, teaching methods, classroom assessment, and the learn-
ing processes and individual characteristics of students. They subsequently combined
the last two sub-dimensions together in their instrument to form what they called
knowledge of student heterogeneity (Voss et al. 2011). The resulting three dimensional
model (Voss et al. 2011) (CK, PPK and PCK) has also been validated by other authors
(e.g. Paulick et al. 2016).
Grossman (1995) added a fourth area that covered knowledge of the context
(school). Context also figures in the model by Phillips et al. (2009). In this model
circles are drawn with overlap between the content, pedagogy and context, so that
seven different areas of knowledge are presented: CK, PK, PCK, context knowl-
edge, content in context knowledge, pedagogical context and PCK in context.
Grossman’s (1995) knowledge of school context is similarly identified by van der
Schaaf and Stokking (2011) in their three-part model: general pedagogical knowledge, pedagogical content knowledge and knowledge of school context. CK is largely absent from this model.
As we can see these models of teacher knowledge are similar and most are based on
Shulman’s (1987) model, which has been elaborated upon by different researchers. As
the advent of digital technology has changed our understanding of teaching and teacher
knowledge, Mishra and Koehler (2006) elaborated Shulman’s (1987) model adding
technology to content and pedagogy, claiming that in teacher education the primary
focus should be studying how technology is pedagogically used to teach content. They
named the model TPACK (formerly referred to as TPCK), which consists of seven
parts and is often presented as a Venn diagram with overlapping circles. The three
circles in this model describe basic areas of teacher knowledge (Content knowledge,
Technology knowledge, Pedagogical knowledge) and the four overlapping parts indi-
cate the integration between these three circles (e.g. Koehler et al. 2007; Mishra and Koehler 2006):

• Technological Content Knowledge (TCK) – knowledge of subject matter integrated using technology;
• Technological Pedagogical Knowledge (TPK) – knowledge of using technology to support teaching methods;
• Pedagogical Content Knowledge (PCK) – knowledge of teaching methods in different subject contexts;
• Technological Pedagogical Content Knowledge (TPACK) – knowledge of using technology to implement teaching methods in different subject contexts.

Graham et al. (2012) claim that in the original framework technology did not mean only digital technology, but also included all other tools, even down to the simple pencil. To emphasize digital technology, ICT has been added to
TPACK to form the term ICT-TPCK (Angeli and Valanides 2009), or when talking
about TPACK with respect to the educational use of the World Wide Web the term
TPCK-W has been used (Lee and Tsai 2010). Because a narrower definition of
technology is important for the clarity of the framework (Graham 2011), and
most researchers currently use technologies in the TPACK framework to mean
digital technologies (Graham et al. 2012), we define technology as digital
technology in our study.
It is important that in teacher training all student teachers obtain the knowledge and skills to teach their subject efficiently using technology (Marino et al. 2009), but teachers often fail to successfully integrate technology in the classroom (Liu
2011). Because the digital competences of teachers and students are highlighted in
Estonia (Estonian Lifelong Learning Strategy 2020 2014) and TPACK is widely
accepted and applied (e.g. Graham 2011; Lin et al. 2013), we also use the TPACK
framework in our study.

2.2 Measuring teacher knowledge using the TPACK framework

There are several papers that deal with developing valid and reliable instruments for measuring teacher evaluations of their knowledge according to the TPACK model in
different countries such as Singapore (e.g. Koh et al. 2010; Lin et al. 2013), Taiwan
(Shih and Chuang 2013), China (e.g. Dong et al. 2015), Ghana (e.g. Agyei and
Keengwe 2012), Turkey (many studies e.g. Cengiz 2015; Hüseyin 2015; Kazu and
Erten 2014) and the USA (e.g. Kopcha et al. 2014; Lee and Tsai 2010; Shinas et al.
2013). However, more studies of teachers from different countries are still needed to
explore possible cultural differences in TPACK perceptions among pre-service and in-
service teachers (Koh et al. 2010).
Instruments using the TPACK framework have been used to measure evalua-
tions of knowledge areas in the case of pre-service teachers (e.g. Hüseyin 2015;
Koh et al. 2010; Pamuk et al. 2015; Shinas et al. 2013), in-service teachers (e.g.
Archambault and Barnett 2010; Kazu and Erten 2014; Lee and Tsai 2010) and
both (e.g. Dong et al. 2015; Koh et al. 2015; Lin et al. 2013). Shih and Chuang
(2013) studied student perceptions of college teacher knowledge according to the
TPACK framework. Some of these studies have investigated teacher perceptions
using TPACK among participants from different subject areas (e.g. Kazu and
Erten 2014; Koh et al. 2010) and some have constructed TPACK instruments
for teachers with similar content backgrounds (e.g. Hüseyin 2015 for teachers of English as a foreign language; Cengiz 2015 for physical education teachers; Canbazoğlu Bilici et al. 2013 and Lin et al. 2013 for science teachers; Zelkowski et al. 2013 for secondary school mathematics teachers). Schmidt et al. (2009) developed a scale where CK was divided into different subjects: literacy, mathematics, science and social studies. This scale has been translated into different languages and used in different countries (e.g. Lin et al. 2013 in Singapore; Kaya et al. 2013 in Turkey; Shinas et al. 2013 in the USA).
Mostly these scales use a 5-point Likert scale (e.g. Agyei and Keengwe 2012;
Cengiz 2015; Hüseyin 2015; Kaya et al. 2013; Kopcha et al. 2014; Pamuk et al.
2015) or a 7-point Likert scale (e.g. Chai et al. 2011; Dong et al. 2015; Lin et al. 2013),
but also a 6-point Likert scale (e.g. Lee and Tsai 2010) and even a scale from 0 to 100
(Canbazoğlu Bilici et al. 2013) have been used. Because these self-reported question-
naires do not measure real knowledge levels, results obtained with these instruments are
called TPACK perceptions in some studies (e.g. Koh et al. 2010; Lin et al. 2013) or
teacher opinions on TPACK self-efficacy in others (e.g. Canbazoğlu Bilici et al. 2013;
Kazu and Erten 2014; Lee and Tsai 2010).
Some of these studies only use theory based factors constructed without exploring
the construct validity of the instrument (e.g. Agyei and Keengwe 2012; Hüseyin 2015).
Still, in many papers, the exploratory factor analysis (EFA) (e.g. Kazu and Erten 2014;
Koh et al. 2010; Lee and Tsai 2010; Shinas et al. 2013), confirmatory factor analysis
(CFA) (e.g. Dong et al. 2015; Kaya et al. 2013) and confirmatory structural equation
models (SEM) (e.g. Lin et al. 2013; Pamuk et al. 2015) have been used. These analyses
have yielded different structures of the TPACK framework and most have had difficulty
identifying the seven factors of TPACK. The model with seven factors was supported
by Kazu and Erten (2014), Lin et al. (2013), Pamuk et al. (2015) and Schmidt et al.
(2009). All of the mentioned studies supported Mishra and Koehler’s (2006) frame-
work. Kaya and Dağ (2013) confirmed the model by Schmidt et al. (2009). In both
studies (Kaya and Dağ 2013; Schmidt et al. 2009) the authors arrived at a 10-factor model consisting of TK, PK, PCK, TCK, TPK, TPACK, and four separate factors of CK
representing CK in Mathematics, CK in Social Studies, CK in Science, and CK in
Literacy. The study by Dong et al. (2015) revealed a nine-factor structure, but seven of
these were the original factors from Mishra and Koehler’s (2006) framework and the
other two were the teachers’ constructivist beliefs and design disposition. However,
these models (Dong et al. 2015; Kazu and Erten 2014; Lin et al. 2013; Pamuk et al.
2015; Schmidt et al. 2009) indicate strong relationships between different dimensions.
There are several studies indicating a greater or lesser number of factors than
seven: Archambault and Barnett (2010) reported the existence of three factors,
Zelkowski et al. (2013) reached a four-factor structure. The studies by Koh et al.
(2010) and Chai et al. (2011) revealed five factors. In all these studies some of the
original factors were merged together. In addition, it was found that pre-service
teachers did not link CK to TPACK (Chai et al. 2011). An eight-factor structure in
TPACK was confirmed in three studies (Kaya et al. 2013; Shinas et al. 2013;
Canbazoğlu Bilici et al. 2013). Again moderate to high correlations between all
factors were found (Canbazoğlu Bilici et al. 2013).
In some studies only items describing TPACK as overlapping with all three basic
forms of knowledge have been used (e.g. Koh et al. 2015; Yurdakul et al. 2012)
emphasizing the importance of integrating these three parts. The study by Yurdakul
et al. (2012) indicated that the TPACK part itself has a four-factor structure: design
(designing teaching in a way that all components are integrated), exertion (using
technology in the teaching process and evaluating this process), ethics (technology-
related ethical issues) and proficiency (leadership ability to integrate technology).

2.3 Previous studies on pre-service teacher knowledge according to the TPACK framework

Both pre-service teachers (e.g. Cengiz 2015; Koh et al. 2010; Lin et al. 2013) and in-
service teachers (e.g. Kazu and Erten 2014; Koh et al. 2015; Lin et al. 2013) have rated
all their perceptions significantly higher than neutral. Results vary in regard to the
highest and the lowest ratings in different studies. In the study by Hüseyin (2015), the results indicated that pre-service teachers in Turkey rated PK and TK the highest, whereas the lowest mean scores were ascribed to the TCK and PCK factors; in the study by Dong et al. (2015), Chinese pre-service teachers perceived themselves as strongest in terms of their TPK and weakest in terms of their CK. Chai et al. (2011)
found that CK was the highest rated factor, which is contrary to the study by Dong et al.
(2015). In-service teachers rated PK (Dong et al. 2015; Koh et al. 2015) and CK (Koh et al. 2015) the highest. The lowest rated factor among in-service teachers was
TPACK (Dong et al. 2015; Koh et al. 2015).
The relationship between age and TK perceptions is more evident in the case of in-
service teachers (e.g. Kazu and Erten 2014; Lee and Tsai 2010; Lin et al. 2013), but not
so much in the case of pre-service teachers (e.g. Koh et al. 2010; Lin et al. 2013). At the
pre-service level, Lin et al. (2013) found that only TK was positively correlated, and
Koh et al. (2010) did not find any statistically significant relationships with age. At the in-service level, in the study by Lin et al. (2013) all factors including technology were negatively correlated with age, while in the study by Koh et al. (2015) age was negatively correlated only with factors including technology and positively with PCK. In addition, Kazu and Erten (2014) concluded that as the age range of teachers increases, ratings for PCK are higher.
The results exploring gender differences are also controversial in the case of pre-service teachers, even within the same country. Koh et al. (2010) and Hüseyin (2015)
have found that male pre-service teachers have rated their TK higher than females and
Hüseyin (2015) added that females scored higher than males in PK. Koh et al. (2015)
concluded that in all constructs that were related to technology male teachers rated
themselves higher than female teachers. But Lin et al. (2013) did not find any gender
differences at the pre-service level. More similarities exist at the in-service level.
Female in-service teachers seem to rate PK (Kazu and Erten 2014; Lin et al. 2013)
and TPK (Kazu and Erten 2014) higher and TK lower (Lin et al. 2013) than male in-
service teachers.
As the results of the previous studies indicate, there is no scale using the TPACK framework that is suitable for all settings – in-service and pre-service teachers, different subjects and different countries. Controversial results have also been found
in terms of how demographic data correlates with TPACK components and which
TPACK components are rated higher. Therefore, more studies of TPACK in different
countries are needed (Koh et al. 2010). This study aims to:

(1) develop a TPACK scale and validate it in the Estonian context;
(2) describe the TPACK perceptions of Estonian pre-service teachers and find relationships between TPACK components and pre-service teacher demographics (age, gender, study level).

3 Method

3.1 Sample

The sample included 413 pre-service teachers from the University of Tartu, who
took the teacher education course ‘Designing Learning and Instruction’. There
were 355 (86.0%) female respondents and 54 (13.1%) male respondents in our
study (4 respondents did not indicate their gender). In Estonia, only 12% of the
teachers working in our educational institutions are male (Haridussilm 2016), so
the gender division in the sample was in accordance with the actual situation in
schools. A total of 196 (47.5%) pre-service students were at bachelor level, 171
(41.4%) at master’s level and 38 (9.4%) were students from integrated BA and
MA curricula (7 did not answer that question). The average age of the respon-
dents was 26.0 (SD = 7.34). The age range of the respondents was between 18
and 53 years.

3.2 Instrument and procedure

We used a questionnaire for measuring self-reported knowledge because tests fail in part due to the complexity of classroom environments and rely on one
correct answer to questions for which many answers are appropriate (Berliner
2005). The questionnaire was composed in several steps. First, an initial pool of items was constructed from items used in different studies (Graham et al. 2009; Schmidt et al. 2009; Shih and Chuang 2013); the pool contained 103 items. Second, experts from Estonia and Finland (two researchers, one in-service teacher and one educational technologist from a school) evaluated these items. All the experts were familiar with the design of the TPACK model. They marked which part of the TPACK model each item represents (e.g. TK or PCK) and rated on a 5-point scale how well the item describes the model. The experts also added items that they considered important. Third, the experts' classifications were collated with the parts of TPACK assigned in the original studies; if they coincided, the item was selected. In the fourth step, an average evaluation was calculated on the basis of the input from the four experts, and items with an average of less than 3.5 were eliminated from the study. In this way we selected 56 items.
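As an illustration of this fourth selection step, the short Python sketch below averages four expert ratings per item and keeps items whose mean reaches the 3.5 threshold. The data frame, item texts and column names are hypothetical and are not taken from the authors' materials.

```python
import pandas as pd

# Hypothetical candidate-item pool: one row per item, one column per expert's
# 5-point representativeness rating (the real item pool contained 103 items).
items = pd.DataFrame({
    "item_text": [
        "I can use digital tools to present subject content",
        "I know about common student misconceptions in my subject",
    ],
    "expert_1": [5, 3],
    "expert_2": [4, 3],
    "expert_3": [5, 2],
    "expert_4": [4, 4],
})

expert_cols = ["expert_1", "expert_2", "expert_3", "expert_4"]
items["mean_rating"] = items[expert_cols].mean(axis=1)

# Keep only items whose average expert rating is at least 3.5,
# mirroring the elimination rule described above.
selected = items[items["mean_rating"] >= 3.5]
print(selected[["item_text", "mean_rating"]])
```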
The selected items were then translated into Estonian. To check the quality of the translation, another independent person translated the translated questionnaire back into English and the two English versions were compared. As the seventh step, a lecturer in the Estonian language corrected the wording of the Estonian items. We employed a 5-point Likert-type scale: (1) Strongly disagree; (2) Disagree; (3) Neither agree nor disagree; (4) Agree; and (5) Strongly agree. Then a pilot study was conducted with 23 pre-service and 78 in-
service teachers. In the pilot study respondents wrote comments about how they
understood each item. Items which were misunderstood were corrected. In
addition, Cronbach’s alphas were calculated using the theoretical model with
the data from the pilot study. Seven questions about the respondents’ back-
ground (sex, age, curricula, level of studies, year of studies etc.) were added to
the questionnaire.
The data were collected in September at the first seminar of the course ‘Designing Learning and Instruction’. All pre-service teachers taking this course in the academic years 2014/15 and 2015/16 received a paper questionnaire and were given time to fill it in. The researchers emphasized that participation was voluntary and that the research was not related to the course assessment.

3.3 Data analysis

The statistical analysis was carried out as follows. First, an exploratory factor analysis (EFA) using the robust maximum likelihood method with Varimax rotation and Kaiser normalization was conducted in IBM SPSS Statistics 23. Then, the factorial structure of the TPACK perceptions was evaluated using confirmatory factor analysis (CFA). The goodness-of-fit of the estimated CFA models was evaluated using the following three absolute goodness-of-fit indices: the chi-square test (χ2), the Root Mean Square Error of Approximation (RMSEA), and the Standardized Root Mean Square Residual (SRMR). Because the χ2 test is quite sensitive to sample size, the use of relative goodness-of-fit indices is also strongly recommended in the case of large samples (Bentler and Bonett 1980). We used the following indices: the Comparative Fit Index (CFI) and the Tucker–Lewis Index (TLI). The following cut-off points were used: RMSEA ≤ .08 (Kelley and Lai 2011), SRMR ≤ .08 (Hu and Bentler 1999), TLI ≥ .90 (Jöreskog and Sörbom 2006) and CFI ≥ .95 (Schreiber et al. 2006). In addition, the chi-square per degree of freedom (χ2/df) was calculated (Kline 2011). For all of the analyses, robust estimation was used. The CFA analyses were performed using the Mplus statistical package (Version 6; Muthén and Muthén 1998–2010).
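The analyses reported here were run in IBM SPSS Statistics 23 and Mplus. Purely as an illustration of the same kind of workflow, the Python sketch below runs a maximum-likelihood EFA with Varimax rotation using the third-party factor_analyzer package (which does not reproduce SPSS's robust estimator exactly) and checks a set of fit indices against the cut-offs listed above; the file name, the assumption that every column is an item response, and the example index values are all hypothetical.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical data set: one row per respondent, one 1-5 Likert column per item.
responses = pd.read_csv("tpack_items.csv")

# Maximum-likelihood EFA with Varimax rotation, approximating the SPSS analysis
# described above (robust ML as implemented in SPSS is not reproduced here).
efa = FactorAnalyzer(n_factors=3, rotation="varimax", method="ml")
efa.fit(responses)
loadings = pd.DataFrame(efa.loadings_, index=responses.columns)
communalities = pd.Series(efa.get_communalities(), index=responses.columns)
low_communality_items = communalities[communalities < .35].index.tolist()
print("Items with communality below .35:", low_communality_items)

# Check CFA fit indices against the cut-offs cited in the text; the index values
# themselves would come from the CFA software (Mplus in this study).
def fit_check(chi2, df, rmsea, srmr, tli, cfi):
    return {
        "chi2/df": round(chi2 / df, 2),
        "RMSEA <= .08": rmsea <= .08,
        "SRMR <= .08": srmr <= .08,
        "TLI >= .90": tli >= .90,
        "CFI >= .95": cfi >= .95,
    }

# Example call with illustrative values, not the ones reported in the Results.
print(fit_check(chi2=2000.0, df=1100, rmsea=.05, srmr=.07, tli=.92, cfi=.92))
```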
For each of the resulting three factors, a mean factor score was calculated as the mean of the items belonging to that factor. To compare the mean scores of the factors, a multivariate analysis of variance with Bonferroni corrections was applied. Pearson and Spearman correlations were calculated between the factor scores and the respondents' background data.
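A minimal sketch of this scoring step, assuming the three-factor solution reported in the Results and hypothetical item groupings and demographic columns; the multivariate analysis of variance used in the paper is approximated here by Bonferroni-adjusted paired t-tests, and the correlations use scipy.

```python
import pandas as pd
from scipy import stats

# Hypothetical inputs: item responses and background data for the same
# respondents, in the same row order (column names are not the authors').
responses = pd.read_csv("tpack_items.csv")
background = pd.read_csv("background.csv")

# Hypothetical lists of the items loading on each factor.
tech_items = ["tk1", "tk2", "tpk1", "tck1", "tpack1"]
ped_items = ["pk1", "pk2", "pck1"]
cont_items = ["ck1", "ck2"]

# Mean factor score = mean of the items belonging to that factor.
scores = pd.DataFrame({
    "Technology": responses[tech_items].mean(axis=1),
    "Pedagogy": responses[ped_items].mean(axis=1),
    "Content": responses[cont_items].mean(axis=1),
})

# Pairwise comparison of factor means with a Bonferroni correction
# (a simplification of the MANOVA reported in the paper).
pairs = [("Technology", "Pedagogy"), ("Technology", "Content"), ("Pedagogy", "Content")]
alpha = .05 / len(pairs)
for a, b in pairs:
    t, p = stats.ttest_rel(scores[a], scores[b])
    print(f"{a} vs {b}: t = {t:.3f}, significant after Bonferroni adjustment: {p < alpha}")

# Correlations between factor scores and background variables.
r_age, p_age = stats.pearsonr(background["age"], scores["Technology"])
rho_lvl, p_lvl = stats.spearmanr(background["study_level_code"], scores["Content"])
print(f"Age vs Technology: r = {r_age:.3f} (p = {p_age:.3f})")
print(f"Study level vs Content: rho = {rho_lvl:.3f} (p = {p_lvl:.3f})")
```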

4 Results

4.1 Factor structure of the TPACK scale

The EFA yielded a three-factor model. Two items were excluded because their communalities were less than .35. The total variance explained by these three factors was 55.24%. Confirmatory factor analysis with continuous factor indicators was per-
formed with the TPACK perceptions to examine the factor structure reached by the EFA. A
three-factor model for TPACK perceptions was estimated, consisting of Technology,
Content and Pedagogy factors. The first model showed a lack of fit: χ2 = 326.538, χ2/
df = 2.69, TLI = .84, CFI = .85, RMSEA = .06, SRMR = .07. After taking into account
several modification indices, the model showed nearly acceptable results with:
χ2 = 2196.271, χ2/df = 1.87, TLI = .92, CFI = .92, RMSEA = .05, SRMR = .07. In
addition, standardised factor loadings (range from .58 to .84) and item reliabilities (ranging
from .33 to .71) were moderate or high, suggesting that all the items seem to be good
indicators of latent factors.
The first factor, Technology, consisted of 29 items and all items from TK, TPK, TCK
and TPACK belonged to this factor. The second factor, Pedagogy, consisted of 14 items.
All PK items, five PCK items and one CK item belonged to this factor. The Content factor consisted of eight items: one PCK item and all the remaining CK items. Cronbach's alpha for the
Technology factor was .969, the Pedagogy factor .928, and the Content factor .910.
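The reliabilities were computed in SPSS; for reference, the following is a minimal sketch of the standard Cronbach's alpha formula, assuming a data frame that holds the item responses for one factor (names are hypothetical).

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k / (k - 1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example (hypothetical): alpha for the Technology factor, given `responses`
# with the item columns and `tech_items` listing the 29 items in that factor.
# alpha_tech = cronbach_alpha(responses[tech_items])
```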

4.2 TPACK perceptions

All three factors were highly correlated with each other: the Pearson correlation
between Technology and Pedagogy was .517, between Technology and Content was
.523, and between Pedagogy and Content was .741 (p < .01 for all). The descriptive
statistics for the TPACK factors are given in Table 1.
All factors were statistically significantly different from each other (F(2, 411) = 38.797, p < .01; multiple comparisons with Bonferroni adjustment, all p < .01).

4.3 Relationships between TPACK perceptions and demographics

Perceptions of Technology and Content among male respondents were higher than those of the female participants, but the difference for Pedagogy was not statistically significant (see Table 2).
In the case of male respondents, both Technology and Content were perceived as significantly higher than Pedagogy (F(2, 52) = 17.961, p < .01; multiple comparisons with Bonferroni adjustment, both p < .01), but there was no statistically significant difference between Technology and Content. In the case of female respondents, all factors were statistically significantly different from each other (F(2, 353) = 24.176, p < .01; multiple comparisons with Bonferroni adjustment, all p < .01).
There were statistically significant negative relationships between age and
Technology (r = −.196; p < .01) and positive relationships between age and Content
(r = .200; p < .01), but Pedagogy was not significantly related to age (r = .089; p = .072).
The perceptions of MA level students were higher than those of BA level students for all factors (see Table 3).
At BA level, perceptions of Technology were significantly higher than those of Content and Pedagogy (F(2, 194) = 14.220, p < .01; multiple comparisons with Bonferroni adjustment, both p < .01), but there was no statistically significant difference between Pedagogy and Content. At MA level, both Technology and Content were perceived as significantly higher than Pedagogy (F(2, 169) = 28.251, p < .01; multiple comparisons with Bonferroni adjustment, both p < .01), but there was no statistically significant difference between Technology and Content.

5 Discussion

The first aim was to develop a TPACK scale and to validate it in the Estonian context.
The results from exploratory factor analysis yielded three factors and this was con-
firmed by the confirmatory factor analysis. The goodness-of-fit for the model was acceptable after taking the modification indices into account. All items related to

Table 1 Means and standard deviations for TPACK factors (n = 413)

Factor        Min     Max     M       SD
Technology    1.00    5.00    3.46    .73
Content       1.13    5.00    3.33    .81
Pedagogy      1.14    4.39    3.17    .73

Table 2 Gender differences in TPACK perceptions

Factor        Male (n = 54)      Female (n = 355)     t-statistic    p-value
              M       SD         M       SD
Technology    3.80    .73        3.40    .72          3.719          < .001
Content       3.75    .68        3.27    .81          4.164          < .001
Pedagogy      3.26    .77        3.15    .72          .992           .322

technological knowledge and its integration with content and pedagogy (TK, TPK,
TCK and TPACK) merged into one factor representing technology. PK and PCK items merged into the second factor, which also included one CK item taken from the
questionnaire by Schmidt et al. (2009) (‘I have various ways and strategies of
developing my understandings of my subject(s)’). The result that this item belonged
in a factor with PK + PCK could be because this item includes strategies for developing
thinking, which could be a part of pedagogy. So in our case the item could also
represent PCK. The third factor consisted of CK items, except one – ‘I am familiar
with common student understandings and misconceptions’ – which was PK according
to Schmidt et al. (2009), but in our case all four experts classified this item as PCK
because content is also included. The CFA results indicated that this item is related
purely to content. Among previous studies, Archambault and Barnett (2010) also reported a three-factor structure, but their factors differed from those in the current study. In their study, CK, PK and PCK items merged into the PCK factor; TCK, TPK and TPACK merged into a technological–curricular content knowledge factor; and construct validity was established only for TK.
This model contrasts with the seven-factor models reported by Schmidt et al. (2009), Pamuk et al. (2015) and Kaya and Dağ (2013), who also studied pre-service teachers.
In all these three studies, the sample consisted of pre-service teachers with a similar
background; in our case pre-service teachers with different major subjects and from
diverse academic backgrounds participated. As Lin et al. (2013) declare, TPACK may
be domain-specific rather than generally applicable in teacher education involving
various disciplines.
Items for PK and PCK merged into the same factor as in the study by Koh et al.
(2010). The authors (Koh et al. 2010) claim that pre-service teachers lack deep
knowledge and experience of teaching practice, and therefore, they are less able to
consider linkages between content and pedagogy, which might explain the merging
of pedagogical knowledge and pedagogical content knowledge items into the factor.

Table 3 Comparison of TPACK perceptions between study levels

Factor        BA level (n = 196)    MA level (n = 171)    t-statistic    p-value
              M       SD            M       SD
Technology    3.29    .71           3.61    .75           −4.244         < .001
Content       3.04    .84           3.67    .68           −7.867         < .001
Pedagogy      3.02    .72           3.34    .73           −4.114         < .001

In our case pedagogy and didactics (pedagogical-content knowledge) are often taught together, and this might indicate that the organisation of the curriculum
might have an influence on teachers’ knowledge perceptions. However, this result
indicates that the participants of this study perceived conceptual differences be-
tween teaching with and without technology – the first factor with technology and
two factors without technology. The result that all items with technology merged into one factor could be a consequence of the highly developed technology integration in Estonia (OECD 2015). In addition, Prensky (2001) claims that individuals from this generation "think and process information fundamentally differently from their predecessors" (p. 1). They are used to living surrounded by different types of technology, and use it in the classroom and at home. Therefore, it might be that they simply differentiate between activities with and without technology.
The second aim was to describe the TPACK perceptions of Estonian pre-service
teachers and to find relationships between the components of TPACK and pre-service
teacher demographics (age, gender, curriculum). As with previous studies of pre-
service teachers (Cengiz 2015; Koh et al. 2010; Lin et al. 2013), the participants rated
all their perceptions significantly higher than neutral. In our case the participants
perceived technology integration knowledge the strongest and pedagogical knowledge
the weakest. This result is in contrast to previous studies by Hüseyin (2015), who found
that PK and TK factors had the highest mean scores; Chai et al. (2011), who found that
CK was the highest rated factor; and the study by Dong et al. (2015), where Chinese
pre-service teachers perceived themselves as weakest in terms of their CK. The result in
our case, that technology integration knowledge was perceived as the highest, could again be due to the highly developed technology integration in our comprehensive schools. Pre-service students are used to using technology in learning, and therefore, they perceive themselves as competent in this area. Chai et al. (2010) also found that participating in ICT courses increased the ratings on technology dimensions. The fact that pedagogical knowledge was the lowest rated knowledge could be due to the organisation of teacher education in Estonia. In most cases, pre-service teachers first study
subject content for 3 years (BA studies) and then they enter teacher education curricula
at master’s level, where they start to study pedagogy and didactics (pedagogical-content
knowledge) besides subject content (Taimalu et al. 2017).
Comparing the perceptions of the male and female respondents, we found that male perceptions of technology and content were higher than those of the female participants. Hüseyin (2015) also found that male pre-service teachers rated their TK higher than females, and Koh et al. (2015) concluded that in all constructs related to technology male pre-service teachers rated themselves higher than female teachers. Our result that there was no statistically significant difference in pedagogy between the ratings of male and female participants contrasts with Hüseyin's (2015) finding that females scored higher than males in PK.
Based on age, statistically significant negative relationships were found between age
and Technology and positive relationships between age and Content, but Pedagogy was
not significantly related to age. This result is also in contrast to previous studies, such as
Lin et al. (2013) where TK was positively correlated with age.
As expected, all MA level students' perceptions were higher than those of BA level students. At BA level, pre-service teachers perceived their Technology knowledge as higher than Pedagogy and Content.

Again, this indicates that our students use ICT extensively for schoolwork and leisure in comprehensive schools (OECD 2015), and therefore, on graduating from secondary school, they perceive their Technology knowledge as higher. At MA level, pre-service teachers perceive both Technology and Content knowledge as higher than Pedagogy. Again, this can be explained by our teacher education system. As mentioned above, in our case pre-service teachers study their content fields for three years and then learn pedagogy and didactics at MA level.

6 Conclusion and limitations

We can conclude that the TPACK scale used in the study is valid and usable in the Estonian context. The CFA confirmed the three-factor model of technology, content and pedagogy and their overlapping areas. Estonian schools and students seem to be highly "technologized", so the highest perceptions in the Technology factor (TK, TPK, TCK and TPACK) were not surprising. Therefore, our teacher training should pay more attention to those elements which are not an integral part of pre-service teachers' everyday life – pedagogy and didactics. In addition, pedagogy should already be integrated into bachelor level studies, so that students can study pedagogy alongside content and identify more clearly whether or not they are interested in teaching. It was not surprising that male pre-service students perceived their technology knowledge as higher than females did; however, we did not expect their evaluations of Content knowledge to also be higher.
In terms of the limitations of the study, we should mention that the scale used was a self-assessment instrument, which might not measure real knowledge, and respondents could underestimate or overestimate themselves. The participating pre-service teachers were studying different major subjects and came from diverse academic backgrounds. Previous studies (Kaya and Dağ 2013; Lin et al. 2013; Pamuk et al. 2015; Schmidt et al. 2009) claim that TPACK could be domain-specific, and this might be one reason why we could not confirm the seven-factor model. Validating our model with pre-service teachers from similar backgrounds might be one direction for future research.

Compliance with ethical standards

Conflict of interest The authors declare that they have no conflict of interest.

References

Agyei, D. D., & Keengwe, J. (2012). Using technology pedagogical content knowledge development to
enhance learning outcomes. Education and Information Technologies, 19(1), 155–171.
Angeli, C., & Valanides, N. (2009). Epistemological and methodological issues for the conceptualization,
development, and assessment of ICT-TPCK: Advances in technological pedagogical content knowledge
(TPCK). Computers & Education, 52, 154–168.
Archambault, L. M., & Barnett, J. H. (2010). Revisiting technological pedagogical content knowledge:
Exploring the TPACK framework. Computers & Education, 55, 1656–1662.
Ball, D., & McDiarmid, G. W. (1990). The subject matter preparation of teachers. In W. R. Houston (Ed.),
Handbook for Research on Teacher Education. New York: Macmillan.

Bentler, P. M., & Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis of covariance
structures. Psychological Bulletin, 88, 588–606.
Berliner, D. C. (2005). The near impossibility of testing for teacher quality. Journal of Teacher Education,
56(3), 205–213.
Boyd, D., Goldhaber, D., Lankford, H., & Wyckoff, J. (2007). The effect of certification and preparation on
teacher quality. The Future of Children, 17(1), 45–68.
Cain, T. (2015). Teachers’ engagement with published research: Addressing the knowledge problem.
Curriculum Journal, 26(3), 488–509. doi:10.1080/09585176.2015.1020820.
Canbazoğlu Bilici, S., Yamak, H., Kavak, N., & Guzey, S. S. (2013). Technological pedagogical content
knowledge self-efficacy scale (TPACK-SeS) for preservice science teachers: Construction, validation and
reliability. Egitim Arastirmalari - Eurasian Journal of Educational Research, 52, 37–60.
Cengiz, C. (2015). The development of TPACK, technology integrated self-efficacy and instructional tech-
nology outcome expectations of pre-service physical education teachers. Asia-Pacific Journal of Teacher
Education, 43(5), 411–422.
Chai, C. S., Koh, J. H. L., & Tsai, C.-C. (2010). Facilitating preservice teachers’ development of technological,
pedagogical, and content knowledge (TPACK). Educational Technology & Society, 13(4), 63–73.
Chai, C. S., Koh, J. H. L., Tsai, C.-C., & Tan, L. L. W. (2011). Modeling primary school pre-service teachers’
technological pedagogical content knowledge (TPACK) for meaningful learning with information and
communication technology (ICT). Computers & Education, 57, 1184–1193.
Dong, Y., Chai, C. S., Sang, G.-Y., Koh, H. L., & Tsai, C.-C. (2015). Exploring the profiles and interplays of
pre-service and Inservice teachers’ technological pedagogical content knowledge (TPACK) in China.
Educational Technology & Society, 18(1), 158–169.
Estonian Lifelong Learning Strategy 2020. (2014). Available: https://www.hm.ee/sites/default/files/estonian_
lifelong_strategy.pdf. Accessed 25 Apr 2017.
Goodwin, A. L., & Kosnik, C. (2013). Quality teacher educators = quality teachers? Conceptualizing essential
domains of knowledge for those who teach teachers, Teacher Development. An international journal of
teachers’ professional development, 17(3), 334–346. doi:10.1080/13664530.2013.813766.
Graham, C. R. (2011). Theoretical considerations for understanding technological pedagogical content
knowledge (TPACK). Computers & Education, 57, 1953–1960.
Graham, C. R., Burgoyne, N., Cantrell, P., Smith, L., St. Clair, L, & Harris, R. (2009). TPACK development in
science teaching: measuring the TPACK confidence of inservice science teachers. Techtrends, 53(5), 70–
79. doi:10.1007/s11528-009-0328-0.
Graham, C. R., Borup, J., & Smith, N. B. (2012). Using TPACK as a framework to understand teacher
candidates’ technology integration decisions. Journal of Computer Assisted Learning, 28, 530–546.
Groff, J. (2013). Technology-rich innovative learning environments. OECD CERI Innovative Learning
Environments Project. Available: http://www.oecd.org/edu/ceri/Technology-Rich%20Innovative%20
Learning%20Environments%20by%20Jennifer%20Groff.pdf Accessed 25 Apr 2017.
Grossman, P. L. (1995). Teachers’ knowledge. In L. W. Anderson (Ed.), International encyclopaedia of
teaching and teacher education (pp. 20–24). Oxford: Pergamon.
Grossman, P., Hammerness, K., & McDonald, M. (2009). Redefining teaching, re-imagining teacher educa-
tion. Teachers & Teaching, 15(2), 273–289. doi:10.1080/13540600902875340.
Haridussilm. (2016). Education statistics. Available: http://haridussilm.ee/. Accessed 25. Apr 2017.
Howes, L. M., & Goodman-Delahunty, J. (2015). Teachers’ career decisions: Perspectives on choosing
teaching careers, and on staying or leaving. Issues in Educational Research, 25(1), 18–35.
Hu, L., & Bentler, M. P. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional
criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55.
Hüseyin, Ö. (2015). Assessing pre-service English as a foreign language teachers’ technological pedagogical
content knowledge. International Education Studies, 8(5), 119–130.
Jöreskog, K. G., & Sörbom, D. (2006). LISREL 8.80 for windows [computer software]. Lincolnwood:
Scientific Software International, Inc..
Kaya, S., & Dağ, F. (2013). Turkish adaptation of technological pedagogical content knowledge survey for
elementary teachers. Educational Sciences: Theory & Practice, 13(1), 302–306.
Kaya, Z., Kaya, O. N., & Emre, I. (2013). Adaptation of technological pedagogical content knowledge scale to
Turkish. Educational Sciences: Theory and Practice, 13(4), 2367–2377.
Kazu, I. Y., & Erten, P. (2014). Teachers' technological pedagogical content knowledge self-efficacies.
Journal of Education and Training Studies, 2(2), 126–144.
Kelley, K., & Lai, K. (2011). Accuracy in parameter estimation for the root mean square error of approximation:
Sample size planning for narrow confidence intervals. Multivariate Behavioural Research, 46(1), 1–32.
Kline, R. B. (2011). Principles and practice of structural equation modeling (Third ed.). New York: Guilford Press.

Koehler, M. J., Mishra, P., & Yahya, K. (2007). Tracing the development of teacher knowledge in a design
seminar: Integrating content, pedagogy and technology. Computers & Education, 49, 740–762.
Koh, J. H. L., Chai, C. S., & Tsai, C. C. (2010). Examining the technological pedagogical content
knowledge of Singapore pre-service teachers with a large-scale survey. Journal of Computer
Assisted Learning, 26, 563–573.
Koh, J. H. L., Chai, C. S., Hong, H.-Y., & Tsai, C. C. (2015). A survey to examine teachers’ perceptions of
design dispositions, lesson design practices, and their relationships with technological pedagogical
content knowledge (TPACK). Asia-Pacific Journal of Teacher Education, 43(5), 378–391. doi:10.1080
/1359866X.2014.941280.
Kopcha, T. J., Ottenbreit-Leftwich, A., Jung, J., & Baser, D. (2014). Examining the TPACK framework
through the convergent and discriminant validity of two measures. Computers & Education, 78, 87–96.
Lee, E., & Luft, J. A. (2008). Experienced secondary science teachers’ representation of pedagogical content
knowledge. International Journal of Science Education, 30(10), 1343–1363.
Lee, M. H., & Tsai, C. C. (2010). Exploring teachers’ perceived self efficacy and technological pedagogical
content knowledge with respect to educational use of the world wide web. Instructional Science, 38, 1–21.
Lin, T.-C., Tsai, C.-C., Chai, C. S., & Lee, M.-H. (2013). Identifying science teachers’ perceptions of
technological, pedagogical, and content knowledge (TPACK). Journal of Science Education and
Technology, 22, 325–336.
Liu, S. H. (2011). Factors related to pedagogical beliefs of teachers and technology integration. Computers &
Education, 56(4), 1012–1022.
Marino, M. T., Sameshima, P., & Beecher, C. C. (2009). Enhancing TPACK with assistive technology:
Promoting inclusive practices in preservice teacher education. Contemporary Issues in Technology and
Teacher Education, 9(2), 186–207.
Ministry of Education and Science. (2015). Haridus- ja Teadusministeeriumi aasta-analüüs 2015. Available:
https://www.hm.ee/sites/default/files/aastaanalyys2015_kokkuvote_16sept.pdf. Accessed 25 Apr 2017.
Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher
knowledge. Teachers College Record, 108(6), 1017–1054.
Murray, F. B. (2001). The overreliance of accreditors on consensus standards. Journal of Teacher Education,
52(2), 211–222.
Muthén, L., & Muthén, B. O. (1998–2010). Mplus users guide & Mplus version 6.0. Available: http://www.
statmodel.com. Accessed 25 Apr 2017.
OECD. (2015). Students, Computers and Learning: Making the Connection. Paris: OECD Publishing.
doi:10.1787/9789264239555-en.
Pamuk, S., Ergun, M., Cakir, R., Yilmaz, H. B., & Ayas, C. (2015). Exploring relationships among TPACK
components and development of the TPACK instrument. Education and Information Technologies, 20(2),
241–263.
Paulick, I., Großschedl, J., Harms, U., & Möller, J. (2016). Preservice teachers’ professional knowledge and its
relation to academic self-concept. Journal of Teacher Education, 67(3), 173–182. doi:10.1177
/0022487116639263.
Phillips, K. R., De Miranda, M. A., & Shin, J. (2009). Pedagogical content knowledge and industrial design
education. Journal of Technology Studies, 35(2), 47–55.
Prensky, M. (2001). Digital natives, digital immigrants part 1. Horizon, 9(5), 1–6.
van der Schaaf, M. F., & Stokking, K. M. (2011). Construct validation of content standards for teaching.
Scandinavian Journal of Educational Research, 55(3), 273–289.
Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2009). Technology
pedagogical content knowledge (TPACK): The development and validation of an assessment instrument
for preservice teachers. Journal of Research on Technology in Education, 42(2), 123–149.
Schreiber, B. J., Stage, K. F., King, J., Nora, A., & Barlow, A. E. (2006). Reporting structural equation
modeling and confirmatory factor analysis results: A review. Journal of Educational Research, 99(6),
323–338.
Shih, C.-L., & Chuang, H.-H. (2013). The development and validation of an instrument for assessing college
students’ perceptions of faculty knowledge in technology-supported class environments. Computers &
Education, 63, 109–118.
Shinas, V. H., Yilmaz-Ozden, S., Mouza, C., Karchmer-Klein, R., & Glutting, J. J. (2013). Examining
domains of technological pedagogical content knowledge using factor analysis. Journal of Research on
Technology in Education, 45(4), 339–360.
Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational
Review, 57, 1–23.

Sung, P.-F., & Yang, M.-L. (2013). Exploring disciplinary background effect on social studies teachers’
knowledge and pedagogy. The Journal of Educational Research, 106, 77–88.
Taimalu, M., Luik, P., & Täht, K. (2017). Teaching motivations and perceptions during the first year of teacher
education in Estonia. In H. M. G. Watt, P. W. Richardson, & K. Smith (Eds.), Global perspectives on
teacher motivation (pp. 189–219). New York: Cambridge University Press.
Voss, T., Kunter, M., & Baumert, J. (2011). Assessing teacher candidates’ general pedagogical/psychological
knowledge: Test construction and validation. Journal of Educational Psychology, 103(4), 952–969.
Watt, H. M. G., & Richardson, P. W. (2012). An introduction to teaching motivations in different countries:
Comparisons using the FIT-choice scale. Asia-Pacific Journal of Teacher Education, 40, 185–197.
Yurdakul, I. K., Odabasi, H. F., Kilicer, K., Coklar, A. N., Birinci, G., & Kurt, A. A. (2012). The development,
validity and reliability of TPACK-deep: A technological pedagogical content knowledge scale.
Computers & Education, 58, 964–977.
Zelkowski, J., Gleason, J., Cox, D. C., & Bismarck, S. (2013). Developing and validating a reliable TPACK
instrument for secondary mathematics preservice teachers. Journal of Research on Technology in
Education, 46(2), 173–206.