
Computers & Education 55 (2010) 1656–1662


Revisiting technological pedagogical content knowledge: Exploring the TPACK framework
Leanna M. Archambault*, Joshua H. Barnett
Arizona State University, Mary Lou Fulton Teachers College, PO Box 37100, Mail Code 3151, Phoenix, AZ 85069, United States
* Corresponding author. Tel.: +1 602 543 6338; fax: +1 602 543 6350. E-mail addresses: leanna.archambault@asu.edu (L.M. Archambault), jhbarnett@asu.edu (J.H. Barnett).

Article history: Received 16 November 2009; Received in revised form 18 July 2010; Accepted 19 July 2010

Keywords: Technological pedagogical content knowledge; TPACK; Technology framework; Online learning; Factor analysis

Abstract

This study examines the nature of technological pedagogical content knowledge (TPACK) through the use of a factor analysis. Using a survey with 24 items designed to measure each of the areas described by the TPACK framework, and measuring the responses of 596 online teachers from across the United States, data suggest that while the framework is helpful from an organizational standpoint, it is difficult to separate out each of the domains, calling into question their existence in practice. Three major factors become evident, but rather than being comprised of pedagogy, content, and technology, the only clear domain that distinguishes itself is that of technology. This research examines the validity of the TPACK model and suggests that measuring each of these domains is complicated and convoluted, potentially due to the notion that they are not separate.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

Prior to the articulation of technological pedagogical content knowledge (TPACK) (Mishra & Koehler, 2006), the notion of a unifying
conceptual framework was lacking in the educational technology literature. As a result, the development of the TPACK framework has taken
the technology field by storm (Cox & Graham, 2009), and various researchers have developed related curricula, texts, professional
development models, and methods of measurement, as well as advancements to the framework itself (Angeli & Valanides, 2009; Harris, Mishra,
& Koehler, 2009; Niess, 2008; Schmidt et al., 2009). However, while TPACK is potentially useful, especially when conceptualizing how the
affordances of technology might be leveraged to improve teaching and learning, it requires additional examination to understand if
technology, content, and pedagogy meld together to form the unique domains described by the framework.
The purpose of this study is to explore the nature of technological pedagogical content knowledge (TPACK), defined as understanding the
connections and interactions between and among content knowledge (subject-matter that is to be taught), technological knowledge
(computers, the Internet, digital video, etc.), and pedagogical knowledge (practices, processes, strategies, procedures and methods of
teaching and learning) to improve student learning (Koehler & Mishra, 2005). This framework describes seven unique factors: pedagogy,
content, technology, pedagogical content, technological pedagogy, technological content, and technological pedagogical content. To date,
two survey instruments have been created to measure TPACK, one tailored toward undergraduate students (Schmidt et al., 2009) and the
other specific to online teaching (Archambault & Crippen, 2009). However, further analysis is necessary to determine if data from either
survey support the identification of the seven factors as described by the TPACK framework. This study uses survey responses collected from
596 K-12 online teachers to explore the nature of the factors comprising the TPACK model.

2. Related literature

In order to understand the origins of the TPACK framework and its impact on the field of educational technology, it is necessary to
examine its roots in pedagogical content knowledge (PCK). Shulman (1986) recognized the need for a more coherent


theoretical framework concerning what teachers should know and be able to do, including what content knowledge they need to possess
and how this knowledge is related to that of good teaching practices. Shulman developed the idea of pedagogical content knowledge (PCK)
to describe the relationship between the amount and organization of knowledge of a particular subject-matter (content) and knowledge
related to how to teach various content (pedagogy). According to Shulman, PCK includes knowledge on how to teach a specific content or
subject-matter knowledge, extending beyond simply knowing the content alone. PCK is described as encompassing “the most useful forms
of representation of those ideas, the most powerful analogies, illustrations, examples, explanations, and demonstrations – in a word, the
ways of representing and formulating the subject that make it comprehensible to others” (p. 9).
Shulman’s articulation of pedagogical content knowledge has become “common currency” in the field of teacher education and in the
related literature (Segall, 2004). However, as Segall points out, “Yet while it [pedagogical content knowledge] has often been cited, much
used, seldom has the term or the lens it provides for the educative endeavor been questioned, engaged critically” (p. 490). While the teacher
education community acknowledges the usefulness of the framework, especially with examining what teachers know and how that might
impact the ways in which they teach, there are some valid concerns, especially concerning the distinct nature of each of the domains,
pedagogy and content. Are they two distinct areas or are they inherently meshed? Can teachers consider a content area without thinking
about how they might go about teaching it? According to McEwan and Bull (1991), “We are concerned, however, that his [Shulman’s]
distinction between content knowledge and pedagogical content knowledge introduces an unnecessary and untenable complication to the
conceptual framework on which the research is based.” (p. 318). The authors go on to argue that content, in the form of scholarship, cannot
exist without pedagogy, and that explanations of concepts are inherently pedagogical in nature (McEwan & Bull, 1991; Segall, 2004). This
perplexity has made it difficult to validate pedagogical content knowledge as a framework and to define what constitutes knowledge from each of the domains of pedagogy, content, and the complex notion of pedagogical content knowledge.
Despite problems with the initial framework, Koehler and Mishra built on PCK and added technology as a key component, creating technological pedagogical content knowledge (TPACK). TPACK, as described in the literature, involves an understanding
of the complexity of relationships among students, teachers, content, technologies, practices, and tools. According to Koehler and Mishra
(2005), “We view technology as a knowledge system that comes with its own biases, and affordances that make some technologies
more applicable in some situations than others” (p. 132). Koehler and Mishra define TPACK as the connections and interactions between
content knowledge (subject-matter that is to be taught), technological knowledge (computers, the Internet, digital video, etc.), pedagogical
knowledge (practices, processes, strategies, procedures and methods of teaching and learning), and the transformation that occurs when
combining these domains: “Good teaching is not simply adding technology to the existing teaching and content domain. Rather, the
introduction of technology causes the representation of new concepts and requires developing a sensitivity to the dynamic, transactional
relationship between all three components suggested by the TPCK framework” (p. 134).
The TPACK framework considers three distinct and interrelated areas of teaching, as represented by Fig. 1.
The notion of TPACK is quickly becoming ubiquitous within the educational technology community, gaining popularity among researchers
and practitioners alike, as it attempts to describe the complex relationship between and among the domains of content, pedagogy, and
technology-related knowledge. However, while the theory of TPACK is compelling, more work measuring the relationship between these
domains is necessary before curriculum and textbooks are re-written. Specifically, before this model is offered as the proverbial panacea for
redressing the challenges of teaching the 21st century student, scholarship investigating the confusion between and among each of the
domains described by the framework is needed. Cox and Graham (2009) acknowledge both the difficulty and the necessity of conducting such work:

While Koehler, Mishra, and others have attempted to define and measure TPACK, the framework is not yet fully understood (Angeli & Valanides, 2009). Thus far, the explanations of technological pedagogical content knowledge and its associated constructs that have been provided are not clear enough for researchers to agree on what is and is not an example of each construct… the boundaries between them are still quite fuzzy, thus making it difficult to categorize borderline cases (p. 60).

Fig. 1. Graphic representation of technological pedagogical content knowledge (TPACK).
Specifically, more research regarding the validity and applicability of the framework is needed. This study provides one step towards that
goal in asking the following research question: What do online teachers’ ratings of their perceived knowledge levels related to TPACK
suggest regarding the nature of the framework itself?

3. Methodology

A Web-based survey instrument was developed comprising 24 items concerning online teachers' technological pedagogical content knowledge (Archambault & Crippen, 2009). Responses were on a Likert-type scale: 1 = Poor, 2 = Fair, 3 = Good, 4 = Very Good, and 5 = Excellent. To establish construct validity, the instrument underwent expert review and two rounds of think-aloud piloting (Archambault & Crippen, 2009).

3.1. Procedure

The survey was deployed to 1795 online teachers employed at virtual schools from across the nation using Dillman’s (2007) Tailored
Design survey methodology. Email addresses for these teachers were gathered via virtual school websites. A total of 596 responses from 25
different states were gathered, which represented an overall response rate of 33%. While the response rate is modest, it is recognized as
acceptable for a web-based survey (Manfreda, Bosnjak, Berzelak, Hass, & Vehovar, 2008; Shih & Fan, 2008). In addition, this survey
represents a unique examination of practitioners' responses to the TPACK theoretical framework and should be regarded as an initial yet
integral step in understanding if this theory is tenable.

3.2. Respondents

Participants were predominantly female, with 456 responses (77%) versus 139 (23%) male (consistent with the averages among
educators), and were between the ages of 26 and 45 (63%). The majority of respondents (559, 92%) reported having a bachelor’s degree and
380 (62%) indicated that they had earned a master’s degree, while 7 (2%) reported they were currently working toward their master’s
degrees. Of the 62% with master’s degrees, 148 (48%) were education (M Ed) degrees, including those in curriculum and instruction, while 73
(19%) reported having a degree in a particular content area, such as mathematics, science, social studies, or English.

3.3. Analytic strategy

The responses to the survey were analyzed using the Statistical Package for the Social Sciences (SPSS), version 16. A factor analysis using varimax rotation was performed on the total survey. The purpose of a factor analysis, according to Gorsuch (1983), is to “summarize the interrelationships among the variables in a concise but accurate manner as an aid in conceptualization” (p. 2). This method assists the researchers
in establishing a level of construct validity (Bryman & Cramer, 1990). Coefficients of internal consistency were obtained for the total survey
and by the seven expected constructs. Additionally, the relationship between each of the 24 items in the survey with each of the computed
subscale variables was analyzed using a Pearson r correlation to conduct a Corrected Item–Total Correlation analysis.
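To make this analytic strategy concrete, the sketch below shows how a comparable pipeline could be run in Python with the open-source factor_analyzer and pandas libraries. This is an illustrative approximation rather than the authors' actual procedure (they used SPSS 16), and the file name tpack_survey.csv is a hypothetical stand-in for the survey data.

```python
# Illustrative sketch only: the authors used SPSS 16; this approximates the same
# steps with the open-source factor_analyzer library. The CSV path is hypothetical,
# standing in for the 596 x 24 matrix of Likert ratings (one column per item a-x).
import pandas as pd
from factor_analyzer import FactorAnalyzer

responses = pd.read_csv("tpack_survey.csv")  # hypothetical 596 x 24 ratings file

# Varimax (orthogonal) rotation; note that factor_analyzer's default
# common-factor extraction differs from SPSS's principal-components default,
# so loadings would match the published tables only approximately.
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(responses.values)

# Kaiser criterion: retain components whose eigenvalue exceeds 1.0.
eigenvalues, _ = fa.get_eigenvalues()
n_retained = int((eigenvalues > 1.0).sum())

# Rotated loadings and communalities, analogous to Tables 2-4.
loadings = pd.DataFrame(fa.loadings_, index=responses.columns,
                        columns=[f"Factor {i + 1}" for i in range(3)])
communalities = pd.Series(fa.get_communalities(), index=responses.columns)
```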

4. Results

To determine if the survey items were reliable, internal consistency (Cronbach’s alpha) values were computed. The values are presented
alongside descriptive statistics in Table 1 and indicate that the subscales, with alpha values ranging from 0.70 to 0.89, have acceptable internal reliability (Morgan, Leech, Gloeckner, & Barrett, 2004).
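As an illustration of how these reliabilities can be computed, the following is a minimal sketch of Cronbach's alpha, reusing the hypothetical responses DataFrame from the earlier sketch; the item letters for the Technology subscale are taken from Table 4.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) array of Likert ratings:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# e.g., the three-item Technology subscale (items a, g, q; cf. Tables 1 and 4):
# cronbach_alpha(responses[["a", "g", "q"]].to_numpy())
```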
Additionally, a factor analysis was performed on the 24-item survey. Using Kaiser normalization, this analysis confirmed the existence of three separate factors within the survey, as indicated by the components with eigenvalues greater than one. The amount of variance
explained by the three factors was 58.21%. Tables 2–4 illustrate how the survey items loaded by factor, as indicated by the rotated
component matrix, which converged in five iterations. The communalities for each item are also presented.
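Continuing the hypothetical sketch from Section 3.3, the variance accounted for by the retained factors can be read from the fitted model; factor_analyzer's get_factor_variance() returns per-factor and cumulative proportions, which would only approximate the 58.21% the authors obtained from SPSS.

```python
# Proportion of variance explained by the three retained factors (cf. the
# reported 58.21%); `fa` is the fitted FactorAnalyzer from the earlier sketch.
variance, proportion, cumulative = fa.get_factor_variance()
print(f"Cumulative variance explained: {cumulative[-1]:.2%}")
```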
These results indicate that the widely accepted seven mutually exclusive domains of the TPACK theory may not exist in practice. Specifically,
the respondents reported the existence of three factors: pedagogical content knowledge, technological–curricular content knowledge, and
technological knowledge. From the responses provided, practitioners indicated strong connections between content knowledge and peda-
gogical knowledge, as evidenced by the interconnection of responses to the content, pedagogy, and pedagogical content questions. Respondents also
reported a connection between technological content, technological pedagogy, and technological pedagogical content questions. However,

Table 1
Summary of descriptive statistics for subscales.

Domain Number of survey items Mean Standard deviation Cronbach’s alpha


Pedagogy 3 4.04 0.78 0.77
Technology 3 3.23 1.12 0.89
Content 3 4.02 0.89 0.76
Pedagogical content 4 4.04 0.81 0.80
Technological content 3 3.87 1.03 0.70
Technological pedagogy 4 3.65 1.03 0.77
Technological content pedagogy 4 3.79 0.95 0.79

Table 2
Rotated component matrix – factor 1: pedagogical content knowledge.

Survey item Subscale Factor 1 Communalities


(m) My ability to plan the sequence of concepts taught within my class Content 0.75 0.61
(s) My ability to comfortably produce lesson plans with an appreciation for the topic Pedagogical content 0.74 0.61
(j) My ability to determine a particular strategy best suited to teach a specific concept Pedagogical content 0.74 0.59
(d) My ability to decide on the scope of concepts taught within my class Content 0.72 0.55
(c) My ability to use a variety of teaching strategies to relate various concepts to students Pedagogy 0.71 0.56
(u) My ability to assist students in noticing connections between various concepts in a curriculum Pedagogical content 0.69 0.55
(r) My ability to adjust teaching methodology based on student performance/feedback Pedagogy 0.69 0.51
(f) My ability to distinguish between correct and incorrect problem solving attempts by students Pedagogical content 0.67 0.52
(i) My ability to anticipate likely student misconceptions within a particular topic Pedagogical content 0.64 0.45
(b) My ability to create materials that map to specific district/state standards Content 0.55 0.39

respondents did not distinguish among these constructs. Instead, responses to these items loaded together with no clear separation. Finally, respondents did separate out the technological knowledge items, which made no reference to content or pedagogy.
In an effort to test if these findings were a reflection of the items within the instrument or an accurate reflection of the perception of
respondents, each of the 24 items was analyzed using a Pearson r correlation to conduct a Corrected Item–Total Correlation analysis. The overall
internal consistency of the instrument was r = 0.94 and, as indicated by Table 5, the removal of any one item does not improve the reliability of
the survey, which is an indication that the survey is most reliable retaining all of the 24 items. This also gives credibility to the conclusion that the
previously described factors accurately reflect the perceptions of the respondents rather than any issues with the instrument.
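To clarify the procedure behind Table 5: a corrected item–total correlation correlates each item with the sum of the remaining items (excluding the item itself, hence "corrected"), and the companion column recomputes alpha with that item removed. A minimal sketch, reusing the hypothetical responses DataFrame and the cronbach_alpha helper defined earlier:

```python
def item_analysis(responses: pd.DataFrame) -> pd.DataFrame:
    """Corrected item-total correlation and alpha-if-item-deleted per item."""
    rows = {}
    for item in responses.columns:
        rest = responses.drop(columns=item)  # all items except the one under test
        rows[item] = {
            # Pearson r between the item and the total of the remaining items.
            "corrected_item_total_r": responses[item].corr(rest.sum(axis=1)),
            # Overall reliability recomputed with this item removed.
            "alpha_if_deleted": cronbach_alpha(rest.to_numpy()),
        }
    return pd.DataFrame.from_dict(rows, orient="index")
```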

5. Discussion and implications

Although the TPACK framework is helpful from an organizational standpoint, the data from this study suggest that it faces the same
problems as that of pedagogical content knowledge in that it is difficult to separate out each of the domains, calling into question their
existence in practice. The fact that three major factors become evident is noteworthy, but rather than being comprised of pedagogy, content,
and technology, the only clear domain that distinguishes itself is that of technology. Based on these results, it seems that the technological
pedagogical content knowledge (TPACK) framework experiences the same difficulty as Shulman's quarter-century-old conception of
pedagogical content knowledge.
It is possible that when experienced educators consider teaching a particular topic, the methods of doing so are treated as part and parcel of the content; in an online context, the domain of technology is likewise added to the equation as a natural part of the medium, making it difficult to separate aspects of content, pedagogy, and technology. This was illustrated by the second phase of the think-aloud pilot, for which the lead researcher met with three teachers who taught various classes online. After being provided with definitions of each TPACK construct, the online teachers were asked to read each item of the instrument and decide under which TPACK domain they thought the item fit. In doing so, all three teachers encountered difficulty separating specific issues of content from those of pedagogy. For example, Item d – “My ability to decide on the scope of concepts taught within my class” – was interpreted by two teachers as belonging to the domain of pedagogical content rather than content alone. The same misinterpretation occurred with Item b – “My ability to create materials that map to specific district/state standards” – which participants saw as part of pedagogical content (Archambault & Crippen, 2009). These examples, coupled with the results of the factor analysis, support the notion that TPACK creates additional boundaries along already ambiguous lines drawn between pedagogy and content knowledge.
Confounding the measurement of TPACK is the difficulty of developing an instrument or methodology, applicable across different contexts, to assess each of the domains described by the framework. One of the major problems surfaces when attempting to measure content knowledge, defined by Mishra and Koehler (2006) as knowledge of the subject-matter to be taught (e.g. earth science, mathematics, language arts, etc.). Items that were developed to measure this construct within the current instrument were written with the intent of being generalizable so that teachers could apply them to their own subject-matter. These included questions regarding the ability to create materials that map to specific district/state standards and the ability to decide on the scope and sequence of concepts taught within

Table 3
Rotated component matrix – factor 2: technological-curricular content knowledge.

Survey item Subscale Factor 2 Communalities


(n) My ability to moderate online interactivity among students Technological content 0.79 0.65
(p) My ability to encourage online interactivity among students Technological pedagogy 0.78 0.63
(l) My ability to implement different methods of teaching online Technological pedagogy 0.69 0.66
(v) My ability to use various courseware programs to deliver instruction (e.g., Blackboard, Centra) Technological content 0.67 0.56
(h) My ability to create an online environment which allows students to build new knowledge and skills Technological pedagogy 0.65 0.62
(x) My ability to meet the overall demands of online teaching Technological pedagogical content 0.60 0.53
(o) My ability to use technological representations (i.e. multimedia, visual demonstrations, etc.) to demonstrate specific concepts in my content area Technological content 0.59 0.53
(w) My ability to use technology to create effective representations of content that depart from textbook knowledge Technological pedagogical content 0.58 0.50
(e) My ability to use online student assessment to modify instruction Technological pedagogical content 0.58 0.50
(t) My ability to implement district curriculum in an online environment Technological content 0.54 0.53
(k) My ability to use technology to predict students’ skill/understanding of a particular topic Technological pedagogical content 0.48 0.51

Table 4
Rotated component matrix – factor 3: technological knowledge.

Survey item Subscale Factor 3 Communalities


(a) My ability to troubleshoot technical problems associated with hardware Technology 0.88 0.81
(g) My ability to address various computer issues related to software Technology 0.85 0.81
(q) My ability to assist students with troubleshooting technical problems with their personal computers Technology 0.82 0.79

a class. The challenge becomes creating and validating an instrument that is applicable in a multitude of contexts, including different
content areas. If this is not possible, then the conceptualization of TPACK may need to be different for every imaginable content area,
including subject domains within each of these areas. This calls into question the value of the framework itself as a cohesive, overarching model.
Despite the issue with content-related items, the inability to differentiate between and among the constructs of the TPACK framework is
significant, calling into question its precision, namely whether or not the domains described by the model exist independently (Gess-Newsome & Lederman, 1999). If the components of the framework cannot be separated, further refinement may be necessary. This is echoed by Angeli and Valanides (2009):
…Koehler et al.’s (2007) conceptualization of TPACK needs further theoretical clarity. It is argued that if TPACK is to be considered as an analytical theoretical framework for guiding and explaining teachers’ thinking about technology integration in teaching and learning, then TPACK’s degree of precision needs to be put under scrutiny… Furthermore, the boundaries between some components of TPACK, such as, for example, what they define as Technological content knowledge and Technological pedagogical knowledge, are fuzzy indicating a weakness in accurate knowledge categorization or discrimination, and, consequently, a lack of precision in the framework (p. 157).
Because the TPACK domains do not statistically distinguish themselves, the heuristic value of the model is also diminished. Specifically, the heuristic value describes the extent to which the framework helps researchers predict outcomes or reveal new
knowledge. This is a weakness in the current model, as effective models can be judged on their ability to explain and predict various
phenomena (Järvelin & Wilson, 2003).
In addition to a model’s explanatory power, Järvelin and Wilson (2003) lay out the following criteria for evaluating effective conceptual
models:

• Simplicity: simpler is better, other things being equal.
• Accuracy: accuracy and explicitness in concepts are desirable.
• Scope: a broader scope is better because it subsumes narrower ones.
• Systematic power: the ability to organize concepts, relationships, and data in meaningful, systematic ways is desirable.
• Reliability: the ability, within the range of the model, to provide valid representations across the full range of possible situations.
• Validity: the ability to provide valid representations and findings is desirable.
• Fruitfulness: the ability to suggest problems for solving and hypotheses for testing is desirable.

In addition to weaknesses in TPACK’s precision and heuristic value, the framework is also limited in its ability to assist researchers in
predicting outcomes or revealing new knowledge. While it focuses on three major areas of teaching, namely content, pedagogy, and
technology, it does not represent the causative interaction or the direction of the relationship between and among these domains. This
makes it difficult for TPACK to be a fruitful model, as it does not suggest problems for solving or hypotheses for testing within the field of

Table 5
Online teacher TPACK survey item analysis.

Survey item Subscale Corrected item–total correlation Cronbach’s alpha if item deleted
a Technology 0.474 0.935
b Content 0.531 0.933
c Pedagogy 0.586 0.933
d Content 0.536 0.933
e Technological pedagogical content 0.644 0.931
f Pedagogical content 0.582 0.933
g Technology 0.561 0.933
h Technological pedagogy 0.723 0.930
i Pedagogical content 0.517 0.933
j Pedagogy 0.565 0.933
k Technological pedagogical content 0.653 0.931
l Technological pedagogy 0.734 0.930
m Content 0.596 0.932
n Technological content 0.602 0.932
o Technological content 0.642 0.932
p Technological pedagogy 0.587 0.933
q Technology 0.577 0.933
r Pedagogy 0.549 0.933
s Pedagogical content 0.549 0.933
t Technological content 0.645 0.932
u Pedagogical content 0.599 0.932
v Technological content 0.582 0.933
w Technological pedagogical content 0.633 0.932
x Technological pedagogical content 0.660 0.931

educational technology. It would appear from this study that there is room to continue to build on TPACK or even conceptualize other
models that provide a less complex, more precise way of representing the effective integration of technology to improve student learning.
Despite its fuzzy boundaries, the TPACK framework has theoretical appeal, providing an analytical structure highlighting the importance
of content knowledge when incorporating the use of technology. As Koehler and Mishra (2008) recognize, “Instead of applying technological
tools to every content area uniformly, teachers should come to understand that the various affordances and constraints of technology differ
by curricular subject-matter content or pedagogical approach” (p. 22). This focus on subject-matter content is important when considering
the effective use of technology.
However, this appeal is tempered by the difficulty in measuring each of the constructs described by the framework. Further, using this
model, what changes can colleges of education enact to produce more skilled teachers? As Harris et al. (2009) point out:
TPACK is a framework for teacher knowledge, and as such, it may be helpful to those planning professional development for teachers by
illuminating what teachers need to know about technology, pedagogy, and content and their interrelationships. The TPACK framework
does not specify how this should be accomplished, recognizing that there are many possible approaches to knowledge development of this
type. (p. 403)
There is confusion within the field of educational technology, not only concerning the definitions, but also concerning the specific activities and
methods to develop TPACK. This makes it difficult to implement knowledge from a framework that is yet to be fully defined, which limits its
practical application. This is an important area for future research, including detailed examples of TPACK as it pertains to teacher practice
(Cox & Graham, 2009).

5.1. Limitations

A national quantitative study, although rich in data, is not without its drawbacks. Because survey research consists of self-report rather
than the measurement of observable behavior, concerns of accuracy are intrinsic. Although a survey methodology is appropriate when
seeking to examine characteristics from a given population, it is not as accurate as actual observable behavior. This is a dilemma now facing
the field as various methodologies are being developed in an attempt to measure and validate the TPACK model. That is, how do researchers balance the need for a valid and reliable instrument that can be readily distributed across settings with the need for alternate, field-based methods that can document the notion of technological pedagogical content knowledge in practice?
In an effort to create a quantitative instrument to measure TPACK, it is worth noting that any survey is limited by its items and scales. To
be certain, items contained within the current survey would benefit from additional refinement, including the consideration of content-
specific questions and an update of the technology-related questions to include the advancements in the read/write Web and Web 2.0 tools.
Along with the limitations of instrument development and survey research, there are also inherent difficulties in attempting to measure
a relatively new conceptual model with a relatively small, but burgeoning literature base. This study represents an important step in
beginning to question the model itself, how the data might support or refute the notion of TPACK, and how the field of educational
technology might critically consider what has become a widely known and accepted framework.

6. Conclusion

This research examines the validity of the TPACK model, which might be effective in the hallways of academia, but perhaps provides
limited benefit to administrators, teachers, and most importantly, students. From the practitioner data contained within this research, it
seems that, from the outset, measuring each of these domains is complicated and convoluted, potentially due to the notion that they are not
separate. The data emerging from the current study support such a conclusion. This leads the researchers to consider what type of model
might more accurately describe teachers’ content, pedagogical, and technological knowledge, and how this model might better inform
colleges of education and teacher education programs in preparing future educators for the challenges of teaching in the 21st century.

Appendix. Supplementary data


Supplementary data associated with this article can be found, in the online version, at doi:10.1016/j.compedu.2010.07.009

References

Angeli, C., & Valanides, N. (2009). Epistemological and methodological issues for the conceptualization, development, and assessment of ICT–TPCK: advances in technological
pedagogical content knowledge (TPCK). Computers & Education, 52(1), 154–168.
Archambault, L., & Crippen, K. (2009). Examining TPACK among K-12 online distance educators in the United States. Contemporary Issues in Technology and Teacher Education, 9(1). Retrieved from http://www.citejournal.org/vol9/iss1/general/article2.cfm
Bryman, A., & Cramer, D. (1990). Quantitative data analysis for social scientists. London: Routledge.
Cox, S., & Graham, C. R. (2009). Diagramming TPACK in practice: using an elaborated model of the TPACK framework to analyze and depict teacher knowledge. Tech-Trends, 53
(5), 60–69.
Dillman, D. A. (2007). Mail and Internet surveys: The tailored design method (2nd ed.). New York: Wiley.
Gess-Newsome, J., & Lederman, N. G. (Eds.). (1999). Examining pedagogical content knowledge. Dordrecht: Kluwer.
Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Harris, J., Mishra, P., & Koehler, M. (2009). Teachers’ technological pedagogical content knowledge and learning activity types: curriculum-based technology integration
reframed. Journal of Research on Technology in Education, 41(4), 393–416.
Järvelin, K., & Wilson, T. D. (2003). On conceptual models for information seeking and retrieval research. Information Research, 9(1). Retrieved from http://InformationR.net/ir/9-1/paper163.html
Koehler, M., & Mishra, P. (2005). What happens when teachers design educational technology? The development of technological pedagogical content knowledge. Journal of
Educational Computing Research, 32(2), 131–152.
Koehler, M., & Mishra, P. (2008). Introducing TPCK. In AACTE Committee on Innovation and Technology (Ed.), Handbook of technological pedagogical content knowledge (TPCK). New York: Routledge.
Manfreda, K. L., Bosnjak, M., Berzelak, J., Hass, I., & Vehovar, V. (2008). Web surveys versus other survey modes: a meta-analysis comparing response rate. International Journal
of Market Research, 50(1), 79–104.
McEwan, H., & Bull, B. (1991). The pedagogic nature of subject matter knowledge. American Educational Research Journal, 28(2), 316–334.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: a framework for integrating technology in teacher knowledge. Teachers College Record, 108(6),
1017–1054.
Morgan, G. A., Leech, N. L., Gloeckner, G. W., & Barrett, K. C. (2004). SPSS for introductory statistics: Use and interpretation. Mahwah, NJ: Lawrence Erlbaum Associates.
Niess, M. L. (2008). Guiding preservice teachers in developing TPCK. In N. Silverman (Ed.), Handbook of Technological Pedagogical Content Knowledge (TPCK) for Educators.
(pp. 223–250). New York: Routledge.
Schmidt, D., Baran, E., Thompson, A., Koehler, M. J., Shin, T., & Mishra, P. (2009, April). Technological pedagogical content knowledge (TPACK): The development and validation of an assessment instrument for preservice teachers. Paper presented at the annual meeting of the American Educational Research Association, April 13–17, San Diego, California.
Segall, A. (2004). Revisiting pedagogical content knowledge: the pedagogy of content/the content of pedagogy. Teaching and Teacher Education, 20(5), 489–504.
Shih, T., & Fan, X. (2008). Comparing response rates from Web and mail surveys: a meta-analysis. Field Methods, 20(3), 249–271.
Shulman, L. (1986). Paradigms and research programs in the study of teaching: A contemporary perspective. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 3–36). New York: Macmillan.
