
International Journal of Educational Research 95 (2019) 190–199


Attending to assessment problems of practice during community-centered professional development

Marcelle A. Siegel a,⁎, Suleyman Cite b, Nilay Muslu c, Christopher D. Murakami d, Shannon M. Burcks a, Kemal Izci e, Phuong D. Nguyen f

a University of Missouri, United States
b Kastamonu University, Turkey
c Muğla Sıtkı Koçman University, Turkey
d Bethel University, United States
e Necmettin Erbakan University, Turkey
f Oklahoma State University, United States

ARTICLE INFO

Keywords:
Assessment
Instructional practices
Professional development

ABSTRACT

This study focused on problems of practice of science teachers participating in a two-year, community-centered professional development and assessment design program. We examined problems of practice related to assessment. To explore this type of problem of practice, we employed an intrinsic case study approach and a community of practice theoretical lens. Five high school chemistry teachers and eight university personnel acted as participants in the study. In addition to several secondary data sources, the primary data sources included: a) 12 audio recordings of the collaborative meetings (over 28 h), b) 18 video recordings of teachers’ assessment practice (over 31 h), and c) 42 discussion board postings online. Three themes emerged based on iterative cycles of analysis for problems of practice for teachers’ assessment: 1) meeting students where they are, 2) students who do understand, and 3) the culture of the classroom. Our investigation contributes to the limited literature on ways teachers problematize and refine the enactment of classroom assessment.

1. Introduction

We know that quality teaching relies on assessment practices that monitor and foster learning. A substantial amount of research
on classroom assessment has shown that student learning can be enhanced through assessment practices that interpret student ideas
and provide specific, formative feedback (Black & Wiliam, 1998; Herman, Osmundson, Dai, Ringstaff, & Timms, 2015; Wiliam, Lee,
Harrison, & Black, 2004). Yet, classroom assessment sometimes plays a secondary role in reform efforts, even while teachers lack
assessment expertise (Brookhart, 2004; Herman et al., 2015). We view assessment as a way to leverage changes in instruction because
of its focus on evidence and learning. In this study, we conceptualize assessment as a way to drive and monitor learning, provide
evidence to plan modifications in instruction, and provide effective feedback to foster learning (e.g., Bell & Cowie, 2001; Black &
Wiliam, 2004; Gotwals, Philhower, Cisterna, & Bennett, 2015; Hattie & Timperley, 2007; Siegel & Gottheiner, 2012). The work took
place in the context of new national standards that focus on authentic scientific practices, disciplinary core ideas, and cross-cutting
concepts, called the Next Generation Science Standards (NGSS; Achieve Inc., 2013). We are interested in the understudied approach of
assessment-centered professional development as a way to drive change in classrooms, and we therefore examine teachers’ problems


⁎ Corresponding author.

https://doi.org/10.1016/j.ijer.2019.02.012
Received 19 November 2018; Received in revised form 24 February 2019; Accepted 25 February 2019
Available online 12 March 2019
0883-0355/ © 2019 Elsevier Ltd. All rights reserved.

of assessment practice in this paper.

2. Problems of practice

Problems of practice are complex issues that are regularly encountered and are often reflective of larger and persistent challenges
of a profession. The literature deliberately applies the concept in a non-theoretical way to engage the field in addressing and learning
from these challenges. In terms of teaching and assessment, problems of practice are part of classroom practice, critical to teaching, and
not malleable to a researcher’s perspective. In other words, the problem stems from the teacher’s perspective (Lampert, 1985;
Lampert, 2001). Rather than a researcher identifying them as deficits, a professional teacher claims the problems as persistent
challenges of teaching (Horn & Little, 2010; Lampert, 2001). Because of this, problems of practice are an important tool for pro-
fessional development (Thompson, Hagenah, Lohwasser, & Laxton, 2015). Focusing on problems of practice is strategic because the
problems highlight areas in which old approaches have reached the end of their usefulness, and new approaches, or new ways of
understanding and addressing these problems are needed (Hatch & Grossman, 2009).
To examine these issues, this study documents a community-centered PD program that prioritizes problems of assessment practice
from the classroom. We posed the overarching research question: How can we characterize problems of practice related to assessment?

3. Theoretical perspectives

3.1. Problems of assessment practice

We argue that teachers’ assessment practices develop in the process of grappling with the problems that they face in their
classrooms. Thus, professional development efforts aiming to enhance assessment practices should create space for discussing pro-
blems of practice. The current research landscape emphasizes teacher practices, and in mathematics, literacy, and science education,
researchers are striving to conceptualize core practices and identify specific teaching routines (e.g., McDonald, Kazemi, & Kavanagh, 2013; Stein, Engle, Smith, & Hughes, 2015; Windschitl, Thompson, Braaten, & Stroupe, 2012). Many broad-stroke studies and reviews have discussed important aspects of assessment practice and principles for quality assessment (e.g., Black & Wiliam, 1998; Brookhart, 2011; Edwards, 2013; Hargreaves, Earl, & Schmidt, 2002; Harlen, 2007; Hattie & Timperley, 2007), but there is a need for further fine-grained analyses of assessment practices. One fine-grained example is evident in Sato, Wei, and Darling-Hammond (2008), which framed assessment practices and evaluated change over time during National Board Certification. Their framework
included six dimensions of formative assessment practices, such as “Quality and appropriateness of feedback to students,” that were
scored with a rubric. Quasi-experimental findings showed that nine teachers undergoing certification had higher gains in assessment
practices over three years compared to a control group of seven teachers. Dimensions of assessment practice most improved were the
variety of assessments used and the way assessment information was used to support student learning (Sato et al., 2008).
More research is essential to examine and unpack teaching practices associated with classroom assessment. For example, what are
the problems of assessment practice that are most difficult for teachers? How might problems of assessment practice be addressed by
enhancing teachers’ pedagogical knowledge and pedagogical content knowledge (e.g., Loughran, 2013; Sonmark et al., 2017)? What
are the complexities of practice when monitoring students’ ideas, probing an idea, adapting to ideas, and providing feedback? Such
complex instructional practices are under-researched (Windschitl et al., 2012), and yet would be very useful for professional de-
velopers to understand more fully.
To enhance teacher practices, there is growing interest in community-centered professional development (Harrison, 2013; Little,
2002). Elucidating problems of practice from a classroom perspective and using these to focus instruction can be an effective activity
for teacher educators and professional developers alike. The nuances of these problems of practice are particular to sociocultural
contexts and influenced by a variety of cultural factors. Lampert (2001) explains, “to study teaching practice as it is enacted in school
classrooms, we need an approach to analysis that can focus on the many levels in action at once…” (p. 2). We adopted a communities
of practice (Lave & Wenger, 1991; Wenger, 1998) theoretical approach in this study to investigate problems of practice within a
community.

3.2. Communities of practice

To study problems of practice related to assessment, we viewed teachers’ and university colleagues’ collaborative work as a
community of practice. The PD community of practice provided a rich opportunity to examine and engage teachers in purposeful
reflections, collaborations, and negotiations of assessment problems of practice. We drew from Wenger’s (1998) construct of community as involving three elements, in particular, joint enterprise, in which participants jointly develop and negotiate the meanings of a shared practice (Lave & Wenger, 1991; Wenger, 1998). Sato et al. (2008) found that, to change assessment practices, PD should include models of different types of assessment tasks, opportunities to critique colleagues’ practices, and feedback on one’s own practice. With this lens, our study aimed to identify and characterize problems of practice related to assessment.

4. Methods

We developed an intrinsic case study (Stake, 1995). The study was approved by the Institutional Review Board of the university,
and all participants were informed of the study goals and procedures and consented to participate, based on a process approved by the Board. Below, we discuss our model of PD, participants, primary and secondary data sources, and analysis.

4.1. Community-centered model of professional development

High school chemistry teachers and university personnel developed assessments for high school chemistry classes in this two-year
project. We aimed for assessments that were supportive of learning, an aim grounded in the literature on classroom assessment (e.g.,
Bell, 2007; Black & Wiliam, 1998). We intended that assessment should be used to monitor science learning, modify instruction,
engage students in the learning process, and provide feedback to support learning (Abell & Siegel 2011; Bell & Cowie, 2001; Black &
Wiliam, 2004; Furtak & Ruiz-Primo, 2008; Gotwals & Birmingham, 2015; Gotwals et al., 2015; Hattie & Timperley, 2007; Otero,
2006; Siegel & Gottheiner, 2012; Willis, 2011). During PD meetings, each teacher discussed an assessment with the group. Then they
tried it out in their classes and brought student work to the next meeting. This enabled us to examine student work, offer ideas to
adapt instruction based on student work, and discuss how to refine the assessment task.
To provide a sense of the collaborative assessment development process, we next present two examples. During the PD meetings, one teacher expressed a desire to assess chemical reactions and chemical bonds with a hands-on activity. She shared her idea
that this would enable students to show what they know. Other members helped her by providing ideas for possible chemical
reactions she could use. Based on the feedback, the teacher developed a 5E learning experience (e.g., Bybee, 2002) and embedded
formative assessments into each step (Engage, Explore, Explain, Elaborate, Evaluate = 5E). During the Elaborate assessment, students
used clay models to explain the reaction and demonstrate their understanding of chemical bonds and the Law of Conservation of
Matter. An example of student work is shown in Fig. 1 (a student’s clay model).
In a second example, a teacher intended to evaluate how well students understood the role and structure of surfactants and wanted to link to the previous unit, in which students had discussed how the size of a reactant can influence the speed of a reaction. For this purpose, the teacher used a real-life example, the explosion on the Deepwater Horizon oil rig in the Gulf of Mexico (US) on April 20th, 2010. After the discussions during the PD meetings, he designed a group discussion assessment that provided students with enough information to reasonably answer the questions:

• “On April 20th, 2010 there was an explosion on the Deepwater Horizon oil rig in the Gulf of Mexico. In the months following the
explosion, millions of gallons of crude oil were released into the gulf. In response BP (the company that ran the oil rig) applied a
large amount of an oil dispersing chemical to the resulting oil slick. The dispersant used, Corexit, allowed the oil to be broken up
into tiny droplets which could then spread out over a greater area in the water.
• At the time of spill the manufacturer of Corexit kept the exact recipe of six different chemicals that compose the dispersant a secret.
Given the nature of the dispersant, allowing water to essentially mix with oil, what can be concluded about the types of substances
that make up Corexit?
• Crude oil is mostly biodegradable. Naturally occurring bacteria and other microorganisms feed on the crude oil and break it down
over long periods of time. In theory, why would the use of dispersants help speed up this process?”

During the group discussion assessment, students were also expected to think as if they were the manufacturer and consider which substances they would use in terms of cost and availability in the area. Therefore, students did some research on the event and the substances involved. Students presented their findings to classmates, and the teacher used rubrics to assess their presentations. In the following PD
meeting, group members discussed ways to revise the rubric.
While studies have examined the understanding and assessment practices of chemistry teachers (Harshman & Yezierski, 2015;
Izci, 2013; Lyon, 2011), no study has attempted to examine problems of practice that chemistry teachers face in the course of shifting
their assessment practices. Given the strong link between classroom assessment practices and student learning, we believe this study
will speak to educators more broadly, not only to chemistry and science teacher educators.

4.2. Teacher and university participants

As listed in Tables 1 and 2, our PD community consisted of practicing teachers and university personnel (pseudonyms are used). Each year, three participant teachers were selected from a pool of applicants. We purposefully selected teachers
to include a range of teaching experiences at different types of schools (e.g., urban/rural). The variety of educational backgrounds
and school contexts is displayed in Table 2.


Table 1
Participating University Personnel Background and Experiences.

Name (Position) | Background | Teaching and PD Experience
Ana (Prof, PI) | Biology, Biochemistry, and Science Education (Assessment) | Formal teaching (college level, 4 years); PD (K-12 level, 15 years; college level, 8 years)
Mark (Prof, Co-PI) | Chemistry | Formal teaching (college level, 16 years)
Juan (GS, Founder) | Chemistry and Science Education (Assessment) | Formal teaching (high school, 2 years); PD (2 years)
Serena (GS) | General Biology, Cell and Molecular Biology, and Science Education (Assessment) | Formal teaching (college level, 8 years); PD (2 years)
Ozair (GS) | Science Education | Informal teaching (middle school level, 2 years); PD (1 year)
Tura (GS) | Physics Education and Science Education (Assessment) | Informal teaching (middle and high school level, 3 years); PD (3 years)
Andre (UgS) | Chemistry | Pre-service high school teacher
Wyatt (GS) | Biochemistry, Science Education | Formal teaching (college level, 2 years); PD (3 years)

Note: Prof = Professor, GS = Graduate Student, UgS = Undergraduate Student.

Table 2
Participating Teachers’ Experience and Subject Areas.

Teacher | Experience | Subject Area
James | 9 years, mid-sized urban public high school | Chemistry, Chemical Biology, Medical Chemistry, Biology, and Physical Science
Madison | 2 years, small private rural military high school | Chemistry, AP Chemistry, and Biology
Stacy◊ | 7 years, mid-sized urban public high school | Honors Chemistry, General Chemistry, 9th grade Physics, Senior Level AP Physics and Physical Science, and Applied Science
Vanessa* | 1st year, small rural public high school | Chemistry (BS in Nutrition and prior work in the area), general science classes
Peter* | 6 years, small rural public high school | Chemistry, AP Chemistry, Biology, Physical Science, Honors Physical Science, and Physics

Note: *Year 2 participant, ◊Year 1 and 2 participant.

4.3. Data sources

To explore problems of assessment practice, we employed three primary data sources. These included: 1) audio recordings of the
PD meetings, 2) video recordings of teachers’ assessment practice, and 3) discussion board postings on the Blackboard learning
management system.
The first source of data was audio recordings of the PD collaborative meetings with teacher and university participants during the
two-year project. We collected 12 audio recordings of the meetings; each recording lasted between 2 and 3 hours. These recordings
provided an authentic data source for describing and identifying the problems of practice that teachers shared.
The second source of data included video recordings by researchers and teachers themselves. Recordings were collected in
classrooms during the assessment tasks that were designed in the PD. The teachers decided which classes researchers would videotape, and they selected classes to videotape themselves based on the assessments planned (i.e., whether or not the assessment was one we had developed together). These 18 classroom videotapes were analyzed and also used to stimulate reflections with teachers on their practices in the PD collaborative meetings. These video recordings and the associated reflections provided an opportunity for teachers to explain the problems of practice in their classrooms and helped university participants understand teachers’ perspectives.
Finally, teacher participants also interacted with other teachers and university personnel in the learning community using online
asynchronous discussion tools on Blackboard. Forty-two of these discussions were used as a source of data. The discussion board
offered a space where teachers posted their problems, concerns, or questions that they had related to assessment practices. Teachers
and/or university staff discussed these. Additionally, teacher participants posted their assessment templates for each of the assess-
ments that they developed or modified as part of the PD. These templates included scaffolds to facilitate teachers’ decision making with regard to the design, implementation, and reflection phases of the assessment development process.
Along with the primary data sources described above, we used several secondary data sources: interviews with teachers, written
surveys (such as background forms and exit surveys), and assessment templates. These secondary data sources assisted in showing a
broader context for the interactions in the primary data sources and were drawn upon to support discussions and confirmation and
exclusion of themes in research analysis meetings.


4.4. Data analysis

Recordings were roughly transcribed using Du Bois’s (2006) “preliminary” level of transcription; after initial themes were identified, selected sections were transcribed with details such as pauses and laughter using Du Bois’s (2006) “basic” conventions.
Previous work by Lampert (2001) characterized the problems of practice for teachers that included myriad complex issues that
teachers face in the classroom, such as how to simultaneously support learning for students with diverse experiences, motivations,
and content knowledge background. Based on this concept, we wanted to refine what problems of practice mean in relation to assessment. We analyzed the data looking for challenges of practice that are likely to be widely experienced by teachers in relation to classroom assessment design and enactment (including interpreting student responses, adjusting instruction, etc.).
After narrowing down the problems of practice for assessment, two researchers open coded the transcript of the first PD colla-
borative meeting. Viewing data using a community of practice theoretical framework, we focused on the group discussions and how
ideas developed over time. Then, the initial themes were presented in research meetings and discussed by the research group. Each of
the PD collaborative meetings was analyzed using this cycle of independent coding and research group discussion/debriefing. As the
coding process progressed, Ana wrote analytic memos where she questioned the emergent interpretations and tried to find the
connections between sources and previous memos (Bogdan & Biklen, 1998). These memos became the foundation for themes re-
ported in the findings and later the other two primary data sources were used to confirm, refute, and refine these themes.
We employed both methods triangulation and analyst triangulation (Patton, 1999) to judge the validity of emergent themes. Spe-
cifically, to triangulate the methods we checked the consistency of our interpretations of the problems of practice discussed in our
collaborative meetings by searching for confirmation of findings across our multiple data sources. For analyst-triangulation, we
exercised a strategy in which multiple researchers coded the same data simultaneously and discussed the similarities, differences and
implications of this coding to reach interpretive consensus. Limited member checking was conducted by inviting the teachers to read
and comment on preliminary interpretations and draft results (Lincoln & Guba, 1985).

5. Results

5.1. Teachers’ problems of assessment practice

In this section, we elaborate on difficult issues teachers prioritized related to assessment practices. These were identified as
problems of practice that any teacher might face, rather than struggles that might have only been pertinent or tricky for this group of
teachers.[1] We selected three problems of practice to discuss below that aligned with multiple pieces of evidence: 1) meeting students
where they are, 2) students who do understand, and 3) the culture of the classroom.

5.1.1. Meeting students where they are: “Can’t throw questions at them”
Formative classroom-level assessments are often described as tools that teachers can use to help determine student conceptions
and adjust instruction to help support learning goals. However, this requires some prior understanding of the knowledge, experiences,
and skills that students bring to the classroom. In many cases, advanced teachers might be able to draw on their knowledge of learners
to decide the types of questions or learning experiences that will help elicit student ideas, but in other cases, there is an ongoing
challenge of determining where and how best to initiate a formative assessment cycle and support students in the assessment process.
The problem of practice that we identified involves finding the balance between teaching through assessment and finding out what
students think. Teachers discussed that giving no information can have the effect of intimidating students, such as when a pre-assessment relates to an NGSS practice with which students do not have much experience. We discussed the
need to intentionally teach new assessment strategies to students who may not be able to represent their knowledge of scientific
practices or concepts, or may be too intimidated to try. This assessment problem of practice points to the issue of reaching students
where they are and not beginning with an assessment that is too intimidating or undoable.
During a collaborative conversation in a second year meeting, the group was considering strategies for using formative assess-
ments to support learning about the particle model of matter. Stacy, an experienced teacher, explained her frustration with this task
by exclaiming, “You can’t just throw questions at them.” In the excerpt below, Mark, a chemistry professor, addressed Stacy’s concerns.
Stacy: My only, I guess I am full on modeling. I’m bought on it. My question, I guess my problem is that I don’t know how. If you
don’t start them out doing this process, by the time I get to something like solubility, you can't just throw those questions at them.
Because they don’t have, they will have no context to write that model. Since you have to start building their idea[…]
Later, Stacy returns to the idea that students need prior experience with the modeling process before using assessment questions
about modeling, “because they are just not used to thinking on that level.” Juan, a university collaborator, contributes an idea about
using assessment questions to scaffold their formative assessment experiences:
Juan: We can lead them by the question. And even if they don’t have any modeling experience they can reflect on the questions,
and they can have some kind of initial modeling which shows their prior knowledge of their ideas. At least we can use their ideas

[1] For example, one of the teachers was a new teacher and brought up issues from her perspective as a beginning teacher. We addressed these issues during the collaborative meetings, but did not focus on them for this study. (Also see Section 4.4.)


to build our observations…


Throughout this interaction, Stacy brought attention to the idea that formative assessment strategies might also be novel ex-
periences for students. This suggests the importance of supporting students as they learn how to participate in a new assessment, not
for grades but for monitoring learning. As Juan and Mark explained, these forms of assessment can provide different types of
information, such as student prior knowledge, that can inform instruction.
In another example from a Year 2 collaborative PD meeting, Vanessa, a second year teacher, explained the challenges of providing
responsive and timely written feedback to students. This problem involved meeting students’ knowledge related to the assessment
practice itself, rather than the disciplinary knowledge. Vanessa explained that she had tried and struggled with peer review as an assessment practice in the past, stating, “either they give like, awesome 5! You know, or they [give] like zero.” The learning
community helped generate ideas for using different prompts to scaffold students in this assessment practice. For example, Peter, an
experienced teacher, offered a suggestion, “If you are doing peer review, you could say two things that are good and two things that
need improvement.” In these examples, we found that teachers thought it was important to be aware of student previous content
knowledge and abilities, but also students’ knowledge and abilities of the assessment practice itself.
Overall, we identified a problem of practice related to assessing closely to the students’ level of knowledge and intentionally
teaching new assessment strategies. Teachers had concerns about intimidating students with assessments at the beginning of units, as
well as concerns about students not fulfilling the purpose of new assessment types. It is important to introduce new assessment
strategies without “throwing questions” at students, while explaining that certain aspects of assessment are designed for
certain purposes, such as eliciting prior knowledge.

5.1.2. Students who do understand: “What happens when they have it right?”
A second problem of practice for assessment that we found in the analyses of discussions involved the learners who learn.
Formative assessment tends to focus on students who misunderstand. What about the students who are progressing; how does a
teacher build on their knowledge and challenge them? At a PD meeting, a teacher inquired:
Stacy: And my other question is…okay. What happens when you get the smartie that has it right?
Mark: That’s…why…Why is that bad?
Stacy: I don’t know. What do you do with them though? Because they got it. [anxious laugh]
Mark questioned why this was a difficulty, which led to a later conversation to define the problem. Defining and interpreting the
task is a common element of joint enterprise within communities of practice. Stacy explained that these students were able to draw a
diagram without difficulty and could examine it and see that it makes sense. Andre, an undergraduate preservice teacher who also
had some teaching experience before entering the degree program, suggested:
Andre: Can you have them explain, if we’re guiding, to have them explain this process? Honestly if you have a smart student, uh…
You’re trying to get them to think about things at a deeper level. If they…if they’re the ones who explain that’s going to help them
in the future anyway. So it’s…
Stacy: Well yeah, but…
Andre: You could really get them involved in helping other students…
…Stacy: [laughs] But…I mean…You know a lot of times…I…I mean with my honors students I’m going to have kids who draw
that picture getting [it] near right. And from when we first start it. And they’ll go through the process and they’ll say, “Yeah I got
it!” And now we’re done. This thing of working everybody else through that, you know….
Above, Stacy described that her high-level students were sometimes reluctant. They did not want to answer further questions to
demonstrate their understanding or to help out the students who were struggling with an assessment. The group pursued ways to
engage the high-level students next by focusing on content. A discussion about chemicals that could be used as prompts for the
students who “got it right” ensued. Mark suggested, “Now that you’ve got this model for NaCl, what about magnesium?”
Mark: You wanna…you wanna be even more complicated? …to glacial acetic acid and vinegar. Which is the same thing…but
glacial acetic acid does not have any ions in it and vinegar does. Why is that? If they can explain that then they should be in
college. …
Here, Mark is recommending higher level content to engage the students who already “got it right” and need a challenge. Stacy
also brought up the difficulty of having the students explain in a way that is not just telling the other students, but pushing them to
develop their own model.
Stacy: … And that’s the place that I feel like a lot of times we fall down and let them…let those guys just chill out or let them be a
teacher. And that’s not helping. I mean it is helping them. But if they get it and they get through it and they can understand it
[and] they can explain it like that [snaps fingers] then that isn’t really helping the other kids develop their own model.
Mark: Yeah.
Stacy: They’re just listening to Rafiq [the student who understands].


Mark: Right.
Mark then recommended other questions to ask “Rafiq,” such as asking them to come up with the substance or to explain why oil
doesn’t dissolve in water. We found these exchanges illustrative of joint enterprise because the group was building a shared in-
terpretation and holding each other accountable for tackling the problems of practice raised. In fact, the emphasis went beyond this
because the group was negotiating ways to hold students accountable for their own learning.
We found this problem of challenging the student who has succeeded on an assessment in several sources of data. For example, in
Blackboard conversations, teachers would discuss what to do with their “Rafiq.” In the case below, Madison brought up potential
difficulties with another teacher’s assessment draft online:
Madison: …I think they might start goofing off or some would finish really fast while others struggled. Maybe you could shorten
up their creativity time? I'll be very interested to see how this works out….
Again, we see the issue of students performing differently on an assessment. Often a teacher will move to the next topic when a
few students have demonstrated learning. What should a teacher do when the formative assessment reveals that a few students have
difficulties and a few understand very well? We discussed research that addresses this point and advised practices such as purpo-
sefully pairing peers into heterogeneous groups so that the knowledgeable students will explain to the ones who need more help.
What should the formative assessment cycle look like in this situation? This problem of practice challenged teachers to envision
assessment practices to support learners at multiple levels.

5.1.3. Classroom culture for assessment: not just “Barrel through”


Our analyses also pointed to the issue of changing the culture of the classroom to enact assessments that focus on learning. This
problem of practice relates to creating time and space for attending to students’ ideas. We discuss the tensions related to teachers’
views of student expectations and time limitations.
One struggle in changing the classroom culture for assessment was students’ expectations. Teachers had goals for the culture of
the classroom; for example, rather than science class revolving around producing correct answers with assessments that emphasize memorization of science facts, they wanted it to focus on analyzing, synthesizing, and evaluating by weighing ideas and connecting science ideas to real-world experiences. This brought up a tension for teachers between their goals and how they viewed student expectations—students expect help in getting a right answer. For example, while discussing Peter’s classroom video about questioning, Peter explained why he thought assessment questions should be rephrased: “Most of these kids don’t have a lot of self-confidence.”
Stacy asked, “How do you get them to try?” Teachers discussed that when students do not get an answer right the first time it can be
paralyzing for the students.
We know this is an issue in science education, because the national reform documents are aimed at student reasoning and
authentic practices and not just getting a right answer (National Research Council, 1996; Achieve Inc., 2013). At one meeting, both
Stacy and Peter claimed that they really care about the process of discussing ideas while ultimately assuming that assessments should
help students arrive at the ‘right’ answer. In the course of this conversation, teachers were able to share ways to address this tension.
A similar conversation regarding applied and conceptual questions that may not have a right or wrong answer also occurred between
Peter, Juan, and Stacy:
Peter: There is a right answer, and a wrong answer.
Juan: We need to value their efforts, not finding the answers.
Stacy: But it’s so engrained in them by the time they get to high school.
Juan: Was it difficult to change the culture?
Peter: If you don't tell them if they're right or wrong, they get really mad.
While Stacy admitted she has trouble with this specific issue, she also shared a strategy to address the pressure to find the “right”
answer: “I give them questions that I haven't solved yet.” In this way, Stacy transferred some power to the students in the classroom.
Then she added, “I don't give specific directions with labs anymore, there are different answers.” These conversations illustrated the
tension between what the teachers value and how they perceive students and student expectations. As teachers negotiated these
issues, they demonstrated mutual engagement as a community of practice. Furthermore, participating in the CoP helped teachers
think about the important tension between their expectations about student learning and their assessment practices.
We saw this theme again in a discussion about changing the classroom culture to incorporate group assessment. Group assess-
ment, or providing a quiz, test, or other summative assessment that students can discuss with a group (Siegel, Roberts, Freyermuth,
Witzig, & Izci, 2015), changes the power structure associated with testing. It was previously recommended by the university colla-
borators as one way to incorporate formative assessment practices with a focus on learning into summative assessments.
During a group meeting in Year 2, Stacy replied to Peter, “Doing some individual, group work. Could there be group testing?”
However, Peter pointed out that his perception of these students is that, “These are some of the kids that never want to be wrong,”
suggesting that group work introduces an element in the assessment that students would not like. (Notice again the tension in Peter’s
discussion between his goals and his view of students.) This led to a very fruitful discussion about Peter’s group assessment. Later, we
found Peter shifting to more authentic assessments in his class, and he noticed students were not familiar with this type of assessment.
Peter discussed with Ozair the group question he used in class:


Peter: Open ended question was too hard for them to answer.
Ozair: Do you think your assessment wasn't clear? You had to clarify verbally. What was your aim for grouping?
Peter: Give someone to talk to, reinforce what they already knew because assessment was different [emphasis added – the type of
assessment was more authentic – open ended and real world situation].
Ozair: What kind of interactions were there?
Peter: Checking work with each other, see if they got the same answer, clarifying questions, probably 3–4 groups throughout the
day where one person did most of the work.
Ozair: I'm seeing two purposes, the content and students to decide what is the best way to go. Which is the most important to you?
Peter: Taking stoich [stoichiometry topic in chemistry] to an applied situation and make a decision in a different setting.
After the university collaborators got a better idea of Peter’s goals and what occurred in his classroom they offered more sug-
gestions. Ultimately, Peter believed the problem stemmed from the wording of the last question, and he offered scaffolding
regarding the unit conversion through verbal feedback. Ana provided an example of how to provide alternate forms of scaffolding and
how to gradually remove these over time. Overall, the data analysis demonstrated the difficulty of changing classroom culture for
assessments for learning, including “authentic” assessments and “group” assessments. Entangled with this difficulty of changing
classroom culture was the significant tension between a teacher’s view of what is important and her/his view of students.
Peter believed that this type of classroom culture change would need to have broad support, “Get them used to dealing with these
types of questions, two or three times, having a whole science department focused on that,” and Stacy focused on helping students
understand the learning goals by “explaining about 21st century skills, decision making, analyzing.” Time limitations were also
discussed as a tension for teachers related to assessments because of increased grading time. For example, Peter added, “I was already
pressed for time, because I was 3 weeks behind.” Stacy agreed with the time constraints and elaborated, “How do you find time to
work in problems like these, we're just trying to barrel through.” Yet, these teachers recognized that there are ways to resolve the
conflict.
Specifically, Madison was hopeful that the newly created assessments during the program could impact her classroom culture, “I
think that by making these group test questions, students will be able to talk each other through it and just maybe calm themselves a
little for the rest of the individual test. I'm really looking forward to trying these out in that unit later.” Madison recognized that
changes in assessment could have a broad impact on the culture of learning in her classroom. Madison was able to combine what she
believed about her students, with the scaffolding previously discussed with the group, to support student learning and try to resolve
the conflict between what she thinks is important and what she believes her students are able to do, impacting the classroom culture.
The third problem of practice related to assessment was thus changing the culture of the classroom to enact assessments focused
on supporting learning. We found two aspects contributing to this problem: the culture of schools and the expectations of students. It is
not a minor issue to alter the culture of a classroom, especially in more traditional chemistry classrooms that rely on rote procedural
learning. Teachers in our study were further constrained by their schools; for example, Stacy, in a large district, had support for new instruction, while Vanessa, in a small rural district, faced resistance from administrators and parents. Clarifying this difficult problem of
practice and associated obstacles should aid teachers and professional developers to be aware of the need to shift classroom culture
toward more student-centered assessments.

6. Discussion

We explored problems of practice in a particular context, an ongoing community-centered PD focused on assessment. The findings
of this study offer guidance for teacher educators and researchers interested in teachers’ assessment practices.
Our analysis has distinguished new problems of assessment practice for teachers. We found three problems of
assessment practice: 1) meeting students where they are, 2) instructing students who do well on assessments, and 3) creating a culture
for classroom assessment. The first dilemma points to finding the sweet spot (Furtak, Heredia, Morrison, & Renga, 2012) where
students’ needs can be met, similar to the zone of proximal development (Vygotsky, 1978). The second dilemma is about assessment
practices to support learners at multiple levels. The third problem involves establishing time and space for attending to students’ ideas
while focusing on changing cultural expectations of getting the right answer. Looking across these three problems of practice, we see that teachers’ work will have to be done continually, based on the needs of each classroom cohort. These are not the kinds of dilemmas that one solves and moves on from, but deep issues of practice to revisit as the classroom context and student population change.
In a way, these problems point to the need for establishing an understanding of what the purposes of classroom assessments are and
mutual engagement (among students, teachers, and professional developers) in learner-centered teaching.
We do not claim that these are the only problems or the major problems of assessment practice for all teachers. However, these
were difficult problems for our group of teachers that, compared to the other problems we uncovered and did not write about, appear
to be relevant for teachers in other subject areas and with varying experiences. For example, assessment issues regarding how to
continue to challenge the students who “get it” are applicable to teachers in other scientific fields as well as the humanities and social sciences. We could have chosen to highlight problems with ceilings, that is, those that novices proposed and that are easily fixed (Thompson et al., 2015). (In
fact, several of these issues were brought up in our meeting by one of our teacher participants. These issues were discussed during
meetings; however, we chose to focus our data analysis on the most challenging problems.) Focusing on the most challenging


problems provided more insight for teachers generally. Thompson and colleagues (2015) compared problems of
practice with and without ceilings for supporting professional development. Problems of novices did not lead to productive con-
versations with mentors in the study. Problems without ceilings, however, “drove innovations in planning, teaching, and debriefing”
(Thompson et al., 2015, p. 14).
The problems of assessment practice thus provide excellent starting points for professional development and teacher education
programs. They highlight three areas in need of further research to determine core practices for new visions of classroom assessment.
While teachers are familiar with the idea of formative assessment, a major challenge, especially in science education, has been
helping teachers interpret student responses and adapt instruction in ways that meet students’ learning needs (e.g., Gearhart et al.,
2006; Gottheiner & Siegel, 2012; Otero, 2006). Moving from a ‘students get it or they don’t’ conception of formative assessment
(Otero, 2006) to a more adaptive stance in which teachers are co-inquirers is a major shift.
How to make this shift is implied by the participation structure used in our collaborative group. The group’s approach to as-
sessment was designed and nourished by a participation structure based on “walking through” assessments (Horn & Little, 2010, p.
207). One teacher would take the lead on designing an assessment and the group would walk through it to further develop it. After
the teacher used the assessment in his/her classroom, s/he would bring student work back to the meeting to discuss the data and what
happened in the classroom, sometimes with a video of the class. Walking forward and backward through the assessment were routines,
similar to the Academic Literacy Group’s walk through of a lesson that supported teachers’ curricular goals (Horn & Little, 2010). Our
community became accustomed to discussing the findings based on evidence collected from assessments and from records of practice
(walking backwards) and to anticipating student responses while crafting assessment tasks (walking forwards). The attention to
evidence and solving problems of assessment practice reflected the goal for teachers to inquire similarly about their students’ un-
derstanding based on quality assessments and ways to adapt instruction.

7. Conclusion

In this study, we attended to assessment practices from the perspectives of teachers. Our analysis of collaborative work between
teachers and university staff demonstrated ways the community was engaged in joint enterprise around assessment. The collegial
exchanges challenged viewpoints about assessment, such as from an individual focus on producing grades and measuring achieve-
ment, to a group and individual focus on fostering learning and adapting instruction (Izci & Siegel, 2019). We contributed to the
limited literature on ways teachers enact assessments in classrooms, and more importantly, described the ways teachers problematize
and refine the enactment of classroom assessment. One of the implications for educators is to provide support in the three areas
identified: 1) ways to meet students where they are, 2) how to engage and challenge students who do well on assessments, and 3)
ways to create a culture for student-centered classroom assessment. Another important implication involves using the community-
centered approach as a model for developing PD that values teachers’ own problems of practice. Additionally, each of the problems of
practice identified points to important areas for further research.

References

Abell, S. K., & Siegel, M. A. (2011). Assessment literacy: What science teachers need to know and be able to do? In D. Corrigan, J. Dillon, & R. Gunstone (Eds.). The
professional knowledge base of science teaching (pp. 205–221). The Netherlands: Springer.
Achieve Inc (2013). Next generation science standards: For states, by states. Washington DC: The National Academies Press. www.nextgenscience.org/next-generation-
science-standards.
Bell, B. (2007). Classroom assessment of science learning. In S. K. Abell, & N. G. Lederman (Eds.). Handbook of research on science education (pp. 1105–1149). Mahwah,
NJ: Lawrence Erlbaum.
Bell, B., & Cowie, B. (2001). Formative assessment and science education. Dordrecht: Kluwer Academic Publishers.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Educational Assessment: Principles, Policy and Practice, 5(1), 7–74.
Black, P., & Wiliam, D. (2004). The formative purpose: Assessment must first promote learning. In M. Wilson (Ed.). Towards coherence between classroom assessment and
accountability: 103rd yearbook of the national society for the study of education (pp. 20–50). (2nd ed). Chicago, IL: University of Chicago Press.
Bogdan, R. C., & Biklen, S. K. (1998). Qualitative research in education: An introduction to theory and methods (3 ed.). Needham Heights, MA: Allyn & Bacon.
Brookhart, S. M. (2004). Classroom assessment: Tensions and intersections in theory and practice. Teachers College Record, 106(3), 429–458.
Brookhart, S. (2011). Educational assessment knowledge and skills for teachers. Educational Measurement Issues and Practice, 30(1), 3–12.
Bybee, R. (2002). Scientific inquiry, student learning, and the science curriculum. Learning science and the science of learning. Arlington, VA: NSTA Press.
Du Bois, J. W. (2006). Transcription delicacy hierarchy for discourse transcription. Paper presented at the Linguistic Society of America, Albuquerque, NM. Available: http://www.linguistics.ucsb.edu/projects/transcription/A01delicacy.pdf.
Edwards, F. (2013). Quality assessment by science teachers: Five focus areas. Science Education International, 24(2), 212–226.
Furtak, E., & Ruiz-Primo, M. (2008). Making students’ thinking explicit in writing and discussion: An analysis of formative assessment prompts. Science Education, 92,
799–824.
Furtak, E. M., Heredia, S., Morrison, D., & Renga, I. (2012). Teacher development in the collaborative design of Common formative assessment. Paper Presented at the
Annual Meeting of the American Educational Research Association.
Gearhart, M., Nagashima, S., Pfotenhauer, J., Clark, S., Schwab, C., Vendlinski, T., et al. (2006). Developing expertise with classroom assessment in K–12 science:
Learning to interpret student work. Interim findings from a 2-year study. Educational Assessment, 11(3-4), 237–263.
Gottheiner, D. G., & Siegel, M. A. (2012). Experienced middle school science teachers’ assessment literacy: Investigating knowledge of students’ conceptions in genetics
and ways to shape instruction. Journal of Science Teacher Education, 23, 531–557.
Gotwals, A., & Birmingham, D. (2015). Eliciting, identifying, interpreting, and responding to students’ ideas: Teacher candidates’ growth in formative assessment
practices. Research in Science Education. https://doi.org/10.1007/s11165-015-9461-2.
Gotwals, A. W., Philhower, J., Cisterna, D., & Bennett, S. (2015). Using video to examine formative assessment practices as measures of expertise for mathematics and
science teachers. International Journal of Science and Mathematics Education, 13(2), 405–423. https://doi.org/10.1007/s10763-015-9623-8.
Hargreaves, A., Earl, L., & Schmidt, M. (2002). Perspectives on alternative assessment reform. American Educational Research Journal, 39(1), 69–95.
Harlen, W. (2007). Assessment of learning. London: Sage Publications.
Harrison, C. (2013). Collaborative action research as a tool for generating formative feedback on teachers’ classroom assessment practice: The KREST project. Teachers and Teaching Theory and Practice, 19(2), 202–213. https://doi.org/10.1080/13540602.2013.741839.
Harshman, J., & Yezierski, E. (2015). Guiding teaching with assessments: High school chemistry teachers’ use of data-driven inquiry. Chemistry Education Research and
Practice, 16(1), 93–103.
Hatch, T., & Grossman, P. (2009). Learning to look beyond the boundaries of representation: Using technology to examine teaching (Overview for a digital exhibition:
Learning from the practice of teaching). Journal of Teacher Education, 60(1), 70–85.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Herman, J., Osmundson, E., Dai, Y., Ringstaff, C., & Timms, M. (2015). Investigating the dynamics of formative assessment: Relationships between teacher knowledge,
assessment practice and learning. Assessment in Education Principles Policy and Practice, 22(3), 344–367.
Horn, I. S., & Little, J. W. (2010). Attending to problems of practice: Routines and resources for professional learning in teachers’ workplace interactions. American
Educational Research Journal, 47(1), 181–217.
Izci, K. (2013). Investigating high school chemistry teachers’ perceptions, knowledge, and practices of classroom assessment. (Unpublished PhD Thesis). Columbia, MO:
University of Missouri-Columbia.
Izci, K., & Siegel, M. A. (2019). Investigation of an alternatively certified new high school chemistry teacher’s assessment literacy. International Journal of Education in
Mathematics, Science and Technology (IJEMST), 7(1), 1–19. https://doi.org/10.18404/ijemst.473605.
Lampert, M. (1985). How do teachers manage to teach? Perspectives on problems in practice. Harvard Educational Review, 55(2), 178–194.
Lampert, M. (2001). Teaching problems and the problems of teaching. New Haven: Yale University Press.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. New York, NY: Cambridge University Press.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage Publications.
Little, J. W. (2002). Locating learning in teachers’ communities of practice: Opening up problems of analysis in records of everyday work. Teaching and Teacher
Education, 18(8), 917–946.
Loughran, J. (2013). Pedagogy: Making sense of the complex relationship between teaching and learning. Curriculum Inquiry, 43(1), 118–141. https://doi.org/10.1111/curi.12003.
Lyon, E. G. (2011). Beliefs, practices, and reflection: Exploring a science teacher’s classroom assessment through the assessment triangle model. Journal of Science
Teacher Education, 22(5), 417–435. https://doi.org/10.1007/1097201192414.
McDonald, M., Kazemi, E., & Kavanagh, S. S. (2013). Core practices and pedagogies of teacher education: A call for a common language and collective activity. Journal
of Teacher Education, 64(5), 378–386.
National Research Council (1996). National science education standards. Washington, DC: National Academy Press.
Otero, V. K. (2006). Moving beyond the “get it or don’t” conception of formative assessment. Journal of Teacher Education, 57(3), 247–255.
Patton, M. Q. (1999). Enhancing the quality and credibility of qualitative analysis. Health Services Research, 34(5 Pt 2), 1189–1208.
Sato, M., Wei, R. C., & Darling-Hammond, L. (2008). Improving teachers’ assessment practices through professional development: The case of National Board
Certification. American Educational Research Journal, 45(3), 669–700.
Siegel, M. A., Roberts, T. M., Freyermuth, S. K., Witzig, S. B., & Izci, K. (2015). Aligning assessment to instruction: Collaborative group testing in large enrollment
science classes. Journal of College Science Teaching, 44(6), 75–82.
Sonmark, K., et al. (2017). Understanding teachers’ pedagogical knowledge: Report on an international pilot study. OECD Education Working Papers, No. 159. OECD Publishing. https://doi.org/10.1787/43332ebd-en.
Stake, R. E. (1995). The art of case study research. London: Sage Publications.
Stein, M. K., Engle, R. A., Smith, M. S., & Hughes, E. K. (2015). Orchestrating productive mathematics discussions: Helping teachers learn to better incorporate student
thinking. In C. A. L. Resnick, & S. Clarke (Eds.). Socializing intelligence through academic talk and dialogue (pp. 375–388). Washington, DC: American Educational
Research Association.
Thompson, J., Hagenah, S., Lohwasser, K., & Laxton, K. (2015). Problems without ceilings: How mentors and novices frame and work on problems-of-practice. Journal
of Teacher Education, 66(4), 363–381.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Wenger, E. (1998). Communities of practice: Learning, meaning and identity. Cambridge, UK: Cambridge University Press.
Wiliam, D., Lee, C., Harrison, C., & Black, P. J. (2004). Teachers developing assessment for learning: Impact on student achievement. Assessment in Education Principles
Policy and Practice, 11(1), 49–65.
Willis, J. (2011). Affiliation, autonomy and assessment for learning. Assessment in Education Principles Policy and Practice, 18(4), 399–415.
Windschitl, M., Thompson, J., Braaten, M., & Stroupe, D. (2012). Proposing a core set of instructional practices and tools for teachers of science. Science Education,
96(5), 878–903. https://doi.org/10.1002/sce.21027.
