
Running head: Case Study 6: Evaluating the Impact of a K-12 Laptop Program

Case Study 6: Evaluating the Impact of a K-12 Laptop Program

Team 5:

Julia Heatherwick

James Cook

George Martinez

Colin McNee

California State University, Monterey Bay



Abstract

When implementing any changes to an established curriculum, care must be taken at the outset to consult all stakeholders in order to establish the rubric upon which the project will be judged a success or failure.

Preliminary Analysis Questions

1. You are a member of Dr. Colm’s evaluation team. Consider questions raised by stakeholders in the meeting (see Figure 6-4). What other questions might have been raised at the meeting? Be sure all viewpoints are represented.

As with any project implementation, it is important to engage stakeholders early on in the process. The challenge we face in this situation is the delay in stakeholder interviews and engagement. We may find questions being raised during the meeting that need to be answered but that fall outside the scope of the facilitated conversation. To create buy-in from all stakeholders, it will be important to address all questions and concerns. Additional questions that may have been raised include:

• Teachers - How does the program impact my workload?
• Principals - How are we performing against like schools and programs?
• Parents - How will this prepare my child for their future? (High school, college, work…)
• PTA - How will the program be maintained? Will we need to fundraise to maintain it?
• Community Members - What does the company get from “donating” laptops to our school?
• Mr. Cook - What is the ROI? How else can I be involved and visible in the program?
• Foundation Board - How will this program and our affiliation with it be shared with the community?
• School Board - How does this improve student engagement in the learning process? Have you seen changes in attendance?



It may also be important to step back and identify what specific capabilities or skills the stakeholders seek to improve through this program. How will those improvements be measured? Test scores are likely to be a main focus, but they aren't the only means of measuring success. What else is being done in order to achieve the desired performance?

Taking Brinkerhoff’s success case method as a model (Reiser & Dempsey, 2018), the evaluation team should ask three key questions:

• To what extent have you been able to use the laptop program to achieve success on (insert overall goal here)?
• Who is having the most success in using the laptop program?
• Who is having the least success in using the laptop program?

Additionally, the team should conduct interviews with both the success cases and the nonsuccess cases.

2. Why do you think Mr. Cook decided to fund one more year of the laptop program even though student achievement gains were not realized?

Mr. Cook likely decided to fund one more year because it was in the best interest of his business. As mentioned in the case study, one of Mr. Cook’s primary reasons for funding the program was to increase employee loyalty, which he felt would be accomplished by supporting a program that benefited his employees’ children. By that measure alone, the program was likely already successful. Furthermore, taking the program away so soon may seem capricious to his employees, who may see it as a condemnation of their children. Mr. Cook also decided to surprise, rather than consult with, the school district when presenting them with the computers. Doing this at the Winter Teacher Meeting created a huge publicity opportunity, but to suspend the program and admit its failure would erase those gains just as rapidly and publicly. Finally, given that Mr. Cook is a successful businessman and likely very competitive by nature, he probably does not like to fail. Providing funding for one more year offers the program another chance to succeed.

3. In what ways should the “formal” evaluation be different from the evaluation conducted by Tina Sears?

A close examination of the wording of Tina’s assessment reveals her emphasis on subjective measures of success that center on feeling. She held faculty focus groups to discuss how teachers felt about the laptops, and she administered student and parent surveys about their feelings toward the program. She also provided videotapes of successful use of the laptops and teacher observation reports. This is all qualitative data. While such data can be useful in giving an outsider a more complete picture of the success of a program, using it as the only (or even the primary) data type will not sit well with many administrators or funding officials, who are likely to demand objective, quantitative data. While the point could be made that the survey produced quantitative data, it was still focused on subjective assessments of how the respondents felt about the program; this is qualitative data presented as quantitative. There are likely multiple quantitative data sources available, such as standardized test scores. The difficulty in producing a legitimate quantitative study lies in coming up with a valid control group. Mr. Cook’s decision to surprise the school district hampered this, but it does not necessarily make a comparison impossible. Since all of the 5th grade classes in the district were given laptops, using one of them as a control group will not work. One solution would be to use a demographically similar group from a neighboring district in a latitudinal study. Another possibility is to conduct a longitudinal study using the classes’ previous data. Most likely a combination of these approaches would be sufficient to convincingly show whether or not the program is a success. Tina needed to consider her audience when creating the assessment, in this case a businessman who is most likely accustomed to quantitative, “bottom line” answers to his questions.
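
To illustrate how the quantitative comparison might be carried out, the following is a minimal sketch, assuming access to de-identified end-of-year ITBS scores for the laptop cohort and for a demographically similar cohort in a neighboring district. The file names, column name, and choice of Welch's t-test are our own assumptions for illustration, not details drawn from the case study.

# Sketch: comparing laptop-cohort test scores against a matched neighboring
# district. File names and column names are hypothetical.
import pandas as pd
from scipy import stats

laptop = pd.read_csv("laptop_cohort_scores.csv")           # hypothetical export
comparison = pd.read_csv("neighbor_district_scores.csv")   # hypothetical export

# Independent-samples comparison of end-of-year ITBS scores (Welch's t-test,
# since the two districts need not have equal variances).
t_stat, p_value = stats.ttest_ind(
    laptop["itbs_score"].dropna(),
    comparison["itbs_score"].dropna(),
    equal_var=False,
)

print(f"Laptop cohort mean:     {laptop['itbs_score'].mean():.1f}")
print(f"Comparison cohort mean: {comparison['itbs_score'].mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")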

4. In addition to student achievement, what other factors could be examined to determine whether or not student use of laptops affects student learning?

Student engagement is an area that definitely could be examined and, indeed, may be a more accurate measure of the success of the program, since an improvement here has ramifications for all the other areas being measured. The use of the laptops will almost certainly increase the amount of attention, interest, and optimism students show toward learning, which would result in an increase in the motivation to learn and progress. Student engagement is based on the belief that learning improves when students are interested and suffers when students are bored or disengaged. The use of the laptops could combat the boredom and disengagement that students often experience in a traditional classroom setting. The measurable outcomes of increased engagement would be improved student attendance and a reduction in the number and severity of disruptive behaviors. When student attendance rates improve, students’ academic expectations and chances of graduating improve as well. The addition of the laptops in the classrooms could help improve overall attendance by providing a resource that is not easily accessible outside of school.

5. Describe the evaluation method (instruments, data collection, and analysis) needed to measure the achievement of program goals and to address the stakeholders’ questions.



As touched upon in the previous answers, there are a number of options open to the researcher in terms of instruments, data collection, and analysis. These could consist of a variety of subjective data collection efforts, such as opinion surveys to elicit attitudes toward the program among the students, faculty, and parents. Tina has established these, and they do provide supporting evidence for the success of the program. Other methods, which do not rely on directly surveying the students and their parents, include test scores that measure the specific academic benefits of the program. Data such as attendance and the number and severity of any disciplinary actions taken by the administration would act as a proxy measurement of the level of engagement among the students. The fact that the program was put into place before any plan for measuring its success was fully mapped out poses a problem for the design of the study, but not an insurmountable one. It prevented the creation of a control group within the district. A combination of longitudinal data from the four classes’ previous test scores and comparisons to other, socioeconomically similar classes in other districts would provide a reasonably accurate picture of the success of the program. If the data support the contention that the program is a success, the stakeholders’ concerns will be satisfied by this report.
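
As a concrete illustration of the analysis step, the sketch below shows one way the longitudinal and cross-district comparisons could be combined in a simple difference-in-differences estimate. The data layout, column names, district labels, and years are hypothetical assumptions; a real study would also need to account for student mobility and demographic covariates.

# Sketch: difference-in-differences on mean ITBS scores, assuming one row per
# student per year with columns: district, year, itbs_score (all hypothetical).
import pandas as pd

scores = pd.read_csv("fifth_grade_scores_both_districts.csv")

# Mean score by district and year, reshaped so years become columns.
means = scores.groupby(["district", "year"])["itbs_score"].mean().unstack("year")

pre, post = 2004, 2005  # hypothetical pre-program and program years
change = means[post] - means[pre]

# How much more (or less) the laptop district improved than the comparison
# district over the same period.
did = change["laptop_district"] - change["comparison_district"]
print(means.round(1))
print(f"Difference-in-differences estimate: {did:.1f} points")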

Implications for ID Practice

1. What influence should testimonials have on making technology purchasing decisions?

When testimonials come from a larger interview about the technology implementation process, they should be considered a valuable factor in decision making. Testimonials are useful qualitative data that can give evaluators insight into the progress of students and educators, but they are not meant to be the sole evaluating factor. In their study of technology implementation in K-12 schools, Ruggiero and Mong (2015) acknowledge that incorporating qualitative data can be challenging: “Teaching practices and the process of integration is not easily researched through surveys and interviews” (p. 174). However, they suggest adding classroom observations and do not imply that their interviews were useless. In her article, Ertmer (2005) discusses how a teacher’s pedagogical beliefs could heavily influence the success of technology implementation in the classroom: “... if we truly hope to increase teachers’ uses of technology, especially uses that increase student learning, we must consider how teachers’ current classroom practices are rooted in, and mediated by, existing pedagogical beliefs” (p. 36). Testimonials and related interviews might be the best way to learn about pedagogical beliefs and to predict whether successful technology implementation is a feasible goal.

2. How can evaluators convince stakeholders that alternative evaluation measures, other than standardized tests, are needed to answer complex questions related to measuring student achievement?

The stakeholder meeting is a great way to address the measurement of student success. An artfully facilitated conversation would allow stakeholders an opportunity to discover that standardized tests are not the only measure of success. Questions like “What does student success look like?” could allow for deeper conversation. If the answers lean heavily toward good scores on standardized tests, then the facilitator needs to dig deeper and ask “What else?” until more is uncovered.

Additionally, the team could send out a pre-meeting survey asking respondents to define student achievement and share the results visually with the stakeholders. This would be a good primer for a deeper conversation, and Tina’s previous efforts will have paved the way for such a survey. It is always better to have the stakeholders tell you about the complexities of measuring student success and to facilitate their own discovery of how the evaluation team could go about measuring it.

3. What techniques can evaluators use to ensure that results are unbiased?

One technique an evaluator can use to make sure results remain unbiased is to avoid becoming an advocate: “As soon as evaluators put on their advocacy hat, they are no longer value-neutral toward the program” (Mohan, 2014). Another technique that can be used to reduce bias is to make sure the right people are being surveyed. Selecting the right respondent group may seem easy, but mistakes here often lead to selection bias. When conducting surveys, it is critical to focus on a population that fits the survey goals. Incorrectly including or excluding participants can lead to distorted results.
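
One practical safeguard against selection bias is to draw survey invitations at random from the complete roster of the target population rather than relying on volunteers. A minimal sketch is shown below; the roster file, column name, and sample size are hypothetical.

# Sketch: drawing a simple random sample of parents from the full roster so
# that survey respondents are not self-selected. All names are hypothetical.
import csv
import random

with open("parent_roster.csv", newline="") as f:
    roster = [row["parent_id"] for row in csv.DictReader(f)]

random.seed(42)  # fixed seed so the draw can be reproduced in the report
sample = random.sample(roster, k=min(100, len(roster)))
print(f"Invited {len(sample)} of {len(roster)} parents to complete the survey")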

4. What are some of the challenges evaluators face when scaling up from a small program to a larger program with the same goals? How would data collection and analysis methods differ depending on the size of the program?

One challenge an evaluator may face when scaling up from a small program to a larger one is maintaining the ability to collect valuable qualitative data. When a program is larger, it can be difficult for evaluators to have the time and resources to interact with all of the educators and students experiencing the implementation on a daily basis. In this case, the data collection method may shift to being completely quantitative, which could be a problem for the longevity of the program if students continue to earn low ITBS scores. An additional challenge with scaling up to a larger program is that students’ abilities and educational goals are very different from kindergarten through twelfth grade. The expansion to a larger program would include multiple grades and require evaluators to assess each grade differently, based on what is most appropriate for that grade.



References

Ertmer, P. A. (2005). Teacher pedagogical beliefs: The final frontier in our quest for technology integration? Educational Technology Research and Development, 53(4), 25-39.

Mohan, R. (2014). Evaluator advocacy: It is all in a day’s work. American Journal of Evaluation, 35(3), 397-403.

Reiser, R. A., & Dempsey, J. V. (2018). Trends and issues in instructional design and technology (4th ed.). New York: Pearson.

Ruggiero, D., & Mong, C. J. (2015). The teacher technology integration experience: Practice and reflection in the classroom. Journal of Information Technology Education, 14, 161-178.
