
EVALUATING COMPUTER EDUCATION PROGRAMS IN THE SCHOOLS

Karen Billings
Executive Editor, Instructional Computing Dept.
School Division, Houghton Mifflin Co., One Beacon St.
Boston, Massachusetts 02108

OVERVIEW

While the word "evaluation" may evoke some unpleasant reactions among educators, the
process of evaluation can indeed be both positive and valuable. An evaluation not only helps
describe a program in a school, but also provides a technique for monitoring and improving that
program. It can also point out the achievements and attitudes of the participants and convey
information about the program to the public.

The major benefit of an evaluation process is that the program staff receive information
that helps them modify the computer education program for the next stage of development.
These programs are newly developed and will undergo revision and modification as the staff
discover what aspects of the programs work best in a particular setting. With an evaluation
process, the teachers and administrators can see the progress they are making as the computer
education program is implemented.

The other benefit is to the taxpayer and those who support the program. Computer
education programs are not only costly initially, but are growing more expensive each year.
Schools are continuing to increase their computer-related purchases. Educators and parents soon
will ask "How well is the computer education program going?" or "What are the students learning
in this course called 'Computer Literacy'?"

The purpose of this paper is to further describe the need for evaluating computer education
programs in the schools, to explore the nature and purpose of such an evaluation process and,
finally, to describe an evaluation procedure that not only responds to the needs of the schools,
but can be implemented immediately.

NEED FOR EVALUATION

Use of computers in schools

The number of microcomputers has risen dramatically over the past five years. Early
surveys by the National Center for Education Statistics, U.S. Department of Education, show that
the number of personal computers available for instructional use by public school students rose
from 31,000 in the fall of 1980 to 96,000 in the spring of 1982. Subsequent annual surveys of all
U.S. public schools by Quality Education Data, Inc., show the following growth pattern in
microcomputer use:

                        1981-82   1982-83   1983-84   1984-85

DISTRICTS
No. with micros           6473      9379     12517     15153
No. without micros       10459      7132      4111      1415
% with micros               38        57        75        91

SCHOOLS
No. with micros          14132     30859     55175     70255
Total schools            85747     83648     82592     81171


The most recent assessment of the number of microcomputers shows that there are now
approximately one million machines in the schools (TALMIS, 1985). While purchasing patterns
have varied from school to school in the last five years, some general trends have emerged.
Schools that purchased two or three computers one year would purchase 10-15 the next year
(TALMIS, 1983). Districts also started centralizing their purchases and receiving large quantity
discounts from the computer companies (TALMIS, 1985). The emphasis shifted away from
hardware acquisition towards the instructional uses of the computers placed in the classrooms.
Software evaluation became an increasingly important task.

The role of computers in the curriculum increasingly was discussed and debated in college
courses, professional journals and trade magazines, in books and at conferences. New journals
appeared that discussed computer uses by students and teachers. Familiar and established
journals incorporated computer topics into their issues. State-wide conferences were held for
computer-using teachers. Computer topics were found on national and regional conference
programs sponsored by mathematics, English, reading, social studies, business education, and
science educators.

School districts set up inservice opportunities for their staff to learn about computers.
Colleges and universities started offering new courses at the undergraduate and graduate levels
to help prepare teachers to use computers, and some of the institutions began offering advanced
degrees in computer education.

Emergence of Computer Education Programs

With increased hardware availability, computing started to grow as a separate subject area
in the schools. New computer courses were developed, new computer teacher and coordinator
positions were created, new professional organizations were started, and, in some states, new
certification requirements for computer teachers were developed.

Separate computer courses have been added to the curriculum in many schools, especially
in grades 7-12. While most of these courses are elective, some are required for high school
graduation. A survey by Electronic Learning (1984) showed that 20 states either recommended
or required the availability of computer courses for students. Texas now requires that students
obtain credit in a one-semester computer literacy course. Other states, such as Tennessee, have
added computer competency to their graduation requirements. The District of Columbia's Board
of Education has approved a five-year plan that calls for education technology activities, as well
as computer literacy requirements for student graduation.

To carry out state requirements or recommendations, schools created a number of
computer-related teaching and administrative positions. According to Quality Education Data,
Inc. (1985), the number of microcomputer supervisor positions grew from 6,600 (1981-82) to
10,765 (1982-83) to 11,361 (1983-84) to 13,015 (1984-85). Many of these computer educators
joined organizations that addressed their classroom needs with conferences and professional
journals.
Some states, such as Texas and Washington, D.C., developed new certification requirements
for the teachers of computer literacy or computer science courses. The District of Columbia's
Board of Education five-year plan calls for new teacher certification and tenure requirements. An
SRCEI survey (1983) showed that sixteen states had recommended, or had pending, legislation
concerning computer literacy requirements for teacher certification.

During the past five years, many schools or districts have been implementing plans to teach
students some computer literacy skills, either (1) as a separate course or unit or (2) within an
already existing course. These two different processes have become known as "stand-alone" vs.
"integrated" approaches. Some schools deliberately choose one approach or the other, while some
are using both. Courses developed to teach computer-related objectives can be called Computer
Awareness, Computer Literacy, Computer Applications, Computer Science or Computer
Programming. In the past, courses that "integrate" computer-related objectives into their


existing structure have been the mathematics and business education courses. However, English
teachers have started incorporating word-processing uses into their writing courses, and social
studies teachers have been asking students to use data base management systems or spreadsheets
to explore certain ideas or draw conclusions from sets of data.

Need for Evaluation of Computer-Related Curricula

Computer education programs are costly and are growing more expensive. What is the cost
so far? Given the number and types of microcomputers in schools, one can calculate the total
value of hardware to be $1.3 billion. Software sales to schools have been reported at $110
million for the 1983-84 school year and at $177 million for the 1984-85 school year (TALMIS,
1985).

There are also the costs of putting the computers in the schools. Every hardware purchase
carries with it "hidden" costs: maintenance contracts, repair work, electrical work, added
insurance and security. Therefore, some computer coordinators request an additional 10% over
each hardware purchase to pay for these "set-up" costs. Assuming that 40% of the schools have
realized these extra costs over the past three years, "set-up" costs could easily add another
$48.6 million to the estimated costs of a computer program.

There are also large costs involved in implementing curriculum and staff development. If
each school spends $1,000 for each of four years to train teachers, that expense adds up to $400
million. If half of the schools hire one computer resource teacher to teach all the computer
classes, do additional staff training, and perform administrative tasks, at a cost of $15,000 a year
for each of the past two years, the cost totals $1.5 billion. If, for the past two years, each
school spends about $350 for a set of computer books or other instructional materials, this adds
$70 million to a computer education budget.

Adding up just the "real" and easily identified costs of computer use in the schools then
gives the following picture:

                                 Total Through 84-85 School Year

Computer Hardware                     $1,300,000,000
Computer Software                        327,000,000
Putting in Hardware                       48,600,000
Teacher Training                         400,000,000
Computer Teachers                      1,500,000,000
Instructional Materials                   70,000,000

Total                                 $3,645,600,000

Assume that one-half of the computer use in schools is to teach "computer literacy" or
other computer-related objectives, while the other half is for teaching traditional skills via
computer instruction. The chart would then be revised to the following:

                             Percentage       Total Cost
                             of Total         for Computer Literacy

Computer Hardware               50%            $  650,000,000
Computer Software               50%               164,000,000
Putting in Hardware             50%                24,300,000
Teacher Training                50%               200,000,000
Computer Teachers               80%             1,200,000,000
Instructional Materials        100%                70,000,000

Total                                          $2,308,300,000
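The arithmetic behind these two charts is simple enough to check mechanically. The
following short program is a minimal sketch in Python; the dollar figures and literacy
shares are taken from the charts above, and all names are illustrative only:

    # Rough check of the two cost charts above. The dollar totals and the
    # share attributed to computer literacy come straight from the charts.
    costs = {                      # (total through 84-85, literacy share)
        "Computer Hardware":       (1_300_000_000, 0.50),
        "Computer Software":       (  327_000_000, 0.50),
        "Putting in Hardware":     (   48_600_000, 0.50),
        "Teacher Training":        (  400_000_000, 0.50),
        "Computer Teachers":       (1_500_000_000, 0.80),
        "Instructional Materials": (   70_000_000, 1.00),
    }

    grand_total = sum(total for total, _ in costs.values())
    literacy_total = sum(total * share for total, share in costs.values())

    print(f"Total through 84-85 school year: ${grand_total:,}")
    print(f"Total for computer literacy:     ${literacy_total:,.0f}")

The first line prints $3,645,600,000, matching the first chart. The second prints
$2,307,800,000; the second chart rounds the software line (50% of $327 million, or
$163.5 million) up to $164 million, which is why its printed total reads $2,308,300,000.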


If schools have already spent almost $2-1/2 billion to teach computer literacy skills to
students, what will the cost total by 1990?

Besides justifying the cost, there is another reason to evaluate computer education
programs: the need for information. Educators understand the importance of periodically
evaluating educational programs, and computer instruction is no exception. Watt (1984) states:

     As computers settle permanently in schools, the evidence that
     students respond to the machines in the instructional process is easy to
     find. But, as it is with all projects, the time is rapidly approaching
     when we must evaluate the effects of computers in the classroom in
     more detail.

Teachers want to review their computer courses or computer-related instruction to see
what needs to be changed from one semester to the next. Principals and computer coordinators
want to review their computer program periodically to see if the goals and objectives should be
changed, to see what type of teacher/staff training is needed next, to find out if the appropriate
hardware, software, or print materials are being used, and to determine if students are learning
what's expected of them. District coordinators, program evaluators, and other administrators
want to know how the program is going so that they can share the information with the
community or make a report to the school board or the funding agency.

In fact, program development should always include an evaluation phase which provides
information that can be used in the next stage of planning.

    program          program            program
    planned  --->    implemented --->   evaluated
       ^                                    :
       :....................................:

In this way, the evaluation information helps improve the computer education program from
year to year.

Schools, then, have a need to gather information about their computer-related curriculum,
whether it be a computer course, a set of courses, computer instruction that is integrated into
other instruction, or a district-wide K-12 computer education program. People in the schools
therefore need a procedure and a set of tools to gather and use information about their program.
This procedure will help diagnose the effective, uncertain, and problem areas in a program so that
the people involved can then make specific recommendations for improvements in the program.

DEVELOPMENT OF THE PROCESS

What is the process?

An evaluation process should include procedures and tools that will help educators gather
and use information about their computer education program. The objectives of the process
should be twofold:

1. to gather information about the major components of a computer
   education program, and

2. to use the information to develop conclusions and recommendations
   about the components of the program.


The computer education program may be at a very early stage of development. If so, the
process will help educators react to a recently written program plan or a list of newly developed
goals and objectives. If a program has been implemented for a number of years, and has the
courses, staff, hardware, software, and print materials all in place, the evaluation process can be
used to examine all of the components of the program and see how well each component is
working.

The process described in this paper is not designed to give final data or evidence for
continuing a program. Instead, the process shows how to gather information about a program
which will help determine if, and how, it should be modified for the next year.

The process may be somewhat different from one used to evaluate computer-assisted
instruction (CAI) programs, where computers are used to teach traditional content topics in areas
such as reading or mathematics. While the same evaluation procedures and tools may be applied
to a CAI program, the process discussed here is designed to look at the use of computers in the
new content area called "computer education."

Research vs. evaluation

Should schools be conducting a research or an evaluation project? It depends on the kinds
of questions they are asking and the kinds of decisions they need to make.

The goals of most research studies are to obtain generalizable knowledge or to test the
relationship between two or more things. Researchers investigate questions such as: "How does
the use of a word processor affect a child's writing skill?" or "How does the programming process
affect a student's problem solving ability?" or "What is the computer's impact on our nation's
schools?" The process that researchers would use to answer those questions would take a great
deal of time and expertise in research techniques.

The goal of a program evaluation is generally to monitor a program to determine whether it
can be improved or if it should be maintained or terminated. Evaluators typically collect
information about the program to see how it is being implemented and to form an opinion of its
value. Program evaluators answer such questions as: "Are computer courses offering our
students what they should?" or "Is the staff inservice program doing a sufficient job of giving our
teachers the computer-related skills they need?"

Evaluators can be teachers and administrators from within the school system, resource
personnel from regional education agencies or evaluation experts from outside sources such as a
university or state department of education. A classroom teacher can evaluate his or her own
course to see how it should be changed, if at all, the next time it is taught. For a district's
computer education program, the evaluator could be someone from the district's curriculum or
research staff or someone with training or experience in program evaluation.

So, if the questions relate to how well a program is working or if any modifications may be
needed in a program, an evaluation would be conducted.

Summative vs. formative evaluations

Evaluations are sometimes described as being formative or summative. What is the
difference? Which type of evaluation is appropriate for a computer education program?

A summative evaluation typically yields final information about the result of the program. A
summative report will document the program's implementation at the end of a trial period or
evaluate a program's degree of success after it has been refined. The reports are more formally
written and provide summary statements to the program planners and decision makers.

A formative evaluation is typically designed to give the program planners and implementers
an idea of how well their program is working, particularly in its initial stages. A formative report


usually describes program objectives and content, and indicates the progress in student attitude
and achievement. It then identifies the components in the computer education program that work
well and the ones that need modification. A formative evaluation will be most useful at the end
of the first and second year of a program, but will also have use in the following years.

If the purpose of an evaluation is to gather information about the progress of a computer
education program, a formative evaluation should be conducted. The results of a formative
evaluation will also be good baseline data if a summative evaluation is done later.

What evaluation processes have occurred?

In planning a computer education program, educators typically state a rationale for their
program, then submit a budget for hardware and software and an outline for staff training. While
most plans state the need to evaluate the program, few of them offer any straightforward
procedures that will help monitor or assess the implementation of that program.

State leadership has not been as specific in the program evaluation phase as it has been in
the planning and implementation phases of computer programs. For example, the State of
Minnesota passed the Technology and Educational Improvement Act, which allocates funds for the
planning and implementation of technology in the schools. Each district's request for funds must
also include procedures for evaluating its efforts in technology and reporting to its community. In
discussing the components of the evaluation plan, the Department of Education suggests, "Each
school district will want to design an evaluation and reporting plan which best suits their needs."
Other than to state that the evaluation and reporting plans should address WHAT is being
evaluated, HOW the data are to be collected, and WHO will be receiving the report, the
Department of Education provides no real procedures or techniques for educators to use in the
evaluation and reporting process (Minnesota, 1984).

A survey of recent evaluation materials shows that no materials are commonly available
that describe the process of evaluating computer education programs. While there are books on
program evaluation, particularly for mathematics or reading programs, no books describe the
process of evaluating computer education programs.

To respond to questions about student achievement, some organizations have prepared
paper-and-pencil computer literacy tests. The Minnesota Educational Computing Corporation
(previously Consortium) developed and distributed a set of objective test items. The Northwest
Regional Educational Laboratory, in cooperation with the Department of Defense Dependents
Schools, published a set of computer literacy tests for grades 5, 7, 9, and 11, as well as a
computer science exam. The National Center for Education Statistics published a set of survey
and resource inventory items, designed to gather data on computer use and computer literacy in
elementary and secondary education.

The most comprehensive testing program may take place with the upcoming "Computer
Competence" test from the National Assessment of Educational Progress. In 1986, 90,000 children
ages 9, 13, and 17 will be tested on the subject of computers, along with mathematics, science,
and reading. The resulting data will provide baseline information about students' access to
computers and about their general knowledge of computers.

While student assessment and student achievement data are important information, they
constitute only one aspect of a computer education program in a school. Other aspects that need
to be reviewed are the program's goals and objectives, the staff development process, and the
instructional (hardware, software and print/video) materials.

Many secondary schools use the Evaluative Criteria by the National Study of School
Evaluation to find out how well they are achieving their program objectives and to what extent
they are meeting the needs of their students and the community. Because the most recent
edition, published in 1978, does not include computer education with its other educational
programs, evaluators cannot use the NSSE checklists and evaluation items to assess the computer
curriculum or program.


The need, then, clearly exists for schools to have a set of assessment procedures with
evaluation tools that can be used in monitoring the progress of a computer education program.
Computer educators who use such procedures should be able to accurately describe the computer
education process that occurs in their schools and use the data to revise and to improve the
computer education program.

Concerns about the evaluation process

Will everyone agree that it is time to evaluate the computer education program? Probably
not. Here are some of the questions that are typically asked or some of the attitudes that are
expressed when the topic of evaluation comes up, followed by an evaluator's response.

Question: Isn't it too early to evaluate our computer education program? We just
started implementing it.

Response: Every educational program goes through stages of development. Regardless of the
stage of the current program, an appropriate evaluation process will determine the progress that
the program is making. You may want to look at very different aspects of the program in its first
year as compared to its second or third year. An evaluation may also suggest changes so that
the program continues to improve.

Question: Why do we need to evaluate our computer education program at all? It
seems to be going O.K.!

Response: Increasing numbers of decisions are required regarding hardware, software and book
purchases, the computer courses to be offered, and the amount or type of staff training.
Experiences you have had to date will help you make the best decisions that you can. However,
information collected over a period of time will prepare you to make even better decisions and to
support them in the face of lack of understanding and even opposition and challenge.

Question: How will the evaluation information be used? Will it be part of the
performance reviews of our teachers?

Response: The evaluation process for the computer education program should be separated from
any regular or general evaluation of teachers. The computer education program will be evaluated
on the attainment of its goals and objectives, staff development, instructional materials and effect
on students.

Question: Is evaluation really worth the effort? Most evaluation reports are too long,
filled with incomprehensible statistics and placed on shelves where no one ever looks
at them!

Response: A process should be designed to help you evaluate your particular computer education
program. The results will be most valuable to those who have to make decisions regarding the
program. The value of the report will also be in direct proportion to the evaluator's knowledge
of the needs of the audience.

Question: Isn't evaluation just another word for comparing pre-test and post-test
scores? If you need "hard data", it is easier to focus on easily measurable situations.

Response: Tests are just one set of tools to use in monitoring the effectiveness of a program.
There are many others: observations, questionnaires, interviews and document review. These
tools can give you both numerical and descriptive information about the components of the
computer education program: the goals and objectives, staff development, instructional materials,
and student achievement.


But, even if there are questions and concerns about the evaluation process, the information
that is gathered will stand on its own as valuable information to computer educators, the program
staff and to the school community. The important thing is to get started!

STEPS IN THE PROCESS

An evaluation process can have three major parts: (1) Planning the evaluation--reviewing
the evaluation process and the components of the computer education program, then setting up
the exact steps to be followed; (2) Gathering the information--developing the questions, deciding
which tools are appropriate, then designing and using the tools; and (3) Using the information--
summarizing the data collected, making the recommendations, and reporting the results.

Planning the evaluation

1. Reviewing the evaluation process.


The first consideration in the planning stages of any evaluation is the purpose of the
evaluation itself. Whether it is to comply with a funding agency's request or to revise the
program goals and objectives, the purpose must be clear to everyone who will be involved.

Consider everyone's assumptions and concerns about the evaluation process. Unless these
assumptions are dealt with adequately in the beginning, the necessary cooperation won't be there
when the actual evaluation occurs. Find out the concerns, such as the ones described earlier,
and discuss them openly with the participants. Make sure that all participants who contribute to
the evaluation process get something out of the process in the end.

2. Reviewing the components of the computer education program. Before any data
can be collected, specific evaluation questions must be developed. Rather than ask questions
about the computer education program in general, it is better to develop questions about certain
aspects or components of the program. The major components are: program goals and objectives,
staff development, instructional materials and student achievement. Schools that want to review
only one or more of these components can do that. Likewise, schools that want information about
any additional program components can use the same steps to gather that information.

Program goals and objectives: The program's goals and objectives change as computer
hardware costs decrease, as the access to computers increases, as new programming languages or
software packages become available, and as the results and reactions from students are
reviewed. By evaluating the goals and objectives, the following information can be offered to the
school:

a. revised program objectives, prioritized with the school's
   philosophy, the community needs, the access to appropriate
   instructional materials, or the teacher training in mind.
b. a comparison of the intended goals and objectives with those actually
   implemented or achieved.

Staff Development:

Information needs to be gathered about the assistance given to all staff people who work
with students in the computer education program. The following feedback could be offered:

a. information on the preparation of teachers and staff, as well as the
   amount of additional preparation needed by each of them.
b. the extent to which the teachers are informed about the recent
   developments in computing, computer-related instruction and techniques
   for teaching computing.
c. suggestions for training programs that would be valuable to the
   people involved.


Instructional Materials:

School personnel select and use computer hardware, software and print/non-print materials
for use in computer-related courses. How will people know if the materials are being used, if
those materials are satisfactory and when or how the materials should be changed? An
evaluation process can provide information such as the following:

a. the extent to which the computer hardware, software and print
   materials actually support the objectives of the program.
b. the extent to which the instructional materials are really being used
   by the teachers or program staff.
c. suggestions for purchasing additional or different materials or for
   changing the use of the existing materials.

Student Achievement:

Teachers and parents want the computer education program to have a positive effect on
students enrolled in the program. They would like to know that the students were given
adequate opportunities to use the instructional materials and that the students learned something
by using the materials. A primary focus should be on evaluating the extent to which students
attained objectives set for that course, whether the objectives be cognitive, affective or psycho-
motor. In general, the evaluation of student achievement should include information on the
following:

a. evidence that students are learning what is expected.
b. identification of achievements that the program produced, but which
   had not been identified as program objectives.
c. changes in attitudes (planned or not) that the program produced.

3. Setting up the exact steps to be followed. Before gathering the information, there
are certain steps and guidelines to follow in doing a formative evaluation. While the process
varies from school to school, there are common steps to follow.

a. Know what information must be provided. What do people want to
   know? How much do they want to know?
b. Know what procedures must be followed. Is there a standard
   procedure or does one need to be created?
c. Remember who the primary audience is for the evaluation. All the
   information collected should be useful. The evaluation needs to be
   focused on one primary group of people - those who want the
   information.
d. Know what the evaluation really means to the school. Make sure
   that the results from the evaluation will be used in the way intended -
   to help monitor and improve the computer education program.
e. Get a description of the purpose and process in writing. A
   description helps focus the evaluation process and also helps
   communicate the needed information about it.
f. Know what the audience will accept as credible information. Do
   people reading the report expect to see quantitative data as well as
   qualitative narratives of the program? Reach agreement early on a
   reporting style.

Gathering the information

4. Developing the questions.

Questions need to be formulated about each of the components to be reviewed. The
questions should be relevant, so that people want to know the responses, and specific, so that the
responses are obtainable. Here are some examples of questions that are often asked about the
computer education program and further refinements of the questions so that appropriate
information can be gathered:


Question: How is the computer education program going?

Better questions: Is the program fulfilling its goals and objectives?
Should the set of goals and objectives (learning outcomes) be changed
in any way?

Even better questions: Does the program staff believe that each goal,
objective or learning outcome is being met? If so, how well was it
accomplished? If not, why not?

How does the set of actual objectives (learning outcomes) compare to
those intended by the program planners?

Is each goal, objective, or learning outcome consistent with the
school's educational philosophy, the access to instructional materials,
and with the extent of staff training?

What goals, objectives, and learning outcomes should be added to the
program, revised for the program, or deleted from the program?

5. Deciding which tools are appropriate. To select the "right" tools with which to
evaluate your computer education program, you need to look at the questions you want to answer
and consider the appropriateness of various tools: questionnaires, observations, interviews,
document review, tests. Besides having a distinct set of purposes, each of the tools has some
relative advantages and disadvantages given the size of groups and the types of questions you are
trying to answer. For example, to save time and to be able to quantify the responses, you may
give questionnaires to a large number of people as opposed to interviewing them. Or, you may
choose to review written records of computer use rather than rely on teacher or student recall
during an interview.

The following chart summarizes some of the major advantages and disadvantages for each
type of tool that has been described so far. Remember that it is not as important for you to
choose the "right" tool as it is for you to develop the tool as best you can and to then use it as
appropriately as you can. Almost any tool can help you get responses to the kinds of questions
you have been developing.

Tool           Advantages                      Disadvantages

Questionnaire  - easy to administer            - depth of information
               - convenient for large            limited
                 numbers of people

Observation    - yields information for        - time consuming
                 "hard-to-test" aspects        - labor intensive

Interview      - responses can yield           - time consuming
                 more detailed information     - labor intensive

Document       - forms already designed        - may not be complete
Review         - information previously          if gathered before
                 gathered                        questions known

Test           - easy to administer            - not readily available
               - results easily accepted


The first step in selecting your tools is to list each question that you want answered and
then describe one or two methods for answering each of the questions. You then group together
the questions that can be answered by the same type of instrument, such as a
questionnaire. It is probable that one questionnaire could contain enough items to help you
respond to several of your major questions.

6. Designing and Using the Tools.

Once you have decided what questions you are going to try to answer and what types of
tools would be most useful, you are ready to purchase, create or revise the tools that you need.
Since few, if any, commercial tools are available for your use, you will be designing most of the
tools that you need in your evaluation.

There are a few examples of available tools. One of the most directly applicable would be
end-of-chapter tests or pre-post tests from commercial publishers of print materials for the
computer literacy courses in the schools. There are also many forms that districts have
developed for their inservice training programs or for selecting textbooks and other print
materials. And there are many forms for evaluating and selecting computer hardware and
computer software.

Given the list of goals and objectives, you could discover whether or not they were actually
implemented and, if so, how, where and when, and, if not, why not. You may also find that other
goals and objectives were implemented instead and will want to list them for discussion when you
develop your recommendations.

The following format could be used to measure the actual implementation of the program
goals and objectives.

Goal or              Implementation           Actual
Objective            Planned                  Implementation

1. To be able to     Hands-on time in the     Every student received two
   operate a         computer lab             90-min lab periods per week
   computer

2. To be able to     Access to hardware       Students given instruction
   use word-         and word-processing      and access to hardware and
   processing        software                 software for four 40-min
   software to                                periods total
   create and edit
   a document

7. Summarizing the Data.

The information that you collect has to be summarized so that you can discuss your results,
either verbally or in a written report. Some of your information may not come in a format that
can be analyzed quickly, but the more you have structured the response formats in your tools, the
easier your analysis will be.

In general, the responses to very open-ended questions will have to be categorized into a
number of types of answers and then tallied and totalled. Responses on a numerical rating scale
on a questionnaire, however, are already categorized and are thus set up to tally and total easily.


The first thing you should do is to review the questions that you are trying to answer. If
you are using the program implementation description responses in the two right-hand columns,
compare the actual implementation to what was planned and summarize the information in this
way (a minimal sketch of this comparison follows the list):

1. How much (what percent) of what was planned actually occurred?
2. What were the things that were successfully done?
3. What did not happen as planned?
4. What things happened that were not planned?
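As an illustration of this comparison, a few lines of Python can answer the first three
questions directly. The records below are hypothetical, not drawn from any actual
program; they follow the goal/planned/actual format shown earlier:

    # Hypothetical records in the goal/planned/actual format shown earlier.
    # The final flag marks whether the planned implementation occurred.
    records = [
        ("operate a computer",           "hands-on time in the lab", True),
        ("use word-processing software", "access to hardware",       True),
        ("write a simple program",       "two weeks of instruction", False),
    ]

    done     = [goal for goal, _, happened in records if happened]
    not_done = [goal for goal, _, happened in records if not happened]

    pct = 100 * len(done) / len(records)
    print(f"{len(done)}/{len(records)} ({pct:.0f}%) of the plan occurred")
    print("done as planned:", ", ".join(done))
    print("not done as planned:", ", ".join(not_done))

Question 4, the unplanned outcomes, would come from a separate list gathered through
interviews or observations rather than from the planned/actual records themselves.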

If rating scales were used to respond to the question, "How well did we meet each of our
objectives?" you would review the results from those forms. The rating scale makes the responses
easy to tally, state as a fraction, and then convert to a percentage. For example, you may make
statements such as: "20/25 or 80% of the objectives were more than satisfactorily met in the
computer literacy class. These objectives were taught for 16 out of the 18 weeks in the course."

Continue with information about objectives met satisfactorily and less than satisfactorily. If
all the objectives that were not reached satisfactorily or more than satisfactorily have
something in common, mention this information.
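Tallying such ratings is mechanical. The sketch below, again in Python and with
invented ratings (and an assumed cutoff of 4 on a 1-5 scale for "more than
satisfactorily met"), reproduces the kind of statement quoted above:

    # Invented 1-5 ratings for 25 objectives (5 = fully met); a rating of
    # 4 or 5 is counted as "more than satisfactorily met."
    ratings = [5, 4, 4, 5, 3, 4, 5, 4, 2, 4, 5, 4, 4, 3,
               5, 4, 4, 5, 3, 4, 5, 4, 3, 4, 5]

    well_met = sum(1 for r in ratings if r >= 4)
    pct = 100 * well_met // len(ratings)
    print(f"{well_met}/{len(ratings)} or {pct}% of the objectives "
          "were more than satisfactorily met")

With these ratings the program prints "20/25 or 80% of the objectives were more than
satisfactorily met," matching the form of the example statement.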

8. Making the Recommendations.

Just as asking the right questions and developing the right tools was important to collecting
information, summarizing the information and developing the recommendations is important to the
improvement of the computer education program. The information that you get from the
evaluation is supposed to be used for program improvement, but people aren't going to just
"know" what to do with the data you collected. You have to tell them.

Formulate your recommendations around the four components of your computer education
program. Just as you formed the evaluation questions and designed the tools around the four
components of the computer education program, you can do the same with your recommendations.

1. Recommend first those things that don't need to change because they work well.

Where are the successes in the computer education program? There are probably quite a
few of them and they should be discussed first. You will have gathered and summarized some
data that indicate some aspects of the program don't need to be changed. Discuss those and
acknowledge not only the successes, but the factors behind them, if you know what they are.

2. Recommend next those things that would help reinforce or extend the program.

You will discover some factors behind some of the "successes" in the program. You can use
that information to recommend additional or parallel support in areas that would reinforce or
extend the program. For example, if some of the software objectives weren't met because
students did not have the needed access, recommend that the school purchase the necessary
software and print materials.

3. Offer suggestions for any revisions or restructuring of the program.

If you uncover some areas of the program that need major revisions or restructuring, bring
that to the attention of the school. This is one more way that the program will be made better.
It is, granted, much easier to talk about the success stories than to suggest that something needs
fixing.

9. Reporting the Information.


Your report of the results of the data collection is your final responsibility. Effective
reporting of the results adds to their power and influence; poor reporting of the best of results is
likely to make them ineffective.


The report needs to be delivered to the decision makers in time for them to be able to
study your conclusions and recommendations and then apply them in the decision-making process.

There are as many ways to report your results as there are types of computer education
programs and types of evaluation designs. Your report will help shape the content and the
sequence in which it is presented.

In general, you will want to include the following pieces of information in a report:

Cover page with identifying information
Summary of findings
Background on the computer education program
Summary of the evaluation process
Summary of the results
Recommendations

Remind yourself, before you begin, who your audience is, what they need to know, and
when.

Make a list of the different people who will be getting your results and why. List the
questions that they want answers to, as well as the information you think they should have.

Use a reporting strategy consistent with your audience's needs. How much do they want to
read or how long do they want to listen to you? Try to give your audience only important
information. They will want to know only what is working well and why, and what needs to be
changed and why.

Finally, make it clear to the reader where the important information is located, so the
report can be read quickly and easily. That ensures that more people will read it, so your work
is more worthwhile.

SUMMARY

The need for evaluating computer education programs is clear. The need for information is
great, whether the information is used to justify increasing costs of the program or to revise and
improve the program. The benefits of evaluating computer education programs are many. The
program staff have data that help them modify their plan for the next stage of the program. The
teachers and administrators can see the progress their program is making. Parents can see the
results of spending tax dollars on a new program. Students can get better instruction and the
chance to learn even more about computing.

The evaluation process, as outlined in this article, is very "do-able" in the schools now. It
is time for schools to take the necessary steps and proceed with an evaluation process that gives
them the information they need and the opportunity for an even better computer education
program.


BIBLIOGRAPHY

Barbour, A. and the Editors of Electronic Learning. "Computing in America's Classrooms 1984,"
Electronic Learning. NY: Scholastic, October 1984.

Bellack, A.A. and Kliebard, H.M. (Editors). Curriculum and Evaluation. Berkeley, CA:
McCutchan, 1977.

Fink, A. and Kosecoff, J. An Evaluation Primer. Beverly Hills, CA: Sage Publications, 1978.

Knapper, C.K. Evaluating Instructional Technology. New York: Halsted Press, 1980.

Minnesota Department of Education. Planning for Educational Technology. White Bear Lake, MN:
Minnesota Curriculum Services Center, 1983.

Morris, L.L. and Fitz-Gibbon, C.T. Evaluator's Handbook. Beverly Hills, CA: Sage Publications,
1978.

National Center for Education Statistics. Instructional Use of Computers in Public Schools.
Washington, D.C.: U.S. Department of Education, 1982.

National Study of School Evaluation. Evaluative Criteria for the Evaluation of Secondary
Schools. Falls Church, VA: National Study of School Evaluation, 1978.

Popham, W.J. (Editor). Evaluation in Education. Berkeley, CA: McCutchan Publishing Co., 1973.

Quality Education Data, Inc. Microcomputer Usage in the Schools, 1984-85. Denver, CO: Quality
Education Data, Inc., 1985.

Scriven, M. "The Methodology of Evaluation," in Worthen and Sanders, Educational Evaluation:
Theory and Practice. Worthington, OH: Charles A. Jones Publishing Co., 1973.

SRCEI. Schooling and Technology: State Level Policy Initiatives, Vol. I. Research Triangle Park,
N.C.: Southeastern Regional Council for Education Improvement, July 1983.

TALMIS, Inc. TALMIS Industry Updates: "Centralized Purchasing in Education: Pros & Cons,"
March 1985; "School Market Strong," September 1984.

TALMIS, Inc. The K-12 Market for Microcomputers and Software. Chicago: TALMIS Report,
Sept. 1984.

Watt, D. "Computer Evaluation Cometh," Popular Computing, July 1984.

Wolf, R.M. Evaluation in Education. New York: Praeger Publishers, 1984.
