Karen Billings
Executive Editor, Instructional Computing Dept.
School Division, Houghton Mifflin Co., One Beacon St.
Boston, Massachusetts 02108
OVERVIEW
While the word "evaluation" may evoke some unpleasant reactions among educators, the process of evaluation can indeed be both positive and valuable. An evaluation not only helps describe a program in a school, but also provides a technique for monitoring and improving that program. It can also point out the achievements and attitudes of the participants and convey information about the program to the public.
The major benefit of an evaluation process is that the program staff receive information that helps them modify the computer education program for the next stage of its development. These programs are newly developed and will undergo revision and modification as the staff discover what aspects of the programs work best in a particular setting. With an evaluation process, the teachers and administrators can see the progress they are making as the computer education program is implemented.
The other benefit is to the taxpayers and those who support the program. Computer education programs are not only costly initially, but are growing more expensive each year. Schools are continuing to increase their computer-related purchases. Educators and parents soon will ask "How well is the computer education program going?" or "What are the students learning in this course called 'Computer Literacy'?"
The purpose of this paper is to further describe the need for evaluating computer education programs in the schools, to explore the nature and purpose of such an evaluation process and, finally, to describe an evaluation procedure that not only responds to the needs of the schools, but can be implemented immediately.
NEED FOR EVALUATION
The number of microcomputers has risen dramatically over the past five years. Early surveys by the National Center for Education Statistics, U.S. Department of Education, show that the number of personal computers available for instructional use by public school students rose from 31,000 in the fall of 1980 to 96,000 in the spring of 1982. Subsequent annual surveys of all U.S. public schools by Quality Education Data, Inc., show the following growth pattern in microcomputer use:
The most recent assessment of the number of microcomputers shows that there are now approximately one million machines in the schools (TALMIS, 1983). While purchasing patterns have varied from school to school in the last five years, some general trends have emerged. Schools that purchased two or three computers one year would purchase 10-15 the next year (TALMIS, 1983). Districts also started centralizing their purchases and receiving large quantity discounts from the computer companies (TALMIS, 1985). The emphasis shifted away from hardware acquisition towards the instructional uses of the computers placed in the classrooms. Software evaluation became an increasingly important task.
The role of computers in the curriculum increasingly was discussed and debated in college courses, professional journals and trade magazines, in books and at conferences. New journals appeared that discussed computer uses by students and teachers. Familiar and established journals incorporated computer topics into their issues. State-wide conferences were held for computer-using teachers. Computer topics were found on national and regional conference programs sponsored by mathematics, English, reading, social studies, business education, and science educators.
School districts set up inservice opportunities for their staff to learn about computers. Colleges and universities started offering new courses at the undergraduate and graduate levels to help prepare teachers to use computers, and some of the institutions began offering advanced degrees in computer education.
With increased hardware availability, computing started to grow as a separate subject area in the schools. New computer courses were developed, new computer teacher and coordinator positions were created, new professional organizations were started, and, in some states, new certification requirements for computer teachers were developed.
Separate computer courses have been added to the curriculum in many schools, especially in grades 7-12. While most of these courses are elective, some are required for high school graduation. A survey by Electronic Learning (1985) showed that 20 states either recommended or required the availability of computer courses for students. Texas now requires that students obtain credit in a one-semester computer literacy course. Other states, such as Tennessee, have added computer competency to their graduation requirements. The District of Columbia's Board of Education has approved a five-year plan that calls for education technology activities, as well as computer literacy requirements for student graduation.
During the past five years, many schools or districts have been implementing plans to teach students some computer literacy skills, either (1) as a separate course or unit or (2) within an already existing course. These two different processes have become known as "stand-alone" vs. "integrated" approaches. Some schools deliberately choose one approach or the other, while some are using both. Courses developed to teach computer-related objectives can be called Computer Awareness, Computer Literacy, Computer Applications, Computer Science or Computer Programming. In the past, the courses that "integrate" computer-related objectives into their
existing structure have been the mathematics and business education courses. However, English teachers have started incorporating word-processing uses into their writing courses, and social studies teachers have been asking students to use data base management systems or spreadsheets to explore certain ideas or draw conclusions from sets of data.
Computer education programs are costly and are growing more expensive. What is the cost so far? Given the number and types of microcomputers in schools, one can calculate the total value of hardware to be $1.3 billion. Software sales to schools have been reported at $110 million for the 1983-84 school year and at $177 million for the 1984-85 school year (TALMIS, 1985).
There are also the costs of using the computers in the schools. Every hardware purchase carries with it "hidden" costs: maintenance contracts, repair work, electrical work, added insurance and security. Therefore, some computer coordinators request an additional 10% over each hardware purchase to pay for these "set-up" costs. Assuming that 40% of the schools have realized these extra costs over the past three years, "set-up" costs could easily add another $88.6 million to the estimated costs of a computer program.
There are also large costs involved in implementing curriculum and staff development. If each school spends $1,000 for each of four years to train teachers, that expense adds up to $400 million. If half of the schools hire one computer resource teacher to teach all the computer classes, do additional staff training, and perform administrative tasks, at a cost of $15,000 a year for each of the past two years, the cost totals $1.5 billion. If, for the past two years, each school spends about $350 for a set of computer books or other instructional materials, this adds $70 million to a computer education budget.
Adding up just the "real" and easily identified costs of computer use in the schools then gives the following picture:

Total Through 84-85 School Year
Total: $3,645,600,000
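The arithmetic behind that total can be sketched in a few lines of Python. The school count of 100,000 is an assumption implied by the paper's own multiplications (e.g., $1,000 x 4 years x 100,000 schools = $400 million), not a figure the paper states directly:

```python
SCHOOLS = 100_000  # assumed number of U.S. public schools, implied by the paper's totals

costs = {
    "hardware": 1_300_000_000,                         # ~1 million machines in place
    "software": 110_000_000 + 177_000_000,             # 1983-84 plus 1984-85 sales
    "setup": 88_600_000,                               # "hidden" set-up costs on hardware
    "teacher_training": 1_000 * 4 * SCHOOLS,           # $1,000 per school per year, 4 years
    "resource_teachers": 15_000 * 2 * (SCHOOLS // 2),  # half the schools, 2 years
    "materials": 350 * 2 * SCHOOLS,                    # books and materials, 2 years
}

total = sum(costs.values())
print(f"Total through 84-85: ${total:,}")  # Total through 84-85: $3,645,600,000
```

The line items reproduce the paper's stated totals exactly, which is why the set-up figure above is reconstructed as $88.6 million.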
Assume that one-half of the computer use in schools is to teach "computer literacy" or other computer-related objectives, while the other half is for teaching traditional skills via computer instruction. The chart would then be revised to the following:

Total: $2,308,300,000
If schools have already spent almost $2-1/2 billion to teach computer literacy skills to students, what will the cost total by 1990?
In fact, program development should always include an evaluation phase which provides information that can be used in the next stage of planning. In this way, the evaluation information helps improve the computer education program from year to year.
Schools, then, have a need to gather information about their computer-related curriculum, whether it be a computer course, a set of courses, computer instruction that is integrated into other instruction, or a district-wide K-12 computer education program. People in the schools therefore need a procedure and a set of tools to gather and use information about their program. This procedure will help diagnose the effective, uncertain, and problem areas in a program so that the people involved can then make specific recommendations for improvements in the program.
An evaluation process should include procedures and tools that will help educators gather and use information about their computer education program. The objectives of the process should be twofold:
The computer education program may be at a very early stage of development. If so, the process will help educators react to a recently written program plan or a list of newly developed goals and objectives. If a program has been implemented for a number of years, and has the courses, staff, hardware, software, and print materials all in place, the evaluation process can be used to examine all of the components of the program and see how well each component is working.
The process described in this paper is not designed to give final data or evidence for continuing a program. Instead, the process shows how to gather information about a program which will help determine if, and how, it should be modified for the next year.
The goals of most research studies are to obtain generalizable knowledge or to test the relationship between two or more things. Researchers investigate questions such as: "How does the use of a word processor affect a child's writing skill?" or "How does the programming process affect a student's problem solving ability?" or "What is the computer's impact on our nation's schools?" The process that researchers would use to answer those questions would take a great deal of time and expertise in research techniques.
Evaluators can be teachers and administrators from within the school system, resource personnel from regional education agencies, or evaluation experts from outside sources such as a university or state department of education. A classroom teacher can evaluate his or her own course to see how it should be changed, if at all, the next time it is taught. For a district's computer education program, the evaluator could be someone from the district's curriculum or research staff or someone with training or experience in program evaluation.
So, if the questions relate to how well a program is working or if any modifications may be
needed in a program, an evaluation would be conducted.
A summative evaluation typically yields final information about the result of the program. A summative report will document the program's implementation at the end of a trial period or evaluate a program's degree of success after it has been refined. The reports are more formally written and provide summary statements to the program planners and decision makers.
A formative evaluation is typically designed to give the program planners and implementers an idea of how well their program is working, particularly in its initial stages. A formative report
usually describes program objectives and content, and indicates the progress in student attitude and achievement. It then identifies the components in the computer education program that work well and the ones that need modification. A formative evaluation will be most useful at the end of the first and second year of a program, but will also have use in the following years.
State leadership has not been as specific in the program evaluation phase as it has been in the planning and implementation phases of computer programs. For example, the State of Minnesota passed the Technology and Educational Improvement Act, which allocates funds for the planning and implementation of technology in the schools. Each district's request for funds must also include procedures for evaluating its efforts in technology and reporting to its community. In discussing the components of the evaluation plan, the Department of Education suggests, "Each school district will want to design an evaluation and reporting plan which best suits their needs." Other than to state that the evaluation and reporting plans should address WHAT is being evaluated, HOW the data are to be collected, and WHO will be receiving the report, the Department of Education provides no real procedures or techniques for educators to use in the evaluation and reporting process (Minnesota, 1983).
A survey of recent evaluation materials shows that no materials are commonly available that describe the process of evaluating computer education programs. While there are books on program evaluation, particularly for mathematics or reading programs, no books describe the process of evaluating computer education programs.
The most comprehensive testing program may take place with the upcoming "Computer Competence" test from the National Assessment of Educational Progress. In 1986, 90,000 children ages 9, 13, and 17 will be tested on the subject of computers, along with mathematics, science, and reading. The resulting data will provide baseline information about students' access to computers and about their general knowledge of computers.
While student assessment and student achievement data are important information, they
constitute only one aspect of a computer education program in a school. Other aspects that need
to be reviewed are the program's goals and objectives, the staff development process, and the
instructional (hardware, software and print/video) materials.
Many secondary schools use the Evaluative Criteria by the National Study of School Evaluation to find out how well they are achieving their program objectives and to what extent they are meeting the needs of their students and the community. Because the most recent edition, published in 1978, does not include computer education with its other educational programs, evaluators cannot use the NSSE checklists and evaluation items to assess the computer curriculum or program.
The need, then, clearly exists for schools to have a set of assessment procedures with evaluation tools that can be used in monitoring the progress of a computer education program. Computer educators who use such procedures should be able to accurately describe the computer education process that occurs in their schools and use the data to revise and to improve the computer education program.
Will everyone agree that it is time to evaluate the computer education program? Probably not. Here are some of the questions that are typically asked or some of the attitudes that are expressed when the topic of evaluation comes up, followed by an evaluator's response.
Response: Increasing numbers of decisions are required regarding hardware, software and book purchases, the computer courses to be offered, and the amount or type of staff training. Experiences you have had to date will help you make the best decisions that you can. However, information collected over a period of time will prepare you to make even better decisions and to support them in the face of lack of understanding and even opposition and challenge.
Response: The evaluation process for the computer education program should be separated from any regular or general evaluation of teachers. The computer education program will be evaluated on the attainment of its goals and objectives, staff development, instructional materials and effect on students.
Question: Is evaluation really worth the effort? Most evaluation reports are too long, filled with incomprehensible statistics and placed on shelves where no one ever looks at them!
Response: A process should be designed to help you evaluate your particular computer education program. The results will be most valuable to those who have to make decisions regarding the program. The value of the report will also be in direct proportion to the evaluator's knowledge of the needs of the audience.
Response: Tests are just one set of tools to use in monitoring the effectiveness of a program. There are many others: observations, questionnaires, interviews and document review. These tools can give you both numerical and descriptive information about the components of the computer education program: the goals and objectives, staff development, instructional materials, and student achievement.
But, even if there are questions and concerns about the evaluation process, the information that is gathered will stand on its own as valuable information to computer educators, the program staff and the school community. The important thing is to get started!
An evaluation process can have three major parts: (1) Planning the evaluation--reviewing the evaluation process and the components of the computer education program, then setting up the exact steps to be followed; (2) Gathering the information--developing the questions, deciding which tools are appropriate, then designing and using the tools; and (3) Using the information--summarizing the data collected, making the recommendations, and reporting the results.
Planning the evaluation
Consider everyone's assumptions and concerns about the evaluation process. Unless these assumptions are dealt with adequately in the beginning, the necessary cooperation won't be there when the actual evaluation occurs. Find out the concerns, such as the ones described earlier, and discuss them openly with the participants. Make sure that all participants who contribute to the evaluation process get something out of the process in the end.
Program goals and objectives: The program's goals and objectives change as computer hardware costs decrease, as the access to computers increases, as new programming languages or software packages become available, and as the results and reactions from students are reviewed. By evaluating the goals and objectives, the following information can be offered to the school:
Staff Development:
Information needs to be gathered about the assistance given to all staff people who work with students in the computer education program. The following feedback could be offered:
Instructional Materials:
School personnel select and use computer hardware, software and print/non-print materials for use in computer-related courses. How will people know if the materials are being used, if those materials are satisfactory, and when or how the materials should be changed? An evaluation process can provide information such as the following:
Student Achievement:
Teachers and parents want the computer education program to have a positive effect on students enrolled in the program. They would like to know that the students were given adequate opportunities to use the instructional materials and that the students learned something by using the materials. A primary focus should be on evaluating the extent to which students attained objectives set for that course, whether the objectives be cognitive, affective or psychomotor. In general, the evaluation of student achievement should include information on the following:
Setting up the exact steps to be followed:
Before gathering the information, there are certain steps and guidelines to follow in doing a formative evaluation. While the process varies from school to school, there are common steps to follow.
Gathering the information
Even better questions: Does the program staff believe that each goal, objective or learning outcome is being met? If so, how well was it accomplished? If not, why not?
The following chart summarizes some of the major advantages and disadvantages of each type of tool that has been described so far. Remember that it is not as important for you to choose the "right" tool as it is for you to develop the tool as best you can and to then use it as appropriately as you can. Almost any tool can help you get responses to the kinds of questions you have been developing.
The first step in selecting your tools is to list each question that you want answered and then describe one or two methods for answering each of the questions. You then group together the questions that can be answered by the same type of instrument, such as a questionnaire. It is probable that one questionnaire could contain enough items to help you respond to several of your major questions.
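As a small sketch of this grouping step, the questions and instrument assignments below are hypothetical; the point is the structure, not the particular content:

```python
# Group evaluation questions by the type of instrument that will answer them.
# The questions and instrument choices are hypothetical examples.
questions = [
    ("Are teachers using the software weekly?", "questionnaire"),
    ("Do students enjoy the computer course?", "questionnaire"),
    ("How is lab time actually scheduled?", "interview"),
    ("Were the stated objectives actually taught?", "document review"),
]

by_instrument: dict[str, list[str]] = {}
for question, instrument in questions:
    by_instrument.setdefault(instrument, []).append(question)

# One questionnaire can now carry several of the major questions at once.
for instrument, grouped in by_instrument.items():
    print(f"{instrument}: {len(grouped)} question(s)")
```

The grouping makes it easy to see where a single instrument can serve several questions and where a separate tool is unavoidable.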
Once you have decided what questions you are going to try to answer and what types of tools would be most useful, you are ready to purchase, create or revise the tools that you need. Since few, if any, commercial tools are available for your use, you will be designing most of the tools that you need in your evaluation.
There are a few examples of available tools. One of the most directly applicable would be end-of-chapter tests or pre-post tests from commercial publishers of print materials for the computer literacy courses in the schools. There are also many forms that districts have developed for their inservice training programs or for selecting textbooks and other print materials. And there are many forms for evaluating and selecting computer hardware and computer software.
Given the list of goals and objectives, you could discover whether or not they were actually implemented and, if so, how, where and when, and, if not, why not. You may also find that other goals and objectives were implemented instead and will want to list them for discussion when you develop your recommendations.
The following format could be used to measure the actual implementation of the program goals and objectives.
The information that you collect has to be summarized so that you can discuss your results, either verbally or in a written report. Some of your information may not come in a format that can be analyzed quickly, but the more you have structured the response formats in your tools, the easier your analysis will be.
In general, the responses to very open-ended questions will have to be categorized into a
number of types of answers and then tallied and totalled. Responses on a numerical rating scale
on a questionnaire, however, are already categorized and are thus set up to tally and total easily.
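For the open-ended case, the categorize-then-tally step might be sketched as follows; the category labels assigned to each free-text answer are hypothetical:

```python
from collections import Counter

# Each open-ended answer has already been read and assigned a category
# by the evaluator; the categories here are hypothetical examples.
categorized_answers = [
    "needs more lab time", "software praise", "needs more lab time",
    "scheduling problems", "software praise", "needs more lab time",
]

tally = Counter(categorized_answers)
for category, count in tally.most_common():
    print(f"{category}: {count}")
```

Once the answers are reduced to categories, the tallying and totalling is mechanical, which is exactly why structured response formats save analysis time.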
The first thing you should do is to review the questions that you are trying to answer. If you are using the program implementation description responses in the two right-hand columns, compare the actual implementation to what was planned and summarize the information in this way:
If rating scales were used to respond to the question, "How well did we meet each of our objectives?" you would review the results from those forms. The rating scale makes the responses easy to tally, state as a fraction, and then convert to a percentage. For example, you may make statements such as: "20/25 or 80% of the objectives were more than satisfactorily met in the computer literacy class. These objectives were taught for 16 out of the 18 weeks in the course."
Continue with information about objectives met satisfactorily and less than satisfactorily. If all the objectives that were not reached satisfactorily have something in common, mention this information.
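The tally-and-percentage step can be sketched like this; the objectives and ratings are hypothetical, assuming a five-point scale on which a 4 or 5 means an objective was satisfactorily met:

```python
# Hypothetical end-of-course ratings, 1 (not met) through 5 (more than satisfactorily met).
ratings = {
    "Use a word processor to draft a report": 5,
    "Identify the parts of a computer system": 4,
    "Write a short program": 2,
    # ...one entry per course objective
}

SATISFACTORY = 4  # threshold on the assumed five-point scale
met = sum(1 for rating in ratings.values() if rating >= SATISFACTORY)
print(f"{met}/{len(ratings)} or {100 * met // len(ratings)}% of objectives met")
# 2/3 or 66% of objectives met
```

The same fraction-to-percentage statement the paper gives ("20/25 or 80%") drops straight out of such a tally.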
Just as asking the right questions and developing the right tools was important to collecting information, summarizing the information and developing the recommendations is important to the improvement of the computer education program. The information that you get from the evaluation is supposed to be used for program improvement, but people aren't going to just "know" what to do with the data you collected. You have to tell them.
Formulate your recommendations around the four components of your computer education program. Just as you formed the evaluation questions and designed the tools around the four components of the computer education program, you can do the same with your recommendations.
1. Recommend first those things that don't need to change because they work well.
Where are the successes in the computer education program? There are probably quite a few of them and they should be discussed first. You will have gathered and summarized some data that indicate some aspects of the program don't need to be changed. Discuss those and acknowledge not only the successes, but the factors behind them, if you know what they are.
2. Recommend next those things that would help reinforce or extend the program.
You will discover some factors behind some of the "successes" in the program. You can use that information to recommend additional or parallel support in areas that would reinforce or extend the program. For example, if some of the software objectives weren't met because students did not have the needed access, recommend that the school purchase the necessary software and print materials.
3. Recommend finally those things that need major revision.

If you uncover some areas of the program that need major revisions or restructuring, bring that to the attention of the school. This is one more way that the program will be made better. It is, granted, much easier to talk about the success stories than to suggest that something needs fixing.
The report needs to be delivered to the decision makers in time for them to be able to study your conclusions and recommendations and then apply them in the decision-making process. There are as many ways to report your results as there are types of computer education programs and types of evaluation designs. The purpose of your report will help shape the content and the sequence in which it is presented.
In general, you will want to include the following pieces of information in a report:

Remind yourself, before you begin, who your audience is, what they need to know, and when.
Make a list of the different people who will be getting your results and why. List the questions that they want answers to, as well as the information you think they should have.
Use a reporting strategy consistent with your audience's needs. How much do they want to read or how long do they want to listen to you? Try to give your audience only important information. They will want to know only what is working well and why, and what needs to be changed and why.
Finally, make it clear to the reader where the important information is located, so the report can be read quickly and easily. That ensures that more people will read it, so your work is more worthwhile.
SUMMARY
The need for evaluating computer education programs is clear. The need for information is great, whether the information is used to justify increasing costs of the program or to revise and improve the program. The benefits of evaluating computer education programs are many. The program staff have data that help them modify their plan for the next stage of the program. The teachers and administrators can see the progress their program is making. Parents can see the results of spending tax dollars on a new program. Students can get better instruction and the chance to learn even more about computing.
The evaluation process, as outlined in this article, is very "do-able" in the schools now. It is time for schools to take the necessary steps and proceed with an evaluation process that gives them the information they need and the opportunity for an even better computer education program.
BIBLIOGRAPHY
Bellack, A.A. and Kliebard, H.M. (Editors). Curriculum and Evaluation. Berkeley, CA: McCutchan, 1977.
Fink, A. and Kosecoff, J. An Evaluation Primer. Beverly Hills, CA: Sage Publications, 1978.
Knapper, C.K. Evaluating Instructional Technology. New York: Halsted Press, 1980.
Minnesota Department of Education. Planning for Educational Technology. White Bear Lake, MN: Minnesota Curriculum Services Center, 1983.
Morris, L.L. and Fitz-Gibbon, C.T. Evaluator's Handbook. Beverly Hills, CA: Sage Publications, 1978.
National Center for Education Statistics. Instructional Use of Computers in Public Schools. Washington, D.C.: U.S. Department of Education, 1982.
National Study of School Evaluation. Evaluative Criteria for the Evaluation of Secondary Schools. Falls Church, Virginia: National Study of School Evaluation, 1978.
Popham, W.J. (Editor). Evaluation in Education. Berkeley, CA: McCutchan Publishing Co., 1973.
Quality Education Data, Inc. Microcomputer Usage in the Schools, 1984-85. Denver, CO: Quality Education Data, Inc., 1985.
SRCEI. Schooling and Technology: State Level Policy Initiatives, Vol. I. Research Triangle Park, N.C.: Southeastern Regional Council for Education Improvement, July 1983.
TALMIS, Inc. TALMIS Industry Updates: "Centralized Purchasing in Education: Pros & Cons," March 1985; "School Market Strong," September 1984.
TALMIS, Inc. The K-12 Market for Microcomputers and Software. Chicago: TALMIS Report, Sept. 1984.
Watt, D. "Computer Evaluation Cometh," Popular Computing, July 1984.