At the end of this unit standard you will be able to apply information gathering techniques
for computer system development.
Purpose:
People credited with this unit standard are able to:
Design and conduct an interview for gathering information for computer system
development.
Design and perform an analysis of the results from a questionnaire for gathering
information for computer system development
Gather data from documents for computer system development
Observe a person's behaviour for gathering information for computer system
development
Consolidate the information gathered via different techniques.
The credit value of this unit is based on a person having the prior knowledge and
skills to:
Demonstrate literacy skills at least at NQF level 4.
Demonstrate PC competency skills (End-User Computing modules up to level 3).
Equipment needed:
Learning material, Learner workbook, Pen, Ruler.
PLEASE NOTE: THE USE OF PENCILS OR TIPPEX IS NOT ALLOWED.
IF YOU USE A PENCIL THE VALIDITY OF YOUR WORK COULD BE QUESTIONABLE, AND THIS
COULD LEAD TO FRAUD.
Resources (selective resources might be used, depending on the facilitator and venue
circumstances), one or all of the following can be used:
Your facilitator/mentor
Learning material
Learner workbook
Visual aids
White board
Flip chart
Equipment
Training venue
Assessments:
The only way to establish whether you are competent and have accomplished the specific
outcomes is through continuous assessment.
The given exercises can contain one or more of the following:
Information for you to read
Exercises that require you to have a problem-solving approach to communication
Questions for you to answer
Case studies with questions that follow
The facilitator will tell you which exercise you need to complete each day.
You need to hand in your answers to the facilitator, who will mark them for correctness.
If you do not know an answer, you will have to go back to that particular section in
your learner guide and go over it again.
Ask the facilitator for help, if you do not understand any of the questions asked.
Always remember to give reasons for your answers
Specific Outcome 1 : Design and conduct an interview for gathering information for
computer system development.
The interviewee reports that he or she has understood the interview objectives.
The interviewee reports that he or she has understood the interview questions.
The interview provides answers that meet interview objectives.
The presentation of the interview is appropriate to the interviewee.
Specific Outcome 2 : Design and perform an analysis of the results from a questionnaire
for gathering information.
The respondents report that they understand the questionnaire objectives.
The respondents report that they understand the questions.
The questionnaire responses provide answers that meet questionnaire objectives.
The presentation of the questionnaire is appropriate to the target population.
A summary of questionnaire responses, and a comparison with expected responses,
allows summary statements to be made about the population sample.
Specific Outcome 3 : Gather data from documents for computer system development.
Research notes identify data that meet the specified information requirements, using
an industry-recommended format.
Research notes identify the characteristics of the data and the relationships between
data items.
Research notes identifying data items facilitate access to those data items.
Specific Outcome 4 : Observe a person's behaviour for gathering information for
computer system development
A record of the behaviour identifies events that meet the specified information
requirements, and outlines those events.
A report about the observation compares the outcome of the observation with the
observation objectives.
Specific Outcome 5 : Consolidate the information gathered via different techniques.
The comparison identifies agreement and differences between the information
gathered from different techniques.
Differences are resolved and justified by reviewing the information gathering
techniques.
ASSESSMENT CRITERIA
The interviewee reports that he or she has understood the interview objectives.
The interviewee reports that he or she has understood the interview questions.
The interview provides answers that meet interview objectives.
The presentation of the interview is appropriate to the interviewee.
When you're watching the news at night or reading the paper in the morning, you'll notice that all
the stories have a point in common: They all contain interviews. No matter what subject is being
tackled, there'll always be people willing to be interviewed about it. And that's great, because that
way we can get a sample of what people think and feel about different issues.
Interviews are usually defined as a conversation with a purpose. They can be very helpful to your
organization when you need information about assumptions and perceptions of activities in your
community. They're also great if you're looking for in-depth information on a particular topic from
an expert. (If what you really need is numerical data--how much and how many--a written
questionnaire may better serve your purposes.)
Interviewing has been described as an art rather than a skill or science. In other cases, it has been
described as a game in which the interviewee gets some sort of reward, or simply as a technical skill
you can learn. But no matter how you look at it, interviewing is a process that can be mastered with
practice. This chapter will show you how.
Using an interview is the best way to have an accurate and thorough communication of ideas
between you and the person from whom you're gathering information. You have control of the
question order, and you can make sure that all the questions will be answered.
In addition, you may benefit from the spontaneity of the interview process: interviewees don't
always have the luxury of going away and thinking about their responses, so their answers tend to
be candid and immediate.
Interviews are not the only way of gathering information and, depending on the case, they may not
even be appropriate or efficient. For example, large-scale phone interviews can be time-consuming
and expensive. Mailed questionnaires may be the best option in cases where you need information
from a large number of people. Interviews aren't efficient either when all you need is straight
numeric data; asking your respondents to fill out a form may be more appropriate.
Interviews will not be suitable if respondents are unwilling to cooperate. If your interviewees
have something against you or your organization, they will not give you the answers you want and
may even mess up your results. When people don't want to talk, setting up an interview is a waste of
time and resources. You should, then, look for a less direct way of gathering the information you
need.
You must also be well prepared for traps that might arise from interviews. For example, your
interviewee may have a personal agenda and try to push the interview in a direction that
benefits his or her own interests. The best solution is to become aware of your interviewee's
inclinations before arranging the interview.
Sometimes, the interviewee exercises his or her control even after the interview is done, asking to
change or edit the final copy. That should be a right of the interviewer only. If the subject you're
addressing involves technical information, you may have the interviewee check the final result for
you, just for accuracy.
Your choice of interviewees will, obviously, be influenced by the nature of the information you need.
For example, if you're trying to set up a volunteer program for your organization, you may want to
interview the volunteer coordinator at one or two other successful agencies for ideas for your
program.
On the other hand, if you're taking a look at the community's response to an ad campaign you've
been running, you'll want to identify members of the target audience to interview. In this case, a
focus group can be extremely useful.
Sometimes, being a good interviewer is described as an innate ability or quality possessed by only
some people and not by others. Certainly, interviewing may come more easily to some people than
to others, but anybody can learn the basic strategies and procedures of interviewing. We're here to
show you how.
Interview structure:
First you should decide how structured you want your interview to be. Interviews can be formally
structured, loosely structured, or not structured at all. The style of interviewing you will adopt will
depend on the kind of result you're looking for.
In a highly structured interview, you simply ask subjects to answer a list of questions. To get a valid
result, you should ask all subjects identical questions. In an interview without a rigid structure, you
can create and ask questions appropriate to the situations that arise and to the central purpose of the
interview. There's no predetermined list of questions to ask. Finally, in a semi-structured setting,
there is a list of predetermined questions, but interviewees are allowed to digress.
Now that you've decided how structured you want the interview to be, it's time to decide how you
want to conduct it. Can you do it over the phone, or do you need to do it face-to-face? Would a
focus group be most appropriate? Let's look at each of these interview types in depth.
Face-to-face interviews
Face-to-face interviews are a great way to gather information. Whether you decide to interview
face-to-face depends on the amount of time and resources you have at your disposal.
Some advantages of interviewing in person are:
You have more flexibility. You can probe for more specific answers, repeat questions, and
use discretion as to the particular questions you ask.
You can make sure the interview is complete and all questions have been asked.
Telephone interviews
They're particularly useful when the person you want to speak to lives far away and setting up a
face-to-face interview is impractical. Many of the same advantages and disadvantages of face-to-
face interviewing apply here; the exception being, of course, that you won't be able to watch
nonverbal behavior.
Keep phone interviews to no more than about ten minutes--exceptions to this rule may be
made depending on the type of interview you're conducting and on the arrangements
you've made with the interviewee.
If you need your interviewee to refer to any materials, provide them in advance.
Be extra motivating on the phone, because people tend to be less willing to become
engaged in conversation over the phone.
Identify yourself and offer your credentials. Some respondents may be distrustful, thinking
they're being played a prank.
Write down the information as you hear it; don't trust your memory to write the information
down later.
Speak loudly and clearly, with pitch variation -- don't make it another boring phone call.
Don't call too early in the morning or too late at night, unless arranged in advance.
With the increasing use of computers as a means of communication, interviews via e-mail have
become popular. E-mail is an inexpensive option for interviewing. The advantages and drawbacks of
e-mail interviews are similar to phone interviews. E-mails are far less intrusive than the phone. You
are able to contact your interviewee, send your questions, and follow up on the answers you
receive with a thank-you message. You may never meet or talk to your respondent.
However, through e-mail your chances for probing are very limited, unless you keep sending
messages back and forth to clarify answers. That's why you need to be very clear about what you
need when you first contact your interviewee. Some people may also resent the impersonal nature
of e-mail interaction, while others may feel more comfortable having time to think about their
answers.
Focus groups
A focus group, led by a trained facilitator, is a particular type of "group interview" that may be very
useful to you. A focus group consists of people whose opinions you would like to know; it may be
somewhat less structured than a one-on-one interview, but the input you get is very valuable. Focus groups are
perhaps the most flexible tool for gathering information because you can focus in on getting the
opinions of a group of people while asking open-ended questions that the whole group is free to
answer and discuss. This often sparks debate and conversation, yielding lots of great information
about the group's opinion.
During the focus group, the facilitator is also able to observe the nonverbal communication of the
participants. Although the sample size is generally smaller than some other forms of information
gathering, the free exchange of opinions brought on by the group interaction is an invaluable tool.
So you've chosen your interviewees, set up the interview, and started to think about interview
questions. You're ready to roll, right?
Not quite. First, you need to make sure you have as much information as possible about your
interview topic. You don't need to be an expert -- after all, that's why you're interviewing people! --
but you do want to be fairly knowledgeable. Having a solid understanding of the topic at hand will
make you feel more comfortable as an interviewer, enhance the quality of the questions you ask,
and make your interviewee more comfortable as well.
In addition, it's important to understand your interviewee's culture and background before you
conduct your interview. This understanding will be reflected in the way you phrase your questions.
Now that you're prepared, it's time to conduct the interview. Whether calling or meeting someone,
be sure to be on time -- your interviewee is doing you a favor, and you don't want to keep him or her
waiting.
When interviewing someone, start with some small talk to build rapport. Don't just plunge into your
questions -- make your interviewee as comfortable as possible.
Points to remember:
Practice -- prepare a list of interview questions in advance. Rehearse, try lines, mock-interview
friends. Memorize your questions. Plan the location ahead of time and think of ways to make the
environment more comfortable.
Small-talk -- never begin an interview cold. Try to put your interviewee at ease and establish
rapport.
Be natural -- even if you rehearsed your interview time and time again and have all your
questions memorized, make it sound and feel like you're coming up with them right there.
Look sharp -- dress appropriately for the setting you're in and the kind of person you're
interviewing. Generally you're safe with business attire, but adapt to your audience. Arrive on
time if you are conducting the interview in person.
Listen -- show that you are attentive and interested. If your interviewee says something funny, smile.
If it's something sad, look sad. React to what you hear.
Keep your goals in mind -- remember that what you want is to obtain information. Keep the
interview on track, don't digress too much. Keep the conversation focused on your questions. Be
considerate of your interviewee's limited time.
Don't take "yes/no" answers -- monosyllabic answers don't offer much information. Ask for an
elaboration, probe, ask why. Silence may also yield information. Ask the interviewee to clarify
anything you do not understand.
Respect -- make interviewees feel like their answers are very important to you (they are
supposed to be!) and be respectful of the time they're donating to help you.
Questions are such a fundamental part of an interview that it's worth taking a minute to look at the
subject in depth. Questions can relate to the central focus of your interview, with to-the-point,
specific answers; they can be used to check the reliability of other answers; they can be used just to
create a comfortable relationship between you and the interviewee; and they can probe for more
complete answers.
It's very important that you ask your questions in a way to motivate the interviewee to answer as
completely and honestly as possible. Avoid inflammatory questions ("Do you always discriminate
against women and minorities, or just some of the time?"), and try to stay polite. And remember to
express clearly what you want to know. Just because interviewer and interviewee speak the same
language, it doesn't mean they'll necessarily understand each other.
There are some problems that can arise from the way you ask a question. Here are several of the
most common pitfalls:
Questions that put the interviewee on the defensive -- These questions bring up emotional
responses, usually negative. Asking, "Why did you do such a bad thing?" will feel like you are
confronting your interviewee, and he or she will get defensive. Try to ask things in a more
relaxed manner.
The two-in-one question -- These are questions that ask for two answers in one question. For
instance, "Does your company have a special recruitment policy for women and racial minorities?"
may cause hesitation and indecision in the interviewee. A "yes" would mean both, and a "no"
would be for neither. Separate the issues into two separate questions.
The complex question -- Questions that are too long, too involved, or too intricate will intimidate
or confuse your interviewee. The subject may not even understand the question in its entirety.
The solution is to break down the question and make it brief and concise.
In addition, pay attention to the order in which you ask your questions. The arrangement or
ordering of your question may significantly affect the results of your interview. Try to start the
interview with mild and easy questions to develop a rapport with the interviewee. As the
interview proceeds, move to more sensitive and complex questions.
Interviewing is one of the primary ways to gather information about an information system. A good
systems analyst must be good at interviewing, and no project can be conducted without it.
Prepare the interview carefully, including appointment, priming questions, checklist, agenda,
and questions.
Listen carefully and take notes during the interview (tape record if possible).
Be neutral.
Self-check
2.1 Questionnaire
The questionnaire is most frequently a very concise, preplanned set of questions designed to yield
specific information to meet a particular need for research information about a pertinent topic. The
research information is obtained from respondents, normally from a related interest area. The
dictionary definition is clearer: a questionnaire is a written or printed form used in gathering
information on some subject or subjects, consisting of a list of questions to be submitted to
one or more persons.
Advantages
Economy - Expense and time involved in training interviewers and sending them to interview
are reduced by using questionnaires.
Uniformity of questions - Each respondent receives the same set of questions phrased in
exactly the same way. Questionnaires may, therefore, yield data more comparable than
information obtained through an interview.
Standardization - If the questions are highly structured and the conditions under which they
are answered are controlled, then the questionnaire could become standardized.
Disadvantages
The questionnaire is said to be the most "used and abused" method of gathering information,
because it is often poorly organized, vaguely worded, and excessively lengthy.
Closed or restricted form - calls for a "yes" or "no" answer, short response, or item checking;
is fairly easy to interpret, tabulate, and summarize.
Open or unrestricted form - calls for free response from the respondent; allows for greater
depth of response; is difficult to interpret, tabulate, and summarize.
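The closed form's ease of tabulation can be sketched in a few lines of Python; the response list for the single yes/no item below is hypothetical:

```python
from collections import Counter

# Hypothetical closed-form ("yes"/"no") responses to one questionnaire item.
responses = ["yes", "no", "yes", "yes", "no", "yes"]

tally = Counter(responses)
total = len(responses)
for answer, count in sorted(tally.items()):
    print(f"{answer}: {count} ({100 * count / total:.0f}%)")
# no: 2 (33%)
# yes: 4 (67%)
```

Open (unrestricted) responses cannot be tallied this way until they have first been coded into categories.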
A good questionnaire:
Deals with a significant topic, one the respondent will recognize as important enough to
justify spending his or her time completing it. The significance should be clearly stated on
the questionnaire or in the accompanying letter.
Seeks only that information which cannot be obtained from other sources such as census
data.
As short as possible, only long enough to get the essential data. Long questionnaires
frequently find their way into wastebaskets.
Attractive in appearance, neatly arranged, and clearly duplicated or printed.
Get all of the help you can in planning and constructing your questionnaire. Study other
questionnaires and submit your own questionnaire to faculty members and class members
for criticism.
Try your questionnaire out on a few friends or associates. This helps to locate unclear and
vague terms.
Choose respondents carefully. It is important that questionnaires be sent only to those who
possess the desired information - those who are likely to be sufficiently interested to
respond conscientiously and objectively.
A preliminary card asking whether or not the individual would be willing to participate in the
proposed study is recommended by some research authorities. This is not only a courteous
approach but a practical way of discovering those who will cooperate in furnishing the
desired information.
It has also been found that in many instances a better response is obtained when the original
request is sent to the administrative head of an organization rather than directly to the
person who has the desired information. It is possible that when a superior officer turns
over a questionnaire to a staff member to fill out there is some implied feeling of obligation.
If questionnaires are planned for use in public schools, it is imperative that approval of the
project be secured from the principal or superintendent of the school.
Is the question necessary? How will it be used? What answers will it provide? How will it be tabulated,
analyzed, and interpreted?
Are several questions needed instead of one?
Do the respondents have the information or experience necessary to answer the questions?
Is the question clear?
Is the question loaded in one direction? Biased? Emotionally toned?
Will the respondents answer the question honestly?
Will the respondents answer the question?
Is the question misleading because of unstated assumptions?
Is the best type of answer solicited?
Is the wording of the question likely to be objectionable to the respondents?
Is a direct or indirect question best?
If a checklist is used, are the possible answers mutually exclusive, or should they be?
If a checklist is used, are the possible answers "exhaustive"?
Is the answer to a question likely to be influenced by preceding questions?
Are the questions in psychological order?
Is the respondent required to make interpretations of quantities, or does the respondent give data
which the investigator must interpret?
Summary
Questionnaires have the advantage of gathering information from many people in a relatively short
time and of being less biased in the interpretation of their results. Choosing the right questionnaire
respondents and designing effective questionnaires are the critical issues in this information
collection method. People usually use only part of a system's functions, so they are familiar with
only part of the system's functions or processes. In most situations, one questionnaire obviously
cannot fit all the users. To conduct an effective survey, the analyst should group the users properly
and design different questionnaires for different groups. Moreover, the ability to build good
questionnaires is a skill that improves with practice and experience. When designing questionnaires,
the analyst should at least consider the issues listed above.
Self-check
Another technique is analyzing procedures and other documents. By examining existing system and
organizational documentation, the systems analyst can find out details about the current system and
the organization these systems support. In documents the analyst can find information such as
problems with existing systems, opportunities to meet new needs if only certain information or
information processing were available, organizational directions that can influence information
system requirements, and the reasons why current systems are designed as they are.
However, when analyzing official documentation, analysts should pay attention to the difference
between the systems described in the official documentation and the practical systems in the
real world. Because of inadequacies in formal procedures, individual work habits and preferences,
resistance to control, and other factors, a difference between the so-called formal system and the
informal system universally exists.
Sources of Requirements
Good requirements start with good sources. Finding those quality sources is an important task and,
fortunately, one that takes few resources. Examples of sources of requirements include:
Customers
Users
Partners
Domain Experts
Industry Analysts
After you have identified these sources, there are a number of techniques that may be used to
gather requirements. The following will describe the various techniques, followed by a brief
discussion of when to use each technique.
To get the requirements down on paper, you can do one or more of the following:
Interview users
Send questionnaires
Conduct workshops
The best idea is to get the requirements down quickly and then to encourage the users to correct
and improve them. Put in those corrections, and repeat the cycle. Start off with the best structure
you can devise, but expect to keep on correcting it throughout the process. Success tips: do it now,
keep it small, and correct it immediately.
3.2 Some of the things you might do with the information you collect include:
Making photocopies of all recording forms, records, audio or video recordings, and any other
collected materials, to guard against loss, accidental erasure, or other problems
Entering narratives, numbers, and other information into a computer program, where they can
be arranged and/or worked on in various ways
Performing any mathematical or similar operations needed to get quantitative information ready
for analysis. These might, for instance, include entering numerical observations into a chart.
Transcribing (making an exact, word-for-word text version of) the contents of audio or video
recordings
Coding data (translating data, particularly qualitative data that isn’t expressed in numbers, into a
form that allows it to be processed by a specific software program or subjected to statistical
analysis)
Organizing data in ways that make them easier to work with. How you do this will depend on
your research design and your evaluation questions. You might group observations by the
dependent variable (indicator of success) they relate to, by individuals or groups of participants,
by time, by activity, etc. You might also want to group observations in several different ways, so
that you can study interactions among different variables.
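The coding and grouping steps above can be sketched in Python; the category names, numeric codes, and participant labels here are hypothetical:

```python
# "Coding" qualitative data: map free-text observations to numeric codes
# so they can be tabulated or fed to statistical software.
CODES = {"positive": 1, "neutral": 0, "negative": -1}

observations = [
    {"participant": "P1", "reaction": "positive"},
    {"participant": "P2", "reaction": "negative"},
    {"participant": "P3", "reaction": "positive"},
]

# Attach a numeric code to each observation.
for obs in observations:
    obs["code"] = CODES[obs["reaction"]]

# Group observations by the coded variable for later analysis.
by_code = {}
for obs in observations:
    by_code.setdefault(obs["code"], []).append(obs["participant"])

print(by_code)
# {1: ['P1', 'P3'], -1: ['P2']}
```

The same grouping idea applies whether the grouping key is a code, a participant, a time period, or an activity.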
The “who” question can be more complex. If you’re reasonably familiar with statistics and statistical
procedures, and you have the resources in time, money, and personnel, it’s likely that you’ll do a
somewhat formal study, using standard statistical tests. (There’s a great deal of software – both for
sale and free or open-source – available to help you.)
You can hire or find a volunteer outside evaluator, such as from a nearby college or university, to
take care of data collection and/or analysis for you.
You can conduct a less formal evaluation. Your results may not be as sophisticated as if you
subjected them to rigorous statistical procedures, but they can still tell you a lot about your
program. Just the numbers – the number of dropouts (and when most dropped out), for
instance, or the characteristics of the people you serve – can give you important and usable
information.
You can try to learn enough about statistics and statistical software to conduct a formal
evaluation yourself. (Take a course, for example.)
You can collect the data and then send it off to someone – a university program, a friendly
statistician or researcher, or someone you hire – to process it for you.
If possible, use a randomized or closely matched control group for comparison. If your control is
properly structured, you can draw some fairly reliable conclusions simply by comparing its
results to those of your intervention group. Again, these results won’t be as reliable as if the
comparison were made using statistical procedures, but they can point you in the right
direction. It’s fairly easy to tell whether or not there’s a major difference between the numbers
for the two or more groups. If 95% of the students in your class passed the test, and only 60% of
those in a similar but uninstructed control group did, you can be pretty sure that your class made
a difference in some way, although you may not be able to tell exactly what it was that
mattered. By the same token, if 72% of your students passed and 70% of the control group did
as well, it seems pretty clear that your instruction had essentially no effect, if the groups were
starting from approximately the same place.
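The informal comparison above can be made more rigorous with a standard two-proportion z-test. This sketch assumes hypothetical group sizes of 40 students each, matching the pass rates quoted in the text:

```python
from math import sqrt

def two_proportion_z(pass1, n1, pass2, n2):
    """z statistic comparing two pass rates (counts here are hypothetical)."""
    p1, p2 = pass1 / n1, pass2 / n2
    pooled = (pass1 + pass2) / (n1 + n2)            # pooled pass rate
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 95% of 40 instructed students pass vs 60% of 40 controls: clear difference.
z_big = two_proportion_z(38, 40, 24, 40)
# 72.5% vs 70%: essentially no difference.
z_small = two_proportion_z(29, 40, 28, 40)
print(round(z_big, 2), round(z_small, 2))  # roughly 3.75 and 0.25
```

A |z| above about 1.96 indicates a difference unlikely to be due to chance at the conventional 5% level, which matches the informal reading in the text.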
Who should actually collect and analyze data also depends on the form of your evaluation. If you’re
doing a participatory evaluation, much of the data collection - and analyzing - will be done by
community members or program participants themselves. If you’re conducting an evaluation in
which the observation is specialized, the data collectors may be staff members, professionals, highly
trained volunteers, or others with specific skills or training (graduate students, for example).
Analysis also could be accomplished by a participatory process. Even where complicated statistical
procedures are necessary, participants and/or community members might be involved in sorting out
what those results actually mean once the math is done and the results are in. Another way analysis
can be accomplished is by professionals or other trained individuals, depending upon the nature of
the data to be analyzed, the methods of analysis, and the level of sophistication aimed at in the
conclusions.
Self-check
SPECIFIC OUTCOME 4:
Observe a person's behaviour for gathering information for
computer system development
ASSESSMENT CRITERIA
A record of the behaviour identifies events that meet the specified information requirements,
and outlines those events.
A report about the observation compares the outcome of the observation with the
observation objectives.
All members of the student’s individualized education program (IEP) can observe behavior to learn
about patterns and functions of behavior. Everyone who observes behavior probably looks for
similar characteristics of autism spectrum disorders (e.g., communication challenges, social deficits,
restricted areas of interest, sensory needs, etc.) and the impact on behavior. How information is
gathered may be different for each person collecting the data and depending on the complexity of
the situation. One format involves directly observing and recording situational factors surrounding a
problem behavior using an assessment tool called ABC data collection. An ABC data form is an
assessment tool used to gather information that should evolve into a positive behavior support plan.
ABC refers to Antecedent (what happens immediately before the behaviour), Behaviour (the
observable behaviour itself), and Consequence (what happens immediately after the behaviour).
The following is an example of ABC data collection. ABC is considered a direct observation format
because you have to be directly observing the behavior when it occurs. Typically it is a format that is
used when an external observer is available who has the time and ability to observe and document
behaviors during specified periods of the day. It is time and personnel intensive. From this data, we
can see that when Joe is asked to end an activity he is enjoying (we know that he enjoys playing
computer games), he screams, refuses to leave, and ignores the instruction. We also can see that the
response to Joe's refusal consists mostly of empty threats. If we follow Joe throughout the day, we
may find that he is asked repeatedly to follow directions. In addition, the data reveals that Joe's
family uses threats that are not followed through.
Antecedent: Parent asks Joe to stop playing on the computer.
Behavior: Joe screams, "NO!" and refuses to leave the computer.
Consequence: Parent tells Joe to leave the computer again.

Antecedent: Parent tells Joe to leave the computer.
Behavior: Joe again refuses to leave.
Consequence: Parent starts counting to 10 as a warning to get off the computer.

Antecedent: Parent starts counting to 10 as a warning to get off the computer.
Behavior: Joe does not move from the computer station.
Consequence: Parent finishes counting to 10 and again warns him to get off the computer.

Antecedent: Parent finishes counting to 10 and again warns him to get off the computer.
Behavior: Joe stays at the computer and refuses to leave.
Consequence: Parent threatens that Joe will lose computer privileges in the future.

Antecedent: Parent threatens that Joe will lose computer privileges in the future.
Behavior: Joe ignores the threat and continues working on the computer.
Consequence: The parent counts to 10 again and again threatens future computer use.

Antecedent: The parent counts to 10 again and again threatens future computer use.
Behavior: Joe ignores the threat and continues computer use.
Consequence: The parent becomes angry and leaves the room.
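An ABC log like the one above can be kept in a simple record structure so the chain of events stays explicit and can be summarized later. This is only an illustrative sketch; the field names and entry texts are paraphrased from the example, not part of any standard ABC form:

```python
from dataclasses import dataclass

@dataclass
class ABCEntry:
    antecedent: str   # what happened immediately before the behavior
    behavior: str     # what the person did
    consequence: str  # how others responded immediately after

log = [
    ABCEntry("Parent asks Joe to stop playing", "Screams 'NO!' and refuses", "Parent repeats the demand"),
    ABCEntry("Parent repeats the demand", "Refuses again", "Parent counts to 10"),
    ABCEntry("Parent counts to 10", "Does not move", "Parent threatens loss of privileges"),
    ABCEntry("Parent threatens loss of privileges", "Ignores the threat", "Parent leaves the room"),
]

# Print each incident as an antecedent -> behavior -> consequence chain.
for entry in log:
    print(f"{entry.antecedent} -> {entry.behavior} -> {entry.consequence}")
```

Keeping each observation as one record makes it easy to see that the consequence of one incident often becomes the antecedent of the next, which is exactly the escalation pattern in Joe's example.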
While it is important to look at both the antecedents and the form of the behavior, the focus of this
article is on the consequence portion of the data collection. Examine the consequence portion of the
data collection form when identifying those responses that both increase and decrease problem
behavior. For example, if attention seems to increase problem behavior, then it may be important to
teach the individual to get attention in a more appropriate fashion or to use attention for positive
behaviors. If escape from a difficult task seems to be a consistent theme in the consequence section,
then it may be important to either change the task or to teach the child to ask for help. And we may
choose to use downtime as a reinforcer. Our responses should always focus on strengthening
desired behavior, promoting the use of the replacement behavior, and decreasing the occurrence of
the problem behavior (Sugai et al., 2000). An important aspect of this process is understanding
those responses or consequences that maintain, and either enhance or decrease, behavior over time.
Antecedent: Parent asks Joe to stop playing on the computer.
Behavior: Joe screams, "NO!" and refuses to leave the computer.
Possible functions: Sensory Feedback, Escape, Attention

Antecedent: Parent tells Joe to leave the computer.
Behavior: Joe again refuses to leave.
Possible functions: Sensory Feedback, Escape, Attention

Antecedent: Parent starts counting to 10 as a warning to get off the computer.
Behavior: Joe does not move from the computer station.
Possible functions: Sensory Feedback, Escape, Attention

Antecedent: Parent threatens that Joe will lose computer privileges in the future.
Behavior: Joe ignores and continues working on the computer.
Possible functions: Sensory Feedback, Escape

Antecedent: The parent counts to 10 again and again threatens future computer use.
Behavior: Joe ignores and continues computer use.
Possible functions: Sensory Feedback, Escape, Attention
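The hypothesized functions in the rows above can be tallied to see which function is most consistent across incidents. A minimal sketch, using the function labels from the table:

```python
from collections import Counter

# Hypothesized functions noted for each incident, taken from the rows above.
incidents = [
    {"Sensory Feedback", "Escape", "Attention"},
    {"Sensory Feedback", "Escape", "Attention"},
    {"Sensory Feedback", "Escape", "Attention"},
    {"Sensory Feedback", "Escape"},
    {"Sensory Feedback", "Escape", "Attention"},
]

counts = Counter(function for incident in incidents for function in incident)

# Functions flagged in every incident are the strongest candidates.
consistent = sorted(f for f, n in counts.items() if n == len(incidents))
print(consistent)  # ['Escape', 'Sensory Feedback']
```

A tally like this only surfaces patterns; deciding which function actually maintains the behavior still requires the kind of consequence analysis described in the text.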
Sometimes the ABC data collection form is used to document a behavior incident. Remember that
this type of form will give you limited data and focuses heavily on negative behaviors. However, it is
easier when someone is not available to do more in-depth observing. In truth, the ABC data
collection should not be used just to document behavior incidents. It is best used as a narrative
during a specified time of the day. Equally important is to document those conditions that surround
positive behaviors. By documenting these, professionals and family members can identify effective
strategies that can be replicated.
Once accurate and sufficient data is collected, placements, planning, modifications, instruction, and
feedback are easier, more valid, and more effective (Morton & Lieberman, 2006). ABC data collection can
be used for all individuals with behavior issues at home and in school, not just individuals on the
autism spectrum.
References
Sugai, G., Horner, R.H., Dunlap, G., Hieneman, M., Nelson, C.M., Scott, T., Liaupsin, C., Sailor, W.,
Turnbull, A.P., Turnbull III, H.R., Wickham, D., Wilcox, B., & Ruef, M. (2000). Applying positive
behavior support and functional behavioral assessment in schools. Journal of Positive Behavior
Interventions, 2(3), 131-143.
Morton & Lieberman (2006). Strategies for collecting data in physical education. Teaching Elementary
Physical Education, 17(4), 28-31.
Pratt, C., & Dubie, M. (2008). Observing behavior using a-b-c data. The Reporter, 14(1), 1-4.
Self-check
If the stakeholders are not co-located or readily available, for example in the case of a product being
developed for mass market, techniques such as brainstorming, interviews and workshops that
require face-to-face contact with the stakeholders may be difficult or impossible.
The Knowledge Discovery in Databases (KDD) process comprises several steps leading from raw data
collections to some form of new knowledge. The iterative process consists of the following steps:
Data cleaning: also known as data cleansing, it is a phase in which noise data and irrelevant data
are removed from the collection.
Data integration: at this stage, multiple data sources, often heterogeneous, may be combined in
a common source.
Data selection: at this step, the data relevant to the analysis is decided on and retrieved from
the data collection.
Data transformation: also known as data consolidation, it is a phase in which the selected data is
transformed into forms appropriate for the mining procedure.
Pattern evaluation: in this step, strictly interesting patterns representing knowledge are
identified based on given measures.
Knowledge representation: is the final phase in which the discovered knowledge is visually
represented to the user. This essential step uses visualization techniques to help users
understand and interpret the data mining results.
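The steps above can be sketched as a small pipeline in which each stage is a plain function. This is only an illustration under simplifying assumptions: the record layout is invented, and a frequency count stands in for a real mining algorithm:

```python
from collections import Counter

def clean(records):
    # Data cleaning: drop noisy/irrelevant records (here: missing values).
    return [r for r in records if r.get("item")]

def integrate(*sources):
    # Data integration: combine multiple, possibly heterogeneous, sources.
    combined = []
    for source in sources:
        combined.extend(source)
    return combined

def select(records, field):
    # Data selection: keep only the data relevant to the analysis.
    return [r[field] for r in records]

def transform(values):
    # Data transformation: consolidate into the form the mining step expects.
    return [v.strip().lower() for v in values]

def mine(values):
    # Stand-in mining step: frequent items play the role of "patterns".
    return Counter(values)

store_a = [{"item": "Bread "}, {"item": ""}, {"item": "Milk"}]
store_b = [{"item": "bread"}, {"item": "Milk "}]

patterns = mine(transform(select(clean(integrate(store_a, store_b)), "item")))
print(patterns.most_common())
```

The final stage, knowledge representation, would then present `patterns` to the user, for example as a chart of item frequencies.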
Self-check
At the end of this unit standard you will be able to Apply principles of creating computer
software by developing a complete programme to meet given business specifications
Purpose:
People credited with this unit standard are able to:
Specific outcome:
Interpret a given specification to plan a computer program solution
Design a computer program to meet a business requirement
Equipment needed:
Learning material, Learner workbook, Pen, Ruler.
PLEASE NOTE: THE USE OF PENCILS OR TIPPEX IS NOT ALLOWED.
IF YOU USE A PENCIL THE VALIDITY OF YOUR WORK COULD BE QUESTIONABLE, AND THIS
COULD LEAD TO FRAUD.
Resources (selective resources might be used, depending on the facilitator and venue
circumstances), one or all of the following can be used:
Your facilitator/mentor
Learning material
Learner workbook
Visual aids
White board
Flip chart
Equipment
Training venue
The facilitator will tell you which exercise you need to complete each day.
You need to hand in your answers to the facilitator who will mark it for correctness.
If you do not know the answer, you will have to go back to that particular section in
your learner guide and go over it again.
Ask the facilitator for help, if you do not understand any of the questions asked.
Always remember to give reasons for your answers
SPECIFIC OUTCOME 1:
Interpret a given specification to plan a computer program solution
ASSESSMENT CRITERIA
The plan proposes a description of the problem to be solved by the development of the
computer program that is understandable by an end-user and meets the given specification.
The plan integrates the research of the problem in terms of data and functions.
The plan includes an evaluation of the viability of developing a computer program to solve the
problem identified and compares the costs of developing the program with the benefits to be
obtained from the program.
The plan concludes by choosing the best solution and documenting the program features that
will contain the capabilities and constraints to meet the defined problem.
The SRS document itself states in precise and explicit language those functions and capabilities a
software system (i.e., a software application, an eCommerce Web site, and so on) must provide, as
well as states any required constraints by which the system must abide. The SRS also functions as
a blueprint for completing a project with as little cost growth as possible. The SRS is often referred to
as the “parent” document because all subsequent project management documents, such as design
specifications, statements of work, software architecture specifications, testing and validation plans,
and documentation plans, are related to it.
It’s important to note that an SRS contains functional and non-functional requirements only; it
doesn’t offer design suggestions, possible solutions to technology or business issues, or any other
information other than what the development team understands the customer’s system
requirements to be.
It provides feedback to the customer. An SRS is the customer’s assurance that the development
organization understands the issues or problems to be solved and the software behavior
necessary to address those problems. Therefore, the SRS should be written in natural language,
in an unambiguous manner, and may also include charts, tables, data flow diagrams, decision
tables, and so on.
It serves as an input to the design specification. As mentioned previously, the SRS serves as the
parent document to subsequent documents, such as the software design specification and
statement of work. Therefore, the SRS must contain sufficient detail in the functional
system requirements so that a design solution can be devised.
It serves as a product validation check. The SRS also serves as the parent document for the
testing and validation strategies that will be applied to the requirements for verification.
Software requirements specifications are typically developed during the first stages
of “Requirements Development,” which is the initial product development phase in which
information is gathered about which requirements are needed and which are not. This information-
gathering stage can include onsite visits, questionnaires, surveys, interviews, and perhaps a return-
on-investment (ROI) analysis or needs analysis of the customer or client’s current
business environment. The actual specification, then, is written after the requirements have been
gathered and analyzed.
Unfortunately, much of the time, systems architects and programmers write software requirements
specifications with little (if any) help from the technical communications organization. And when
that assistance is provided, it’s often limited to an edit of the final draft just prior to going out the
door. Having technical writers involved throughout the entire SRS development process can offer
several benefits:
Technical writers are skilled information gatherers, ideal for eliciting and articulating customer
requirements. The presence of a technical writer on the requirements-gathering team helps
balance the type and amount of information extracted from customers, which can help improve
the software requirements specifications.
Technical writers can better assess and plan documentation projects and better meet customer
document needs. Working on SRSs provides technical writers with an opportunity for learning
about customer needs firsthand, early in the product development process.
Technical writers, involved early and often in the process, can become an information resource
throughout the process, rather than an information gatherer at the end of the process.
You probably will be a member of the SRS team (if not, ask to be), which means SRS development
will be a collaborative effort for a particular project. In these cases, your company will have
developed SRSs before, so you should have examples (and, likely, the company’s SRS template) to
use. But, let’s assume you’ll be starting from scratch. Several standards organizations (including the
IEEE) have identified nine topics that must be addressed when designing and writing an SRS:
1. Interfaces
2. Functional Capabilities
3. Performance Levels
4. Data Structures/Elements
5. Safety
6. Reliability
7. Security/Privacy
8. Quality
1. A template
4. A traceability matrix
The first and biggest step to writing software requirements specifications is to select an existing
template that you can fine tune for your organizational needs (if you don’t have one already).
There’s not a “standard specifications template” for all projects in all industries because the
individual requirements that populate an SRS are unique not only from company to company, but
also from project to project within any one company. The key is to select an existing template or
specification to begin with, and then adapt it to meet your needs.
In recommending using existing templates, I’m not advocating simply copying a template from
available resources and using it as your own; instead, I’m suggesting that you use available
templates as guides for developing your own. It would be almost impossible to find a specification or
specification template that meets your particular project requirements exactly.
Table 1 shows what a basic SRS outline might look like. This example is an adaptation and extension
of the IEEE Standard 830-1998:
1. Introduction
1.1 Purpose
1.2 Document conventions
1.3 Intended audience
1.4 Additional information
1.5 Contact information/SRS team members
1.6 References
2. Overall Description
2.1 Product perspective
2.2 Product functions
2.3 User classes and characteristics
2.4 Operating environment
2.5 User environment
2.6 Design/implementation constraints
2.7 Assumptions and dependencies
3. External Interface Requirements
3.1 User interfaces
3.2 Hardware interfaces
3.3 Software interfaces
3.4 Communication protocols and interfaces
4. System Features
4.1 System feature A
4.1.1 Description and priority
4.1.2 Action/result
4.1.3 Functional requirements
4.2 System feature B
5. Other Nonfunctional Requirements
5.1 Performance requirements
5.2 Safety requirements
5.3 Security requirements
5.4 Software quality attributes
5.5 Project documentation
5.6 User documentation
6. Other Requirements
Appendix A: Terminology/Glossary/Definitions list
Appendix B: To be determined
“The best-laid plans of mice and men…” begins the famous saying. It has direct application to writing
software requirements specifications because even the most thought-out requirements are not
immune to changes in industry, market, or government regulations. A top-quality SRS should include
plans for planned and unplanned contingencies, as well as an explicit definition of the
responsibilities of each party, should a contingency be implemented. Some business rules are easier
to work around than others when Plan B has to be invoked. For example, if a customer wants to
change a requirement that is tied to a government regulation, it may not be ethical or legal to
follow only “the spirit of the law.” Many government regulations, as business rules, simply don’t
allow any compromise or “wiggle room.” A project manager may be responsible for ensuring that a
government regulation is followed as it relates to a project requirement; however, if a contingency is
required, then the responsibility for that requirement may shift from the project manager to a
regulatory attorney. The SRS should anticipate such actions to the furthest extent possible.
The business rules for contingencies and responsibilities can be defined explicitly within a
Requirements Traceability Matrix (RTM), or contained in a separate document and referenced in the
matrix, as the example in Table 3 illustrates. Such a practice leaves no doubt as to responsibilities
and actions under certain conditions as they occur during the product-development phase.
As software design and development proceed, the design elements and the actual code must be tied
back to the requirement(s) that define them.
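One lightweight way to keep that tie-in honest is to hold the traceability matrix in code: map each requirement ID to the design elements and tests that claim to satisfy it, then flag anything uncovered. The IDs and names below are invented for illustration, not taken from any real RTM:

```python
# Hypothetical requirement IDs mapped to design elements and test cases.
traceability = {
    "REQ-001": {"design": ["LoginForm"], "tests": ["test_login_ok", "test_login_bad_pw"]},
    "REQ-002": {"design": ["ReportWriter"], "tests": ["test_report_totals"]},
    "REQ-003": {"design": [], "tests": []},  # not yet covered by design or tests
}

# A requirement with no linked design element or test has no traceability.
uncovered = [req for req, links in traceability.items()
             if not links["design"] or not links["tests"]]
print("Requirements without full coverage:", uncovered)
```

Run after each design iteration, a check like this makes it hard for a requirement to silently fall out of the design or the test plan.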
Software design is both a process and a model. The design process is a sequence of steps that enable
the designer to describe all aspects of the software to be built. It is important to note, however, that
the design process is not simply a cookbook. Creative skill, past experience, a sense of what makes
“good” software, and an overall commitment to quality are critical success factors for a competent
design. The design model is the equivalent of an architect’s plans for a house. It begins by
representing the totality of the thing to be built (e.g., a three-dimensional rendering of the house)
and slowly refines the thing to provide guidance for constructing each detail (e.g., the plumbing
layout). Similarly, the design model that is created for software provides a variety of different views
of the computer software. Basic design principles enable the software engineer to navigate the
design process. Davis [DAV95] suggests a set of principles for software design, which have been
adapted and extended in the following list:
The design process should not suffer from “tunnel vision.” A good designer should consider
alternative approaches, judging each based on the requirements of the problem and the resources
available to do the job.
The design should be traceable to the analysis model. Because a single element of the design model
often traces to multiple requirements, it is necessary to have a means for tracking how requirements
have been satisfied by the design model.
The design should not reinvent the wheel. Systems are constructed using a set of design patterns,
many of which have likely been encountered before. These patterns should always be chosen as an
alternative to reinvention. Time is short and resources are limited! Design time should be invested in
representing truly new ideas and integrating those patterns that already exist.
The design should exhibit uniformity and integration. A design is uniform if it appears that one
person developed the entire thing. Rules of style and format should be defined for a design team
before design work begins. A design is integrated if care is taken in defining interfaces between
design components.
The design should be structured to accommodate change. The design concepts discussed in the
next section enable a design to achieve this principle.
The design should be structured to degrade gently, even when aberrant data, events, or operating
conditions are encountered. Well- designed software should never “bomb.” It should be designed to
accommodate unusual circumstances, and if it must terminate processing, do so in a graceful
manner.
Design is not coding, coding is not design. Even when detailed procedural designs are created for
program components, the level of abstraction of the design model is higher than source code. The
only design decisions made at the coding level address the small implementation details that enable
the procedural design to be coded.
The design should be assessed for quality as it is being created, not after the fact. A variety of
design concepts and design measures are available to assist the designer in assessing quality.
The design should be reviewed to minimize conceptual (semantic) errors. There is sometimes a
tendency to focus on minutiae when the design is reviewed, missing the forest for the trees. A
design team should ensure that major conceptual elements of the design (omissions, ambiguity,
inconsistency) have been addressed before worrying about the syntax of the design model.
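The “degrade gently” principle above can be illustrated in a few lines: a routine validates its input and falls back to a safe result instead of terminating abruptly on aberrant data. The fallback policy here is an assumption for the sketch, not a general rule:

```python
def average(readings, fallback=0.0):
    """Return the mean of the numeric readings, ignoring aberrant entries."""
    valid = []
    for r in readings:
        # Skip anything that is not a finite number rather than crashing.
        if isinstance(r, (int, float)) and r == r:  # r == r filters out NaN
            valid.append(r)
    if not valid:
        return fallback  # graceful result when no usable data remains
    return sum(valid) / len(valid)

print(average([10, "bad", 20, None]))  # 15.0
print(average(["all", "bad"]))         # 0.0
```

The same idea scales up: validate at the boundary, keep processing what you can, and terminate gracefully only when nothing usable remains.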
The design concepts provide the software designer with a foundation from which more sophisticated
methods can be applied. A set of fundamental design concepts has evolved. They are:
Software Architecture - It refers to the overall structure of the software and the ways in which that
structure provides conceptual integrity for a system. A good software architecture will yield a good
return on investment with respect to the desired outcome of the project, e.g. in terms of
performance, quality, schedule and cost.
Control Hierarchy - A program structure that represents the organization of a program component
and implies a hierarchy of control.
Structural Partitioning - The program structure can be divided both horizontally and vertically.
Horizontal partitions define separate branches of modular hierarchy for each major program
function. Vertical partitioning suggests that control and work should be distributed top down in the
program structure.
Data Structure - It is a representation of the logical relationship among individual elements of data.
Information Hiding - Modules should be specified and designed so that information contained within
a module is inaccessible to other modules that have no need for such information
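Information hiding can be sketched briefly: the module exposes only the operations other modules need and keeps its internal representation private (the leading underscore is Python's convention for “internal, do not touch”; the Account example is invented):

```python
class Account:
    def __init__(self, opening_balance):
        self._balance = opening_balance  # internal detail, hidden by convention

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        # Other modules depend only on this interface, not on how the
        # balance is stored, so the representation can change freely.
        return self._balance

acct = Account(100)
acct.deposit(50)
print(acct.balance())  # 150
```

Because callers never touch `_balance` directly, the stored representation could later change (say, to integer cents) without breaking any other module.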
There are many aspects to consider in the design of a piece of software. The importance of each
should reflect the goals the software is trying to achieve. Some of these aspects are:
Compatibility - The software is able to operate with other products that are designed for
interoperability with another product. For example, a piece of software may be backward-
compatible with an older version of itself.
Extensibility - New capabilities can be added to the software without major changes to the
underlying architecture.
Fault-tolerance - The software is resistant to and able to recover from component failure.
Modularity - The resulting software comprises well-defined, independent components, which leads
to better maintainability. The components could then be implemented and tested in isolation before
being integrated to form a desired software system. This allows division of work in a software
development project.
Reliability - The software is able to perform a required function under stated conditions for a
specified period of time.
Reusability - Parts of the software can be reused in other systems or contexts with little or no
modification.
Robustness - The software is able to operate under stress or tolerate unpredictable or invalid input.
For example, it can be designed with a resilience to low memory conditions.
Usability - The software user interface must be usable for its target user/audience. Default values for
the parameters must be chosen so that they are a good choice for the majority of the users.
Performance - The software performs its tasks within a user-acceptable time. The software does not
consume too much memory.
Who is involved?
The process of creating a computer program is not as straight-forward as you might think. It involves
a lot of thinking, experimenting, testing, and rewriting to achieve a high-quality product. Let's break
down the process to give you an idea of what goes on.
There may be a huge team of dozens of people involved. Or perhaps one programmer decides that
he can write a program that is the answer to what users complain about. It may be done in a highly
structured series of conferences and consumer surveys. Or perhaps someone is listening to what
people say as they go about trying to work. Somehow the needs of the end users must be
understood as well as the limitations of the code and the hardware. Costs come into play, too. (Sad
but true.)
All of these people must communicate back and forth throughout the process. No program of any
size will be without unexpected problems. So it's test and fix and test again until the program
actually does what it was intended to do.
A program goes through the same development loop over and over during its development, never just once.
Each time through the development loop, the program must be debugged. This means testing the
program under all conditions in order to find the problems so they can be handled. There
will always be problems. Sometimes it's just a typo, and sometimes it's a flaw in the logic, and
sometimes it's an incompatibility with the hardware or other software. Handling such situations can
be the most time-consuming part of the whole process!
Proper documentation can make or break a program. This includes explanations to the end user of
how to use the program and also internal notes to programmers about what the code is doing and
why. It is amazing how quickly the original coder can forget why he wrote the code that way!
Programs often need to be maintained, that is, changes must be made. For example, the sales tax
rate might change or zip codes may get more digits. With proper internal documentation, a different
programmer can make these adjustments without damaging the program.
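Internal documentation of that kind can be as simple as a docstring plus a comment that records the *why*. The tax rate below is a made-up example value, echoing the sales-tax scenario above:

```python
SALES_TAX_RATE = 0.15  # illustrative rate; kept in one place so a rate
                       # change is a one-line maintenance edit

def total_with_tax(amount):
    """Return the amount plus sales tax.

    The rate lives in SALES_TAX_RATE rather than being hard-coded here,
    because tax rates change and a later maintainer should only have to
    update one constant.
    """
    return round(amount * (1 + SALES_TAX_RATE), 2)

print(total_with_tax(100.0))  # 115.0
```

A different programmer arriving years later can change the rate, or the rounding rule, without having to reverse-engineer why the code was written this way.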
4.1 Testing a computer program against given specifications according to test plans.
Programme testing refers to a set of activities conducted with the intent of finding
errors in software.
A test plan is a specification of the tests to be carried out against a program. The
developers are well aware of what test plans will be executed, and this information is
made available to management and the developers. The idea is to make them more cautious
when developing their code or making additional changes. Some companies have a
higher-level document called a test strategy.
Software testing methods are traditionally divided into white- and black-box testing. These two
approaches are used to describe the point of view that a test engineer takes when designing test
cases.
White-box testing
White-box testing verifies the internal structures or workings of a program, as opposed to
the functionality exposed to the end-user.
Visual testing
The aim of visual testing is to provide developers with the ability to examine what was
happening at the point of software failure by presenting the data in such a way that
the developer can easily find the information he or she requires, and the information
is expressed clearly.
At the core of visual testing is the idea that showing someone a problem (or a test
failure), rather than just describing it, greatly increases clarity and understanding.
Visual testing therefore requires the recording of the entire test process – capturing
everything that occurs on the test system in video format. Output videos are
supplemented by real-time tester input via picture-in-a-picture webcam and audio
commentary from microphones.
Grey-box testing involves having knowledge of internal data structures and algorithms for
purposes of designing tests, while executing those tests at the user, or black-box level. The
tester is not required to have full access to the software's source code. Manipulating input
data and formatting output do not qualify as grey-box, because the input and output are
clearly outside of the "black box" that we are calling the system under test. This distinction is
particularly important when conducting integration testing between two modules of code
written by two different developers, where only the interfaces are exposed for test.
However, tests that require modifying a back-end data repository such as a database or a log
file do qualify as grey-box, as the user would not normally be able to change the data
repository in normal production operations. Grey-box testing may also include reverse
engineering to determine, for instance, boundary values or error messages.
By knowing the underlying concepts of how the software works, the tester makes better-
informed testing choices while testing the software from outside. Typically, a grey-box tester
will be permitted to set up an isolated testing environment with activities such as seeding
a database. The tester can observe the state of the product being tested after performing
certain actions such as executing SQL statements against the database and then executing
queries to ensure that the expected changes have been reflected. Grey-box testing
implements intelligent test scenarios, based on limited information. This will particularly
apply to data type handling, exception handling, and so on.
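The grey-box check described above, seeding a database, running an action, then querying to confirm the expected change, might look like this with SQLite standing in for the back-end repository. Table and column names are invented for the sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO orders (id, status) VALUES (1, 'open')")  # seed the database

def close_order(connection, order_id):
    # The "action" under test: normally exercised through the application,
    # shown here directly for brevity.
    connection.execute("UPDATE orders SET status = 'closed' WHERE id = ?", (order_id,))

close_order(conn, 1)

# Grey-box verification: query the repository to confirm the change landed.
status = conn.execute("SELECT status FROM orders WHERE id = 1").fetchone()[0]
print(status)  # closed
```

The tester never reads the application's source code; knowing only the schema is enough to seed the environment and verify the state after the action.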
Unit testing
Unit testing, also known as component testing, refers to tests that verify the
functionality of a specific section of code, usually at the function level. In an object-
oriented environment, this is usually at the class level, and the minimal unit tests
include the constructors and destructors.
These types of tests are usually written by developers as they work on code (white-
box style), to ensure that the specific function is working as expected. One function
might have multiple tests, to catch corner cases or other branches in the code. Unit
testing alone cannot verify the functionality of a piece of software, but rather is used to
ensure that the building blocks of the software work independently of each other.
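A unit test at the function level can be sketched with the standard `unittest` module; the function under test is a trivial stand-in invented for the example:

```python
import unittest

def add_vat(price, rate=0.20):
    # The "unit" under test: a single function with a clear contract.
    return round(price * (1 + rate), 2)

class AddVatTest(unittest.TestCase):
    def test_typical_price(self):
        self.assertEqual(add_vat(10.00), 12.00)

    def test_zero_price_corner_case(self):
        # Corner cases get their own tests, as noted above.
        self.assertEqual(add_vat(0.00), 0.00)

suite = unittest.TestLoader().loadTestsFromTestCase(AddVatTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

In practice developers write such tests alongside the code they are working on, so each branch and corner case of the function gets its own small, fast check.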
Integration testing
Integration testing is any type of software testing that seeks to verify the interfaces
between components against a software design. Software components may be
integrated in an iterative way or all together ("big bang"). Normally the former is
considered a better practice since it allows interface issues to be located more quickly
and fixed.
Integration testing works to expose defects in the interfaces and interaction between
integrated components (modules). Progressively larger groups of tested software
components corresponding to elements of the architectural design are integrated and
tested until the software works as a system.
One option for interface testing is to keep a separate log file of data items being passed,
often with a timestamp logged to allow analysis of thousands of cases of data passed
between units for days or weeks. Tests can include checking the handling of some extreme
data values while other interface variables are passed as normal values. Unusual data values
in an interface can help explain unexpected performance in the next unit.
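The separate-log-file option above can be sketched as a tiny helper that timestamps every value crossing an interface, so unusual values can later be matched to odd behaviour downstream. The names and the extreme-value threshold are invented for the sketch:

```python
import time

interface_log = []  # in a real system this would be written to a log file

def pass_between_units(name, value):
    # Record every item crossing the interface, with a timestamp.
    interface_log.append((time.time(), name, value))
    return value

# Unit A hands its results to unit B through the logged interface.
total = sum(pass_between_units("reading", v) for v in [3, 5, 999999])

# Later analysis: flag extreme values that might explain odd behaviour downstream.
extremes = [(n, v) for _, n, v in interface_log if abs(v) > 1000]
print(total, extremes)
```

Over days or weeks such a log accumulates thousands of cases, which is exactly the raw material the text describes for analyzing interface behaviour.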
System testing
System testing, or end-to-end testing, tests a completely integrated system to verify that it
meets its requirements. For example, a system test might involve testing a logon interface,
then creating and editing an entry, plus sending or printing results, followed by summary
processing or deletion (or archiving) of entries, then logoff.
In addition, the software testing should ensure that the program, as well as working as
expected, does not also destroy or partially corrupt its operating environment or cause other
processes within that environment to become inoperative (this includes not corrupting
shared memory, not consuming or locking up excessive resources and leaving any parallel
processes unharmed by its presence).
Installation testing
An installation test assures that the system is installed correctly and working on the actual
customer's hardware.
Regression testing
Regression testing focuses on finding defects after a major code change has occurred.
Specifically, it seeks to uncover software regressions, as degraded or lost features, including
old bugs that have come back. Such regressions occur whenever software functionality that
was previously working correctly stops working as intended. Typically, regressions occur as
an unintended consequence of program changes, when the newly developed part of the
software collides with the previously existing code. Common methods of regression testing
include re-running previous sets of test-cases and checking whether previously fixed faults
have re-emerged.
The depth of testing depends on the phase in the release process and the risk of the added
features. They can either be complete, for changes added late in the release or deemed to
be risky, or be very shallow, consisting of positive tests on each feature, if the changes are
early in the release or deemed to be of low risk. Regression testing is typically the largest
test effort in commercial software development due to checking numerous details in prior
software features, and even new software can be developed while using some old test-cases
to test parts of the new design to ensure prior functionality is still supported.
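Re-running previous sets of test cases can be as simple as keeping the old input/expected-output pairs and replaying them after every change. The function and the saved cases below are placeholders for illustration:

```python
def slugify(title):
    # Function that has been changed over several releases.
    return title.strip().lower().replace(" ", "-")

# Saved cases from earlier releases, including one for a previously fixed bug
# (leading/trailing spaces) so we notice if that old bug comes back.
regression_cases = [
    ("Hello World", "hello-world"),
    ("  Padded  Title  ", "padded--title"),
    ("already-slugged", "already-slugged"),
]

failures = [(inp, slugify(inp), expected)
            for inp, expected in regression_cases
            if slugify(inp) != expected]
print("regressions:", failures)
```

An empty `failures` list after a change is the signal that prior functionality is still supported; any entry in it is a regression to investigate before release.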
Smoke testing
A smoke test is used as an acceptance test prior to introducing a new build to the
main testing process, i.e. before integration or regression testing.
x. Beta testing
Beta testing comes after alpha testing and can be considered a form of external user
acceptance testing. Versions of the software, known as beta versions, are released to a
limited audience outside of the programming team. The software is released to groups of
people so that further testing can ensure the product has few faults or bugs. Sometimes,
beta versions are made available to the open public to increase the feedback field to a
maximal number of future users.
Functional testing refers to activities that verify a specific action or function of the code.
These are usually found in the code requirements documentation, although some
development methodologies work from use cases or user stories. Functional tests tend to
answer the question of "can the user do this" or "does this particular feature work."
Non-functional testing refers to aspects of the software that may not be related to a specific
function or user action, such as scalability or other performance, behaviour under
certain constraints, or security. Testing will determine the breaking point, the point at which
extremes of scalability or performance lead to unstable execution. Non-functional
requirements tend to be those that reflect the quality of the product, particularly in the
context of the suitability perspective of its users.
Load testing is primarily concerned with testing that the system can continue to operate
under a specific load, whether that be large quantities of data or a large number of users.
There is little agreement on what the specific goals of performance testing are. The terms
load testing, performance testing, scalability testing, and volume testing, are often used
interchangeably.
Real-time software systems have strict timing constraints. To test if timing constraints are
met, real-time testing is used.
Usability testing checks whether the user interface is easy to use and understand. It is
concerned mainly with the use of the application.
Security testing is essential for software that processes confidential data to prevent system
intrusion by hackers.
Actual translation to human languages must be tested, too. Possible localization failures
include:
Untranslated messages in the original language may be left hard coded in the source
code.
Some messages may be created automatically at run time and the resulting string may
be ungrammatical, functionally incorrect, misleading or confusing.
Software may lack support for the character encoding of the target language.
6.1 Plan and design documentation for a computer program to agreed standards.
Computer program documentation is written text that accompanies a computer program or software.
It explains how the program operates or how to use it, and it may mean different things to people
in different roles.
For large software projects, it is usually the case that documentation starts being generated well
before the development process begins. A proposal to develop the system may be produced in
response to a request for tenders by an external client or in response to other business strategy
documents. For some types of system, a comprehensive requirements document may be produced
which defines the features required and expected behavior of the system. During the development
process itself, all sorts of different documents may be produced – project plans, design
specifications, test plans etc.
It is not possible to define a specific document set that is required – this depends on the contract
with the client for the system, the type of system being developed and its expected lifetime, the
culture and size of the company developing the system and the development schedule that is
expected. However, we can generally say that the documentation produced falls into two classes:
1. Process documentation. These documents record the process of development and maintenance;
they are produced so that the development of the system can be managed.
2. Product documentation. This documentation describes the product that is being developed.
System documentation describes the product from the point of view of the engineers developing
and maintaining the system; user documentation provides a product description that is oriented
towards system users.
Product documentation is used after the system is operational but is also essential for management
of the system development. The creation of a document, such as a system specification, may
represent an important milestone in the software development process.
Process documentation
Effective management requires the process being managed to be visible. Because software is
intangible and the software process involves apparently similar cognitive tasks rather than obviously
different physical tasks, the only way this visibility can be achieved is through the use of process
documentation.
Process documentation falls into a number of categories:
1. Plans, estimates and schedules. These are documents produced by managers which are used to
predict and to control the software process.
2. Reports. These are documents which report how resources were used during the process of
development.
3. Standards. These are documents which set out how the process is to be implemented. These
may be developed from organizational, national or international standards.
4. Working papers. These are often the principal technical communication documents in a
project. They record the ideas and thoughts of the engineers working on the project, are interim
versions of product documentation, describe implementation strategies and set out problems which
have been identified. They often, implicitly, record the rationale for design decisions.
5. Memos and electronic mail messages. These record the details of everyday communications
between managers and development engineers.
The major characteristic of process documentation is that most of it becomes out-dated. Plans may
be drawn up on a weekly, fortnightly or monthly basis. Progress will normally be reported weekly.
Memos record thoughts, ideas and intentions which change.
Although of interest to software historians, much of this process information is of little real use after
it has gone out of date and there is not normally a need to preserve it after the system has been
delivered. However, there are some process documents that can be useful as the software evolves in
response to new requirements.
For example, test schedules are of value during software evolution as they act as a basis for re-
planning the validation of system changes. Working papers which explain the reasons behind design
decisions (design rationale) are also potentially valuable as they discuss design options and choices
made. Access to this information helps avoid making changes which conflict with these original
decisions. Ideally, of course, the design rationale should be extracted from the working papers and
separately maintained. Unfortunately this hardly ever happens.
Product documentation
User Documentation
Users of a system are not all the same. The producer of documentation must structure it to cater for
different user tasks and different levels of expertise and experience. It is particularly important to
distinguish between end-users and system administrators:
1. End-users use the software to assist with some task. This may be flying an aircraft, managing
insurance policies, writing a book, etc. They
want to know how the software can help them. They are not interested in computer or
administration details.
2. System administrators are responsible for managing the software used by end-users. This may
involve acting as an operator if the system is a large mainframe system, as a network manager if the
system involves a network of workstations, or as a technical guru who fixes end-users' software
problems and who liaises between users and the software supplier.
To cater for these different classes of user and different levels of user expertise, there are at least 5
documents (or perhaps chapters in a single document) which should be delivered with the software
system (Figure 1).
The system installation document is intended for system administrators. It should provide details of
how to install the system in a particular environment. It should contain a description of the files
making up the system and the minimal hardware configuration required. The permanent files which
must be established, how to start the system and the configuration dependent files which must be
changed to tailor the system to a particular host system should also be described. The use of
automated installers for PC software has meant that some suppliers see this document as
unnecessary. In fact, it is still required to help system managers discover and fix problems with the
installation.
The introductory manual should present an informal introduction to the system, describing its
‘normal’ usage. It should describe how to get started and how end-users might make use of the
common system facilities. It should be liberally illustrated with examples. Inevitably beginners,
whatever their background and experience, will make mistakes. Easily discovered information on
how to recover from these mistakes and restart useful work should be an integral part of this
document.
The system reference manual should describe the system facilities and their usage, should provide a
complete listing of error messages and should describe how to recover from detected errors. It
should be complete. Formal descriptive techniques may be used. The style of the reference manual
should not be unnecessarily pedantic and turgid, but completeness is more important than
readability.
A more general system administrator’s guide should be provided for some types of system such as
command and control systems. This should describe the messages generated when the system
interacts with other systems and how to react to these messages. If system hardware is involved, it
might also explain the operator’s task in maintaining that hardware. For example, it might describe
how to clear faults in the system console, how to connect new peripherals, etc.
As well as manuals, other, easy-to-use documentation might be provided. A quick reference card
listing available system facilities and how to use them is particularly convenient for experienced
system users.
System Documentation
System documentation includes all of the documents describing the system itself from the
requirements specification to the final acceptance test plan. Documents describing the design,
implementation and testing of a system are essential if the program is to be understood and
maintained. Like user documentation, it is important that system documentation is structured, with
overviews leading the reader into more formal and detailed descriptions of each aspect of the
system.
For large systems that are developed to a customer’s specification, the system documentation
should include:
1. A requirements document which records what the system is expected to do.
2. A description of the system architecture.
3. For each program in the system, a description of the architecture of that program.
4. For each component in the system, a description of its functionality and interfaces.
5. Program source code listings. These should be commented, and the comments should explain
complex sections of code and provide a rationale for the coding method used. If meaningful names
are used and a good, structured programming style is adopted, much of the code should be self-
documenting without the need for additional comments. This information is now normally
maintained electronically rather than on paper, with selected information printed on demand by
readers.
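The self-documenting style recommended above can be sketched with a short hypothetical function; the name, formula and comments are illustrative, not taken from the text:

```python
def monthly_repayment(principal, annual_rate, months):
    """Return the fixed monthly repayment for an amortised loan."""
    monthly_rate = annual_rate / 12
    # Interest-free loans are a corner case: the annuity formula below
    # would divide by zero, so repay in equal instalments instead.
    if monthly_rate == 0:
        return principal / months
    # Standard annuity formula. Rationale: an exact closed form was
    # chosen over month-by-month simulation to avoid rounding drift.
    factor = (1 + monthly_rate) ** months
    return principal * monthly_rate * factor / (factor - 1)
```

Meaningful names (`principal`, `monthly_rate`) make most lines self-documenting; the comments are reserved for the complex section and the rationale, as the guidance suggests.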
6. Validation documents describing how each program is validated and how the validation
information relates to the requirements.
7. A system maintenance guide which describes known problems with the system, describes which
parts of the system are hardware and software dependent, and which describes how evolution of
the system has been taken into account in its design.
For smaller systems and systems that are developed as software products, system documentation is
usually less comprehensive. This is not necessarily a good thing but schedule pressures on
developers mean that documents are simply never written or, if written, are not kept up to date.
These pressures are sometimes inevitable but, in my view, at the very least you should always try to
maintain a specification of the system, an architectural design document and the program source
code.
Often, pressure of work means that updating the documentation is continually set aside until
finding what is to be changed becomes very difficult indeed. The best solution to this problem is to support document
maintenance with software tools which record document relationships, remind software engineers
when changes to one document affect another and record possible inconsistencies in the
documentation. Such a system is described by Garg and Scacchi (1990).
Document Quality
Unfortunately, much computer system documentation is badly written, difficult to understand, out-
of-date or incomplete. Although the situation is improving, many organizations still do not pay
enough attention to producing system documents which are well-written pieces of technical prose.
Document quality is as important as program quality. Without information on how to use a system
or how to understand it, the utility of that system is degraded. Achieving document quality requires
management commitment to document design, standards, and quality assurance processes.
Producing good documents is neither easy nor cheap, and many software engineers find it more
difficult than producing good-quality programs.
The document structure is the way in which the material in the document is organized into chapters
and, within these chapters, into sections and sub- sections. Document structure has a major impact
on readability and usability and it is important to design this carefully when creating documentation.
As with software systems, you should design document structures so that the different parts are as
independent as possible. This allows each part to be read as a single item and reduces problems of
cross-referencing when changes have to be made.
Structuring a document properly also allows readers to find information more easily. As well as
document components such as contents lists and indexes, well-structured documents can be skim
read so that readers can quickly locate sections or sub-sections that are of most interest to them.
Documentation standards act as a basis for document quality assurance. Documents produced
according to appropriate standards have a consistent appearance, structure and quality. Other
standards that may be used in the documentation process are:
1. Process standards- These standards define the process which should be followed for high-
quality document production.
2. Product standards - These are standards which govern the documents themselves.
Standards are, by their nature, designed to cover all cases and, consequently, can sometimes seem
unnecessarily restrictive. It is therefore important that, for each project, the appropriate standards
are chosen and modified to suit that particular project. Small projects developing systems with a
relatively short expected lifetime need different standards from large software projects where the
software may have to be maintained for 10 or more years.
Process standards
Process standards define the approach to be taken in producing documents. This generally means
defining the software tools which should be used for document production and defining the quality
assurance procedures which ensure that high-quality documents are produced. Document process
quality assurance standards must be flexible and must be able to cope with all types of document. In
some cases, where documents are simply working papers or memos, no explicit quality checking is
required. However, where documents are formal documents, that is, when their evolution is to be
controlled by configuration management procedures, a formal quality process should be adopted.
Drafting, checking, revising and re-drafting is an iterative process which should be continued until a
document of acceptable quality is produced. The acceptable quality level will depend on the
document type and the potential readers of the document.
Product standards
Specific Outcome 1 : Test a computer program against given specifications according to test
plans.
The candidate undertaking this unit standard is best advised to at least spend one hundred hours of
study time on this learning programme. Below is a table which demonstrates how these one
hundred hours could be spread:
TIMEFRAME
Timeframe for Training (Total Hours/Days/Weeks):
Contact session – theory content: role play, simulation, group work, pair work = 15 hrs.
Non-contact session – self-study, assignment, practice guided by coach or mentor, formative
assessment and summative assessment = 35 hrs.
At the end of this unit standard you will be able to Test a computer program against a
given specification
Purpose:
People credited with this unit standard are able to:
Specific outcome:
Test a computer program against given specifications according to test plans
Record the results from testing a computer program
Review the testing process for a computer program against organisation policy and
procedures.
Equipment needed:
Learning material, Learner workbook, Pen, Ruler.
PLEASE NOTE: THE USE OF PENCILS OR TIPPEX IS NOT ALLOWED.
IF YOU USE A PENCIL THE VALIDITY OF YOUR WORK COULD BE QUESTIONABLE, AND THIS
COULD LEAD TO FRAUD.
Assessments:
The only way to establish whether you are competent and have accomplished the specific
outcomes is through continuous assessments
The given exercises can contain one or more of the following:
Information for you to read
Exercises that require you to have a problem-solving approach to communication
Questions for you to answer
Case studies with questions that follow
The facilitator will tell you which exercise you need to complete each day.
You need to hand in your answers to the facilitator who will mark them for correctness.
If you do not know the answer, you will have to go back to that particular section in
your learner guide and go over it again.
Ask the facilitator for help, if you do not understand any of the questions asked.
Always remember to give reasons for your answers
ASSESSMENT CRITERIA
The testing executes operational steps identified in the test plan.
The testing uses input data as specified in the test plan.
The testing outlines deviations from the test plan, with explanations.
The testing follows the standards and procedures specified in the test plan for testing and re-
testing.
1.1 Testing a computer program against given specifications according to test plans.
OUTCOME RANGE
Programme Testing refers to a set of activities conducted with the intent of finding
errors in software.
A test plan is the document that specifies how testing will be carried out. The
developers are well aware of what test plans will be executed, and this information is
made available to management and the developers. The idea is to make them more
cautious when developing their code or making additional changes. Some companies
have a higher-level document called a test strategy.
Static testing involves verification, whereas dynamic testing involves validation. Together they help
improve software quality. Among the techniques for static analysis, mutation testing can be used to
ensure the test-cases will detect errors which are introduced by mutating the source code.
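The mutation-testing idea above can be sketched by hand (real tools such as mutation-testing frameworks automate this): a single operator in the source is mutated, and a good test suite should fail on, i.e. "kill", the mutant. The `is_adult` function and the mutated operator are illustrative assumptions.

```python
# Original source and a mutant with one operator changed (>= to >).
original_src = "def is_adult(age): return age >= 18"
mutant_src   = "def is_adult(age): return age > 18"

def run_tests(src):
    """Execute the candidate source, then run the test cases against it."""
    env = {}
    exec(src, env)          # define is_adult from the given source
    is_adult = env["is_adult"]
    try:
        assert is_adult(18) is True    # boundary case kills the mutant
        assert is_adult(17) is False
        return "pass"
    except AssertionError:
        return "fail"
```

Running `run_tests(original_src)` passes, while `run_tests(mutant_src)` fails: the boundary-value test case detects the introduced error, which is exactly what mutation testing checks for.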
Software testing methods are traditionally divided into white- and black-box testing. These two
approaches are used to describe the point of view that a test engineer takes when designing test
cases.
White-box test design techniques include:
Code coverage – creating tests to satisfy some criteria of code coverage (e.g., the test
designer can create tests to cause all statements in the program to be executed at
least once)
Mutation testing methods
Static testing methods
Code coverage tools can evaluate the completeness of a test suite that was created
with any method, including black-box testing. This allows the software team to
examine parts of a system that are rarely tested and ensures that the most
important function points have been tested. Code coverage as a software metric can
be reported as a percentage for:
- 100% statement coverage ensures that every statement in the program is executed
at least once. This is helpful in ensuring correct
functionality, but not sufficient since the same code may process different inputs
correctly or incorrectly.
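Statement coverage can be illustrated with a manually instrumented sketch (real projects would use a coverage tool rather than hand-written markers; the function and marker names are illustrative):

```python
executed = set()  # records which statements have run

def classify(n):
    executed.add("entry")
    if n < 0:
        executed.add("negative-branch")
        return "negative"
    if n == 0:
        executed.add("zero-branch")
        return "zero"
    executed.add("positive-branch")
    return "positive"

# Three test inputs are enough to execute every statement once.
for n in (-5, 0, 7):
    classify(n)

all_statements = {"entry", "negative-branch", "zero-branch", "positive-branch"}
all_covered = executed == all_statements   # 100% statement coverage
```

Note that even with `all_covered` true, the tests above never check the *returned values*, which shows why full coverage alone is not sufficient evidence of correctness.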
Black-box testing treats the software as a "black box", examining functionality without any
knowledge of internal implementation. The testers are only aware of what the software is
supposed to do, not how it does it. Black-box testing methods include: equivalence
partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table
testing, fuzz testing, model-based testing, use case testing, exploratory testing and
specification-based testing.
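Equivalence partitioning and boundary value analysis, two of the black-box methods listed above, can be sketched against a hypothetical `grade()` function; the tester works only from the specification (0-39 fail, 40-69 pass, 70-100 distinction), not from the implementation:

```python
def grade(mark):
    """The system under test; the tester sees only its specification."""
    if not 0 <= mark <= 100:
        raise ValueError("mark out of range")
    if mark < 40:
        return "fail"
    if mark < 70:
        return "pass"
    return "distinction"

# One representative value per equivalence partition...
partitions = [(20, "fail"), (55, "pass"), (85, "distinction")]

# ...plus the boundary values, where defects tend to cluster.
boundaries = [(0, "fail"), (39, "fail"), (40, "pass"),
              (69, "pass"), (70, "distinction"), (100, "distinction")]
```

A black-box test suite would simply assert `grade(mark) == expected` for every pair, and additionally check that out-of-range marks such as 101 are rejected.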
Specification-based testing
Visual testing
The aim of visual testing is to provide developers with the ability to examine what was
happening at the point of software failure by presenting the data in such a way that
the developer can easily find the information he or she requires, and the information
is expressed clearly.
At the core of visual testing is the idea that showing someone a problem (or a test
failure), rather than just describing it, greatly increases clarity and understanding.
Visual testing therefore requires the recording of the entire test process – capturing
everything that occurs on the test system in video format. Output videos are
supplemented by real-time tester input via picture-in-a-picture webcam and audio
commentary from microphones.
Grey-box testing involves having knowledge of internal data structures and algorithms for
purposes of designing tests, while executing those tests at the user, or black-box level. The
tester is not required to have full access to the software's source code. Manipulating input
data and formatting output do not qualify as grey-box, because the input and output are
clearly outside of the "black box" that we are calling the system under test. This distinction is
particularly important when conducting integration testing between two modules of code
written by two different developers, where only the interfaces are exposed for test.
However, tests that require modifying a back-end data repository such as a database or a log
file do qualify as grey-box, as the user would not normally be able to change the data
repository in normal production operations.
By knowing the underlying concepts of how the software works, the tester makes better-
informed testing choices while testing the software from outside. Typically, a grey-box tester
will be permitted to set up an isolated testing environment with activities such as seeding
a database. The tester can observe the state of the product being tested after performing
certain actions such as executing SQL statements against the database and then executing
queries to ensure that the expected changes have been reflected. Grey-box testing
implements intelligent test scenarios, based on limited information. This will particularly
apply to data type handling, exception handling, and so on.
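The grey-box pattern described above, seeding a database and then querying it to confirm the expected state change, can be sketched with an in-memory SQLite database; the schema and the `register_user` entry point are illustrative assumptions:

```python
import sqlite3

# The tester knows the schema (internal knowledge) but drives the
# product only through its public entry point.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, active INTEGER)")
conn.execute("INSERT INTO users VALUES ('seeded', 1)")   # seed test data

def register_user(db, name):
    """The 'product' under test: public behaviour, hidden SQL inside."""
    db.execute("INSERT INTO users VALUES (?, 1)", (name,))

# Perform the action, then execute a query to verify the state change.
register_user(conn, "alice")
active_users = conn.execute(
    "SELECT COUNT(*) FROM users WHERE active = 1").fetchone()[0]
```

After the action, `active_users` should be 2 (the seeded row plus the new one): the test observes the product's state directly, while the action itself went through the black-box interface.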
Testing levels
There are generally four recognized levels of tests: unit testing, integration testing, system
testing, and acceptance testing. Tests are frequently grouped by where they are added in
the software development process, or by the level of specificity of the test. The main levels
during the development process as defined by the SWEBOK guide are unit-, integration-, and
system testing that are distinguished by the test target without implying a specific process
model. Other test levels are classified by the testing objective.
Unit testing
Unit testing, also known as component testing, refers to tests that verify the
functionality of a specific section of code, usually at the function level. In an object-
oriented environment, this is usually at the class level, and the minimal unit tests
include the constructors and destructors.
These types of tests are usually written by developers as they work on code (white-
box style), to ensure that the specific function is working as expected. One function
might have multiple tests, to catch corner cases or other branches in the code. Unit
testing alone cannot verify the functionality of a piece of software, but rather is used
to ensure that the building blocks of the software work independently from each
other.
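A minimal developer-written unit test in the white-box style described above might look like this, using Python's standard `unittest` framework; the `word_count` function is a hypothetical unit:

```python
import unittest

def word_count(text):
    """The unit under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_typical_sentence(self):
        self.assertEqual(word_count("testing one two three"), 4)

    def test_empty_string_corner_case(self):
        # A corner case: one function often needs multiple tests.
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)
```

Note how one function has several tests to catch corner cases, and how nothing here verifies the whole application: the unit tests only establish that this building block works on its own.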
Integration testing
Integration testing is any type of software testing that seeks to verify the interfaces
between components against a software design. Software components may be
integrated in an iterative way or all together ("big bang"). Normally the former is
considered a better practice since it allows interface issues to be located more quickly
and fixed.
Integration testing works to expose defects in the interfaces and interaction between
integrated components (modules). Progressively larger groups of tested software
components corresponding to elements of the architectural design are integrated and
tested until the software works as a system.
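An integration test exercising the interface between two components might be sketched as follows; the two components and their names are illustrative, standing in for independently unit-tested modules:

```python
def parse_record(line):
    """Component A: parse 'name, amount' text into a dict."""
    name, amount = line.split(",")
    return {"name": name.strip(), "amount": int(amount)}

def total_amount(records):
    """Component B: consumes the dicts produced by component A."""
    return sum(r["amount"] for r in records)

def load_total(lines):
    """The integrated path under test: A's output feeds B's input."""
    return total_amount(parse_record(ln) for ln in lines)
```

Each component may pass its own unit tests, yet the integration test of `load_total` is what verifies that the interface between them (the dict structure A produces and B expects) actually matches the design.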
One option for interface testing is to keep a separate log file of data items being passed,
often with a timestamp logged to allow analysis of thousands of cases of data passed
between units for days or weeks. Tests can include checking the handling of some extreme
data values while other interface variables are passed as normal values. Unusual data values
in an interface can help explain unexpected performance in the next unit. Component
interface testing is a variation of black-box testing with the focus on the data values beyond
just the related actions of a subsystem component.
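The timestamped interface log described above can be sketched like this; the producer/consumer components and the deliberately extreme value are illustrative assumptions:

```python
import time

interface_log = []   # one entry per data item crossing the interface

def log_pass(source, dest, value):
    """Record a data item passed between units, with a timestamp."""
    interface_log.append((time.time(), source, dest, value))
    return value

def producer():
    # One extreme value is passed while the others stay normal.
    return [1, 2, 10**9]

def consumer(values):
    return max(values)

result = consumer([log_pass("producer", "consumer", v)
                   for v in producer()])
```

If the consumer later misbehaves, the log's timestamps and recorded values let the tester trace which (possibly extreme) data item crossed the interface just before the unexpected behaviour, as the text suggests.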
System testing, or end-to-end testing, tests a completely integrated system to verify that it
meets its requirements. For example, a system test might involve testing a logon interface,
then creating and editing an entry, plus sending or printing results, followed by summary
processing or deletion (or archiving) of entries, then logoff.
In addition, the software testing should ensure that the program, as well as working as
expected, does not also destroy or partially corrupt its operating environment or cause other
processes within that environment to become inoperative (this includes not corrupting
shared memory, not consuming or locking up excessive resources and leaving any parallel
processes unharmed by its presence).
An installation test assures that the system is installed correctly and working on the actual
customer's hardware.
Regression testing focuses on finding defects after a major code change has occurred.
Specifically, it seeks to uncover software regressions, such as degraded or lost features, including
old bugs that have come back. Such regressions occur whenever software functionality that
was previously working correctly stops working as intended. Typically, regressions occur as
an unintended consequence of program changes, when the newly developed part of the
software collides with the previously existing code. Common methods of regression testing
include re-running previous sets of test-cases and checking whether previously fixed faults
have re-emerged.
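The common regression method of re-running previous test cases can be sketched as follows; the `slugify` function, its previously fixed fault, and the test cases are all illustrative assumptions:

```python
def slugify(title):
    """Current version. An earlier version mishandled padded titles,
    so a test case below guards against that fault re-emerging."""
    return title.strip().lower().replace(" ", "-")

# Previous sets of test cases, kept and re-run after every change.
previous_test_cases = [
    ("Hello World", "hello-world"),
    ("  Padded  ", "padded"),    # guards the previously fixed fault
    ("Single", "single"),
]

def run_regression(fn, cases):
    """Re-run old test cases; a non-empty list means a regression."""
    return [(i, expected) for i, expected in cases if fn(i) != expected]
```

If a later change reintroduced the old padding bug, the dedicated test case would fail again, flagging the regression before release.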
The depth of testing depends on the phase in the release process and the risk of the added
features. Regression tests can either be complete, for changes added late in the release or deemed
to be risky, or very shallow, consisting of positive tests on each feature, if the changes are
early in the release or deemed to be of low risk. Regression testing is typically the largest
test effort in commercial software development due to checking numerous details in prior
software features, and even new software can be developed while using some old test-cases
to test parts of the new design to ensure prior functionality is still supported.
A smoke test is used as an acceptance test prior to introducing a new build to the
main testing process, i.e. before integration or regression testing.
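A smoke test of this kind only needs a handful of fast "does it even start?" checks; the `smoke_test` function and the build objects below are illustrative assumptions:

```python
def smoke_test(app):
    """Return True if the build is sane enough for deeper testing."""
    checks = [
        callable(getattr(app, "start", None)),   # can it be started?
        callable(getattr(app, "stop", None)),    # can it be stopped?
    ]
    return all(checks)

class DummyBuild:
    """Stands in for a build that exposes the expected entry points."""
    def start(self): pass
    def stop(self): pass
```

A build failing the smoke test is rejected immediately, saving the cost of running the full integration and regression suites against a broken build.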
Beta testing comes after alpha testing and can be considered a form of external user
acceptance testing. Versions of the software, known as beta versions, are released to a
limited audience outside of the programming team. The software is released to groups of
people so that further testing can ensure the product has few faults or bugs. Sometimes,
beta versions are made available to the open public to increase the feedback field to a
maximal number of future users.
Functional testing refers to activities that verify a specific action or function of the code.
These are usually found in the code requirements documentation, although some
development methodologies work from use cases or user stories. Functional tests tend to
answer the question of "can the user do this" or "does this particular feature work."
Non-functional testing refers to aspects of the software that may not be related to a specific
function or user action, such as scalability or other performance, behaviour under
certain constraints, or security. Testing will determine the breaking point, the point at which
extremes of scalability or performance lead to unstable execution. Non-functional
requirements tend to be those that reflect the quality of the product, particularly in the
context of the suitability perspective of its users.
Load testing is primarily concerned with testing that the system can continue to operate
under a specific load, whether that be large quantities of data or a large number of users.
This is generally referred to as software scalability. When load testing is performed as a
non-functional activity, it is often referred to as endurance testing. Volume testing is a
way to test software functions even when certain components (for example a file or
database) increase radically in size. Stress testing is a way to test reliability under
unexpected or rare workloads. Stability testing (often referred to as load or endurance
testing) checks whether the software can continue to function well over an acceptable
period of time.
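The core idea of load testing can be sketched as follows. This is a minimal illustration, not a real load-testing tool; the process function stands in for the system under test, and the record count and time limit are assumed values chosen for the example.

```python
import time

def process(record):
    # Hypothetical unit of work standing in for the system under test.
    return record * 2

def load_test(n_records, max_seconds):
    # Drive the function with a large volume of data and check that the
    # total time stays within an acceptable limit (the "specific load").
    start = time.perf_counter()
    for i in range(n_records):
        process(i)
    elapsed = time.perf_counter() - start
    return elapsed <= max_seconds

# Example: 100 000 records must complete within 5 seconds (assumed limit).
result = load_test(100_000, 5.0)
```

Stress testing would instead keep raising n_records until the system fails, to find the breaking point rather than to confirm normal operation.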
There is little agreement on what the specific goals of performance testing are. The terms
load testing, performance testing, scalability testing, and volume testing, are often used
interchangeably.
Real-time software systems have strict timing constraints. To test if timing constraints are
met, real-time testing is used.
Usability testing checks whether the user interface is easy to use and understand. It is
concerned mainly with the use of the application.
Security testing is essential for software that processes confidential data to prevent system
intrusion by hackers.
In localization testing, the actual translation into human languages must also be tested. Possible localization failures
include:
Untranslated messages in the original language may be left hard coded in the source
code.
Some messages may be created automatically at run time and the resulting string may
be ungrammatical, functionally incorrect, misleading or confusing.
Software may lack support for the character encoding of the target language.
Fonts and font sizes which are appropriate in the source language may be
inappropriate in the target language; for example, CJK characters may become
unreadable if the font is too small.
A string in the target language may be longer than the software can handle. This may
make the string partly invisible to the user or cause the software to crash or
malfunction.
Software may display images with text that was not localized.
Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying
the system's functionality. Can include negative testing as well. See also Monkey Testing.
Agile Testing: Testing practice for projects using agile methodologies, treating development as
the customer of testing and emphasizing a test-first design paradigm. See also Test Driven
Development.
Application Programming Interface (API): A formalized set of software calls and routines that
can be referenced by an application program in order to access supporting system or network
services.
Automated Software Quality (ASQ): The use of software tools, such as automated testing tools,
to improve software quality.
Automated Testing:
Testing employing software tools which execute tests without manual intervention. Can be
applied in GUI, performance, API, etc. testing.
The use of software to control the execution of tests, the comparison of actual outcomes to
predicted outcomes, the setting up of test preconditions, and other test control and test
reporting functions.
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the
program to design tests.
Baseline: The point at which some deliverable produced during the software engineering
process is put under formal change control.
Bottom Up Testing: An approach to integration testing where the lowest level components are
tested first, then used to facilitate the testing of higher level components. The process is
repeated until the component at the top of the hierarchy is tested.
Boundary Testing: Tests which focus on the boundary or limit conditions of the software being
tested. (Some of these tests are stress tests).
Boundary Value Analysis: In boundary value analysis, test cases are generated using the
extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical
values, and error values. BVA is similar to Equivalence Partitioning but focuses on "corner
cases".
Branch Testing: Testing in which all branches in the program source code are tested at least
once.
Breadth Testing: A test suite that exercises the full functionality of a product but does not test
features in detail.
Capture/Replay Tool: A test tool that records test input as it is sent to the software under test.
The input cases stored can then be used to reproduce the test at a later time. Most commonly
applied to GUI test tools.
Cause Effect Graph: A graphical representation of inputs and the associated outputs effects
which can be used to design test cases.
Code Coverage: An analysis method that determines which parts of the software have been
executed (covered) by the test case suite and which parts have not been executed and
therefore may require additional attention.
Code Inspection: A formal testing technique where the programmer reviews source code with a
group who ask questions analysing the program logic, analysing the code with respect to a
checklist of historically common programming errors, and analysing its compliance with coding
standards.
Code Walkthrough: A formal testing technique where source code is traced by a group with a
small set of test cases, while the state of program variables is manually monitored, to analyse
the programmer's logic and assumptions.
Data Dictionary: A database that contains definitions of all data items defined during analysis.
Data Driven Testing: Testing in which the action of a test case is parameterized by externally
defined data values, maintained as a file or spreadsheet. A common technique in Automated
Testing.
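A minimal sketch of data-driven testing is shown below. The CSV data is inlined here to keep the example self-contained, but in practice it would be maintained as an external file or spreadsheet; the add function is a hypothetical function under test.

```python
import csv
import io

# Externally maintained test data; inlined as CSV for a self-contained
# sketch, but in practice kept in a file or spreadsheet.
TEST_DATA = """a,b,expected
1,2,3
10,-4,6
0,0,0
"""

def add(a, b):
    # Hypothetical function under test.
    return a + b

def run_data_driven_tests():
    # One test action, parameterized by each row of external data.
    failures = []
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        actual = add(int(row["a"]), int(row["b"]))
        if actual != int(row["expected"]):
            failures.append(row)
    return failures

# An empty list means every data row passed.
failures = run_data_driven_tests()
```

Adding a new test then means adding a data row, not writing new test code, which is why the technique is common in automated testing.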
Emulator: A device, computer program, or system that accepts the same inputs and produces
the same outputs as a given system.
Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged
execution.
Equivalence Partitioning: A test case design technique for a component in which test cases are
designed to execute representatives from equivalence classes.
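For example, if an input domain splits into a small number of equivalence classes, one representative value per class is enough to exercise each class. The classify_age function and its classes below are hypothetical, invented for illustration.

```python
def classify_age(age):
    # Hypothetical function under test: its input domain partitions into
    # invalid, minor, and adult equivalence classes.
    if age < 0 or age > 130:
        return "invalid"
    return "minor" if age < 18 else "adult"

# One representative per equivalence class is enough to exercise it.
representatives = {
    "invalid": -5,   # class: out-of-range values
    "minor": 10,     # class: 0 to 17
    "adult": 40,     # class: 18 to 130
}

for expected, value in representatives.items():
    assert classify_age(value) == expected
```

Equivalence partitioning keeps the test suite small; boundary value analysis then adds the edge values (such as 17, 18, 130 and 131 here) where defects cluster.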
Error: A mistake in the system under test; usually but not always a coding mistake on the part of
the developer.
Exhaustive Testing: Testing which covers all combinations of input values and preconditions for
an element of the software under test.
Functional Decomposition: A technique used during planning, analysis and design; creates a
functional hierarchy for the software.
Functional Specification: A document that describes in detail the characteristics of the product
with regard to its intended features.
Functional Testing:
Testing the features and operational behaviour of a product to ensure they correspond to its
specifications.
Testing that ignores the internal mechanism of a system or component and focuses solely on
the outputs generated in response to selected inputs and execution conditions.
High Order Tests: Black-box tests conducted once the software has been integrated.
Independent Test Group (ITG): A group of people whose primary responsibility is software
testing.
Inspection: A group review quality improvement process for written material. It consists of two
aspects: product (document itself) improvement and process improvement (of both document
production and inspection).
Installation Testing: Confirms that the application under test installs, configures and launches
correctly in its target environment, and that it can be removed cleanly.
Localization Testing: Verifies that software has been correctly adapted for a specific locality,
including its language, conventions and formats.
Loop Testing: A white box testing technique that exercises program loops.
Metric: A standard of measurement. Software metrics are the statistics describing the structure
or content of a program. A metric should be a real objective measurement of something such as
number of bugs per lines of code.
Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there
to ensure the system or application does not crash.
Mutation Testing: Testing done on the application where bugs are purposely added to it, to check whether the existing tests detect them.
Negative Testing: Testing aimed at showing software does not work. Also known as "test to
fail". See also Positive Testing.
N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which
errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The
cycles are typically repeated until the solution reaches a steady state and there are no errors.
See also Regression Testing.
Path Testing: Testing in which all paths in the program source code are tested at least once.
Positive Testing: Testing aimed at showing software works. Also known as "test to pass".
Quality Circle: A group of individuals with related interests that meet at regular intervals to
consider problems or other matters related to the quality of outputs of a process and to the
correction of problems or to the improvement of quality.
Quality Control: The operational techniques and the activities used to fulfill and verify
requirements of quality.
Quality Management: That aspect of the overall management function that determines and
implements the quality policy.
Ramp Testing: Continuously raising an input signal until the system breaks down.
Recovery Testing: Confirms that the program recovers from expected or unexpected events
without loss of data or functionality. Events can include shortage of disk space, unexpected loss
of communication, or power out conditions.
Release Candidate: A pre-release version, which contains the desired functionality of the final
version, but which needs to be tested for bugs (which ideally should be removed before the
final version is released).
Security Testing: Testing which confirms that the program can restrict access to authorized
personnel and that the authorized personnel can access the functions available to their security
level.
Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work.
Originated in the hardware testing practice of turning on a new piece of hardware for the first
time and considering it a success if it does not catch on fire.
Soak Testing: Running a system at high load for a prolonged period of time. For example,
running several times more transactions in an entire day (or night) than would be expected in a
busy day, to identify any performance problems that appear after a large number of
transactions have been executed.
Software Requirements Specification: A deliverable that describes all data, functional and
behavioural requirements, all constraints, and all validation requirements for software.
Storage Testing: Testing that verifies the program under test stores data files in the correct
directories and that it reserves sufficient space to prevent unexpected termination resulting
from lack of space. This is external storage as opposed to internal storage.
System Testing: Testing that attempts to discover defects that are properties of the entire
system rather than of its individual components.
Testing:
The process of exercising software to verify that it satisfies specified requirements and to detect
errors.
The process of analysing a software item to detect the differences between existing and
required conditions (that is, bugs), and to evaluate the features of the software item.
Test Bed: An execution environment configured for testing. May consist of specific hardware,
OS, network topology, configuration of the product under test, other application or system
software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
Test Case:
Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing.
A Test Case will consist of information such as requirements testing, test steps, verification
steps, prerequisites, outputs, test environment, etc.
A set of inputs, execution preconditions, and expected outcomes developed for a particular
objective, such as to exercise a particular program path or to verify compliance with a specific
requirement.
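A test case can be captured as a simple structured record holding the fields listed above. The field names and values in this sketch are illustrative, not a mandated schema.

```python
# A test case captured as a plain record, following the fields listed
# above. Names and values are illustrative, not a mandated schema.
test_case = {
    "id": "TC-001",
    "requirement": "User can log in with valid credentials",
    "preconditions": ["User account exists"],
    "steps": ["Open login page", "Enter username and password", "Submit"],
    "expected_outcome": "User is taken to the home page",
    "environment": "staging",
}
```

Capturing test cases as structured records like this makes it straightforward to count them, trace them to requirements, or feed them to an automated test tool.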
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.
Test Environment: The hardware and software environment in which tests will be run, and any
other software with which the software under test interacts when under test including stubs
and test drivers.
Test Harness: A program or test tool used to execute tests. Also known as a Test Driver.
Test Plan: A document describing the scope, approach, resources, and schedule of intended
testing activities. It identifies test items, the features to be tested, the testing tasks, who will do
each task, and any risks requiring contingency planning.
Test Procedure: A document providing detailed instructions for the execution of one or
more test cases.
Test Scenario: Definition of a set of test cases or test scripts and the sequence in which they are
to be executed.
Test Script: Commonly used to refer to the instructions for a particular test that will be carried
out by an automated test tool.
Test Specification: A document specifying the test approach for a software feature or
combination of features and the inputs, predicted results and execution conditions for the
associated tests.
Test Suite: A collection of tests used to validate the behaviour of a product. The scope of a Test
Suite varies from organization to organization. There may be several Test Suites for a particular
product for example. In most cases however a Test Suite is a high level concept, grouping
together hundreds or thousands of tests related by what they are intended to test.
Test Tools: Computer programs used in the testing of a system, a component of the system, or
its documentation.
Top Down Testing: An approach to integration testing where the component at the top of the
component hierarchy is tested first, with lower level components being simulated by stubs.
Tested components are then used to test lower level components. The process is repeated until
the lowest level components have been tested.
Traceability Matrix: A document showing the relationship between Test Requirements and Test
Cases.
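A traceability matrix can be represented as a simple mapping from requirements to the test cases that cover them, which also makes uncovered requirements easy to detect. The identifiers below are illustrative.

```python
# A traceability matrix as a mapping from requirements to the test
# cases that cover them (identifiers are illustrative).
traceability = {
    "REQ-1": ["TC-001", "TC-002"],
    "REQ-2": ["TC-003"],
    "REQ-3": [],  # no covering test case: flagged below
}

# Any requirement with no test cases is a coverage gap.
uncovered = [req for req, cases in traceability.items() if not cases]
# uncovered -> ["REQ-3"]
```

In practice the matrix is usually kept in a spreadsheet or test-management tool, but the relationship it records is exactly this mapping.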
Usability Testing: Testing the ease with which users can learn and use a product.
Use Case: The specification of tests that are conducted from the end-user perspective. Use
cases tend to focus on operating software as an end-user would conduct their day-to-day
activities.
Validation: The process of evaluating software at the end of the software development process
to ensure compliance with software requirements. The techniques for validation are testing,
inspection and reviewing.
Verification: The process of determining whether or not the products of a given phase of the
software development cycle meet the implementation steps and can be traced to the incoming
objectives established during the previous phase. The techniques for verification are testing,
inspection and reviewing.
Volume Testing: Testing which confirms that any values that may become large over time (such
as accumulated counts, logs, and data files), can be accommodated by the program and will not
cause the program to stop working or degrade its operation in any manner.
White Box Testing: Testing based on an analysis of internal workings and structure of a piece of
software. Includes techniques such as Branch Testing and Path Testing. Also known as
Structural Testing and Glass Box Testing. Contrast with Black Box Testing.
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are
expected to be utilized by the end-user.
Recording test execution results is a very important part of testing. Whenever a test execution
cycle is complete, the tester should produce a complete test results report that includes the
pass/fail status of each test in the cycle.
If testing is done manually, the pass/fail results should be captured in an Excel sheet; if
automated testing is done using an automation tool, the HTML or XML reports it produces should
be provided to stakeholders as a test deliverable.
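A pass/fail report of the kind described above can also be generated programmatically. This sketch writes a CSV-style sheet plus a one-line summary; the result data is invented for illustration.

```python
import csv
import io

# Test cycle results (illustrative data).
results = [
    {"test": "TC-001", "status": "Pass"},
    {"test": "TC-002", "status": "Fail"},
    {"test": "TC-003", "status": "Pass"},
]

def write_report(results):
    # Produce the CSV-style pass/fail sheet described above; in practice
    # it would be written to a file and handed to stakeholders.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["test", "status"])
    writer.writeheader()
    writer.writerows(results)
    passed = sum(r["status"] == "Pass" for r in results)
    summary = f"{passed}/{len(results)} tests passed"
    return out.getvalue(), summary

report, summary = write_report(results)
# summary -> "2/3 tests passed"
```

Automation tools typically emit equivalent information as HTML or XML (for example JUnit-style XML), but the content of the deliverable is the same: each test, its status, and an overall summary.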
Test Plan: Communicate with the rest of the project teams when a test plan is created or when a
major change is made to it.
Test documentation: Let all the teams know when the design of the tests, data gathering and
other activities have begun, and also when they are finished. This report not only lets them know
about the progress of the task but also signals to the teams that need to review and sign off on
the artefacts that they are up next.
3.1 Review the testing process for a computer program against organisation policy and
procedures.
OUTCOME RANGE
Test deliverables, Review of test incident report, Pass/Fail criteria, Contingencies, QA Approval.
A programme review is "a process or meeting during which a software product is examined by
project personnel, managers, users, customers, user representatives, or other interested parties for
comment or approval".
In this context, the term "software product" means "any technical document or partial document,
produced as a deliverable of a software development activity", and may include documents such as
contracts, project plans and budgets, requirements documents, specifications, designs, source code,
user documentation, support and maintenance documentation, test plans, test specifications,
standards, and any other type of specialist work product.
Software peer reviews are conducted by the author of the work product, or by one or more
colleagues of the author, to evaluate the technical content and/or quality of the work.
Code review is systematic examination (often as peer review) of computer source code.
Pair programming is a type of code review where two persons develop code together at the
same workstation.
Inspection is a very formal type of peer review where the reviewers are following a well-
defined process to find defects.
Walkthrough is a form of peer review where the author leads members of the development
team and other interested parties through a software product and the participants ask
questions and make comments about defects.
Technical review is a form of peer review in which a team of qualified personnel examines
the suitability of the software product for its intended use and identifies discrepancies from
specifications and standards.
"Formality" identifies the degree to which an activity is governed by agreed (written) rules. Software
review processes exist across a spectrum of formality, with relatively unstructured activities such as
"buddy checking" towards one end of the spectrum, and more formal approaches such as
walkthroughs, technical reviews, and software inspections, at the other.
Research studies tend to support the conclusion that formal reviews greatly outperform informal
reviews in cost-effectiveness. Informal reviews may often be unnecessarily expensive (because of
time-wasting through lack of focus), and frequently provide a sense of security which is quite
unjustified by the relatively small number of defects actually found and repaired.
Differing types of review may apply this structure with varying degrees of rigour, but all activities are
mandatory for inspection:
[Entry evaluation]: The Review Leader uses a standard checklist of entry criteria to ensure
that optimum conditions exist for a successful review.
Planning the review: The Review Leader identifies or confirms the objectives of the review,
organises a team of Reviewers, and ensures that the team is equipped with all necessary
resources for conducting the review.
[Group] Examination: The Reviewers meet at a planned time to pool the results of their
preparation activity and arrive at a consensus regarding the status of the document (or
activity) being reviewed.
Rework/follow-up: The Author of the work product (or other assigned person) undertakes
whatever actions are necessary to repair defects or otherwise satisfy the requirements
agreed to at the Examination meeting. The Review Leader verifies that all action items are
closed.
[Exit evaluation]: The Review Leader verifies that all activities necessary for successful
review have been accomplished, and that all outputs appropriate to the type of review have
been finalised.
The most obvious value of software reviews (especially formal reviews) is that they can identify
issues earlier and more cheaply than they would be identified by testing or by field use (the defect
detection process). The cost to find and fix a defect by a well-conducted review may be one or two
orders of magnitude less than when the same defect is found by test execution or in the field.
This is particularly the case for peer reviews if they are conducted early and often, on samples of
work, rather than waiting until the work has been completed. Early and frequent reviews of small
work samples can identify systematic errors in the Author's work processes, which can be corrected
before further faulty work is done. This improvement in Author skills can dramatically reduce the
time it takes to develop a high-quality technical document, and dramatically decrease the error-rate
in using the document in downstream processes.
As a general principle, the earlier a technical document is produced, the greater will be the impact of
its defects on any downstream activities and their work products. Accordingly, greatest value will
accrue from early reviews of documents such as marketing plans, contracts, project plans and
schedules, and requirements specifications. Researchers and practitioners have shown the
effectiveness of the review process in finding bugs and security issues.