
LEARNER GUIDE

Gathering Techniques for


Computer System Development
Module 5

Unit Standard 115358


Unit Standard 115392
Unit Standard 115384

Revision: 1.0 Date: 16 June 2018


Learner Information:
Details Please Complete this Section
Name & Surname:
Organisation:
Unit/Dept:
Facilitator Name:
Date Started:
Date of Completion:

Apply information gathering techniques for computer
system development (US115358)
Unit Std # 115358
NQF Level 5
Notional hours 70
Credit(s) 7
Field Field 03 - Business, Commerce and Management Studies
Sub-Field Human Resources
Qualification National Certificate: Information Technology (Systems
Development) LEVEL 5 - SAQA 48872 - 131 CREDITS

The Learner guide

At the end of this unit standard you will be able to apply information gathering techniques
for computer system development.

Purpose:
People credited with this unit standard are able to: 

 Design and conduct an interview for gathering information for computer system
development.
 Design and perform an analysis of the results from a questionnaire for gathering
information for computer system development
 Gather data from documents for computer system development
 Observe a person's behaviour for gathering information for computer system
development
 Consolidate the information gathered via different techniques.

Specific outcome:
 People credited with this unit standard are able to:
 Design and conduct an interview for gathering information for computer system
development.
 Design and perform an analysis of the results from a questionnaire for gathering
information for computer system development
 Gather data from documents for computer system development
 Observe a person's behaviour for gathering information for computer system
development
 Consolidate the information gathered via different techniques.

Learning assumed to be in place:

 The credit value of this unit is based on a person having the prior knowledge and
skills to:
 Demonstrate literacy skills at least at NQF level 4.
 Demonstrate PC competency skills (End-User Computing modules up to level 3).

Equipment needed:
Learning material, Learner workbook, Pen, Ruler.
PLEASE NOTE: THE USE OF PENCILS OR TIPPEX IS NOT ALLOWED.
IF YOU USE A PENCIL THE VALIDITY OF YOUR WORK COULD BE QUESTIONABLE, AND THIS
COULD LEAD TO FRAUD.

Resources (selected resources might be used, depending on the facilitator and venue
circumstances); one or all of the following can be used:
 Your facilitator/mentor
 Learning material
 Learner workbook
 Visual aids
 White board
 Flip chart
 Equipment
 Training venue

Venue, Date and Time:
Consult your facilitator should there be any changes to the venue, date and/or time.
Refer to your timetable.

Assessments:
The only way to establish whether you are competent and have accomplished the specific
outcomes is through continuous assessments.
The given exercises can contain one or more of the following:
 Information for you to read
 Exercises that require you to have a problem-solving approach to communication
 Questions for you to answer
 Case studies with questions that follow

How to do the exercise:

 The facilitator will tell you which exercise you need to complete each day.
 You need to hand in your answers to the facilitator, who will mark them for correctness.
 If you do not know the answer, you will have to go back to that particular section in
your learner guide and go over it again.
 Ask the facilitator for help, if you do not understand any of the questions asked.
 Always remember to give reasons for your answers

Table of contents

Specific Outcome 1 : Design and conduct an interview for gathering information for
computer system development.
 The interviewee reports that he or she has understood the interview objectives.
 The interviewee reports that he or she has understood the interview questions.
 The interview provides answers that meet interview objectives.
 The presentation of the interview is appropriate to the interviewee.
Specific Outcome 2 : Design and perform an analysis of the results from a questionnaire
for gathering information.
 The respondents report that they understand the questionnaire objectives.
 The respondents report that they understand the questions.
 The questionnaire responses provide answers that meet questionnaire objectives.
 The presentation of the questionnaire is appropriate to the target population.
 A summary of questionnaire responses, and a comparison with expected responses,
allows summary statements to be made about the population sample.
Specific Outcome 3 : Gather data from documents for computer system development.
 Research notes identify data that meet the specified information requirements using
an industry recommended format.
 Research notes identify the characteristics of the data and the relationships between
data items.
 Research notes identifying data items facilitate access to those data items.
Specific Outcome 4 : Observe a person's behaviour for gathering information for
computer system development
 A record of the behaviour identifies events that meet the specified information
requirements, and outlines those events.
 A report about the observation compares the outcome of the observation with the
observation objectives.
Specific Outcome 5 : Consolidate the information gathered via different techniques.
 The comparison identifies agreement and differences between the information
gathered from different techniques.
 Differences are resolved and justified by reviewing the information gathering
techniques.

SPECIFIC OUTCOME 1:

Design and conduct an interview for gathering information for computer system development.

ASSESSMENT CRITERIA
 The interviewee reports that he or she has understood the interview objectives.
 The interviewee reports that he or she has understood the interview questions.
 The interview provides answers that meet interview objectives.
 The presentation of the interview is appropriate to the interviewee.

1.1 What is an interview?

When you're watching the news at night or reading the paper in the morning, you'll notice that all
the stories have a point in common: They all contain interviews. No matter what subject is being
tackled, there'll always be people willing to be interviewed about it. And that's great, because that
way we can get a sample of what people think and feel about different issues.

Interviews are usually defined as a conversation with a purpose. They can be very helpful to your
organization when you need information about assumptions and perceptions of activities in your
community. They're also great if you're looking for in-depth information on a particular topic from
an expert. (If what you really need is numerical data--how much and how many--a written
questionnaire may better serve your purposes.)

Interviewing has been described as an art, rather than a skill or science. In other cases, it has been
described as a game in which the interviewee gets some sort of reward, or simply as a technical skill
you can learn. But, no matter how you look at it, interviewing is a process that can be mastered by
practice. This chapter will show you how.

Why should you conduct interviews?

Using an interview is the best way to have an accurate and thorough communication of ideas
between you and the person from whom you're gathering information. You have control of the
question order, and you can make sure that all the questions will be answered.

In addition, you may benefit from the spontaneity of the interview process. Interviewees don't
always have the luxury of going away and thinking about their responses or, even to some degree,
censoring their responses. You may find that interviewees will blurt things out that they would never
commit to on paper in a questionnaire.

When interviews are not the best option:

Interviews are not the only way of gathering information and, depending on the case, they may not
even be appropriate or efficient. For example, large-scale phone interviews can be time-consuming
and expensive. Mailed questionnaires may be the best option in cases where you need information
from a large number of people. Interviews aren't efficient either when all you need to collect is
straight numerical data; asking your respondents to fill out a form may be more appropriate.

Interviews will not be suitable if respondents are unwilling to cooperate. If your interviewees
have something against you or your organization, they will not give you the answers you want and
may even mess up your results. When people don't want to talk, setting up an interview is a waste of
time and resources. You should, then, look for a less direct way of gathering the information you
need.

Problems with interviews:

You must also be well prepared for traps that might arise from interviews. For example, your
interviewee may have a personal agenda, and he or she may try to push the interview in a way that
benefits their own interests. The best solution is to become aware of your interviewee's inclinations
before arranging the interview.

Sometimes, the interviewee exercises his or her control even after the interview is done, asking to
change or edit the final copy. That should be a right of the interviewer only. If the subject you're
addressing involves technical information, you may have the interviewee check the final result for
you, just for accuracy.

Whom should you interview?

Your choice of interviewees will, obviously, be influenced by the nature of the information you need.
For example, if you're trying to set up a volunteer program for your organization, you may want to
interview the volunteer coordinator at one or two other successful agencies for ideas for your
program.

On the other hand, if you're taking a look at the community's response to an ad campaign you've
been running, you'll want to identify members of the target audience to interview. In this case, a
focus group can be extremely useful.

If you're reluctant to contact a stranger for an interview, remember that most people enjoy talking
about what they know and are especially eager to share their knowledge with those who are
interested. Demonstrate interest and your chances of getting good interviews will improve.

How should you conduct interviews?

Sometimes, being a good interviewer is described as an innate ability or quality possessed by only
some people and not by others. Certainly, interviewing may come more easily to some people than
to others, but anybody can learn the basic strategies and procedures of interviewing. We're here to
show you how.

Interview structure:

First you should decide how structured you want your interview to be. Interviews can be formally
structured, loosely structured, or not structured at all. The style of interviewing you will adopt will
depend on the kind of result you're looking for.

In a highly structured interview, you simply ask subjects to answer a list of questions. To get a valid
result, you should ask all subjects identical questions. In an interview without a rigid structure, you
can create and ask questions appropriate to the situations that arise and to the central purpose of the
interview. There's no predetermined list of questions to ask. Finally, in a semi-structured setting,
there is a list of predetermined questions, but interviewees are allowed to digress.

1.2 Types of interviews:

Now that you've decided how structured you want the interview to be, it's time to decide how you
want to conduct it. Can you do it over the phone, or do you need to do it face-to-face? Would a
focus group be most appropriate? Let's look at each of these interview types in depth.

Face-to-face interviews

Face-to-face interviews are a great way to gather information. Whether you decide to interview
face-to-face depends on the amount of time and resources you have at your disposal.
Some advantages of interviewing in person are:

 You have more flexibility. You can probe for more specific answers, repeat questions, and
use discretion as to the particular questions you ask.

 You are able to watch nonverbal behavior.

 You have control over the physical environment.

 You can record spontaneous answers.

 You know exactly who is answering.

 You can make sure the interview is complete and all questions have been asked.

 You can use a more complex questionnaire.

However, if face-to-face interviews prove to be too expensive, too time-consuming, or too
inconvenient to be conducted, you should consider some other way of interviewing. For example, if
the information you're collecting is of a sensitive and confidential nature, your respondents may
prefer the comfort of anonymity, and an anonymous questionnaire would probably be more
appropriate.

Telephone interviews

Telephone interviews are also a good way of getting information.

They're particularly useful when the person you want to speak to lives far away and setting up a
face-to-face interview is impractical. Many of the same advantages and disadvantages of face-to-
face interviewing apply here; the exception being, of course, that you won't be able to watch
nonverbal behavior.

Here are some tips to make your phone interview successful:

 Keep phone interviews to no more than about ten minutes--exceptions to this rule may be
made depending on the type of interview you're conducting and on the arrangements
you've made with the interviewee.

 If you need your interviewee to refer to any materials, provide them in advance.

 Be extra motivating on the phone, because people tend to be less willing to become
engaged in conversation over the phone.

 Identify yourself and offer your credentials. Some respondents may be distrustful, thinking
a prank is being played on them.

 If tape-recording the conversation, ask for authorization to do so.

 Write down the information as you hear it; don't trust your memory to write the information
down later.

 Speak loudly and clearly, and vary your pitch -- don't make it another boring phone call.

 Don't call too early in the morning or too late at night, unless arranged in advance.

 Finish the conversation cordially, and thank the interviewee.

With the increasing use of computers as a means of communication, interviews via e-mail have
become popular. E-mail is an inexpensive option for interviewing. The advantages and drawbacks of
e-mail interviews are similar to phone interviews. E-mails are far less intrusive than the phone. You
are able to contact your interviewee, send your questions, and follow up the received answers with
a thank-you message. You may never meet or talk to your respondent.

However, through e-mail your chances for probing are very limited, unless you keep sending
messages back and forth to clarify answers. That's why you need to be very clear about what you
need when you first contact your interviewee. Some people may also resent the impersonal nature
of e-mail interaction, while others may feel more comfortable having time to think about their
answers.

Focus groups

A focus group, led by a trained facilitator, is a particular type of "group interview" that may be very
useful to you. Focus groups consist of groups of people whose opinions you would like to know. They
may be somewhat less structured; however, the input you get is very valuable. Focus groups are
perhaps the most flexible tool for gathering information because you can focus in on getting the
opinions of a group of people while asking open-ended questions that the whole group is free to
answer and discuss. This often sparks debate and conversation, yielding lots of great information
about the group's opinion.

During the focus group, the facilitator is also able to observe the nonverbal communication of the
participants. Although the sample size is generally smaller than some other forms of information
gathering, the free exchange of opinions brought on by the group interaction is an invaluable tool.

1.3 Prepare for the interview

So you've chosen your interviewees, set up the interview, and started to think about interview
questions. You're ready to roll, right?

Not quite. First, you need to make sure you have as much information as possible about your
interview topic. You don't need to be an expert -- after all, that's why you're interviewing people! --
but you do want to be fairly knowledgeable. Having a solid understanding of the topic at hand will
make you feel more comfortable as an interviewer, enhance the quality of the questions you ask,
and make your interviewee more comfortable as well.

In addition, it's important to understand your interviewee's culture and background before you
conduct your interview. This understanding will be reflected in the way you phrase your questions,
your choice of words, your ice-breakers, the way you'll dress, and the material you'll avoid so that
the questions remain inoffensive to your interviewee.

1.4 Conduct the interview

Now that you're prepared, it's time to conduct the interview. Whether calling or meeting someone,
be sure to be on time -- your interviewee is doing you a favor, and you don't want to keep him or her
waiting.

When interviewing someone, start with some small talk to build rapport. Don't just plunge into your
questions -- make your interviewee as comfortable as possible.

Points to remember:

 Practice -- prepare a list of interview questions in advance. Rehearse, try lines, mock-interview
friends. Memorize your questions. Plan the location ahead of time and think about ways to make the
setting more comfortable.

 Small-talk -- never begin an interview cold. Try to put your interviewee at ease and establish
rapport.

 Be natural -- even if you rehearsed your interview time and time again and have all your
questions memorized, make it sound and feel like you're coming up with them right there.

 Look sharp -- dress appropriately for the setting you're in and for the kind of person you're
interviewing. Generally you're safe with business attire, but adapt to your audience. Arrive on
time if you are conducting the interview in person.

 Listen -- present yourself as aware and interested. If your interviewee says something funny, smile.
If it's something sad, look sad. React to what you hear.

 Keep your goals in mind -- remember that what you want is to obtain information. Keep the
interview on track, don't digress too much. Keep the conversation focused on your questions. Be
considerate of your interviewee's limited time.

 Don't take "yes/no" answers -- monosyllabic answers don't offer much information. Ask for
elaboration, probe, ask why. Silence may also yield information. Ask the interviewee to clarify
anything you do not understand.

 Respect -- make interviewees feel like their answers are very important to you (they are
supposed to be!) and be respectful of the time they're donating to help you.

QUESTIONS:

Questions are such a fundamental part of an interview that it's worth taking a minute to look at the
subject in depth. Questions can relate to the central focus of your interview, with to-the-point,
specific answers; they can be used to check the reliability of other answers; they can be used just to
create a comfortable relationship between you and the interviewee; and they can probe for more
complete answers.

It's very important that you ask your questions in a way to motivate the interviewee to answer as
completely and honestly as possible. Avoid inflammatory questions ("Do you always discriminate
against women and minorities, or just some of the time?"), and try to stay polite. And remember to
express clearly what you want to know. Just because interviewer and interviewee speak the same
language, it doesn't mean they'll necessarily understand each other.

There are some problems that can arise from the way you ask a question. Here are several of the
most common pitfalls:

 Questions that put the interviewee on the defensive -- These questions bring up emotional
responses, usually negative. To ask, "Why did you do such a bad thing?" will feel like you are
confronting your interviewee, and he or she will get defensive. Try to ask things in a more
relaxed manner.

 The two-in-one question -- These are questions that ask for two answers in one question. For
instance, "Does your company have a special recruitment policy for women and racial minorities?"
may cause hesitation and indecision in the interviewee. A "yes" would have to cover both groups,
and a "no" neither, even though the respondent may want to answer differently for each. Separate
the issues into two separate questions.

 The complex question -- Questions that are too long, too involved, or too intricate will intimidate
or confuse your interviewee. The subject may not even understand the question in its entirety.
The solution is to break the question down and make it brief and concise.

 In addition, pay attention to the order in which you ask your questions. The arrangement or
ordering of your questions may significantly affect the results of your interview. Try to start the
interview with mild and easy questions to develop a rapport with the interviewee. As the
interview proceeds, move to more sensitive and complex questions.

Interviewing is one of the primary ways to gather information about an information system. A good
systems analyst must be good at interviewing, and no project can be conducted without interviewing.

There are many ways to arrange an effective interview, and no single approach is superior to the
others. However, experienced analysts commonly accept the following best practices for an effective
interview:

 Prepare the interview carefully, including appointment, priming question, checklist, agenda,
and questions.

 Listen carefully and take notes during the interview (tape-record if possible).

 Review your notes within 48 hours after the interview.

 Be neutral.

 Seek diverse views.

Design interview questions

__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________

Self-check

OUTCOME Yes No I Need help


 The interviewee reports that he or she has understood
the interview objectives.
 The interviewee reports that he or she has understood
the interview questions.

SPECIFIC OUTCOME 2 :
Design and perform an analysis of the results from a
questionnaire for gathering information.
ASSESSMENT CRITERIA
 The respondents report that they understand the questionnaire objectives.
 The respondents report that they understand the questions.
 The questionnaire responses provide answers that meet questionnaire objectives.
 The presentation of the questionnaire is appropriate to the target population.
 A summary of questionnaire responses, and a comparison with expected responses, allows
summary statements to be made about the population sample.

2.1 Questionnaire

A questionnaire is a means of eliciting the feelings, beliefs, experiences, perceptions, or attitudes of
some sample of individuals. As a data-collecting instrument, it could be structured or unstructured.

The questionnaire is most frequently a very concise, preplanned set of questions designed to yield
specific information to meet a particular need for research information about a pertinent topic. The
research information is attained from respondents, normally from a related interest area. The
dictionary gives a clearer definition: a questionnaire is a written or printed form used in gathering
information on some subject or subjects, consisting of a list of questions to be submitted to one or
more persons.

Advantages

 Economy - Expense and time involved in training interviewers and sending them to interview
are reduced by using questionnaires.
 Uniformity of questions - Each respondent receives the same set of questions phrased in
exactly the same way. Questionnaires may, therefore, yield data more comparable than
information obtained through an interview.
 Standardization - If the questions are highly structured and the conditions under which they
are answered are controlled, then the questionnaire could become standardized.

Disadvantages

 Respondent’s motivation is difficult to assess, affecting the validity of response.
 Unless a random sampling of returns is obtained, those returned completed may represent
biased samples.

2.2 Factors affecting the percentage of returned questionnaires

 Length of the questionnaire.


 Reputation of the sponsoring agency.
 Complexity of the questions asked.
 Relative importance of the study as determined by the potential respondent.
 Extent to which the respondent believes that his responses are important.
 Quality and design of the questionnaire.
 Time of year the questionnaires are sent out.

The questionnaire is said to be the most "used and abused" method of gathering information by the
lazy man, because it is often poorly organized, vaguely worded, and excessively lengthy.

Two types of questionnaires

 Closed or restricted form - calls for a "yes" or "no" answer, short response, or item checking;
is fairly easy to interpret, tabulate, and summarize.
 Open or unrestricted form - calls for free response from the respondent; allows for greater
depth of response; is difficult to interpret, tabulate, and summarize.
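To see how readily closed-form responses can be tabulated and summarized, here is a minimal Python sketch; the question, the response values, and the variable names are purely illustrative assumptions, not a prescribed format:

from collections import Counter

# Hypothetical closed-form ("yes"/"no") responses to a single questionnaire item.
responses = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes"]

counts = Counter(responses)  # tally each answer category
total = len(responses)

for answer, count in counts.items():
    print(f"{answer}: {count} of {total} ({count / total:.0%})")

A summary like this can then be compared with the expected responses so that summary statements can be made about the population sample.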

Characteristics of a good questionnaire

 Deals with a significant topic, a topic the respondent will recognize as important enough to
justify spending his time in completing. The significance should be clearly stated on the
questionnaire or in the accompanying letter.
 Seeks only that information which cannot be obtained from other sources such as census
data.
 As short as possible, only long enough to get the essential data. Long questionnaires
frequently find their way into wastebaskets.
 Attractive in appearance, neatly arranged, and clearly duplicated or printed.

 Directions are clear and complete, important terms are defined, each question deals with a
single idea, all questions are worded as simply and clearly as possible, and the categories
provide an opportunity for easy, accurate, and unambiguous responses.
 Questions are objective, with no leading suggestions to the desired response.
 Questions are presented in good psychological order, proceeding from general to more
specific responses. This order helps the respondent to organize his own thinking, so that his
answers are logical and objective. It may be wise to present questions that create a
favorable attitude before proceeding to those that may be a bit delicate or intimate. If
possible, annoying or embarrassing questions should be avoided.
 Easy to tabulate and interpret. It is advisable to preconstruct a tabulation sheet, anticipating
how the data will be tabulated and interpreted, before the final form of the question is
decided upon. Working backward from a visualization of the final analysis of data is an
important step in avoiding ambiguity in questionnaire form. If mechanical tabulating
equipment is to be used, it is important to allow code numbers for all possible responses to
permit easy transfer to machine-tabulation cards.
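As a rough illustration of pre-assigning code numbers to all possible responses, a simple mapping is often enough. The coding scheme below is an assumption made up for demonstration, not an industry standard:

# Hypothetical coding scheme: every possible answer gets a numeric code so that
# completed questionnaires can be transferred directly to a tabulation sheet.
MARITAL_STATUS_CODES = {
    "married": 1,
    "single": 2,
    "divorced": 3,
    "separated": 4,
    "widowed": 5,
}

raw_answers = ["married", "divorced", "single", "married"]
coded_answers = [MARITAL_STATUS_CODES[answer] for answer in raw_answers]
print(coded_answers)  # [1, 3, 2, 1]

Note that the categories in such a scheme should be mutually exclusive and exhaustive, which also guards against the "inadequate alternatives" problem discussed later in this outcome.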

2.3 Guides for preparing and administering the questionnaire

 Get all of the help you can in planning and constructing your questionnaire. Study other
questionnaires and submit your own questionnaire to faculty members and class members
for criticism.
 Try your questionnaire out on a few friends or associates. This helps to locate unclear and
vague terms.
 Choose respondents carefully. It is important that questionnaires be sent only to those who
possess the desired information - those who are likely to be sufficiently interested to
respond conscientiously and objectively.
 A preliminary card asking whether or not the individual would be willing to participate in the
proposed study is recommended by some research authorities. This is not only a courteous
approach but a practical way of discovering those who will cooperate in furnishing the
desired information.
 It has also been found that in many instances better response is obtained when the original
request was sent to the administrative head of an organization rather than directly to the
person who had the desired information. It is possible that when a superior officer turns
over a questionnaire to a staff member to fill out there is some implied feeling of obligation.
 If questionnaires are planned for use in public schools, it is imperative that approval of the
project be secured from the principal or superintendent of the school.

 If the desired information is delicate or intimate in nature, one must consider the possibility
of providing anonymous responses. This will result in the most objective responses. If
identity for classification purposes is necessary, the respondent must be convinced that the
information will be held in strictest confidence.
 Try to get the aid of sponsorship. Recipients are more likely to answer if a person,
organization, or institution of prestige has endorsed the project.
 Be sure to include a courteous, carefully constructed cover letter to explain the purpose of
the study.
 Some recipients are slow to return questionnaires. A courteous post card reminding an
individual that the questionnaire has not been received will often bring in some additional
responses.
 An important point to remember is that questionnaires should be used only after all other
sources on the topic to be researched have been thoroughly examined.

Rules for proper construction of a questionnaire

 Define or qualify terms that could easily be misunderstood or misinterpreted.


 What is the value of the tools in your Vo-Ag shop? (Replacement, present, market, teaching
value, etc.)
 What are you doing now? (Filling out your stupid questionnaire.)
 Be careful with descriptive adjectives and adverbs that have no agreed upon meaning, such
as frequently, occasionally, and rarely (one person’s rarely may be another person’s
frequently).
 Beware of double negatives.
 Are you opposed to not requiring students to take showers after gym classes?
 Are you in favor of not offering Vo. Ag. IV in your Agriculture Program?
 (One must study these questions carefully or answer improperly.)

Be careful of inadequate alternatives.

Married Yes ____ No ____

Employed Yes ____ No ____

(There are other possible answers that these questions do not allow for, such as divorced, separated,
on strike, etc.)

Evaluation of a Questionnaire or Interview Script

Is the question necessary? How will it be used? What answers will it provide? How will it be tabulated,
analyzed, and interpreted?
Are several questions needed instead of one?
Do the respondents have the information or experience necessary to answer the questions?
Is the question clear?
Is the question loaded in one direction? Biased? Emotionally toned?
Will the respondents answer the question honestly?
Will the respondents answer the question?
Is the question misleading because of unstated assumptions?
Is the best type of answer solicited?
Is the wording of the question likely to be objectionable to the respondents?
Is a direct or indirect question best?
If a checklist is used, are the possible answers mutually exclusive, or should they be?
If a checklist is used, are the possible answers "exhaustive"?
Is the answer to a question likely to be influenced by preceding questions?
Are the questions in psychological order?
Is the respondent required to make interpretations of quantities, or does the respondent give data
which the investigator must interpret?
 

Summary

Questionnaires have the advantage of gathering information from many people in a relatively short
time and of being less biased in the interpretation of their results. Choosing the right respondents
and designing effective questionnaires are the critical issues in this information collection method.
People usually use only part of the functions of a system, so they are familiar with only part of the
system's functions or processes. In most situations, one questionnaire obviously cannot fit all the
users. To conduct an effective survey, the analyst should group the users properly and design
different questionnaires for different groups. Moreover, the ability to build good questionnaires is a
skill that improves with practice and experience. When designing questionnaires, the analyst should
consider at least the following issues:

 The ambiguity of questions.

 Consistency of respondents' answers.

 What kind of question should be applied, open-ended or close-ended?

 What is the proper length of the questionnaire?

A third technique is directly observing users. People are not always very reliable informants, even
when they try to be reliable and tell what they think is the truth. People often do not have a
completely accurate appreciation of what they do or how they do it. This is especially true concerning
infrequent events, issues from the past, or issues about which people feel considerable passion. Since
people cannot always be trusted to reliably interpret and report their own actions, analysts can
supplement and corroborate what people say by watching what they do or by obtaining relatively
objective measures of how people behave in work situations. However, observation can cause people
to change their normal behaviour, which can bias the gathered information.

Self-check

OUTCOME Yes No I Need help


 The respondents report that they understand the
questionnaire objectives.
 The respondents report that they understand the
questions.
 The questionnaire responses provide answers that meet
questionnaire objectives.

SPECIFIC OUTCOME 3:
Gather data from documents for computer system development.
ASSESSMENT CRITERIA
 Research notes identify data that meet the specified information requirements using an
industry recommended format.
 Research notes identify the characteristics of the data and the relationships between data
items.
 Research notes identifying data items facilitate access to those data items.

3.1 Gather data from documents for computer system development

A fourth technique is analyzing procedures and other documents. By examining existing system and
organizational documentation, the systems analyst can find out details about the current system and
the organization it supports. In documents, analysts can find information such as problems with
existing systems, opportunities to meet new needs if only certain information or information
processing were available, organizational directions that can influence information system
requirements, and the reasons why current systems are designed as they are.

However, when analyzing official documentation, analysts should pay attention to the difference
between the system described in the official documentation and the practical system in the real
world. Because of inadequacies in formal procedures, individual work habits and preferences,
resistance to control, and other factors, a difference between the so-called formal system and the
informal system almost always exists.

Sources of Requirements

Good requirements start with good sources. Finding those quality sources is an important task and,
fortunately, one that takes few resources. Examples of sources of requirements include:

 Customers

 Users

 Administrators and maintenance staff

 Partners

 Domain Experts

 Industry Analysts

 Information about competitors 

Requirements Gathering Techniques

After you have identified these sources, there are a number of techniques that may be used to
gather requirements. The following will describe the various techniques, followed by a brief
discussion of when to use each technique.

To get the requirements down on paper, you can do one or more of the following:

 Conduct a brainstorming session

 Interview users

 Send questionnaires

 Work in the target environment

 Study analogous systems

 Examine suggestions and problem reports

 Talk to support teams

 Study improvements made by users

 Look at unintended uses

 Conduct workshops

 Demonstrate prototypes to stakeholders

The best idea is to get the requirements down quickly and then to encourage the users to correct
and improve them. Put in those corrections, and repeat the cycle. Do it now, keep it small, and
correct it at once. Start off with the best structure you can devise, but expect to keep on correcting it
throughout the process.  Success tips: Do it now, keep it small, and correct it immediately.

3.2 Some of the things you might do with the information you collect include:

 Gathering together information from all sources and observations

 Making photocopies of all recording forms, records, audio or video recordings, and any other
collected materials, to guard against loss, accidental erasure, or other problems

 Entering narratives, numbers, and other information into a computer program, where they can
be arranged and/or worked on in various ways

 Performing any mathematical or similar operations needed to get quantitative information ready
for analysis. These might, for instance, include entering numerical observations into a chart, table,
or spreadsheet, or figuring the mean (average), median (midpoint), and/or mode (most frequently
occurring) of a set of numbers, as shown in the sketch after this list.

 Transcribing (making an exact, word-for-word text version of) the contents of audio or video
recordings

 Coding data (translating data, particularly qualitative data that isn’t expressed in numbers, into a
form that allows it to be processed by a specific software program or subjected to statistical
analysis)

 Organizing data in ways that make them easier to work with.  How you do this will depend on
your research design and your evaluation questions. You might group observations by the
dependent variable (indicator of success) they relate to, by individuals or groups of participants,
by time, by activity, etc. You might also want to group observations in several different ways, so
that you can study interactions among different variables.
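The basic descriptive statistics mentioned in the list above are available in Python's standard library; this is a minimal sketch using made-up observation values:

import statistics

# Hypothetical set of numerical observations (for example, test scores).
observations = [12, 15, 15, 18, 20, 22, 15, 19]

print("mean:", statistics.mean(observations))      # average
print("median:", statistics.median(observations))  # midpoint
print("mode:", statistics.mode(observations))      # most frequently occurring value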

3.3 Who should collect and analyze the data?

The “who” question can be more complex. If you’re reasonably familiar with statistics and statistical
procedures, and you have the resources in time, money, and personnel, it’s likely that you’ll do a
somewhat formal study, using standard statistical tests. (There’s a great deal of software – both for
sale and free or open-source – available to help you.)

If that’s not the case, you have some choices:

 You can hire or find a volunteer outside evaluator, such as from a nearby college or university, to
take care of data collection and/or analysis for you.

 You can conduct a less formal evaluation. Your results may not be as sophisticated as if you
subjected them to rigorous statistical procedures, but they can still tell you a lot about your
program.  Just the numbers – the number of dropouts (and when most dropped out), for
instance, or the characteristics of the people you serve – can give you important and usable
information.

 You can try to learn enough about statistics and statistical software to conduct a formal
evaluation yourself. (Take a course, for example.)

 You can collect the data and then send it off to someone – a university program, a friendly
statistician or researcher, or someone you hire – to process it for you.

 You can collect and rely largely on qualitative data.  Whether this is an option depends to a large
extent on what your program is about. You wouldn’t want to conduct a formal evaluation of
effectiveness of a new medication using only qualitative data, but you might be able to draw
some reasonable conclusions about use or compliance patterns from qualitative information.

 If possible, use a randomized or closely matched control group for comparison. If your control is
properly structured, you can draw some fairly reliable conclusions simply by comparing its
results to those of your intervention group.  Again, these results won’t be as reliable as if the
comparison were made using statistical procedures, but they can point you in the right
direction.  It’s fairly easy to tell whether or not there’s a major difference between the numbers
for the two or more groups.  If 95% of the students in your class passed the test, and only 60% of
those in a similar but uninstructed control group did, you can be pretty sure that your class made
a difference in some way, although you may not be able to tell exactly what it was that
mattered. By the same token, if 72% of your students passed and 70% of the control group did
as well, it seems pretty clear that your instruction had essentially no effect, if the groups were
starting from approximately the same place.
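The comparison described in the last point needs nothing more than two proportions. The sketch below reuses the hypothetical pass rates from the example; the group sizes are invented for illustration:

# Hypothetical results: your class (intervention) vs. an uninstructed control group.
class_passed, class_total = 57, 60      # roughly the 95% pass rate from the example
control_passed, control_total = 36, 60  # roughly the 60% pass rate from the example

class_rate = class_passed / class_total
control_rate = control_passed / control_total

print(f"class: {class_rate:.0%}, control: {control_rate:.0%}")
print(f"difference: {class_rate - control_rate:+.0%}")  # a large gap suggests a real effect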

Who should actually collect and analyze data also depends on the form of your evaluation.   If you’re
doing a participatory evaluation, much of the data collection - and analyzing - will be done by
community members or program participants themselves. If you’re conducting an evaluation in
which the observation is specialized, the data collectors may be staff members, professionals, highly
trained volunteers, or others with specific skills or training (graduate students, for example).  
Analysis also could be accomplished by a participatory process. Even where complicated statistical
procedures are necessary, participants and/or community members might be involved in sorting out
what those results actually mean once the math is done and the results are in. Another way analysis
can be accomplished is by professionals or other trained individuals, depending upon the nature of
the data to be analyzed, the methods of analysis, and the level of sophistication aimed at in the
conclusions.

Self-check

OUTCOME Yes No I Need help


 Research notes identify data that meet the specified
information requirements using an industry
recommended format.
 Research notes identify the characteristics of the data
and the relationships between data items.
 Research notes identifying data items facilitate access to those data items.

SPECIFIC OUTCOME 4:
Observe a person's behaviour for gathering information for
computer system development
ASSESSMENT CRITERIA
 A record of the behaviour identifies events that meet the specified information requirements,
and outlines those events.
 A report about the observation compares the outcome of the observation with the
observation objectives.

4.1 Observing Behavior Using A-B-C Data

All members of the student’s individualized education program (IEP) can observe behavior to learn
about patterns and functions of behavior. Everyone who observes behavior probably looks for
similar characteristics of autism spectrum disorders (e.g., communication challenges, social deficits,
restricted areas of interest, sensory needs, etc.) and the impact on behavior. How information is
gathered may be different for each person collecting the data and depending on the complexity of
the situation. One format involves directly observing and recording situational factors surrounding a
problem behavior using an assessment tool called ABC data collection. An ABC data form is an
assessment tool used to gather information that should evolve into a positive behavior support plan.
ABC refers to:

 Antecedent- the events, action, or circumstances that occur before a behavior.

 Behavior- The behavior.

 Consequences- The action or response that follows the behavior.

The following is an example of ABC data collection. ABC is considered a direct observation format
because you have to be directly observing the behavior when it occurs. Typically it is a format that is
used when an external observer is available who has the time and ability to observe and document
behaviors during specified periods of the day. It is time and personnel intensive. From this data, we
can see that when Joe is asked to end an activity he is enjoying (we know that he enjoys playing
computer games), he screams, refuses to leave, and ignores. We also can see that the response to
Joe’s refusal consists mostly of empty threats. If we follow Joe throughout the day, we may find that
he is asked repeatedly to follow directions. In addition, the data reveal that Joe's family uses threats
that are not followed through. Joe has learned that persistence, ignoring, and refusal will wear his
parents down.

Antecedent: Parent asks Joe to stop playing on the computer.
Behavior: Joe screams, "NO!" and refuses to leave the computer.
Consequence: Parent tells Joe to leave the computer again.

Antecedent: Parent tells Joe to leave the computer.
Behavior: Joe again refuses to leave.
Consequence: Parent starts counting to 10 as a warning to get off the computer.

Antecedent: Parent starts counting to 10 as a warning to get off the computer.
Behavior: Joe does not move from the computer station.
Consequence: Parent finishes counting to 10 and again warns him to get off the computer.

Antecedent: Parent finishes counting to 10 and again warns him to get off the computer.
Behavior: Joe stays at the computer and refuses to leave.
Consequence: Parent threatens that Joe will lose computer privileges in the future.

Antecedent: Parent threatens that Joe will lose computer privileges in the future.
Behavior: Joe ignores and continues working on the computer.
Consequence: The parent counts to 10 again and again threatens future computer use.

Antecedent: The parent counts to 10 again and again threatens future computer use.
Behavior: Joe ignores and continues computer use.
Consequence: The parent becomes angry and leaves the room.
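If ABC observations were captured electronically rather than on a paper form, a simple record with the same three fields would be enough. The class and field names below are assumptions for illustration only; they are not part of any standard ABC tool:

from dataclasses import dataclass
from collections import Counter

@dataclass
class ABCRecord:
    antecedent: str   # what happened immediately before the behaviour
    behaviour: str    # the observed behaviour itself
    consequence: str  # the action or response that followed

observations = [
    ABCRecord("Parent asks Joe to stop playing on the computer.",
              'Joe screams, "NO!" and refuses to leave the computer.',
              "Parent tells Joe to leave the computer again."),
    ABCRecord("Parent tells Joe to leave the computer.",
              "Joe again refuses to leave.",
              "Parent starts counting to 10 as a warning."),
]

# A simple tally of how often each consequence appears helps reveal patterns.
print(Counter(record.consequence for record in observations))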

While it is important to look at both the antecedents and the form of the behavior, the focus of this
article is on the consequence portion of the data collection. Examine the consequence portion of the
data collection form when identifying those responses that both increase and decrease problem
behavior. For example, if attention seems to increase problem behavior, then it may be important to
teach the individual to get attention in a more appropriate fashion or to use attention for positive
behaviors. If escape from a difficult task seems to be a consistent theme in the consequence section,
then it may be important to either change the task or to teach the child to ask for help. And we may
choose to use downtime as a reinforcer. Our responses should always focus on strengthening
desired behavior, promoting the use of the replacement behavior, and decreasing the occurrence of
the problem behavior (Sugai et al., 2000). An important aspect of this process is understanding
those responses or consequences that maintain, and either enhance or decrease, behavior over time.

Assessment is the key to developing an effective program and tracking the progress of individuals.
Yet there are barriers in collecting the data such as time, remembering in a crisis situation, and being
consistent. We can overcome these barriers by planning ahead, matching collection strategies to the
setting, and simplifying the data collection chart. Remember anyone (e.g., parents, educators,
teachers, support personnel, administrators) can take the data when given clear direction and
parameters. Here is an example taken from what Joe’s parents know about his situation at home
using the ABC approach. Notice the responses have already been established on the form. These are
the responses that are typically identified as motivating behavior. While this system may be more
efficient, you will note that much of the richness of the narrative is missing.

Antecedent: Parent asks Joe to stop playing on the computer.
Behavior: Joe screams, "NO!" and refuses to leave the computer.
Consequence:  Sensory Feedback   Escape   Attention

Antecedent: Parent tells Joe to leave the computer.
Behavior: Joe again refuses to leave.
Consequence:  Sensory Feedback   Escape   Attention

Antecedent: Parent starts counting to 10 as a warning to get off the computer.
Behavior: Joe does not move from the computer station.
Consequence:  Sensory Feedback   Escape   Attention

Antecedent: Parent finishes counting to 10 and again warns him to get off the computer.
Behavior: Joe stays at the computer and refuses to leave.
Consequence:  Sensory Feedback   Escape   Attention

Antecedent: Parent threatens that Joe will lose computer privileges in the future.
Behavior: Joe ignores and continues working on the computer.
Consequence:  Sensory Feedback   Escape   Attention

Antecedent: The parent counts to 10 again and again threatens future computer use.
Behavior: Joe ignores and continues computer use.
Consequence:  Sensory Feedback   Escape   Attention

Sometimes the ABC data collection form is used to document a behavior incident. Remember that
this type of form will give you limited data and focuses heavily on negative behaviors. However, it is
easier to use when someone is not available to do more in-depth observing. In truth, the ABC data
collection should not be used just to document behavior incidents. It is best used as a narrative
during a specified time of the day. Equally important is to document those conditions that surround
positive behaviors. By documenting these, professionals and family members can identify effective
strategies that can be replicated.

Once accurate and sufficient data are collected, placements, planning, modifications, instruction, and
feedback are easier, more valid, and more effective (Morton & Lieberman, 2006). ABC data collection
can be used for all individuals with behavior issues at home and in school, not just individuals on the
autism spectrum.

References

Sugai, G., Horner, R.H., Dunlap, G., Hieneman, M., Nelson, C.M., Scott, T., Liaupsin, C., Sailor, W.,
Turnbull, A.P., Turnbull III, H.R., Wickham, D., Wilcox, B., & Ruef, M. (2000). Applying positive
behavior support and functional behavioral assessment in schools. Journal of Positive Behavior
Interventions, 2(3), 131-143.

Morton & Lieberman (2006). Strategies for collecting data in physical education. Teaching Elementary
Physical Education, 17(4), 28-31.

Pratt, C., & Dubie, M. (2008). Observing behavior using a-b-c data. The Reporter, 14(1), 1-4.

Self-check

OUTCOME Yes No I Need help

 A record of the behaviour identifies events that meet
the specified information requirements, and outlines
those events.
 A report about the observation compares the
outcome of the observation with the observation
objectives.

SPECIFIC OUTCOME 5:
Consolidate the information gathered via different techniques.
ASSESSMENT CRITERIA
 The comparison identifies agreement and differences between the information gathered from
different techniques.
 Differences are resolved and justified by reviewing the information gathering techniques.

5.1 Which Technique to Apply?

Which technique to apply depends on a number of factors, such as:

 Availability and location of stakeholders

 Development team knowledge of the problem domain

 Customers' and users' knowledge of the problem domain

 Customers' and users' knowledge of the development process and methods

If the stakeholders are not co-located or readily available, for example in the case of a product being
developed for mass market, techniques such as brainstorming, interviews and workshops that
require face-to-face contact with the stakeholders may be difficult or impossible.

5.2 Information consolidation

The Knowledge Discovery in Databases process comprises a few steps leading from raw data
collections to some form of new knowledge. The iterative process consists of the following steps (a
minimal code sketch of such a pipeline follows the list):

 Data cleaning: also known as data cleansing, it is a phase in which noise data and irrelevant data
are removed from the collection.

 Data integration: at this stage, multiple data sources, often heterogeneous, may be combined in
a common source.

 Data selection: at this step, the data relevant to the analysis is decided on and retrieved from
the data collection.

 Data transformation: also known as data consolidation, it is a phase in which the selected data is
transformed into forms appropriate for the mining procedure.

 Data mining: the crucial step in which clever techniques are applied to extract potentially useful
patterns.

 Pattern evaluation: in this step, strictly interesting patterns representing knowledge are
identified based on given measures.

 Knowledge representation: is the final phase in which the discovered knowledge is visually
represented to the user. This essential step uses visualization techniques to help users
understand and interpret the data mining results.
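The sketch below strings these steps together on a tiny invented data set, using the pandas library. The column names, the values, and the grouped summary that stands in for the mining step are all illustrative assumptions; a real project would use data sources and mining techniques appropriate to its own problem:

import pandas as pd

# Hypothetical raw collections from two heterogeneous sources.
source_a = pd.DataFrame({"customer": ["A", "B", "A"], "amount": ["10", "25", None]})
source_b = pd.DataFrame({"customer": ["B", "C"], "amount": ["5", "40"]})

# Data cleaning: remove noisy or irrelevant records (here, rows with missing values).
clean_a, clean_b = source_a.dropna(), source_b.dropna()

# Data integration: combine the cleaned sources into a common collection.
combined = pd.concat([clean_a, clean_b], ignore_index=True)

# Data selection: keep only the columns relevant to the analysis.
selected = combined[["customer", "amount"]]

# Data transformation (consolidation): convert to a form suited to mining.
transformed = selected.assign(amount=selected["amount"].astype(float))

# "Data mining": a simple grouped summary standing in for real pattern extraction.
patterns = transformed.groupby("customer")["amount"].sum()

# Pattern evaluation and knowledge representation: inspect and present the result.
print(patterns)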

Self-check

OUTCOME Yes No I Need help


 The comparison identifies agreement and differences between the information
gathered from different techniques.
 Differences are resolved and justified by reviewing
the information gathering techniques.

Apply principles of creating computer software by
developing a complete programme to meet given business
specifications (US115392)

Unit Std # 115392


NQF Level 5
Notional hours 120
Credit(s) 12
Field Field 10 - Physical, Mathematical, Computer and Life Sciences
Sub-Field Information Technology and Computer Sciences
Qualification National Certificate: Information Technology (Systems Development) LEVEL 5-
SAQA- 48872- 131 CREDITS

The Learner guide

At the end of this unit standard you will be able to Apply principles of creating computer
software by developing a complete programme to meet given business specifications

Purpose:
People credited with this unit standard are able to: 

 To provide expert knowledge of the areas covered, for those working in, or entering, the
workplace in the area of Systems Development.
 To demonstrate an understanding of how to create (in the computer language of choice) a
complete computer program that will solve a given business problem, showing all the steps
involved in creating computer software.

Specific outcome:
 Interpret a given specification to plan a computer program solution
 Design a computer program to meet a business requirement

 Create a computer program that implements the design
 Test a computer program against the business requirements
 Implement the program to meet business requirements
 Document the program according to industry standards

Learning assumed to be in place:


 Apply the principles of Computer Programming
 Design, develop and test computer program segments to given specifications
 Competent in demonstrating an understanding of the use of different number bases
and measurement units and an awareness of error in the context of relevant
calculations (SAQA ID = 9010).

Equipment needed:
Learning material, Learner workbook, Pen, Ruler.
PLEASE NOTE: THE USE OF PENCILS OR TIPPEX IS NOT ALLOWED.
IF YOU USE A PENCIL THE VALIDITY OF YOUR WORK COULD BE QUESTIONABLE, AND THIS
COULD LEAD TO FRAUD.

Resources (selective resources might be used, depending on the facilitator and venue
circumstances), one or all of the following can be used:
 Your facilitator/mentor
 Learning material
 Learner workbook
 Visual aids
 White board
 Flip chart
 Equipment
 Training venue

Venue, Date and Time:


Consult your facilitator should there be any changes to the venue, date and/or time.
Refer to your timetable.

Assessments:
The only way to establish whether you are competent and have accomplished the specific
outcomes is through continuous assessments
The given exercises can contain one or more of the following:
 Information for you to read
 Exercises that require you to have a problem-solving approach to communication
 Questions for you to answer
 Case studies with questions that follow

How to do the exercise:

 The facilitator will tell you which exercise you need to complete each day.
 You need to hand in your answers to the facilitator, who will mark them for correctness.
 If you do not know the answer, you will have to go back to that particular section in
your learner guide and go over it again.
 Ask the facilitator for help, if you do not understand any of the questions asked.
 Always remember to give reasons for your answers

SPECIFIC OUTCOME 1:

Interpret a given specification to plan a computer program solution.

ASSESSMENT CRITERIA
 The plan proposes a description of the problem to be solved by the development of the
computer program that is understandable by an end-user and meets the given specification.
 The plan integrates the research of problems in terms of data and functions.
 The plan includes an evaluation of the viability of developing a computer program to solve the
problem identified and compares the costs of developing the program with the benefits to be
obtained from the program.
 The plan concludes by choosing the best solution and documenting the program features that
will contain the capabilities and constraints to meet the defined problem.

1.1 Interpret a given specification to plan a computer program solution.

1.1.1 Writing Software Requirements Specifications (SRS)

Technical writers who haven’t had the experience of designing software requirements
specifications (SRSs, also known as software functional specifications or system specifications)
templates, or even of writing SRSs, might assume that being given the opportunity to do so is
either a reward or a punishment for something they did (or failed to do) on a previous project.
Actually, SRSs are ideal projects for technical writers to be involved with because they lay out the
foundation for the development of a new product and for the types of user documentation and
media that will be required later in the project development life cycle. It also doesn’t hurt that you’d
be playing a visible role in contributing to the success of the project.

1.1.2 What are Software Requirements Specifications?

An SRS is basically an organization’s understanding (in writing) of a customer or potential client’s system requirements and dependencies at a particular point in time (usually) prior to any actual
design or development work. It’s a two-way insurance policy that assures that both the client and
the organization understand the other’s requirements from that perspective at a given point in time.

The SRS document itself states in precise and explicit language those functions and capabilities a
software system (i.e., a software application, an eCommerce Web site, and so on) must provide, as
well as states any required constraints by which the system must abide. The SRS also functions as
a blueprint for completing a project with as little cost growth as possible. The SRS is often referred to
as the “parent” document because all subsequent project management documents, such as design
specifications, statements of work, software architecture specifications, testing and validation plans,
and documentation plans, are related to it.

It’s important to note that an SRS contains functional and non-functional requirements only; it
doesn’t offer design suggestions, possible solutions to technology or business issues, or any other
information other than what the development team understands the customer’s system
requirements to be.

A well-designed, well-written SRS accomplishes four major goals:

 It provides feedback to the customer. An SRS is the customer’s assurance that the development
organization understands the issues or problems to be solved and the software behavior
necessary to address those problems. Therefore, the SRS should be written in natural language,
in an unambiguous manner that may also include charts, tables, data flow diagrams, decision
tables, and so on.

 It decomposes the problem into component parts. The simple act of writing down software
requirements in a well-designed format organizes information, places borders around the
problem, solidifies ideas, and helps break down the problem into its component parts in an
orderly fashion.

 It serves as an input to the design specification. As mentioned previously, the SRS serves as the
parent document to subsequent documents, such as the software design specification and
statement of work. Therefore, the SRS must contain sufficient detail in the functional
system requirements so that a design solution can be devised.

 It serves as a product validation check. The SRS serves as the parent document for testing and
validation strategies that will be applied to the requirements for verification.

Software requirements specifications are typically developed during the first stages
of “Requirements Development,” which is the initial product development phase in which
information is gathered about what requirements are needed, and which are not. This information-
gathering stage can include onsite visits, questionnaires, surveys, interviews, and perhaps a return-
on-investment (ROI) analysis or needs analysis of the customer or client’s current
business environment. The actual specification, then, is written after the requirements have been
gathered and analyzed.

1.1.3 Why Should Technical Writers be Involved with Software Requirements Specifications?

Unfortunately, much of the time, systems architects and programmers write software requirements
specifications with little (if any) help from the technical communications organization. And when
that assistance is provided, it’s often limited to an edit of the final draft just prior to going out the
door. Having technical writers involved throughout the entire SRS development process can offer
several benefits:

 Technical writers are skilled information gatherers, ideal for eliciting and articulating customer
requirements. The presence of a technical writer on the requirements-gathering team helps
balance the type and amount of information extracted from customers, which can help improve
the software requirements specifications.

 Technical writers can better assess and plan documentation projects and better meet customer
document needs. Working on SRSs provides technical writers with an opportunity for learning
about customer needs firsthand, early in the product development process.

 Technical writers know how to determine the questions that are of concern to the user or
customer regarding ease of use and usability. Technical writers can then take that knowledge
and apply it not only to the specification and documentation development, but also to user
interface development, to help ensure the UI (User Interface) models the customer
requirements.

 Technical writers, involved early and often in the process, can become an information resource
throughout the process, rather than an information gatherer at the end of the process.

In short, a requirements-gathering team consisting solely of programmers, product marketers, systems analysts/architects, and a project manager runs the risk of creating a specification that may
be too heavily loaded with technology-focused or marketing-focused issues. The presence of a
technical writer on the team helps place at the core of the project those user or customer
requirements that provide more of an overall balance to the design of the software requirements
specifications, product, and documentation.

1.1.4 What Kind of Information Should an SRS Include?

You probably will be a member of the SRS team (if not, ask to be), which means SRS development
will be a collaborative effort for a particular project. In these cases, your company will have
developed SRSs before, so you should have examples (and, likely, the company’s SRS template) to
use. But, let’s assume you’ll be starting from scratch. Several standards organizations (including the
IEEE) have identified nine topics that must be addressed when designing and writing an SRS:

1. Interfaces

2. Functional Capabilities

3. Performance Levels

4. Data Structures/Elements

5. Safety

6. Reliability

7. Security/Privacy

8. Quality

9. Constraints and Limitations

But, how do these general topics translate into an SRS document? What, specifically, does an SRS
document include? How is it structured? And how do you get started? An SRS document typically
includes four ingredients, as discussed in the following sections:

1. A template

2. A method for identifying requirements and linking sources

3. Business operation rules

4. A traceability matrix

Begin with an SRS Template

The first and biggest step in writing software requirements specifications is to select an existing
template that you can fine-tune for your organizational needs (if you don’t have one already).
There’s not a “standard specifications template” for all projects in all industries because the
individual requirements that populate an SRS are unique not only from company to company, but
also from project to project within any one company. The key is to select an existing template or
specification to begin with, and then adapt it to meet your needs.

In recommending using existing templates, I’m not advocating simply copying a template from
available resources and using it as your own; instead, I’m suggesting that you use available
templates as guides for developing your own. It would be almost impossible to find a specification or
specification template that meets your particular project requirements exactly. 

Table 1 shows what a basic SRS outline might look like. This example is an adaptation and extension
of the IEEE Standard 830-1998:

Table 1 A sample of a basic SRS outline

1. Introduction
1.1 Purpose
1.2 Document conventions
1.3 Intended audience
1.4 Additional information
1.5 Contact information/SRS team members
1.6 References
2. Overall Description
2.1 Product perspective
2.2 Product functions
2.3 User classes and characteristics
2.4 Operating environment
2.5 User environment
2.6 Design/implementation constraints
2.7 Assumptions and dependencies
3. External Interface Requirements
3.1 User interfaces
3.2 Hardware interfaces
3.3 Software interfaces
3.4 Communication protocols and interfaces
4. System Features
4.1 System feature A
4.1.1 Description and priority
4.1.2 Action/result
4.1.3 Functional requirements
4.2 System feature B
5. Other Nonfunctional Requirements
5.1 Performance requirements
5.2 Safety requirements
5.3 Security requirements
5.4 Software quality attributes
5.5 Project documentation
5.6 User documentation
6. Other Requirements
Appendix A: Terminology/Glossary/Definitions list
Appendix B: To be determined

SPECIFIC OUTCOME 2:
Design a computer program to meet a business requirement.
ASSESSMENT CRITERIA
 The design incorporates development of appropriate design documentation and is desk-checked.
 The design of the program includes program structure components.
 The design of the program includes program logical flow components.
 The design of the program includes data structures and access method components.

2.1 Design a computer program to meet a business requirement

2.1.1 Establish Business Rules for Contingencies and Responsibilities

“The best-laid plans of mice and men…” begins the famous saying. It has direct application to writing
software requirements specifications because even the most thought-out requirements are not
immune to changes in industry, market, or government regulations. A top-quality SRS should include
plans for planned and unplanned contingencies, as well as an explicit definition of the
responsibilities of each party, should a contingency be implemented. Some business rules are easier
to work around than others, when Plan B has to be invoked. For example, if a customer wants to
change a requirement that is tied to a government regulation, it may not be ethical and/or legal to
be following “the spirit of the law.” Many government regulations, as business rules, simply don’t
allow any compromise or “wiggle room.” A project manager may be responsible for ensuring that a
government regulation is followed as it relates to a project requirement; however, if a contingency is
required, then the responsibility for that requirement may shift from the project manager to a
regulatory attorney. The SRS should anticipate such actions to the furthest extent possible.

2.1.2 Establish a Requirements Traceability Matrix

The business rules for contingencies and responsibilities can be defined explicitly within a
Requirements Traceability Matrix (RTM), or contained in a separate document and referenced in the
matrix, as the example in Table 3 illustrates. Such a practice leaves no doubt as to responsibilities
and actions under certain conditions as they occur during the product-development phase.

The RTM functions as a sort of “chain of custody” document for requirements and can include
pointers to links from requirements to sources, as well as pointers to business rules. For example,
any given requirement must be traced back to a specified need, be it a use case, business
essential, industry-recognized standard, or government regulation. As mentioned previously, linking
requirements with sources minimizes or even eliminates the presence of spurious or frivolous
requirements that lack any justification. The RTM is another record of mutual understanding, but
also helps during the development phase.

As software design and development proceed, the design elements and the actual code must be tied
back to the requirement(s) that define them. 
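
A requirements traceability matrix is normally kept in a spreadsheet or a requirements tool, but the idea is simple enough to sketch in code. The example below is a hypothetical illustration in Python (the requirement IDs, sources, design elements and test names are invented): each requirement is linked back to its source and forward to design elements and test cases, and a small check reports requirements that are not yet traced to any test.

# Hypothetical requirements traceability matrix (RTM) as a list of records.
rtm = [
    {"req_id": "REQ-001", "source": "use case UC-3",
     "design": ["LoginController"], "tests": ["test_login_success"]},
    {"req_id": "REQ-002", "source": "government regulation (hypothetical)",
     "design": ["AuditLogger"], "tests": []},          # not yet traced to a test
]

def untested_requirements(matrix):
    """Return requirement IDs that are not yet traced to any test case."""
    return [row["req_id"] for row in matrix if not row["tests"]]

print(untested_requirements(rtm))   # ['REQ-002']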

2.1.3 Software Design

Software design is both a process and a model. The design process is a sequence of steps that enable
the designer to describe all aspects of the software to be built. It is important to note, however, that
the design process is not simply a cookbook. Creative skill, past experience, a sense of what makes
“good” software, and an overall commitment to quality are critical success factors for a competent
design. The design model is the equivalent of an architect’s plans for a house. It begins by
representing the totality of the thing to be built (e.g., a three-dimensional rendering of the house)
and slowly refines the thing to provide guidance for constructing each detail (e.g., the plumbing
layout). Similarly, the design model that is created for software provides a variety of different views
of the computer software. Basic design principles enable the software engineer to navigate the
design process. Davis [DAV95] suggests a set of principles for software design, which have been
adapted and extended in the following list:

The design process should not suffer from “tunnel vision.” A good designer should consider alternative approaches, judging each based on the requirements of the problem and the resources available to do the job.

The design should be traceable to the analysis model. Because a single element of the design model
often traces to multiple requirements, it is necessary to have a means for tracking how requirements
have been satisfied by the design model.

The design should not reinvent the wheel. Systems are constructed using a set of design patterns,
many of which have likely been encountered before. These patterns should always be chosen as an
alternative to reinvention. Time is short and resources are limited! Design time should be invested in
representing truly new ideas and integrating those patterns that already exist.

The design should “minimize the intellectual distance” between the software and the problem as it
exists in the real world. That is, the structure of the software design should (whenever possible)
mimic the structure of the problem domain.

The design should exhibit uniformity and integration. A design is uniform if it appears that one
person developed the entire thing. Rules of style and format should be defined for a design team
before design work begins. A design is integrated if care is taken in defining interfaces between
design components.

The design should be structured to accommodate change. The design concepts discussed in the
next section enable a design to achieve this principle.

The design should be structured to degrade gently, even when aberrant data, events, or operating
conditions are encountered. Well-designed software should never “bomb.” It should be designed to
accommodate unusual circumstances, and if it must terminate processing, do so in a graceful
manner.

Design is not coding, coding is not design. Even when detailed procedural designs are created for
program components, the level of abstraction of the design model is higher than source code. The
only design decisions made at the coding level address the small implementation details that enable
the procedural design to be coded.

The design should be assessed for quality as it is being created, not after the fact. A variety of
design concepts and design measures are available to assist the designer in assessing quality.

The design should be reviewed to minimize conceptual (semantic) errors. There is sometimes a
tendency to focus on minutiae when the design is reviewed, missing the forest for the trees. A
design team should ensure that major conceptual elements of the design (omissions, ambiguity,
inconsistency) have been addressed before worrying about the syntax of the design model.

2.1.4 Design Concepts

The design concepts provide the software designer with a foundation from which more sophisticated
methods can be applied. A set of fundamental design concepts has evolved. They are:

Abstraction - Abstraction is the process or result of generalization by reducing the information content of a concept or an observable phenomenon, typically in order to retain only information
which is relevant for a particular purpose.

Refinement - It is the process of elaboration. A hierarchy is developed by decomposing a
macroscopic statement of function in a step-wise fashion until programming language statements
are reached. In each step, one or several instructions of a given program are decomposed into more
detailed instructions. Abstraction and Refinement are complementary concepts.

Modularity - Software architecture is divided into components called modules.

Software Architecture - It refers to the overall structure of the software and the ways in which that
structure provides conceptual integrity for a system. A good software architecture will yield a good
return on investment with respect to the desired outcome of the project, e.g. in terms of
performance, quality, schedule and cost.

Control Hierarchy - A program structure that represents the organization of a program component
and implies a hierarchy of control.

Structural Partitioning - The program structure can be divided both horizontally and vertically.
Horizontal partitions define separate branches of modular hierarchy for each major program
function. Vertical partitioning suggests that control and work should be distributed top down in the
program structure.

Data Structure - It is a representation of the logical relationship among individual elements of data.

Software Procedure - It focuses on the processing of each module individually.

Information Hiding - Modules should be specified and designed so that information contained within
a module is inaccessible to other modules that have no need for such information.
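
Several of these concepts can be seen in a few lines of code. The sketch below is a hypothetical Python example (the Account class is invented for illustration): callers work only with the abstraction offered by deposit and balance, while the internal transaction list is hidden behind the interface rather than manipulated directly.

class Account:
    """Abstraction of an account: callers see only deposit() and balance."""

    def __init__(self):
        # Information hiding: an internal detail, not part of the public interface.
        self._transactions = []

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._transactions.append(amount)

    @property
    def balance(self):
        return sum(self._transactions)

account = Account()
account.deposit(100)
account.deposit(50)
print(account.balance)   # 150 -- callers never touch _transactions directly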

2.1.5 Design considerations

There are many aspects to consider in the design of a piece of software. The importance of each
should reflect the goals the software is trying to achieve. Some of these aspects are:

Compatibility - The software is able to operate with other products that are designed for
interoperability with another product. For example, a piece of software may be backward-
compatible with an older version of itself.

Extensibility - New capabilities can be added to the software without major changes to the
underlying architecture.

Fault-tolerance - The software is resistant to and able to recover from component failure.

Maintainability - A measure of how easily bug fixes or functional modifications can be accomplished.
High maintainability can be the product of modularity and extensibility.

Modularity - The resulting software comprises well-defined, independent components, which leads to
better maintainability. The components can then be implemented and tested in isolation before
being integrated to form a desired software system. This allows division of work in a software
development project.

Reliability - The software is able to perform a required function under stated conditions for a
specified period of time.

Reusability - The software, or parts of it, can be used in other projects, and further features can be
added with little or no modification.

Robustness - The software is able to operate under stress or tolerate unpredictable or invalid input.
For example, it can be designed with a resilience to low memory conditions.

Security - The software is able to withstand hostile acts and influences.

Usability - The software user interface must be usable for its target user/audience. Default values for
the parameters must be chosen so that they are a good choice for the majority of the users.

Performance - The software performs its tasks within a user-acceptable time. The software does not
consume too much memory.

Portability - The usability of the same software in different environments.

Scalability - The software adapts well to increasing data or number of users.

SPECIFIC OUTCOME 3:
Create a computer program that implements the design.
ASSESSMENT CRITERIA
 The creation includes coding from design documents, to meet the design specifications
 Names created in the program describe the purpose of the items named
 The creation includes conformance with the design documentation, and differences are
documented with reasons for deviations

3.1 Create a computer program that implements the design

What is involved in writing a computer program? 

What kinds of decisions must be made? 

Who is involved? 

The process of creating a computer program is not as straight-forward as you might think. It involves
a lot of thinking, experimenting, testing, and rewriting to achieve a high-quality product. Let's break
down the process to give you an idea of what goes on.

3.1.1 What Task?

The first decision to make when creating a computer program is: 


          What is this program supposed to do?

The more detailed this description is, the easier it will be to get good results.

3.1.2 What Language?

The choice of what computer language to use has important consequences for how easy the
program will be to write and maintain. The graphic shows some of the most commonly used
languages and what tasks they are usually used for. 

The languages are grouped by how complex they are for the writer. The simplest with the least
power are at the bottom. Simple languages for simple tasks. (But how simple is any of this, really??)

3.1.3 Things to consider in choosing a language

Works with what you've got -

   Existing standards in your company

   Existing hardware

   Existing software with which to interact

   Programmers' current knowledge

Will work in the future -

   With variety of hardware

   Changes easy to make in programs

   Errors easy to find in programs

3.1.4 Who's Involved?

What people are involved in the creation of a new computer program?

The End User sets the tasks to be done. What does the customer
want to do??

A Systems Analyst designs the overall requirements and sets the strategy for the program.

A Programmer writes the actual code to perform the tasks.

There may be a huge team of dozens of people involved. Or perhaps one programmer decides that
he can write a program that is the answer to what users complain about. It may be done in a highly
structured series of conferences and consumer surveys. Or perhaps someone is listening to what
people say as they go about trying to work. Somehow the needs of the end users must be
understood as well as the limitations of the code and the hardware. Costs come into play, too. (Sad
but true.)

All of these people must communicate back and forth throughout the process. No program of any
size will be without unexpected problems. So it's test and fix and test again until the program
actually does what it was intended to do.

3.1.5 Program Development

A program goes through the following steps over and over during its development, never just once.

1. Set & Review goals:  What is it supposed to do?

2. Design:  Create the strategy to achieve goal.

3. Coding:  Write the program.

4. Testing:  Try it out with real people.

5. Documentation:  What you did and why. How to use it.

One of the techniques used during the design phase is to flowchart the program, as on the right.
Different shapes represent different kinds of steps, like input and output, decisions, calculations.
Such charts help keep the logic clear, especially in complex programs.

Each time through the development loop, the program must be debugged. This means testing the
program under all conditions in order to find the problems so they can be handled. There
will always be problems. Sometimes it's just a typo, and sometimes it's a flaw in the logic, and
sometimes it's an incompatibility with the hardware or other software. Handling such situations can
be the most time-consuming part of the whole process!

Proper documentation can make or break a program. This includes explanations to the end user of
how to use the program and also internal notes to programmers about what the code is doing and
why. It is amazing how quickly the original coder can forget why he wrote the code that way!

Programs often need to be maintained, that is, changes must be made. For example, the sales tax
rate might change or zip codes may get more digits. With proper internal documentation, a different
programmer can make these adjustments without damaging the program.
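
The sales-tax example above can be made concrete. In the hypothetical Python sketch below (the rate and the function name are invented for illustration), the rate is kept in one named constant and the function carries internal documentation, so a different programmer can later change the rate without hunting through the logic.

# Internal documentation: the rate is defined once, so maintenance (for example,
# a change in the sales tax rate) touches a single, clearly named constant.
SALES_TAX_RATE = 0.15   # hypothetical rate; update here when the rate changes

def total_with_tax(net_amount):
    """Return the amount payable including sales tax.

    The calculation is kept separate from input/output so that it can be
    tested on its own during the testing step of the development loop.
    """
    return round(net_amount * (1 + SALES_TAX_RATE), 2)

print(total_with_tax(100.00))   # 115.0 at the 15% rate assumed above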

SPECIFIC OUTCOME 4:
Test a computer program against the business requirements.
ASSESSMENT CRITERIA
 The testing includes assessment of the need to develop a testing program to assist with stress
testing
 The testing includes the planning, developing and implementing a test strategy that is
appropriate for the type of program being tested.
 The testing includes the recording of test results that allow for the identification and validation
of test outcomes.

4.1 Testing a computer program against given specifications according to test plans.
 Programme Testing refers to a set of activities conducted with the intent of finding
errors in software.

 A test plan is a specification of the testing to be carried out. The developers are well aware
of which test plans will be executed, and this information is made available to management
and the developers. The idea is to make them more cautious when developing their
code or making additional changes. Some companies have a higher-level document
called a test strategy.

4.1.1 Approaches to programme testing

(a) Static vs. dynamic testing

There are many approaches available in software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing is often implicit, like proofreading; it also happens when programming tools/text editors check source code structure, or when compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code, and is applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment.

Static testing involves verification, whereas dynamic testing involves validation. Together they help improve software quality. Mutation testing can be used to ensure that the test cases will detect errors which are introduced by mutating the source code.

(b) The box approach

Software testing methods are traditionally divided into white- and black-box testing. These two
approaches are used to describe the point of view that a test engineer takes when designing test
cases.

i. White-Box testing

White-box testing (also known as clear box testing, glass box testing, transparent box testing and structural testing) tests internal structures or workings of a program, as opposed
to the functionality exposed to the end-user. In white-box testing an internal perspective of
the system, as well as programming skills, are used to design test cases. The tester chooses
inputs to exercise paths through the code and determine the appropriate outputs. This is
analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).

While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit,
paths between units during integration, and between subsystems during a system–level test.
Though this method of test design can uncover many errors or problems, it might not detect
unimplemented parts of the specification or missing requirements.
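
In white-box terms, the tester reads the code and then chooses inputs so that every path through it is exercised. A minimal hypothetical sketch (Python, with an invented shipping_fee function): the function below has two branches, so at least two inputs are needed for full branch coverage.

def shipping_fee(order_total):
    # Two paths through the code: free shipping and flat-rate shipping.
    if order_total >= 500:
        return 0.0
    return 49.99

# White-box test design: one input per path, chosen by inspecting the code.
assert shipping_fee(600) == 0.0     # exercises the free-shipping branch
assert shipping_fee(100) == 49.99   # exercises the flat-rate branch
print("both branches exercised")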

Visual testing

The aim of visual testing is to provide developers with the ability to examine what was
happening at the point of software failure by presenting the data in such a way that
the developer can easily find the information he or she requires, and the information
is expressed clearly.

At the core of visual testing is the idea that showing someone a problem (or a test
failure), rather than just describing it, greatly increases clarity and understanding.
Visual testing therefore requires the recording of the entire test process – capturing
everything that occurs on the test system in video format. Output videos are
supplemented by real-time tester input via picture-in-a-picture webcam and audio
commentary from microphones.

Visual testing provides a number of advantages. The quality of communication is increased dramatically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate
test failures will cease to exist in many cases. The developer will have all the evidence
he or she requires of a test failure and can instead focus on the cause of the fault and
how it should be fixed.

ii. Grey-box testing

Grey-box testing involves having knowledge of internal data structures and algorithms for
purposes of designing tests, while executing those tests at the user, or black-box level. The
tester is not required to have full access to the software's source code. Manipulating input
data and formatting output do not qualify as grey-box, because the input and output are
clearly outside of the "black box" that we are calling the system under test. This distinction is
particularly important when conducting integration testing between two modules of code
written by two different developers, where only the interfaces are exposed for test.

However, tests that require modifying a back-end data repository, such as a database or a log
file, do qualify as grey-box, as the user would not normally be able to change the data
repository in normal production operations. Grey-box testing may also include reverse
engineering to determine, for instance, boundary values or error messages.

By knowing the underlying concepts of how the software works, the tester makes better-
informed testing choices while testing the software from outside. Typically, a grey-box tester
will be permitted to set up an isolated testing environment with activities such as seeding
a database. The tester can observe the state of the product being tested after performing
certain actions such as executing SQL statements against the database and then executing
queries to ensure that the expected changes have been reflected. Grey-box testing
implements intelligent test scenarios, based on limited information. This will particularly
apply to data type handling, exception handling, and so on.
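
The database scenario described above can be sketched with Python's built-in sqlite3 module. This is a hypothetical, self-contained illustration (the register_user function and the users table are invented): the tester seeds an isolated database, drives the behaviour under test through its public function, and then queries the repository directly to confirm the expected change.

import sqlite3

def register_user(conn, name):
    # The behaviour under test, exercised from the outside like a black box.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

# Grey-box setup: an isolated, seeded test database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

register_user(conn, "Thandi")

# Grey-box verification: query the back-end repository directly to confirm
# that the expected change has been reflected.
row = conn.execute("SELECT COUNT(*) FROM users WHERE name = ?", ("Thandi",)).fetchone()
assert row[0] == 1
print("user row present in the database")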

Unit testing

Unit testing, also known as component testing, refers to tests that verify the
functionality of a specific section of code, usually at the function level. In an object-
oriented environment, this is usually at the class level, and the minimal unit tests
include the constructors and destructors.

These types of tests are usually written by developers as they work on code (white-
box style), to ensure that the specific function is working as expected. One function
might have multiple tests, to catch corner cases or other branches in the code. Unit
testing alone cannot verify the functionality of a piece of software, but rather is used
to ensure that the building blocks of the software work independently from each other.

Unit testing is a software development process that involves synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce
software development risks, time, and costs. It is performed by the software
developer or engineer during the construction phase of the software development
lifecycle. Rather than replace traditional QA focuses, it augments it. Unit testing aims
to eliminate construction errors before code is promoted to QA; this strategy is
intended to increase the quality of the resulting software as well as the efficiency of
the overall development and QA process.
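
As a hypothetical illustration (using Python's built-in unittest module; the average function is invented), a developer writing the function below would write unit tests alongside it, including a corner case for empty input.

import unittest

def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)

class AverageTests(unittest.TestCase):
    def test_typical_values(self):
        self.assertEqual(average([2, 4, 6]), 4)

    def test_empty_list_corner_case(self):
        with self.assertRaises(ValueError):
            average([])

if __name__ == "__main__":
    unittest.main()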

Integration testing

Integration testing is any type of software testing that seeks to verify the interfaces
between components against a software design. Software components may be
integrated in an iterative way or all together ("big bang"). Normally the former is
considered a better practice since it allows interface issues to be located more quickly
and fixed.

Integration testing works to expose defects in the interfaces and interaction between
integrated components (modules). Progressively larger groups of tested software
components corresponding to elements of the architectural design are integrated and
tested until the software works as a system.

iii. Component interface testing


The practice of component interface testing can be used to check the handling of data
passed between various units, or subsystem components, beyond full integration testing
between those units. The data being passed can be considered as "message packets" and
the range or data types can be checked, for data generated from one unit, and tested for
validity before being passed into another unit.

One option for interface testing is to keep a separate log file of data items being passed,
often with a timestamp logged to allow analysis of thousands of cases of data passed
between units for days or weeks. Tests can include checking the handling of some extreme
data values while other interface variables are passed as normal values. Unusual data values
in an interface can help explain unexpected performance in the next unit. Component interface testing is a variation of black-box testing with the focus on the data values beyond
just the related actions of a subsystem component.
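
A hypothetical sketch of the idea (Python; the packet fields, the file name and the validation rule are invented): data passed between two units is treated as a message packet, validated at the interface, and written to a timestamped log so that unusual values can be analysed later.

from datetime import datetime, timezone

def validate_packet(packet):
    # Interface check on the data passed between two units.
    return isinstance(packet.get("quantity"), int) and packet["quantity"] >= 0

def log_packet(packet, logfile):
    # Timestamped log of data items passed between units, kept for later analysis.
    logfile.write(f"{datetime.now(timezone.utc).isoformat()} {packet}\n")

with open("interface_log.txt", "a") as logfile:
    for packet in [{"quantity": 3}, {"quantity": -1}]:   # one extreme value included
        log_packet(packet, logfile)
        print(packet, "valid" if validate_packet(packet) else "REJECTED at interface")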

iv. System testing

System testing, or end-to-end testing, tests a completely integrated system to verify that it
meets its requirements. For example, a system test might involve testing a logon interface,
then creating and editing an entry, plus sending or printing results, followed by summary
processing or deletion (or archiving) of entries, then logoff.

In addition, the software testing should ensure that the program, as well as working as
expected, does not also destroy or partially corrupt its operating environment or cause other
processes within that environment to become inoperative (this includes not corrupting
shared memory, not consuming or locking up excessive resources and leaving any parallel
processes unharmed by its presence).

v. Installation testing

An installation test assures that the system is installed correctly and working at actual
customer's hardware.

vi. Compatibility testing

A common cause of software failure (real or perceived) is a lack of its compatibility with other application software, operating systems (or operating system versions, old or new), or
target environments that differ greatly from the original (such as
a terminal or GUI application intended to be run on the desktop now being required to
become a web application, which must render in a web browser). For example, in the case of
a lack of backward compatibility, this can occur because the programmers develop and test
software only on the latest version of the target environment, which not all users may be
running. This results in the unintended consequence that the latest work may not function
on earlier versions of the target environment, or on older hardware that earlier versions of
the target environment were capable of using. Sometimes such issues can be fixed by
proactively abstracting operating system functionality into a separate
program module or library.

vii. Smoke and sanity testing

Sanity testing determines whether it is reasonable to proceed with further testing.

Smoke testing consists of minimal attempts to operate the software, designed to determine
whether there are any basic problems that will prevent it from working at all. Such tests can
be used as a build verification test.

viii. Regression testing

Regression testing focuses on finding defects after a major code change has occurred.
Specifically, it seeks to uncover software regressions, such as degraded or lost features, including
old bugs that have come back. Such regressions occur whenever software functionality that
was previously working correctly stops working as intended. Typically, regressions occur as
an unintended consequence of program changes, when the newly developed part of the
software collides with the previously existing code. Common methods of regression testing
include re-running previous sets of test-cases and checking whether previously fixed faults
have re-emerged.

The depth of testing depends on the phase in the release process and the risk of the added
features. They can either be complete, for changes added late in the release or deemed to
be risky, or be very shallow, consisting of positive tests on each feature, if the changes are
early in the release or deemed to be of low risk. Regression testing is typically the largest
test effort in commercial software development due to checking numerous details in prior
software features, and even new software can be developed while using some old test-cases
to test parts of the new design to ensure prior functionality is still supported.

ix. Acceptance testing

Acceptance testing can mean one of two things:

1. A smoke test is used as an acceptance test prior to introducing a new build to the
main testing process, i.e. before integration or regression.

2. Acceptance testing performed by the customer, often in their lab environment on


their own hardware, is known as user acceptance testing (UAT). Acceptance testing
may be performed as part of the hand-off process between any two phases of
development.

x. Alpha testing

Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

xi. Beta testing

Beta testing comes after alpha testing and can be considered a form of external user
acceptance testing. Versions of the software, known as beta versions, are released to a
limited audience outside of the programming team. The software is released to groups of
people so that further testing can ensure the product has few faults or bugs. Sometimes,
beta versions are made available to the open public to increase the feedback field to a
maximal number of future users.

xii. Functional vs non-functional testing

Functional testing refers to activities that verify a specific action or function of the code.
These are usually found in the code requirements documentation, although some
development methodologies work from use cases or user stories. Functional tests tend to
answer the question of "can the user do this" or "does this particular feature work."

Non-functional testing refers to aspects of the software that may not be related to a specific
function or user action, such as scalability or other performance, behaviour under
certain constraints, or security. Testing will determine the breaking point, the point at which
extremes of scalability or performance lead to unstable execution. Non-functional
requirements tend to be those that reflect the quality of the product, particularly in the
context of the suitability perspective of its users.

xiii. Destructive testing


Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the
software functions properly even when it receives invalid or unexpected inputs, thereby
establishing the robustness of input validation and error-management routines. Software
fault injection, in the form of fuzzing, is an example of failure testing. Various commercial
non-functional testing tools are linked from the software fault injection page; there are also
numerous open-source and free software tools available that perform destructive testing.

xiv. Software performance testing

Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also
serve to investigate, measure, validate or verify other quality attributes of the system, such
as scalability, reliability and resource usage.

Load testing is primarily concerned with testing that the system can continue to operate
under a specific load, whether that be large quantities of data or a large number of  users.

This is generally referred to as software scalability. The related load testing activity, when
performed as a non-functional activity over an extended period, is often referred to as endurance
testing. Volume testing is a way to test software functions even when certain components (for
example a file or database) increase radically in size. Stress testing is a way to test reliability under
unexpected or rare workloads. Stability testing (often referred to as load or endurance
testing) checks whether the software can continue to function well for an acceptable period.

There is little agreement on what the specific goals of performance testing are. The terms
load testing, performance testing, scalability testing, and volume testing, are often used
interchangeably.

Real-time software systems have strict timing constraints. To test if timing constraints are
met, real-time testing is used.
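
A full performance test normally relies on dedicated tooling, but the basic idea of measuring responsiveness under a given workload can be sketched with the Python standard library. This is a hypothetical example; the simulated workload, the request count and the acceptance threshold are all invented.

import time

def handle_request(payload):
    # Stand-in for the operation whose responsiveness is being measured.
    return sum(range(payload))

REQUESTS = 1_000        # the simulated load
MAX_SECONDS = 2.0       # an invented acceptance threshold

start = time.perf_counter()
for _ in range(REQUESTS):
    handle_request(10_000)
elapsed = time.perf_counter() - start

print(f"{REQUESTS} requests handled in {elapsed:.2f}s")
assert elapsed < MAX_SECONDS, "system did not remain responsive under the load"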

xv. Usability testing

Usability testing checks whether the user interface is easy to use and understand. It is
concerned mainly with the use of the application.

xvi. Security testing

Security testing is essential for software that processes confidential data to prevent system
intrusion by hackers.

xvii. Internationalization and localization

The general ability of software to be internationalized and localized can be automatically tested without actual translation, by using pseudo localization. It will verify that the
application still works, even after it has been translated into a new language or adapted for a
new culture (such as different currencies or time zones).
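
Pseudo localization can itself be sketched in a few lines. In the hypothetical Python example below (the marker style and expansion factor are invented), every user-visible string is mechanically transformed without real translation, so hard-coded or truncated text shows up as soon as the application is run.

def pseudo_localize(text, expansion=0.4):
    # Mechanically transform a UI string without real translation; the padding
    # simulates languages whose translations are longer than the source text.
    padding = "~" * max(1, int(len(text) * expansion))
    return f"[!! {text} {padding} !!]"

for label in ["Save", "Cancel", "Print report"]:
    print(pseudo_localize(label))

# Any string that still appears unchanged in the running user interface was
# probably hard coded rather than passed through the localization mechanism.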

Actual translation to human languages must be tested, too. Possible localization failures
include:

 Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.

 Technical terminology may become inconsistent if the project is translated by several people without proper coordination or if the translator is imprudent.

 Literal word-for-word translations may sound inappropriate, artificial or too technical
in the target language.

 Untranslated messages in the original language may be left hard coded in the source
code.

 Some messages may be created automatically at run time and the resulting string may
be ungrammatical, functionally incorrect, misleading or confusing.

 Software may use a keyboard shortcut which has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.

 Software may lack support for the character encoding of the target language.

SPECIFIC OUTCOME 5:
Implement the program to meet business requirements.
ASSESSMENT CRITERIA
 The implementation involves checking the program for compliance with user expectations and
any other applicable factors
 The implementation involves training of users to enable them to use the software to their
requirements
 The implementation involves planning of installation of the program that minimises disruption
to the user

5.1 Implement the program to meet business requirements

Business requirements are what must be delivered to provide value: they describe the "what". Products, systems, software, and processes are the ways of "how" that value will be delivered, that is, how the business requirements will be satisfied or met. Consequently, the topic of business requirements often arises in the context of developing or procuring software or another system; but business requirements exist much more broadly. That is, 'business' can be at work or personal, for profit or non-profit.

In the context of software engineering or the software development life cycle, business requirements work is about eliciting and documenting the requirements of business users such as customers, employees, and vendors early in the development cycle of a system, to guide the design of the future system. Business requirements are often captured by business analysts, who analyze business activities and processes, and often study the as-is process to define a target to-be process.

Business requirements often include


 Business context, scope, and background, including reasons for change
 Key business stakeholders that have requirements
 Success factors for a future/target state
 Constraints imposed by the business or other systems
 Business process models and analysis, often using flowchart notations to depict either 'as-is' and
'to-be' business processes
 Logical data model and data dictionary references
 Glossaries of business terms and local jargon
 Data flow diagrams to illustrate how data flows through the information systems (different from
flowcharts depicting algorithmic flow of business activities)

SPECIFIC OUTCOME 6:
Document the program according to industry standards.
ASSESSMENT CRITERIA
 The documentation includes annotation of the program with a description of program purpose
and design specifics.
 The documentation includes the layout of the program code including indentation and other
acceptable industry standards
 The documentation includes full internal and external documentation, with a level of detail
that enables other programmers to analyse the program
 The documentation reflects the tested and implemented program, including changes made
during testing of the program

6.1 Plan and design documentation for a computer program to agreed standards.
Computer program documentation is written text that accompanies a computer program or software. It either explains how the software operates or how to use it, and it may mean different things to people in different roles.
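
For the code-level annotation referred to in the assessment criteria above, a hypothetical sketch of an annotated program (Python is used purely for illustration; the report and its data are invented) might look like the following. The description of purpose and design, the consistent indentation, and the note about a change made during testing are the points being illustrated.

"""Stock re-order report.

Purpose : lists items whose stock level has fallen below the re-order point.
Design  : the calculation is a pure function, with a thin wrapper for output,
          so that the logic can be unit tested on its own.
Change  : the re-order point was made a parameter after testing (it was
          previously hard coded).
"""

def items_to_reorder(stock_levels, reorder_point=10):
    """Return the names of items whose quantity is below the re-order point."""
    return [name for name, qty in stock_levels.items() if qty < reorder_point]

if __name__ == "__main__":
    print(items_to_reorder({"paper": 4, "toner": 25}))   # ['paper']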

Role of documentation in software development


Documentation is an important part of software engineering. Types of documentation include:
1. Requirements - Statements that identify attributes, capabilities, characteristics, or qualities
of a system. This is the foundation for what shall be or has been implemented.
2. Architecture/Design - Overview of software. Includes relations to an environment and
construction principles to be used in design of software components.
3. Technical - Documentation of code, algorithms, interfaces, and APIs.
4. End user - Manuals for the end-user, system administrators and support staff.
5. Marketing - How to market the product and analysis of the market demand.

Process and Product Documentation

For large software projects, it is usually the case that documentation starts being generated well
before the development process begins. A proposal to develop the system may be produced in
response to a request for tenders by an external client or in response to other business strategy
documents. For some types of system, a comprehensive requirements document may be produced
which defines the features required and expected behavior of the system. During the development
process itself, all sorts of different documents may be produced – project plans, design
specifications, test plans etc.

It is not possible to define a specific document set that is required – this depends on the contract
with the client for the system, the type of system being developed and its expected lifetime, the
culture and size of the company developing the system, and the development schedule that is
expected. However, we can generally say that the documentation produced falls into two classes:

1. Process documentation. These documents record the process of development and maintenance. Plans, schedules, process quality documents and organizational and project standards are process documentation.

2. Product documentation. This documentation describes the product that is being developed.
System documentation describes the product from the point of view of the engineers developing
and maintaining the system; user documentation provides a product description that is oriented
towards system users.

Process documentation is produced so that the development of the system can be managed.
Product documentation is used after the system is operational but is also essential for management
of the system development. The creation of a document, such as a system specification, may
represent an important milestone in the software development process.

Process documentation

Effective management requires the process being managed to be visible. Because software is
intangible and the software process involves apparently similar cognitive tasks rather than obviously
different physical tasks, the only way this visibility can be achieved is through the use of process
documentation.

Process documentation falls into a number of categories:

1. Plans, estimates and schedules. These are documents produced by managers which are used to
predict and to control the software process.

2. Reports. These are documents which report how resources were used during the process of
development.

3. Standards. These are documents which set out how the process is to be implemented. These
may be developed from organizational, national or international standards.

4. Working papers. These are often the principal technical communication documents in a
project. They record the ideas and thoughts of the engineers working on the project, are interim
versions of product documentation, describe implementation strategies and set out problems which
have been identified. They often, implicitly, record the rationale for design decisions.

5. Memos and electronic mail messages. These record the details of everyday communications
between managers and development engineers.

The major characteristic of process documentation is that most of it becomes out-dated. Plans may
be drawn up on a weekly, fortnightly or monthly basis. Progress will normally be reported weekly.
Memos record thoughts, ideas and intentions which change.

Although of interest to software historians, much of this process information is of little real use after
it has gone out of date and there is not normally a need to preserve it after the system has been
delivered. However, there are some process documents that can be useful as the software evolves in
response to new requirements.

For example, test schedules are of value during software evolution as they act as a basis for re-
planning the validation of system changes. Working papers which explain the reasons behind design
decisions (design rationale) are also potentially valuable as they discuss design options and choices
made. Access to this information helps avoid making changes which conflict with these original
decisions. Ideally, of course, the design rationale should be extracted from the working papers and
separately maintained. Unfortunately this hardly ever happens.

Product documentation

Product documentation is concerned with describing the delivered software product. Unlike most
process documentation, it has a relatively long life. It must evolve in step with the product which it
describes. Product documentation includes user documentation which tells users how to use the
software product and system documentation which is principally intended for maintenance
engineers.

User Documentation
Users of a system are not all the same. The producer of documentation must structure it to cater for
different user tasks and different levels of expertise and experience. It is particularly important to
distinguish between end-users and system administrators:

1. End-users use the software to assist with some task. This may be flying an aircraft, managing
insurance policies, writing a book, etc. They
want to know how the software can help them. They are not interested in computer or
administration details.

2. System administrators are responsible for managing the software used by end-users. This may involve acting as an operator if the system is a large mainframe system, as a network manager if the system involves a network of workstations, or as a technical guru who fixes end-users' software problems and who liaises between users and the software supplier.

To cater for these different classes of user and different levels of user expertise, there are at least 5
documents (or perhaps chapters in a single document) which should be delivered with the software
system (Figure 1).

The functional description of the system outlines the system requirements and briefly describes the
services provided. This document should provide an overview of the system. Users should be able to
read this document with an introductory manual and decide if the system is what they need.

The system installation document is intended for system administrators. It should provide details of
how to install the system in a particular environment. It should contain a description of the files
making up the system and the minimal hardware configuration required. The permanent files which
must be established, how to start the system and the configuration dependent files which must be
changed to tailor the system to a particular host system should also be described. The use of
automated installers for PC software has meant that some suppliers see this document as
unnecessary. In fact, it is still required to help system managers discover and fix problems with the
installation.

The introductory manual should present an informal introduction to the system, describing its
‘normal’ usage. It should describe how to get started and how end-users might make use of the
common system facilities. It should be liberally illustrated with examples. Inevitably beginners,
whatever their background and experience, will make mistakes. Easily discovered information on
how to recover from these mistakes and restart useful work should be an integral part of this
document.

The system reference manual should describe the system facilities and their usage, should provide a
complete listing of error messages and should describe how to recover from detected errors. It
should be complete. Formal descriptive techniques may be used. The style of the reference manual
should not be unnecessarily pedantic and turgid, but completeness is more important than
readability.

A more general system administrator’s guide should be provided for some types of system such as
command and control systems. This should describe the messages generated when the system
interacts with other systems and how to react to these messages. If system hardware is involved, it
might also explain the operator’s task in maintaining that hardware. For example, it might describe
how to clear faults in the system console, how to connect new peripherals, etc.

As well as manuals, other, easy-to-use documentation might be provided. A quick reference card
listing available system facilities and how to use them is particularly convenient for experienced

system users. On-line help systems, which contain brief information about the system, can save the
user time spent consulting manuals, although they should not be seen as a replacement for
more comprehensive documentation.

System Documentation
System documentation includes all of the documents describing the system itself from the
requirements specification to the final acceptance test plan. Documents describing the design,
implementation and testing of a system are essential if the program is to be understood and
maintained. Like user documentation, it is important that system documentation is structured, with
overviews leading the reader into more formal and detailed descriptions of each aspect of the
system.

For large systems that are developed to a customer’s specification, the system documentation
should include:

1. The requirements document and an associated rationale.

2. A document describing the system architecture.

3. For each program in the system, a description of the architecture of that program.

4. For each component in the system, a description of its functionality and interfaces.

5. Program source code listings. These should be commented, with the comments explaining complex sections of code and providing a rationale for the coding method used. If meaningful names and a good, structured programming style are used, much of the code should be self-documenting without the need for additional comments (a short illustrative sketch follows this list). This information is now normally maintained electronically rather than on paper, with selected information printed on demand by readers.

6. Validation documents describing how each program is validated and how the validation
information relates to the requirements.

7. A system maintenance guide which describes known problems with the system, describes which
parts of the system are hardware- and software-dependent, and describes how evolution of the
system has been taken into account in its design.
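To illustrate point 5 above (meaningful names plus selective comments), here is a short, hypothetical Python fragment. The function, the VAT rate and the rationale comment are invented purely for illustration and are not part of the unit standard:

    VAT_RATE = 0.15  # assumed rate, for illustration only

    def invoice_total(line_item_prices, apply_vat=True):
        """Return the total of an invoice, optionally including VAT."""
        subtotal = sum(line_item_prices)
        # Rationale: VAT is applied to the subtotal as a whole rather than per
        # line item, so that rounding happens only once.
        total = subtotal * (1 + VAT_RATE) if apply_vat else subtotal
        return round(total, 2)

    if __name__ == "__main__":
        print(invoice_total([100.00, 49.99]))   # 172.49 with 15% VAT

With names like invoice_total and VAT_RATE, only the non-obvious design decision (rounding once) needs a comment; the rest of the listing documents itself.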

A common system maintenance problem is ensuring that all representations are kept in step when
the system is changed. To help with this, the relationships and dependencies between documents
and parts of documents should be recorded in a document management system.

For smaller systems and systems that are developed as software products, system documentation is
usually less comprehensive. This is not necessarily a good thing, but schedule pressures on
developers mean that documents are simply never written or, if written, are not kept up to date.
These pressures are sometimes inevitable but, at the very least, the development team should always try to
maintain a specification of the system, an architectural design document and the program source
code.

Unfortunately, documentation maintenance is often neglected. Documentation may become out of step with its associated software, causing problems for both users and maintainers of the system. The natural tendency is to meet a deadline by modifying code with the intention of modifying other documents later.

Often, pressure of work means that this modification is continually set aside until finding what is to
be changed becomes very difficult indeed. The best solution to this problem is to support document
maintenance with software tools which record document relationships, remind software engineers
when changes to one document affect another and record possible inconsistencies in the
documentation. Such a system is described by Garg and Scacchi (1990).

Document Quality
Unfortunately, much computer system documentation is badly written, difficult to understand, out-
of-date or incomplete. Although the situation is improving, many organizations still do not pay
enough attention to producing system documents which are well-written pieces of technical prose.

Document quality is as important as program quality. Without information on how to use a system
or how to understand it, the utility of that system is degraded. Achieving document quality requires
management commitment to document design, standards, and quality assurance processes.
Producing good documents is neither easy nor cheap and many software engineers find it more
difficult than producing good quality programs.

Document structure

The document structure is the way in which the material in the document is organized into chapters
and, within these chapters, into sections and sub-sections. Document structure has a major impact
on readability and usability and it is important to design this carefully when creating documentation.
As with software systems, you should design document structures so that the different parts are as
independent as possible. This allows each part to be read as a single item and reduces problems of
cross-referencing when changes have to be made.

Structuring a document properly also allows readers to find information more easily. As well as
document components such as contents lists and indexes, well-structured documents can be skim
read so that readers can quickly locate sections or sub-sections that are of most interest to them.

Documentation Standards

Documentation standards act as a basis for document quality assurance. Documents produced
according to appropriate standards have a consistent appearance, structure and quality. Standards
that may be used in the documentation process include:

1. Process standards - These standards define the process which should be followed for high-quality document production.

2. Product standards - These are standards which govern the documents themselves.

3. Interchange standards - It is increasingly important to exchange copies of documents via electronic mail and to store documents in databases. Interchange standards ensure that all electronic copies of documents are compatible.

Standards are, by their nature, designed to cover all cases and, consequently, can sometimes seem
unnecessarily restrictive. It is therefore important that, for each project, the appropriate standards
are chosen and modified to suit that particular project. Small projects developing systems with a
relatively short expected lifetime need different standards from large software projects where the
software may have to be maintained for 10 or more years.

Process standards

Process standards define the approach to be taken in producing documents. This generally means
defining the software tools which should be used for document production and defining the quality
assurance procedures which ensure that high-quality documents are produced. Document process
quality assurance standards must be flexible and must be able to cope with all types of document. In
some cases, where documents are simply working papers or memos, no explicit quality checking is
required. However, where documents are formal documents, that is, when their evolution is to be
controlled by configuration management procedures, a formal quality process should be adopted.

Drafting, checking, revising and re-drafting is an iterative process which should be continued until a
document of acceptable quality is produced. The acceptable quality level will depend on the
document type and the potential readers of the document.

Product standards

Product standards apply to all documents produced in the course of the software development.
Documents should have a consistent appearance and documents of the same class should have a
consistent structure. Document standards are project-specific but should be based on more general
organizational standards.

My notes

__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
________

Test a computer program against a given
specification (US115384)

Unit Std # 115384


NQF Level 5
Notional hours 60
Credit(s) 6
Field Field 03 - Physical, Mathematical, Computer and Life Sciences
Sub-Field Information Technology and Computer Sciences
Qualification National Certificate: Information Technology (Systems Development) LEVEL
5- SAQA- 48872- 131 CREDITS

Table of contents

Specific Outcome 1 : Test a computer program against given specifications according to test plans.

 The testing executes operational steps identified in the test plan.
 The testing uses input data as specified in the test plan.
 The testing outlines deviations from the test plan, with explanations.
 The testing follows the standards and procedures specified in the test plan for testing and re-testing.

Specific Outcome 2 : Record the results from testing a computer program

 The records are provided for all tests executed.
 The records identify variations from expected test results and give reasons where available.
 The recorded results are reproduced if the tests are repeated under the same conditions.
 The results are recorded in a way that allows them to be reviewed.
Specific Outcome 3 : Review the testing process for a computer program against organisation
policy and procedures
 The review allows improvements to be made to the application testing process.
 The review follows organisation policy and procedures.

NOTIONAL HOURS BREAKDOWN

The candidate undertaking this unit standard is best advised to spend at least sixty hours of
study time on this learning programme. Below is a table which demonstrates how these
hours could be spread:

TIMEFRAME

Total Notional Hours: Credits (6) x 10 = 60
Contact Time: 15 hrs
Non-contact (Self-Study/Assessment): 35 hrs
1. Learning Programme Name: REFER TO COVER PAGE

2. SAQA Qualification/Unit Standard Title: REFER TO COVER PAGE

3. Qualification/Unit Standard
4. SAQA ID Number:
5. NQF Level: 5
6. Credits: 6

7. PURPOSE for offering this programme to your learners: REFER TO NEXT PAGE

8. TARGET AUDIENCE for this specific programme: REFER TO NEXT PAGE

9. Entry/Admission Requirements: REFER TO NEXT PAGE

10. Timeframe for Training (Total Hours/Days/Weeks): Theory content - role play, simulation, group
work, pair work = 15 hrs. Non-contact session - self-study, assignment, practice guided by coach or
mentor, formative assessment and summative assessment = 35 hrs.

The Learner guide

At the end of this unit standard you will be able to Test a computer program against a
given specification

Purpose:
People credited with this unit standard are able to: 

 Test a computer program against a given specification

Specific outcome:
 Test a computer program against given specifications according to test plans
 Record the results from testing a computer program
 Review the testing process for a computer program against organisation policy and
procedures.

Learning assumed to be in place:

 Demonstrate understanding of Mathematics, at least at level 3.
 Explain how data is stored on computers

Equipment needed:
Learning material, Learner workbook, Pen, Ruler.
PLEASE NOTE: THE USE OF PENCILS OR TIPPEX IS NOT ALLOWED.
IF YOU USE A PENCIL THE VALIDITY OF YOUR WORK COULD BE QUESTIONABLE, AND THIS
COULD LEAD TO FRAUD.

Resources (selective resources might be used, depending on the facilitator and venue
circumstances), one or all of the following can be used:
 Your facilitator/mentor
 Learning material
 Learner workbook
 Visual aids
 White board
 Flip chart
 Equipment
 Training venue

Venue, Date and Time:


Consult your facilitator should there be any changes to the venue, date and/or time.
Refer to your timetable.

Assessments:
The only way to establish whether you are competent and have accomplished the specific
outcomes is through continuous assessments
The given exercises can contain one or more of the following:
 Information for you to read
 Exercises that require you to have a problem-solving approach to communication
 Questions for you to answer
 Case studies with questions that follow

How to do the exercise:

 The facilitator will tell you which exercise you need to complete each day.
 You need to hand in your answers to the facilitator, who will mark them for correctness.
 If you do not know the answer, you will have to go back to that particular section in
your learner guide and go over it again.
 Ask the facilitator for help, if you do not understand any of the questions asked.
 Always remember to give reasons for your answers

SPECIFIC OUTCOME 1:

Test a computer program against given specifications according to test plans.

ASSESSMENT CRITERIA
 The testing executes operational steps identified in the test plan.
 The testing uses input data as specified in the test plan.
 The testing outlines deviations from the test plan, with explanations.
 The testing follows the standards and procedures specified in the test plan for testing and re-
testing.

1.1 Testing a computer program against given specifications according to test plans.

OUTCOME RANGE

 Black box testing - functional, data driven, I/O testing.
 White box testing - logic-driven, structural, glass box testing.
 Screen layout, Forms design, Data controls, Input validation, Ranges.

 Programme Testing refers to a set of activities conducted with the intent of finding
errors in software.

 Test plan refers to the specification of the tests to be carried out: which operational steps will be
executed, with what input data, and against which expected results (a hypothetical entry is sketched
below). The developers are well aware of which test plans will be executed, and this information is
made available to management and the developers. The idea is to make them more cautious when
developing their code or making additional changes. Some companies have a higher-level document
called a test strategy.
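A minimal, hypothetical test plan entry might look like the sketch below; every identifier and value is invented for this guide and would normally come from the system's specification:

    Test ID:           TC-001
    Objective:         Verify that a valid login succeeds
    Preconditions:     User account "demo" exists and is active
    Input data:        username = demo, password = Demo#2018
    Operational steps: 1. Open the login screen  2. Enter the credentials  3. Select Login
    Expected result:   The home screen is displayed
    Actual result / Pass-Fail / Tester / Date: completed during test execution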

1.1.1 Approaches to programme testing

(a) Static vs. dynamic testing

There are many approaches available in software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing is often implicit, for example proofreading, or when

programming tools/text editors check source code structure or compilers (pre-compilers) check
syntax and data flow as static program analysis. Dynamic testing takes place when the program itself
is run. Dynamic testing may begin before the program is 100% complete in order to test particular
sections of code; it is applied to discrete functions or modules. Typical techniques for this are
either using stubs/drivers or execution from a debugger environment.

Static testing involves verification, whereas dynamic testing involves validation. Together they help
improve software quality. Among the techniques for static analysis, mutation testing can be used to
ensure the test-cases will detect errors which are introduced by mutating the source code.
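As a small, hedged illustration of this distinction (the function, values and comments below are invented for this guide, not taken from the unit standard): a reviewer or static-analysis tool can spot the suspicious boundary in is_adult() simply by reading the source, while the dynamic test only exposes the fault when the code is executed with concrete input.

    # Hypothetical example: static inspection versus dynamic execution.

    def is_adult(age):
        # Defect a reviewer can spot statically: 18 should count as an adult,
        # so the comparison ought to be ">=", not ">".
        return age > 18

    def test_is_adult_boundary():
        # Dynamic testing: the fault only shows up when the code is run.
        assert is_adult(18) is True

    if __name__ == "__main__":
        try:
            test_is_adult_boundary()
            print("boundary test passed")
        except AssertionError:
            print("boundary test FAILED - the defect was revealed by executing the code")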

(b) The box approach

Software testing methods are traditionally divided into white- and black-box testing. These two
approaches are used to describe the point of view that a test engineer takes when designing test
cases.

xviii. White-Box testing

White-box testing (also known as clear box testing, glass box testing, transparent box testing and structural testing) tests internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).

While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.

Techniques used in white-box testing include:

 API testing (application programming interface) – testing of the application using public and private APIs

 Code coverage – creating tests to satisfy some criteria of code coverage (e.g., the test
designer can create tests to cause all statements in the program to be executed at
least once)

 Fault injection methods – intentionally introducing faults to gauge the efficacy of
testing strategies

 Mutation testing methods

 Static testing methods

 Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. Code coverage as a software metric can be reported as a percentage for:

- Function coverage, which reports on functions executed.

- Statement coverage, which reports on the number of lines executed to complete the test.

- 100% statement coverage ensures that all code paths or branches (in terms of control flow) are executed at least once. This is helpful in ensuring correct functionality, but not sufficient since the same code may process different inputs correctly or incorrectly (a short illustrative sketch follows this list).
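The following hedged sketch (invented names and values, assuming a specification that says zero must be reported as "zero") shows why statement coverage alone is not sufficient: the two tests execute every statement in classify(), yet the incorrect handling of 0 is never exercised.

    # Hypothetical coverage example.

    def classify(n):
        if n < 0:
            return "negative"
        return "positive"   # defect: the assumed spec says 0 should be "zero"

    def test_negative():
        assert classify(-5) == "negative"

    def test_positive():
        assert classify(5) == "positive"

    if __name__ == "__main__":
        test_negative()
        test_positive()
        print("both tests pass with 100% statement coverage, yet classify(0) is wrong")

    # With a coverage tool such as coverage.py, a typical (illustrative) run is:
    #   coverage run -m pytest && coverage report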

xix. Black-box testing

Black-box testing treats the software as a "black box", examining functionality without any
knowledge of internal implementation. The testers are only aware of what the software is
supposed to do, not how it does it. Black-box testing methods include: equivalence partitioning,
boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing,
model-based testing, use case testing, exploratory testing and specification-based testing.

Specification-based testing 

Aims to test the functionality of software according to the applicable
requirements. This level of testing usually requires thorough test cases to be provided
to the tester, who then can simply verify that for a given input, the output value (or
behaviour), either "is" or "is not" the same as the expected value specified in the test
case. Test cases are built around specifications and requirements, i.e., what the
application is supposed to do. It uses external descriptions of the software, including
specifications, requirements, and designs to derive test cases. These tests can
be functional or non-functional, though usually functional.

Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.
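A minimal, hedged sketch of specification-based testing follows. The "specification" quoted in the comment is invented for this guide; the test simply checks, for each input in the test plan, whether the actual output is or is not the expected value.

    # Assumed specification (for illustration only): "Orders of R500.00 or more
    # qualify for free delivery; smaller orders pay a flat R60.00 delivery fee."

    def delivery_fee(order_total):
        return 0.00 if order_total >= 500.00 else 60.00

    # Test cases derived from the specification: (input, expected output).
    SPEC_CASES = [
        (499.99, 60.00),   # just below the boundary
        (500.00, 0.00),    # exactly on the boundary
        (750.00, 0.00),    # well above the boundary
    ]

    def run_spec_cases():
        for order_total, expected in SPEC_CASES:
            actual = delivery_fee(order_total)
            result = "PASS" if actual == expected else "FAIL"
            print(f"{result}: input={order_total} expected={expected} actual={actual}")

    if __name__ == "__main__":
        run_spec_cases()

Note that the chosen inputs also exercise the boundary of the requirement, which is where specification defects most often appear.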

One advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight". Because testers do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case, or leaves some parts of the program untested.

This method of test can be applied to all levels of software testing: unit, integration, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing.

Visual testing

The aim of visual testing is to provide developers with the ability to examine what was
happening at the point of software failure by presenting the data in such a way that
the developer can easily find the information he or she requires, and the information
is expressed clearly.

At the core of visual testing is the idea that showing someone a problem (or a test
failure), rather than just describing it, greatly increases clarity and understanding.
Visual testing therefore requires the recording of the entire test process – capturing
everything that occurs on the test system in video format. Output videos are
supplemented by real-time tester input via picture-in-a-picture webcam and audio
commentary from microphones.

Visual testing provides a number of advantages. The quality of communication is
increased dramatically because testers can show the problem (and the events leading
up to it) to the developer as opposed to just describing it and the need to replicate
test failures will cease to exist in many cases. The developer will have all the evidence
he or she requires of a test failure and can instead focus on the cause of the fault and
how it should be fixed.

Visual testing is particularly well-suited for environments that deploy agile methods in their development of software, since agile methods require greater communication between testers and developers and collaboration within small teams.

Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement, while the important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of a test tool to visually record everything that occurs on a system becomes very important.

Visual testing is gathering recognition in customer acceptance and usability testing, because the test can be used by many individuals involved in the development process. For the customer, it becomes easy to provide detailed bug reports and feedback, and for program users, visual testing can record user actions on screen, as well as their voice and image, to provide a complete picture at the time of software failure for the developer.

xx. Grey-box testing

Grey-box testing involves having knowledge of internal data structures and algorithms for
purposes of designing tests, while executing those tests at the user, or black-box level. The
tester is not required to have full access to the software's source code. Manipulating input
data and formatting output do not qualify as grey-box, because the input and output are
clearly outside of the "black box" that we are calling the system under test. This distinction is
particularly important when conducting integration testing between two modules of code
written by two different developers, where only the interfaces are exposed for test.

However, tests that require modifying a back-end data repository such as a database or a log
file do qualify as grey-box, as the user would not normally be able to change the data
repository in normal production operations. Grey-box testing may also include reverse
engineering to determine, for instance, boundary values or error messages.

By knowing the underlying concepts of how the software works, the tester makes better-
informed testing choices while testing the software from outside. Typically, a grey-box tester
will be permitted to set up an isolated testing environment with activities such as seeding
a database. The tester can observe the state of the product being tested after performing
certain actions such as executing SQL statements against the database and then executing
queries to ensure that the expected changes have been reflected. Grey-box testing
implements intelligent test scenarios, based on limited information. This will particularly
apply to data type handling, exception handling, and so on.
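A small, hedged grey-box sketch follows (the table, column and function names are invented for this guide): the tester uses knowledge of the underlying database to seed it, exercises the function under test from the outside, and then queries the database to confirm that the expected change took place.

    import sqlite3

    def deactivate_customer(conn, customer_id):
        # Stand-in for the application code under test.
        conn.execute("UPDATE customers SET active = 0 WHERE id = ?", (customer_id,))
        conn.commit()

    def test_deactivate_customer():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, active INTEGER)")
        conn.execute("INSERT INTO customers (id, active) VALUES (1, 1)")  # seed the database
        conn.commit()

        deactivate_customer(conn, 1)   # exercise the system from the outside

        (active,) = conn.execute("SELECT active FROM customers WHERE id = 1").fetchone()
        assert active == 0, "customer should have been deactivated"

    if __name__ == "__main__":
        test_deactivate_customer()
        print("grey-box check passed")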

Testing levels

There are generally four recognized levels of tests: unit testing, integration testing, system
testing, and acceptance testing. Tests are frequently grouped by where they are added in
the software development process, or by the level of specificity of the test. The main levels
during the development process as defined by the SWEBOK guide are unit-, integration-, and
system testing that are distinguished by the test target without implying a specific process
model. Other test levels are classified by the testing objective.

Unit testing

Unit testing, also known as component testing, refers to tests that verify the
functionality of a specific section of code, usually at the function level. In an object-
oriented environment, this is usually at the class level, and the minimal unit tests
include the constructors and destructors.

These types of tests are usually written by developers as they work on code (white-
box style), to ensure that the specific function is working as expected. One function
might have multiple tests, to catch corner cases or other branches in the code. Unit
testing alone cannot verify the functionality of a piece of software, but rather is used
to ensure that the building blocks of the software work independently from each
other.
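The hedged sketch below shows what such developer-written unit tests can look like, using Python's built-in unittest module; the function under test and its corner cases are invented for illustration only.

    import unittest

    def word_count(text):
        """Return the number of whitespace-separated words in text."""
        return len(text.split())

    class WordCountTests(unittest.TestCase):
        def test_normal_sentence(self):
            self.assertEqual(word_count("testing finds defects"), 3)

        def test_corner_case_empty_string(self):
            self.assertEqual(word_count(""), 0)

        def test_corner_case_extra_spaces(self):
            self.assertEqual(word_count("  spaced   out  "), 2)

    if __name__ == "__main__":
        unittest.main()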

Unit testing is a software development process that involves synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software

developer or engineer during the construction phase of the software development
lifecycle. Rather than replacing traditional QA focuses, it augments them. Unit testing aims
to eliminate construction errors before code is promoted to QA; this strategy is
intended to increase the quality of the resulting software as well as the efficiency of
the overall development and QA process.

Depending on the organization's expectations for software development, unit testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, code coverage analysis and other software verification practices.

Integration testing

Integration testing is any type of software testing that seeks to verify the interfaces
between components against a software design. Software components may be
integrated in an iterative way or all together ("big bang"). Normally the former is
considered a better practice since it allows interface issues to be located more quickly
and fixed.

Integration testing works to expose defects in the interfaces and interaction between
integrated components (modules). Progressively larger groups of tested software
components corresponding to elements of the architectural design are integrated and
tested until the software works as a system.
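As a hedged illustration (both units and their data are invented for this guide), the integration test below exercises the interface between two separately tested units: the parser's output is fed straight into the discount calculation, which is where interface defects would surface.

    def parse_amount(text):
        """Unit A: convert an amount such as '1 234,50' into a float."""
        return float(text.replace(" ", "").replace(",", "."))

    def apply_discount(amount, percent):
        """Unit B: apply a percentage discount to an amount."""
        return round(amount * (1 - percent / 100), 2)

    def test_parse_then_discount_integration():
        # Exercises the interface: Unit A's output becomes Unit B's input.
        result = apply_discount(parse_amount("1 234,50"), 10)
        assert abs(result - 1111.05) < 0.01

    if __name__ == "__main__":
        test_parse_then_discount_integration()
        print("integration check passed")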

xxi. Component interface testing


The practice of component interface testing can be used to check the handling of data
passed between various units, or subsystem components, beyond full integration testing
between those units. The data being passed can be considered as "message packets" and
the range or data types can be checked, for data generated from one unit, and tested for
validity before being passed into another unit.

One option for interface testing is to keep a separate log file of data items being passed,
often with a timestamp logged to allow analysis of thousands of cases of data passed
between units for days or weeks. Tests can include checking the handling of some extreme
data values while other interface variables are passed as normal values. Unusual data values
in an interface can help explain unexpected performance in the next unit. Component
interface testing is a variation of black-box testing with the focus on the data values beyond
just the related actions of a subsystem component.

xxii. System testing

System testing, or end-to-end testing, tests a completely integrated system to verify that it
meets its requirements. For example, a system test might involve testing a logon interface,
then creating and editing an entry, plus sending or printing results, followed by summary
processing or deletion (or archiving) of entries, then logoff.

In addition, the software testing should ensure that the program, as well as working as
expected, does not also destroy or partially corrupt its operating environment or cause other
processes within that environment to become inoperative (this includes not corrupting
shared memory, not consuming or locking up excessive resources and leaving any parallel
processes unharmed by its presence).

xxiii. Installation testing

An installation test assures that the system is installed correctly and works on the actual
customer's hardware.

xxiv. Compatibility testing

A common cause of software failure (real or perceived) is a lack of its compatibility with other application software, operating systems (or operating system versions, old or new), or
target environments that differ greatly from the original (such as
a terminal or GUI application intended to be run on the desktop now being required to
become a web application, which must render in a web browser). For example, in the case of
a lack of backward compatibility, this can occur because the programmers develop and test
software only on the latest version of the target environment, which not all users may be
running. This results in the unintended consequence that the latest work may not function
on earlier versions of the target environment, or on older hardware that earlier versions of
the target environment was capable of using. Sometimes such issues can be fixed by
proactive abstracting operating system functionality into a separate
program module or library.

xxv. Smoke and sanity testing

Sanity testing determines whether it is reasonable to proceed with further testing.

Smoke testing consists of minimal attempts to operate the software, designed to determine
whether there are any basic problems that will prevent it from working at all. Such tests can
be used as a build verification test.
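A hedged sketch of a very small smoke test follows; the two checks and the stand-in callables are invented for this guide and simply illustrate a quick "does it start at all" verification before deeper testing begins.

    def smoke_test(start_application, get_version):
        checks = {
            "application starts": start_application() is True,
            "version string present": bool(get_version()),
        }
        for name, passed in checks.items():
            print(("PASS " if passed else "FAIL ") + name)
        return all(checks.values())

    if __name__ == "__main__":
        # Stand-ins for a real application's start-up and version query.
        smoke_test(lambda: True, lambda: "1.0")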

xxvi. Regression testing

Regression testing focuses on finding defects after a major code change has occurred.
Specifically, it seeks to uncover software regressions, such as degraded or lost features, including
old bugs that have come back. Such regressions occur whenever software functionality that
was previously working correctly stops working as intended. Typically, regressions occur as
an unintended consequence of program changes, when the newly developed part of the
software collides with the previously existing code. Common methods of regression testing
include re-running previous sets of test-cases and checking whether previously fixed faults
have re-emerged.

The depth of testing depends on the phase in the release process and the risk of the added
features. They can either be complete, for changes added late in the release or deemed to
be risky, or be very shallow, consisting of positive tests on each feature, if the changes are
early in the release or deemed to be of low risk. Regression testing is typically the largest
test effort in commercial software development due to checking numerous details in prior
software features, and even new software can be developed while using some old test-cases
to test parts of the new design to ensure prior functionality is still supported.
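The sketch below is a hedged illustration of a regression suite: previously passing cases, including one written for an old bug, are re-run after every change. The rounding function and the stored cases are invented for this guide.

    import math

    def round_half_up(x):
        """Round to the nearest integer, with halves rounding up (towards positive infinity)."""
        return math.floor(x + 0.5)

    # Previously passing cases, kept and re-run after every change.
    REGRESSION_CASES = [
        (2.5, 3),    # old bug: 2.5 was once rounded down to 2
        (3.2, 3),
        (-1.5, -1),
    ]

    def run_regression():
        failures = 0
        for value, expected in REGRESSION_CASES:
            actual = round_half_up(value)
            if actual != expected:
                failures += 1
                print(f"REGRESSION: input={value} expected={expected} got={actual}")
        print(f"regression run complete, {failures} failure(s)")

    if __name__ == "__main__":
        run_regression()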

xxvii. Acceptance testing

Acceptance testing can mean one of two things:

1. A smoke test is used as an acceptance test prior to introducing a new build to the
main testing process, i.e. before integration or regression.

2. Acceptance testing performed by the customer, often in their lab environment on
their own hardware, is known as user acceptance testing (UAT). Acceptance testing
may be performed as part of the hand-off process between any two phases of
development.

xxviii. Alpha testing

Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-

shelf software as a form of internal acceptance testing, before the software goes to beta
testing.

xxix. Beta testing

Beta testing comes after alpha testing and can be considered a form of external user
acceptance testing. Versions of the software, known as beta versions, are released to a
limited audience outside of the programming team. The software is released to groups of
people so that further testing can ensure the product has few faults or bugs. Sometimes,
beta versions are made available to the open public to increase the feedback field to a
maximal number of future users.

xxx. Functional vs non-functional testing

Functional testing refers to activities that verify a specific action or function of the code.
These are usually found in the code requirements documentation, although some
development methodologies work from use cases or user stories. Functional tests tend to
answer the question of "can the user do this" or "does this particular feature work."

Non-functional testing refers to aspects of the software that may not be related to a specific
function or user action, such as scalability or other performance, behaviour under
certain constraints, or security. Testing will determine the breaking point, the point at which
extremes of scalability or performance lead to unstable execution. Non-functional
requirements tend to be those that reflect the quality of the product, particularly in the
context of the suitability perspective of its users.

xxxi. Destructive testing


Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the
software functions properly even when it receives invalid or unexpected inputs, thereby
establishing the robustness of input validation and error-management routines. Software
fault injection, in the form of fuzzing, is an example of failure testing. Various commercial
non-functional testing tools are linked from the software fault injection page; there are also
numerous open-source and free software tools available that perform destructive testing.

xxxii. Software performance testing

Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also

serve to investigate, measure, validate or verify other quality attributes of the system, such
as scalability, reliability and resource usage.

Load testing is primarily concerned with testing that the system can continue to operate
under a specific load, whether that be large quantities of data or a large number of users.
This is generally referred to as software scalability. The related load testing activity, when
performed as a non-functional activity, is often referred to as endurance testing. Volume
testing is a way to test software functions even when certain components (for example a file or
database) increase radically in size. Stress testing is a way to test reliability under
unexpected or rare workloads. Stability testing (often referred to as load or endurance
testing) checks to see if the software can continue to function well over or above an
acceptable period of time.

There is little agreement on what the specific goals of performance testing are. The terms
load testing, performance testing, scalability testing, and volume testing, are often used
interchangeably.

Real-time software systems have strict timing constraints. To test if timing constraints are
met, real-time testing is used.
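As a very rough, hedged illustration of the idea (not a substitute for dedicated performance-testing tools), the sketch below calls an invented operation repeatedly and reports simple response-time figures:

    import time

    def search_catalogue(term):
        # Stand-in for the operation whose responsiveness is being measured.
        return [item for item in ("pen", "ruler", "workbook") * 1000 if term in item]

    def measure(requests=200):
        timings = []
        for _ in range(requests):
            start = time.perf_counter()
            search_catalogue("work")
            timings.append(time.perf_counter() - start)
        timings.sort()
        median_ms = timings[len(timings) // 2] * 1000
        worst_ms = timings[-1] * 1000
        print(f"requests={requests} median={median_ms:.2f} ms worst={worst_ms:.2f} ms")

    if __name__ == "__main__":
        measure()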

xxxiii. Usability testing

Usability testing checks whether the user interface is easy to use and understand. It is
concerned mainly with the use of the application.

xxxiv. Security testing

Security testing is essential for software that processes confidential data to prevent system
intrusion by hackers.

xxxv. Internationalization and localization

The general ability of software to be internationalized and localized can be automatically tested without actual translation, by using pseudolocalization. It will verify that the application still works, even after it has been translated into a new language or adapted for a new culture (such as different currencies or time zones).
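A hedged sketch of pseudolocalization follows (the marker characters and expansion factor are arbitrary choices for this guide): every user-visible string is expanded and wrapped so that truncation, concatenation and hard-coded text show up before any real translation exists.

    def pseudolocalize(text, expansion=0.3):
        """Expand and bracket a string to simulate a longer translation."""
        padding = "~" * max(1, int(len(text) * expansion))
        return "[!! " + text + padding + " !!]"

    if __name__ == "__main__":
        for label in ("Save", "Delete record", "Print summary report"):
            print(pseudolocalize(label))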

Actual translation to human languages must be tested, too. Possible localization failures
include:

 Software is often localized by translating a list of strings out of context, and the
translator may choose the wrong translation for an ambiguous source string.

 Technical terminology may become inconsistent if the project is translated by several people without proper coordination or if the translator is imprudent.

 Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.

 Untranslated messages in the original language may be left hard coded in the source
code.

 Some messages may be created automatically at run time and the resulting string may
be ungrammatical, functionally incorrect, misleading or confusing.

 Software may use a keyboard shortcut which has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.

 Software may lack support for the character encoding of the target language.

 Fonts and font sizes which are appropriate in the source language may be
inappropriate in the target language; for example, CJK characters may become
unreadable if the font is too small.

 A string in the target language may be longer than the software can handle. This may
make the string partly invisible to the user or cause the software to crash or
malfunction.

 Software may lack proper support for reading or writing bi-directional text.

 Software may display images with text that was not localized.

 Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency.

Important terms and concepts in programme testing

Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.

Accessibility Testing: Verifying that a product is accessible to people with disabilities (e.g. people who are deaf, blind or cognitively disabled).

Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying
the system's functionality. Can include negative testing as well. See also Monkey Testing.

Agile Testing: Testing practice for projects using agile methodologies, treating development as
the customer of testing and emphasizing a test-first design paradigm. See also Test Driven
Development.

Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary form across different system platforms and environments.

Application Programming Interface (API): A formalized set of software calls and routines that
can be referenced by an application program in order to access supporting system or network
services.

Automated Software Quality (ASQ): The use of software tools, such as automated testing tools,
to improve software quality.

Automated Testing:

Testing employing software tools which execute tests without manual intervention. Can be
applied in GUI, performance, API, etc. testing.

The use of software to control the execution of tests, the comparison of actual outcomes to
predicted outcomes, the setting up of test preconditions, and other test control and test
reporting functions.

Backus-Naur Form: A metalanguage used to formally describe the syntax of a language.

Basic Block: A sequence of one or more consecutive, executable statements containing no branches.

Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the
program to design tests.

Basis Set: The set of tests derived using basis path testing.

Baseline: The point at which some deliverable produced during the software engineering
process is put under formal change control.

Benchmark Testing: Tests that use representative sets of programs and data designed to
evaluate the performance of computer hardware and software in a given configuration.

Beta Testing: Testing of a pre-release version of a software product, conducted by customers.

Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

Bottom Up Testing: An approach to integration testing where the lowest level components are
tested first, then used to facilitate the testing of higher level components. The process is
repeated until the component at the top of the hierarchy is tested.

Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

Boundary Value Analysis: In boundary value analysis, test cases are generated using the
extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical
values, and error values. BVA is similar to Equivalence Partitioning but focuses on "corner
cases".

Branch Testing: Testing in which all branches in the program source code are tested at least
once.

Breadth Testing: A test suite that exercises the full functionality of a product but does not test
features in detail.

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.

CAST: Computer Aided Software Testing.

Capture/Replay Tool: A test tool that records test input as it is sent to the software under test.
The input cases stored can then be used to reproduce the test at a later time. Most commonly
applied to GUI test tools.

CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging
the maturity of the software processes of an organization and for identifying the key practices
that are required to increase the maturity of these processes.

Cause Effect Graph: A graphical representation of inputs and the associated output effects which can be used to design test cases.

Code Complete: Phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: An analysis method that determines which parts of the software have been
executed (covered) by the test case suite and which parts have not been executed and
therefore may require additional attention.

Code Inspection: A formal testing technique where the programmer reviews source code with a
group who ask questions analysing the program logic, analysing the code with respect to a
checklist of historically common programming errors, and analysing its compliance with coding
standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a
small set of test cases, while the state of program variables is manually monitored, to analyse
the programmer's logic and assumptions.

Coding: The generation of source code.

Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Component: A minimal software item for which a separate specification is available.

Component Testing: See Unit Testing.

Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

Context Driven Testing: The context-driven school of software testing is a flavour of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.

Data Dictionary: A database that contains definitions of all data items defined during analysis.

Data Flow Diagram: A modelling notation that represents a functional decomposition of a system.

Data Driven Testing: Testing in which the action of a test case is parameterized by externally
defined data values, maintained as a file or spreadsheet. A common technique in Automated
Testing.

Debugging: The process of finding and removing the causes of software failures.

Defect: Non-conformance to requirements or functional / program specification

Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing: A test that exercises a feature of a product in full detail.

Dynamic Testing: Testing software through executing it. See also Static Testing.

Emulator: A device, computer program, or system that accepts the same inputs and produces
the same outputs as a given system.

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged
execution.

End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed to be the same, based on the component's specification.

Equivalence Partitioning: A test case design technique for a component in which test cases are
designed to execute representatives from equivalence classes.

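For example (an illustrative sketch only, with an assumed valid range), suppose a component accepts ages from 18 to 65. The input domain splits into three equivalence classes, and one representative value is chosen from each:

def is_valid_age(age):             # hypothetical component under test
    return 18 <= age <= 65

# One representative per equivalence class:
assert is_valid_age(10) is False   # class 1: below 18 (invalid)
assert is_valid_age(40) is True    # class 2: 18 to 65 (valid)
assert is_valid_age(70) is False   # class 3: above 65 (invalid)

Because every value in a class is assumed to be handled the same way, testing one representative per class keeps the number of test cases small while still covering the whole input domain.
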
Error: A mistake in the system under test; usually but not always a coding mistake on the part of
the developer.

Exhaustive Testing: Testing which covers all combinations of input values and preconditions for
an element of the software under test.

Functional Decomposition: A technique used during planning, analysis and design; creates a
functional hierarchy for the software.

Functional Specification: A document that describes in detail the characteristics of the product
with regard to its intended features.

Functional Testing: See also Black Box Testing.

Testing the features and operational behaviour of a product to ensure they correspond to its
specifications.

Testing that ignores the internal mechanism of a system or component and focuses solely on
the outputs generated in response to selected inputs and execution conditions.

Glass Box Testing: A synonym for White Box Testing.

Gorilla Testing: Testing one particular module or functionality heavily.

Grey Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

High Order Tests: Black-box tests conducted once the software has been integrated.

Independent Test Group (ITG): A group of people whose primary responsibility is software
testing.

Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).

Integration Testing: Testing of combined parts of an application to determine if they function
together correctly. Usually performed after unit and functional testing. This type of testing is
especially relevant to client/server and distributed systems.

Installation Testing: Confirms that the application under test installs (and, where applicable, upgrades or uninstalls) correctly and is operational afterwards, under both normal and abnormal conditions such as insufficient disk space.

Load Testing: See Performance Testing.

Localization Testing: This term refers to testing software that has been adapted for a specific locality, for example its language and regional conventions.

Loop Testing: A white box testing technique that exercises program loops.

Metric: A standard of measurement. Software metrics are the statistics describing the structure
or content of a program. A metric should be a real objective measurement of something such as
number of bugs per lines of code.

Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there, to ensure the system or application does not crash.

Mutation Testing: Testing done on the application where bugs are purposely added to it, in order to check whether the existing tests detect them.

Negative Testing: Testing aimed at showing software does not work. Also known as "test to
fail". See also Positive Testing.

N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which
errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The
cycles are typically repeated until the solution reaches a steady state and there are no errors.
See also Regression Testing.

Path Testing: Testing in which all paths in the program source code are tested at least once.

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

Positive Testing: Testing aimed at showing software works. Also known as "test to pass".

Quality Assurance: All those planned or systematic actions necessary to provide adequate
confidence that a product or service is of the type and quality needed and expected by the
customer.

Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle: A group of individuals with related interests that meet at regular intervals to
consider problems or other matters related to the quality of outputs of a process and to the
correction of problems or to the improvement of quality.

Quality Control: The operational techniques and the activities used to fulfill and verify
requirements of quality.

Quality Management: That aspect of the overall management function that determines and
implements the quality policy.

Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.

Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.

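A minimal Python sketch of a race condition (the names and iteration count are illustrative assumptions): two threads update a shared counter with no lock, so updates can be lost.

import threading

counter = 0

def worker():
    global counter
    for _ in range(100_000):
        counter += 1            # unsynchronised read-modify-write

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Because the threads interleave, the final value may be less than the
# expected 200000; guarding the update with a threading.Lock() removes the race.
print(counter)
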
Ramp Testing: Continuously raising an input signal until the system breaks down.

Recovery Testing: Confirms that the program recovers from expected or unexpected events
without loss of data or functionality. Events can include shortage of disk space, unexpected loss
of communication, or power out conditions.

Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

Release Candidate: A pre-release version, which contains the desired functionality of the final
version, but which needs to be tested for bugs (which ideally should be removed before the
final version is released).

Sanity Testing: Brief test of major functional elements of a piece of software to determine if it’s
basically operational. See also Smoke Testing.

Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.

Security Testing: Testing which confirms that the program can restrict access to authorized
personnel and that the authorized personnel can access the functions available to their security
level.

Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work.
Originated in the hardware testing practice of turning on a new piece of hardware for the first
time and considering it a success if it does not catch on fire.

Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

Software Requirements Specification: A deliverable that describes all data, functional and behavioural requirements, all constraints, and all validation requirements for software.

Static Analysis: Analysis of a program carried out without executing the program.

Static Analyser: A tool that carries out static analysis.

Static Testing: Analysis of a program carried out without executing the program.

Storage Testing: Testing that verifies the program under test stores data files in the correct
directories and that it reserves sufficient space to prevent unexpected termination resulting
from lack of space. This is external storage as opposed to internal storage.

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.

System Testing: Testing that attempts to discover defects that are properties of the entire
system rather than of its individual components.

Testability: The degree to which a system or component facilitates the establishment of test
criteria and the performance of tests to determine whether those criteria have been met.

Testing:

The process of exercising software to verify that it satisfies specified requirements and to detect
errors.

The process of analysing a software item to detect the differences between existing and
required conditions (that is, bugs), and to evaluate the features of the software item.

The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test Case:

Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing.
A Test Case will consist of information such as requirements testing, test steps, verification
steps, prerequisites, outputs, test environment, etc.

A set of inputs, execution preconditions, and expected outcomes developed for a particular
objective, such as to exercise a particular program path or to verify compliance with a specific
requirement.

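The exact layout of a test case varies between organisations; the structure below is only an illustrative sketch of the kind of information listed above, and every field name and value in it is an assumption.

test_case = {
    "id": "TC-001",
    "objective": "Verify login with valid credentials",
    "prerequisites": ["User account exists", "Application is reachable"],
    "inputs": {"username": "demo.user", "password": "correct-password"},
    "steps": ["Open the login page", "Enter the inputs", "Click 'Log in'"],
    "expected_outcome": "The user is taken to the home page",
    "environment": "Test bed A (browser plus test database)",
}
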
Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a large number of tests, often producing roughly as many lines of test code as production code.

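A minimal sketch of the test-first rhythm (the function and test names are assumptions, not a prescribed technique of this module): the unit test is written before the production code, fails at first, and only then is the smallest implementation added to make it pass.

# Step 1: write the unit test first; it fails because slugify() does not exist yet.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# Step 2: write just enough production code to make the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

# Step 3: run all tests again (they must all pass), then refactor and repeat.
test_slugify_replaces_spaces_with_hyphens()
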
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.

Test Environment: The hardware and software environment in which tests will be run, and any
other software with which the software under test interacts when under test including stubs
and test drivers.

Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

Test Harness: A program or test tool used to execute tests. Also known as a Test Driver.

Test Plan: A document describing the scope, approach, resources, and schedule of intended
testing activities. It identifies test items, the features to be tested, the testing tasks, who will do
each task, and any risks requiring contingency planning.

Test Procedure: A document providing detailed instructions for the execution of one or
more test cases.

Test Scenario: Definition of a set of test cases or test scripts and the sequence in which they are
to be executed.

Test Script: Commonly used to refer to the instructions for a particular test that will be carried
out by an automated test tool.

Test Specification: A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results and execution conditions for the associated tests.

Test Suite: A collection of tests used to validate the behaviour of a product. The scope of a Test
Suite varies from organization to organization. There may be several Test Suites for a particular
product for example. In most cases however a Test Suite is a high level concept, grouping
together hundreds or thousands of tests related by what they are intended to test.

Test Tools: Computer programs used in the testing of a system, a component of the system, or
its documentation.

Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

Top Down Testing: An approach to integration testing where the component at the top of the
component hierarchy is tested first, with lower level components being simulated by stubs.
Tested components are then used to test lower level components. The process is repeated until
the lowest level components have been tested.

Total Quality Management: A company commitment to develop a process that achieves high-quality products and customer satisfaction.

Traceability Matrix: A document showing the relationship between Test Requirements and Test
Cases.

Usability Testing: Testing the ease with which users can learn and use a product.

Use Case: The specification of tests that are conducted from the end-user perspective. Use
cases tend to focus on operating software as an end-user would conduct their day-to-day
activities.

User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.

Unit Testing: Testing of individual software components.

Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.

Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

Volume Testing: Testing which confirms that any values that may become large over time (such
as accumulated counts, logs, and data files), can be accommodated by the program and will not
cause the program to stop working or degrade its operation in any manner.

Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.

White Box Testing: Testing based on an analysis of the internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.

Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are
expected to be utilized by the end-user.

SPECIFIC OUTCOME 2:
Record the results from testing a computer program.
ASSESSMENT CRITERIA
 The records are provided for all tests executed.
 The records identify variations from expected test results and give reasons where available.
 The recorded results can be reproduced if the tests are repeated under the same conditions.
 The results are recorded in a way that allows them to be reviewed.

2.1 Record the results from testing a computer program.

Test log, Test incident report, Error flags, Schedule of tests.

Recording test execution results is a very important part of testing. Whenever a test execution cycle is complete, the tester should produce a complete test results report that includes the pass/fail status of the test cycle.

If testing is done manually, the pass/fail results should be captured in a spreadsheet (such as an Excel sheet); if testing is automated, the HTML or XML reports produced by the automation tool should be provided to stakeholders as a test deliverable.

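As a small illustrative sketch (not a required deliverable format), pass/fail results captured during a manual test cycle could be written to a CSV file that opens directly in Excel; the field names and values below are assumptions.

import csv

results = [
    {"test_case": "TC-001", "status": "Pass", "comment": ""},
    {"test_case": "TC-002", "status": "Fail", "comment": "Defect raised in the incident log"},
]

# One row per executed test case, so the sheet can be shared with stakeholders.
with open("test_cycle_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["test_case", "status", "comment"])
    writer.writeheader()
    writer.writerows(results)
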
2.2 Test results

Testing results include the following:


 Test plan status
 Test documentation status
 Test execution status (defect status)

Test plan – It is enough to communicate with the rest of the project teams when a test plan is created or when a major change is made to it.
Test documentation – Let all the teams know when the design of the tests, data gathering and other activities have begun, and also when they are finished. This report will not only let them know about the progress of the task, but will also signal to the teams that need to review and sign off on the artefacts that they are up next.

Test execution – Execution is the phase of a project when the testing team is the primary focus – positively and negatively – we are both the heroes and the villains.
A typical day during a test cycle is not complete until the daily status report has been sent out. Some teams may agree on a weekly report instead, but sending it daily is the norm.

What should a programme test result show?


a) Number of tests done
b) Number of test cases executed
c) Number of defects encountered that day and their respective states
d) Number of defects encountered so far and their respective states
e) Number of critical defects still open
f) Environment downtimes – if any
g) Showstoppers – if any
h) Attachment of the test execution sheet / Link to the test management tool where the test
cases are placed
i) Attachment to the bug report/link to the defect/test management tool used for incident
management

A few pointers to help the process along:

a) Be concise but at the same time complete
b) Make sure the results you report are accurate
c) Use bulleted points to make the report very readable
d) Double check to include the right date, subject, "To" list and attachments.
e) If the report is too big and has too many factors to report, place it in a common location as a file and send a link in the email instead of the file itself. (Be sure the recipients have access permissions to this location and the file.)
f) If it is a status meeting – Be prepared for the presentation, arrive on time and most
importantly, maintain an even tone (don’t be too proud of the defects – they are in general
“bad news”).

Below are sample reports of programme test results.

SPECIFIC OUTCOME 3:
Review the testing process for a computer program against
organisation policy and procedures.
ASSESSMENT CRITERIA
 The review allows improvements to be made to the application testing process.
 The review follows organisation policy and procedures.

3.1 Review the testing process for a computer program against organisation policy and
procedures.

OUTCOME RANGE

Test deliverables, Review of test incident report, Pass/Fail criteria, Contingencies, QA Approval.

3.1.1 Programme review

A programme review is "A process or meeting during which a software product is examined by project personnel, managers, users, customers, user representatives, or other interested parties for comment or approval".

In this context, the term "software product" means "any technical document or partial document,
produced as a deliverable of a software development activity", and may include documents such as
contracts, project plans and budgets, requirements documents, specifications, designs, source code,
user documentation, support and maintenance documentation, test plans, test specifications,
standards, and any other type of specialist work product.

3.1.2 Varieties of software review


Software reviews may be divided into three categories:

 Software peer reviews are conducted by the author of the work product, or by one or more
colleagues of the author, to evaluate the technical content and/or quality of the work.

 Software management reviews are conducted by management representatives to evaluate the status of work done and to make decisions regarding downstream activities.

 Software audit reviews are conducted by personnel external to the software project, to
evaluate compliance with specifications, standards, contractual agreements, or other
criteria.

3.1.3 Different types of reviews

 Code review is systematic examination (often as peer review) of computer source code.

 Pair programming is a type of code review where two persons develop code together at the
same workstation.

 Inspection is a very formal type of peer review where the reviewers follow a well-defined process to find defects.

 Walkthrough is a form of peer review where the author leads members of the development
team and other interested parties through a software product and the participants ask
questions and make comments about defects.

 Technical review is a form of peer review in which a team of qualified personnel examines
the suitability of the software product for its intended use and identifies discrepancies from
specifications and standards.

3.1.4 Formal versus informal reviews

"Formality" identifies the degree to which an activity is governed by agreed (written) rules. Software
review processes exist across a spectrum of formality, with relatively unstructured activities such as
"buddy checking" towards one end of the spectrum, and more formal approaches such as
walkthroughs, technical reviews, and software inspections, at the other.

Research studies tend to support the conclusion that formal reviews greatly outperform informal reviews in cost-effectiveness. Informal reviews may often be unnecessarily expensive (because of time-wasting through lack of focus), and frequently provide a sense of security which is quite unjustified by the relatively small number of defects they actually find and remove.

A formal review typically follows a common structure of activities, outlined below. Differing types of review may apply this structure with varying degrees of rigour, but all of the activities are mandatory for inspection:

 [Entry evaluation]: The Review Leader uses a standard checklist of entry criteria to ensure
that optimum conditions exist for a successful review.

 Management preparation: Responsible management ensure that the review will be
appropriately resourced with staff, time, materials, and tools, and will be conducted
according to policies, standards, or other relevant criteria.

 Planning the review: The Review Leader identifies or confirms the objectives of the review,
organises a team of Reviewers, and ensures that the team is equipped with all necessary
resources for conducting the review.

 Overview of review procedures: The Review Leader, or some other qualified person, ensures (at a meeting if necessary) that all Reviewers understand the review goals, the review procedures, the materials available to them, and the procedures for conducting the review.

 [Individual] Preparation: The Reviewers individually prepare for group examination of the work under review, by examining it carefully for anomalies (potential defects), the nature of which will vary with the type of review and its goals.

 [Group] Examination: The Reviewers meet at a planned time to pool the results of their
preparation activity and arrive at a consensus regarding the status of the document (or
activity) being reviewed.

 Rework/follow-up: The Author of the work product (or other assigned person) undertakes
whatever actions are necessary to repair defects or otherwise satisfy the requirements
agreed to at the Examination meeting. The Review Leader verifies that all action items are
closed.

 [Exit evaluation]: The Review Leader verifies that all activities necessary for successful
review have been accomplished, and that all outputs appropriate to the type of review have
been finalised.

3.1.5 Value of reviews

The most obvious value of software reviews (especially formal reviews) is that they can identify
issues earlier and more cheaply than they would be identified by testing or by field use (the defect
detection process). The cost to find and fix a defect by a well-conducted review may be one or two
orders of magnitude less than when the same defect is found by test execution or in the field.

A second, but ultimately more important, value of software reviews is that they can be used to train
technical authors in the development of extremely low-defect documents, and also to identify and
remove process inadequacies that encourage defects (the defect prevention process).

This is particularly the case for peer reviews if they are conducted early and often, on samples of
work, rather than waiting until the work has been completed. Early and frequent reviews of small
work samples can identify systematic errors in the Author's work processes, which can be corrected
before further faulty work is done. This improvement in Author skills can dramatically reduce the
time it takes to develop a high-quality technical document, and dramatically decrease the error-rate
in using the document in downstream processes.

As a general principle, the earlier a technical document is produced, the greater will be the impact of
its defects on any downstream activities and their work products. Accordingly, greatest value will
accrue from early reviews of documents such as marketing plans, contracts, project plans and
schedules, and requirements specifications. Researchers and practitioners have shown the effectiveness of the reviewing process in finding bugs and security issues.
