
CAP526 – Software Testing and Quality Assurance

HOMEWORK 2

SOFTWARE TESTING AND QUALITY ASSURANCE

SUBMITTED TO:

Mr Deepak Mehta

SUBMITTED BY:

VIKAS RANA

ROLL NO. 03

REG. NO. 7010070026

BCA (H) – MCA



Q1. Why does knowing how the Software works influence how and what you should test?

Ans: Deciding when software is ready to release is easy to talk about but difficult to do in practice. Software is becoming more pervasive all the time. Unfortunately, software isn't as reliable as it ought to be, and its inherent unreliability is passed on to every system it touches. And while consumers would like software to be perfect before they ever encounter it, perfect software doesn't exist; deciding when it is "good enough" to release is no simple task. Project managers frequently must make tough calls when deciding when software is ready for market. There are deadlines that must be met and other pressures to consider, and sometimes the actual functionality of the program suffers, not to mention the hidden costs of rushing a product to market.

In a classical software project, it's easy to tell when software is ready to be released. It has to fulfill the requirements, it has to have passed testing, and it has to be ready for the user. It has to have the appropriate documentation and/or user's manual and/or online help. But is it Software Operating Usably and Properly (SOUP)?

With so much software being produced throughout the world (and made available across the Internet), some important things are being forgotten. When software is released, it's supposed to be usable, and it's supposed to show that the company or group of people who developed the software are professionals who know what they are doing. The difference between good SOUP and bad SOUP is really up to the user, but there are many things that the people developing the software need to address before the software is released.

How little functionality is too little functionality? How much usability is enough? These are questions that have to be addressed, and within a customer-driven project (one in which the customer is actively involved, such as a proprietary project for a specific customer, or Open Source/Free Software), these issues are taken care of on the fly – the customer is involved in all aspects of the development and has a steady rapport with the development team.

In other types of software projects – such as new types of applications (and even new operating systems) – the target is a moving one, and is difficult to assess. Typically, the potential customers won't even know that they want the product, or how they would use it – and because of this, SOUP is that much more important to these projects.

Marketing an incomplete product with features that haven't been tested yet increases expectations, and is good business in the classical sense. The risk, though, is that expectations will not be met – and the users will see bugs in their SOUP.

Q2. What is the biggest problem of White-Box Testing, either Static or Dynamic?

Ans: White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box Testing, Transparent Box Testing or Structural Testing) is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs. Programming know-how and implementation knowledge are essential. White box testing is testing beyond the user interface and into the nitty-gritty of a system.

The White Box Testing method is so named because the software program, in the eyes of the tester, is like a white/transparent box inside which one can clearly see.

E.g.: A tester, usually a developer as well, studies the implementation code of a certain field on a webpage, determines all legal (valid) and illegal (invalid) inputs, and verifies the outputs against the expected outcomes, which are also determined by studying the implementation code.
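
As a rough sketch of this style of testing (the validate_quantity function below is invented purely for illustration and is not taken from any real application), a white-box tester who has read the implementation would write one test per path through the code:

#include <assert.h>
#include <ctype.h>

/* Hypothetical implementation of a numeric "quantity" field:
   returns 0 for a valid value, -1 for empty or non-numeric input,
   -2 for a value outside the allowed range 1..99
   (overflow on very long input is ignored in this sketch). */
static int validate_quantity(const char *input)
{
    int value = 0;
    if (input == NULL || *input == '\0')
        return -1;                          /* path 1: empty input  */
    for (const char *p = input; *p; ++p) {
        if (!isdigit((unsigned char)*p))
            return -1;                      /* path 2: non-numeric  */
        value = value * 10 + (*p - '0');
    }
    if (value < 1 || value > 99)
        return -2;                          /* path 3: out of range */
    return 0;                               /* path 4: valid        */
}

int main(void)
{
    /* One test per path found by reading the implementation. */
    assert(validate_quantity("")    == -1);
    assert(validate_quantity("ab")  == -1);
    assert(validate_quantity("0")   == -2);
    assert(validate_quantity("100") == -2);
    assert(validate_quantity("42")  ==  0);
    return 0;
}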

DISADVANTAGES OF WHITE BOX TESTING

• Since tests can be very complex, highly skilled resources are required, with thorough knowledge of programming and implementation.
• Test script maintenance can be a burden if the implementation changes too frequently.
• Since this method of testing is closely tied to the application being tested, tools to cater to every kind of implementation/platform may not be readily available.

Q3. How could you guarantee that your Software would never have a Configuration Problem?

ANS: New technologies are emerging day by day, and user requirements and expectations keep changing, so it is not possible to fully guarantee that our software will never have a configuration problem.

Instead, we have to tell customers which types of environment the software runs in and which hardware requirements it supports; if these requirements are matched by the user's hardware, then we can say that it should not generate this kind of problem.

For example, consider purchasing a video game. A video game is a set of instructions developed in code, so it is software. When we purchase it from the market, the back of the CD case lists the hardware requirements, which means the game will run only in that kind of environment. For example, if it states that it supports Windows Vista or a higher version, it cannot be expected to work on Windows XP.

Q4. Create the equivalence partitions and write test cases to test a login screen containing username and password.

The following steps describe how to do this in detail.

Step 1: Identify variables for each use case step

You need to identify all input variables in all of the steps in the given scenario. For example, if in some step the user enters a user ID and password, there are two variables. One variable is the user ID, and the second variable is the password. The variable can also be a selection that the user can make (for instance, Save changes or Cancel).

Step 2 : Identify significantly different options for each


variable

Options are "significantly different" if they may trigger


different system behavior. For example, if we select a user
id, which is supposed to be from 6 to 10 characters long,
the following entries are significantly different:

• Alex -- because it is too short, and we expect an error message to appear
• Alexandria -- because it is a valid user id
• Alexandrena -- because it is too long, and we expect the system to prevent us from entering a user id that long

However, "Alexandria" and "JohnGordon" are not


significantly different, because they are both valid user ids
that should cause the system to react in the same way.

The following guidelines describe some specific cases. An option can be considered significantly different if:

1. It triggers a different flow of the process (usually an alternative flow)

Example
o Entering an invalid password will trigger Alternative Flow 2

2. It triggers a different error message

Example
o If the email is too long, the message is "Email should have no more than 50 characters"
o If the email does not contain an @ sign, the message is "Invalid email address"

PASSWORD

Example

Since the password should have at least 6 characters, we should test (see the sketch below):

o Password with 5 characters
o Password with 6 characters
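
A minimal sketch of how these boundary partitions could be exercised (the check_password_length helper is invented, and the 6-to-10-character rule is an assumption carried over from Table 4 later in this answer, not a real API):

#include <assert.h>
#include <string.h>

/* Hypothetical length check for the password field:
   valid passwords are 6 to 10 characters long. */
static int check_password_length(const char *password)
{
    size_t len = strlen(password);
    return len >= 6 && len <= 10;    /* 1 = accept, 0 = reject */
}

int main(void)
{
    assert(check_password_length("abcde")       == 0);  /*  5 chars: too short  */
    assert(check_password_length("abcdef")      == 1);  /*  6 chars: lower edge */
    assert(check_password_length("abcdefghij")  == 1);  /* 10 chars: upper edge */
    assert(check_password_length("abcdefghijk") == 0);  /* 11 chars: too long   */
    return 0;
}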

If we are testing numbers, we may consider the following options:

• Regular number, reasonable from the application point of view
• Zero
• Negative number
• A number with two decimals
• The biggest number that can be entered (99999999999999 - as many nines as can fit)

How do you know what is the minimum and maximum allowed length of a field? This requirement can come from different sources. Sometimes it comes from the business analyst or a customer. For example, if we enter a Dun and Bradstreet number that identifies a company, it should always be a number containing 9 digits. It is a business requirement.

Quite often, however, it doesn't come from the customer or the user. If you ask the customer how big the last name field should be, they might say that they don't care and ask you to make it whatever is reasonable. In this case it is a design decision rather than a requirement to decide how long the variable should be.

In another situation, it may be suggested by the data analyst or database designer – for example, if all other applications in the corporation store last names in 30-character-long fields, your application should probably comply with this standard as well.

Regardless of the source of the requirement, it should always be agreed upon and documented before we write the test cases.

Step 3: Combine options to be tested into test cases

In the previous step you identified all the options. In this step, you need to combine them into the sequence of test case steps.

Figure 10 graphically illustrates the options to be tested. In each column, there is an input variable to be tested, and each row is one option: R is regular, E is empty, and then one character, 50 characters, 51, and so forth. "L" means very large, and "I" means illegal.

To create the first test case, you can pick and connect any
options. When you create the second test case, pick one of
the options that was not used in the first one. Continue
adding test cases until all nodes of the graph (as shown in
Figure 11) are covered. Usually you'll need from 4 to 6 test
cases to cover all the options that should be tested.
However, some specific situations may require more.

Allocation of test cases can also be represented in the form of a test case allocation matrix, as shown in Table 4.
Step number | Variable or selection | TC1        | TC2                  | TC3                   | TC4
------------+-----------------------+------------+----------------------+-----------------------+---------------------
B1          | Website               | Actual URL | Actual URL           | Actual URL            | Actual URL
B2          | Email                 | Regular    | Min allowed (1 char) | Max allowed (50 char) | Regular
B2          | Password              | Regular    | Min allowed (6 char) | Max allowed (10 char) | Min allowed (6 char)

Table 4 describes the graph from Figure 11 in the form of a matrix where every column contains a different test case. Each row corresponds to one variable entered by a user.

Step 4: Assign values to variables
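
As a hedged illustration of this step (the validate_login helper and its acceptance rule are hypothetical; only the length limits from the earlier steps are reused, and the Website row of Table 4 is omitted), the four test cases from Table 4 might be given concrete values like this:

#include <assert.h>
#include <string.h>

/* Hypothetical login validator used only for this sketch:
   the email must be 1..50 characters and the password 6..10
   characters.  Returns 1 for accept, 0 for reject. */
static int validate_login(const char *email, const char *password)
{
    size_t elen = strlen(email);
    size_t plen = strlen(password);
    return elen >= 1 && elen <= 50 && plen >= 6 && plen <= 10;
}

int main(void)
{
    char email50[51], password10[11];

    /* Build boundary values of exactly 50 and 10 characters. */
    memset(email50, 'a', 50);     email50[50]    = '\0';
    memset(password10, 'x', 10);  password10[10] = '\0';

    /* TC1: regular email, regular password */
    assert(validate_login("alex@example.com", "secret99") == 1);
    /* TC2: minimum allowed email (1 char), minimum allowed password (6 chars) */
    assert(validate_login("a", "abcdef") == 1);
    /* TC3: maximum allowed email (50 chars), maximum allowed password (10 chars) */
    assert(validate_login(email50, password10) == 1);
    /* TC4: regular email, minimum allowed password (6 chars) */
    assert(validate_login("alex@example.com", "abcdef") == 1);
    return 0;
}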



Q5. Explain the key elements involved in formal reviews?
Formal Reviews

A formal review is the process under which static white-box testing is performed. A formal review can range from a simple meeting between two programmers to a detailed, rigorous inspection of the software's design or its code.

There are four essential elements to a formal review:

Identify Problems. The goal of the review is to find problems with the software – not just items that are wrong, but missing items as well. All criticism should be directed at the design or code, not the person who created it. Participants shouldn't take any criticism personally. Leave your egos, emotions, and sensitive feelings at the door.

Follow Rules. A fixed set of rules should be followed. They may set the amount of code to be reviewed (usually a couple hundred lines), how much time will be spent (a couple of hours), what can be commented on, and so on. This is important so that the participants know what their roles are and what they should expect. It helps the review run more smoothly.

Prepare. Each participant is expected to prepare for and contribute to the review. Depending on the type of review, participants may have different roles. They need to know what their duties and responsibilities are and be ready to actively fulfill them at the review. Most of the problems found through the review process are found during preparation, not at the actual review.

Write a Report. The review group must produce a written report summarizing the results of the review and make that report available to the rest of the product development team. It's imperative that others are told the results of the meeting – how many problems were found, where they were found, and so on.

What makes formal reviews work is following an established process. Haphazardly "getting together to go over some code" isn't sufficient and may actually be detrimental. If a process is run in an ad-hoc fashion, bugs will be missed and the participants will likely feel that the effort was a waste of time.

If the reviews are run properly, they can prove to be a great way to find bugs early. Think of them as one of the initial nets (see Figure 6.1) that catches the big bugs at the beginning of the process. Sure, smaller bugs will still get through, but they'll be caught in the next testing phases with the smaller nets with the tighter weave.

In addition to finding problems, holding formal reviews has a few indirect results:

• Communications. Information not contained in the formal report is communicated. For example, the black-box testers can get insight into where problems may lie. Inexperienced programmers may learn new techniques from more experienced programmers. Management may get a better feel for how the project is tracking its schedule.
• Quality. A programmer's code being gone over in detail, function by function, line by line, often results in the programmer being more careful. That's not to say that he would otherwise be sloppy – just that if he knows that his work is being carefully reviewed by his peers, he might make an extra effort to triple-check it to make sure that it's right.
• Team Camaraderie. If a review is run properly, it can be a good place for testers and programmers to build respect for each other's skills and to better understand each other's jobs and job needs.
• Solutions. Solutions may be found for tough problems, although whether they are discussed depends on the rules for the review. It may be more effective to discuss solutions outside the review.

These indirect benefits shouldn't be relied on, but they do happen. On many teams, for whatever reasons, the members end up working in isolation. Formal reviews are a great way to get them in the same room, all discussing the same project problems.

Peer Reviews

The easiest way to get team members together and doing their first formal reviews of the software is through peer reviews, the least formal method. Sometimes called buddy reviews, this method is really more of an "I'll show you mine if you show me yours" type of discussion.

Walkthroughs

Walkthroughs are the next step up in formality from peer reviews. In a walkthrough, the programmer who wrote the code formally presents (walks through) it to a small group of five or so other programmers and testers. The reviewers should receive copies of the software in advance of the review so they can examine it and write comments and questions that they want to ask at the review. Having at least one senior programmer as a reviewer is very important.

Inspections

Inspections are the most formal type of reviews. They are highly structured and require training for each participant. Inspections are different from peer reviews and walkthroughs in that the person who presents the code, the presenter or reader, isn't the original programmer. This forces someone else to learn and understand the material being presented, potentially giving a different slant and interpretation at the inspection meeting.

Inspections have proven to be very effective in finding bugs in any software deliverable, especially design documents and code, and are gaining popularity as companies and product development teams discover their benefits.

Q6. Is it acceptable to release a Software Product that has Configuration Bugs?

ANS:NO its not accepatale to release software product that


having configuration bugs because it
Effect the reliability or usability of the software and also affect
the reputation of the software so never should release the
software that having configuration bugs. To avoid this
company or programmer should having concentrate on the
following areas.

1. Once you release the first version of your software product and the marketing machine starts to roll, you might think about what to do next. You probably think about all the great features which haven't made it into 1.0 and are now waiting to be implemented. You can't wait to start your editor and start hacking right away. But wait! Think twice about what you do next, because adding features to software is quite different after you have released it.

Backwards Compatibility

Ever wondered why Microsoft is so successful? It is widely known that they had really buggy releases in the past. And it is also known that many products of their competitors are of the same or even better quality. But today they are the biggest software company in the world, and this is obviously a big accomplishment.

Besides great marketing and delivering products the users want, they are known for keeping their software backward compatible. They go to great lengths to guarantee that applications still work on newer versions of their operating system, even when they have to work around bugs in the original software application.

Listen To Your Customers

One of the great aspects when your software goes public is that you get more feedback. If something is missing or not working as expected, your customers will tell you faster than you can fix or implement it. Some feature requests or bug reports might be surprising, something you never thought of when you developed your software. Others are more expected.

Whatever customers report, keep track of it. If you have many requests for a particular feature and it seems logical to implement it, then look at it in more detail. You should also try to find similarities between the different feature requests and then choose the best way to implement it. Try to bring your software to the next level based on your customers' suggestions.

Carefully Choose New Features

Once you have a list of features you want to add, you should look at them in more detail. In my opinion, it's usually a good idea to look at the usability improvements first. Normally those things are easy to change or enhance, and they add great value to your product.

There are more things to pay attention to. Think about improving your documentation. Detailed and useful documentation can save you and your customers a lot of time if it's so good that there's no need to ask you how a specific feature works or how to use it. When it comes to bigger changes, verify that they are really worth it. They need to add real value to your program. Don't add any features if they are not going to be used anyway.

Take Time to Release

You should always plan a comprehensive testing phase before releasing a new version. You don't want to release a version that has more quirks and bugs than your previous one, do you? Carefully test new and old functionality. Think about possible side effects your changes might cause.

A good idea is to use automated tests. This way you can always easily ensure that old functionality still works as expected after changing your code. This is called regression testing – verifying that a previously working feature still works correctly after adding new features or fixing bugs.
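
A minimal sketch of such an automated regression check (the discount_percent function and its expected values are invented purely to show the re-runnable test-table pattern):

#include <stdio.h>

/* Hypothetical function under test: a pricing rule that earlier
   releases already got right and that must keep working. */
static int discount_percent(int order_total)
{
    if (order_total >= 1000) return 10;
    if (order_total >= 500)  return 5;
    return 0;
}

/* Table of regression cases: inputs and the outputs the previous
   release produced.  Re-run this program after every code change. */
static const struct { int input; int expected; } cases[] = {
    {    0,  0 },
    {  499,  0 },
    {  500,  5 },
    {  999,  5 },
    { 1000, 10 },
};

int main(void)
{
    int failures = 0;
    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; ++i) {
        int got = discount_percent(cases[i].input);
        if (got != cases[i].expected) {
            printf("REGRESSION: input %d gave %d, expected %d\n",
                   cases[i].input, got, cases[i].expected);
            ++failures;
        }
    }
    printf("%d regression failure(s)\n", failures);
    return failures ? 1 : 0;
}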

Q7. In addition to age and popularity, what other criteria might you use to equivalence partition hardware for configuration testing?

In addition to age and popularity, other criteria that we might use to equivalence partition hardware for configuration testing are:

Type: The first step in this kind of equivalence partitioning is to break the software world into types such as painting, writing, accounting, databases, communications, and so on. For example, some software is only for accounting (such as Tally), while software for creating high-end graphics and motion pictures (CAD or presentation graphics) needs special hardware. The next thing to do is select hardware from each category for testing.

Manufacturer: Another criterion would be to pick hardware based on the company that created it.

Place or country: This is another possibility, as some hardware devices, such as DVD players, only work with DVDs from their geographic region. Another criterion might be consumer versus business use; some hardware is specific to one but not the other.

Q8. What are the different levels of Testing and the goals of the different levels? For each level, which Testing Approach is more suitable?

Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors. These types of tests are usually written by developers as they work on the code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to assure that the building blocks the software uses work independently of each other. Unit testing is also called Component Testing.
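
For example, a developer-written unit test for a single small function might look like the following sketch (the clamp function is a made-up building block, not part of any particular system):

#include <assert.h>

/* Hypothetical unit under test: clamps a value into the range [lo, hi]. */
static int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

int main(void)
{
    /* Several tests for one function, covering corner cases and every
       branch, written by the developer alongside the code itself. */
    assert(clamp( 5, 0, 10) ==  5);   /* inside the range      */
    assert(clamp(-3, 0, 10) ==  0);   /* below the lower bound */
    assert(clamp(42, 0, 10) == 10);   /* above the upper bound */
    assert(clamp( 0, 0, 10) ==  0);   /* exactly on the bounds */
    assert(clamp(10, 0, 10) == 10);
    return 0;
}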
Integration testing is any type of software testing that seeks
to verify the interfaces between components against a
software design. Software components may be integrated in
an iterative way or all together ("big bang"). Normally the
former is considered a better practice since it allows interface
issues to be localised more quickly and fixed.
Integration testing works to expose defects in the interfaces
and interaction between integrated components (modules).
Progressively larger groups of tested software components
corresponding to elements of the architectural design are
integrated and tested until the software works as a system.
System testing tests a completely integrated system to verify that it meets its requirements.

System integration testing verifies that a system is integrated with any external or third-party systems defined in the system requirements.

Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, or old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. It can range from complete, for changes added late in the release or deemed to be risky, to very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.

Acceptance testing can mean one of two things:

1. A smoke test is used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration or regression.
2. Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.

Alpha testing
Alpha testing is simulated or actual operational testing by
potential users/customers or an independent test team at the
developers' site. Alpha testing is often employed for off-the-
shelf software as a form of internal acceptance testing, before
the software goes to beta testing.
Beta testing
Beta testing comes after alpha testing. Versions of the
software, known as beta versions, are released to a limited
audience outside of the programming team. The software is
released to groups of people so that further testing can ensure
the product has few faults or bugs. Sometimes, beta versions
are made available to the open public to increase
the feedback field to a maximal number of future users.

Q9. Relate verification and validation to Quality Control and Quality Assurance with an example?

Verification and validation is the process of checking that a product, service, or system meets specifications and that it fulfills its intended purpose. These are critical components of a quality management system such as ISO 9000. The terms are sometimes preceded with "independent" (as in IV&V) to ensure that the validation is performed by a disinterested third party.

Verification is a Quality control process that is used to evaluate whether or not a product, service, or system complies with regulations, specifications, or conditions imposed at the start of a development phase. Verification can be in development, scale-up, or production. This is often an internal process.

Validation is a Quality assurance process of establishing evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended requirements. This often involves acceptance of fitness for purpose with end users and other product stakeholders.

It is sometimes said that validation can be expressed by the query "Are you building the right thing?" and verification by "Are you building it right?" "Building the right thing" refers back to the user's needs, while "building it right" checks that the specifications are correctly implemented by the system. In some contexts, it is required to have written requirements for both, as well as formal procedures or protocols for determining compliance.

Q10. In a code review checklist there are some items, as given below. Categorize them.
1. Is the entire conditional path reachable?
2. If pointers are used, are they initialized properly?
3. Is there any part of the code unreachable?
4. Has the use of similar-looking operators (e.g. & and &&, or = and ==, in C) been checked?
5. Does the code follow the coding conventions of the organization?

1. Whether the entire conditional path is reachable falls under the control flow errors category of the code review checklist.

2. If pointers are used, checking that they are initialized properly comes under the category of data declaration errors, because if they are not initialized they can hold garbage values, which leads to dangling-pointer problems.

3. Whether any part of the code is unreachable is also a control flow check, since reaching a block of statements depends on the conditions that must hold for it to execute.

4. The use of similar-looking operators comes under the category of comparison errors, because the operators have different meanings: for example, '=' assigns a value while '==' compares two values, and '&&' combines two conditions while '&' is a bitwise operator.
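
The following small C fragment (invented for illustration) shows correct code together with comments describing the kinds of defects that these checklist items are meant to catch:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int count = 0;

    /* Data declaration check: a pointer must be initialized before use.
       A review would flag  "int *total;"  followed later by  "*total = count;". */
    int *total = malloc(sizeof *total);
    if (total == NULL)
        return 1;

    /* Comparison check: "if (count = 5)" assigns instead of comparing and is
       always true; the intended comparison uses '=='.                          */
    if (count == 5)
        printf("count is five\n");

    /* Control flow check: with the "=" bug above, this branch could never
       run, i.e. it would become unreachable code.                              */
    if (count != 5)
        printf("count is not five\n");

    *total = count;
    free(total);
    return 0;
}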
