
MOSHI CO-OPERATIVE UNIVERSITY

(MoCU)

FACULTY OF BUSINESS AND INFORMATION SCIENCE (FBIS)

PROGRAM: BACHELOR OF SCIENCE IN BUSINESS INFORMATION AND


COMMUNICATION TECHNOLOGY (BSc-BICT)

COURSE NAME : SOFTWARE QUALITY ASSURANCE

COURSE CODE : CIT 307

COURSE INSTRUCTOR : MR. S. MADILA

TASK : GROUP ASSIGNMENT

SUBMISSION DATE : 18TH JUNE 2020


1 MIRAJI D. MIRAJI MoCU/BSc-bict/146/17

2 JOSEPH M. MARO MoCU/BSc-bict/182/17

3 FRANK J. KIVUYO MoCU/BSc-bict/164/17

4 SHEDRACK A. KALESHU MoCU/BSc-bict/152/17

5 PAULO F. MMASSY MoCU/BSc-bict/145/17

6 JOSHUA J. IREGE MoCU/BSc-bict/148/17

7 IBRAHIMU I. NDOSI MoCU/BSc-bict/143/17

8 LOVENESS P. MEENA MoCU/BSc-bict/159/17

9 PETER ERASMI MoCU/BSc-bict/092/17

10 MILINGA HILOLIMSI MoCU/BSc-bict/174/17

QUESTION 1: Measuring quality in software

- User’s view
- Manufacturer’s view
INTRODUCTION

Quality: The degree to which a component, system or process meets specified requirements
and/or user/customer needs and expectations.

Software quality: The totality of functionality and features of a software product that bear on its
ability to satisfy stated or implied needs. Alternatively, software quality is defined as a field of
study and practice that describes the desirable attributes of software products. There are two
main approaches to software quality: defect management and quality attributes.

Software

In the context of software engineering, software quality measures how well software is designed
(quality of design), and how well the software conforms to that design (quality of conformance),
although there are several different definitions. It is often described as the ‘fitness for purpose' of
a piece of software. Whereas quality of conformance is concerned with implementation (see
Software Quality Assurance), quality of design measures how valid the design and requirements
are in creating a worthwhile product.

Measuring quality in software

Reliability

The reliability of the software represents its ability to perform the intended function properly
without failures, that is, to maintain a level of service under specified conditions for a specified
period of time during system operation. Reliability therefore measures the failures that occur in
the software product within a defined period of time.
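As an illustration of this idea, one common reliability measure is mean time between failures (MTBF), computed from failure times recorded during operation. The function name and the data below are illustrative assumptions, not taken from this text:

```python
# Sketch: estimating reliability as mean time between failures (MTBF)
# from failure timestamps recorded during system operation.

def mean_time_between_failures(failure_times_hours):
    """Average operating time between consecutive recorded failures."""
    if len(failure_times_hours) < 2:
        raise ValueError("need at least two failures to compute MTBF")
    gaps = [b - a for a, b in zip(failure_times_hours, failure_times_hours[1:])]
    return sum(gaps) / len(gaps)

# Example: failures observed at these elapsed hours of operation (invented data).
failures = [120.0, 310.0, 480.0, 700.0]
print(mean_time_between_failures(failures))  # (700 - 120) / 3 gaps ~ 193.33 hours
```

A higher MTBF over the defined observation period indicates higher reliability.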

Performance Efficiency

The main purpose of any software product is to perform a specific business function, and to do
so with acceptable speed and resource usage. The functionality of the software product is
therefore considered a crucial factor in software quality, one that determines whether the
software is usable or useless, regardless of the values of the other quality factors.
Maintainability

Maintainability is the ease with which you can modify software, adapt it for other purposes, or
transfer it from one development team to another. Compliance with software architectural rules
and use of consistent coding across the application combine to make software maintainable.

Security

Security assesses how well an application protects information against the risk of software
breaches. The quantity and severity of vulnerabilities found in a software system are indicators
of its security level. Poor coding and architectural weaknesses often lead to software
vulnerabilities.

Measuring quality in software based on:

User view:

The user view is more concrete, grounded in product characteristics that meet the user's needs.
This view of quality evaluates the product in a task context and can thus be a highly personalized
view. In reliability and performance modeling, the user view is inherent, since both methods
assess product behavior with respect to operational profiles (that is, to expected functionality and
usage patterns). Product usability is also related to the user view: in usability laboratories,
researchers observe how users interact with software products.

ISO 9126 defines three different views of software quality, reflecting three audiences with
different purposes: the manager, the developer, and the user. The manager is interested in the
overall quality characteristics, because he or she must balance quality against management
criteria: achieving the user’s requirements at a certain level of quality within a specific time,
with limited resources (people, tools) and limited cost. Users are mainly interested in using the
software without knowing its internal aspects; they therefore consider reliability and the ability
of the software to perform the required functions easily and efficiently in different
environments.

Developers are required to build software products to the level of quality that users need, but
they are primarily interested in the internal quality characteristics that affect their work in the
software development process. The distinction between these two sides is therefore important.

According to ISO 9126, the main concerns of users are the software’s usability, performance,
and effects, without knowing what is inside it, how it works, or how it was developed. Since
users do not care about all of the characteristics needed to identify the quality of a software
product, it is inaccurate to present them with quality measured in those terms; the quality of the
software as users perceive it is what matters in the market. Current software quality models
combine the different points of view of manager, developer, and user, so they do not show the
quality of the product as a user sees it; they show a combination of users’ and developers’
quality factors.

When users think of software quality, they often think of reliability: how long the product
functions properly between failures. Reliability models plot the number of failures over time.
These models sometimes use an operational profile, which depicts the likely use of different
system functions.
Users, however, often measure more than reliability. They are also concerned about usability,
including ease of installation, learning, and use. Tom Gilb suggests that these characteristics can
be measured directly.
For example, learning time can be captured as the average elapsed time (in hours) for a typical
user to achieve a stated level of competence.
Gilb's technique can be generalized to any quality feature. The quality concept is broken down
into component parts until each can be stated in terms of directly measurable attributes. Thus,
each quality-requirement specification includes a measurement concept, unit, and tool, as well as
the planned level (the target for good quality), the currently available level, the best possible
level (state-of-the-art), and worst level. Gilb does not prescribe a universal set of quality
concepts and measurements, because different systems will require different qualities and
different measurements.
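Gilb's decomposition can be sketched as a small record per directly measurable attribute, carrying the measurement unit, tool, and the four levels the text lists. The `QualitySpec` class and its field names below are illustrative assumptions, not Gilb's own notation:

```python
from dataclasses import dataclass

@dataclass
class QualitySpec:
    """One directly measurable attribute in a Gilb-style quality specification.

    Levels follow the text: planned (target for good quality), currently
    available, best possible (state of the art), and worst acceptable.
    """
    attribute: str   # e.g. "learning time"
    unit: str        # measurement unit
    tool: str        # how the attribute is measured
    planned: float   # target for good quality
    current: float   # currently available level
    best: float      # state-of-the-art level
    worst: float     # worst acceptable level

    def meets_target(self, measured: float) -> bool:
        # Assumes lower is better, which holds for e.g. learning time.
        return measured <= self.planned

# Example from the text: learning time as average elapsed hours for a
# typical user to reach a stated competence level (figures invented).
learning_time = QualitySpec(
    attribute="learning time", unit="hours",
    tool="timed task in usability lab",
    planned=4.0, current=6.5, best=2.0, worst=10.0,
)
print(learning_time.meets_target(3.5))  # True
```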
Measuring quality in Software based on:

Manufacturer’s view

The manufacturing view of quality suggests two characteristics to measure: defect counts and
rework costs.

Defect counts.

Defect counts are the number of known defects recorded against a product during development
and use. For comparison across modules, products, or projects, you must count defects in the
same way and at the same time during the development and maintenance processes.

For more detailed analysis, you can categorize defects on the basis of the phase or activity where
the defect was introduced, as well as the phase or activity in which it was detected. This
information can be especially helpful in evaluating the effects of process change (such as the
introduction of inspections, tools, or languages).
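One way to record this categorization is a tally of phase-introduced versus phase-detected pairs; defects that escape the phase that introduced them point at process-improvement areas. The phase names and defect records below are invented for illustration:

```python
from collections import Counter

# Each defect record: (phase_introduced, phase_detected). Invented data.
defects = [
    ("design", "design"), ("design", "test"), ("code", "test"),
    ("code", "test"), ("requirements", "use"), ("code", "use"),
]

matrix = Counter(defects)
for (introduced, detected), n in sorted(matrix.items()):
    print(f"introduced in {introduced:>12}, detected in {detected:>6}: {n}")

# Defects detected outside the phase that introduced them "escaped" that
# phase's verification; these suggest where to improve the process.
escaped = sum(n for (i, d), n in matrix.items() if i != d)
print("escaped defects:", escaped)  # 5
```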

The relationship between defect counts and operational failures is unclear. However, you can
use defect counts to indicate test efficiency and to identify process-improvement areas. In
addition, a stable environment can help you estimate post-release defect counts.

To compare the quality of different products, you can "normalize" defect count by product size,
to yield a defect density. This measure lets you better compare modules or products that differ
greatly in size.
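A minimal sketch of this normalization, assuming size is measured in thousands of lines of code (KLOC), a common size unit that this text does not itself prescribe:

```python
def defect_density(defect_count, size_kloc):
    """Defects per thousand lines of code (KLOC)."""
    if size_kloc <= 0:
        raise ValueError("size must be positive")
    return defect_count / size_kloc

# Two modules of very different size become comparable (invented counts):
print(defect_density(30, 12.0))   # small module:  2.5 defects/KLOC
print(defect_density(90, 60.0))   # large module:  1.5 defects/KLOC
```

Here the larger module has more raw defects but the lower density, which is the comparison the normalization is meant to enable.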

In addition, you can "normalize" post-release defect counts by the number of product users, the
number of installations, or the amount of use.

Dividing the number of defects found during a particular development stage by the total number
of defects found during the product's life helps determine the effectiveness of different testing
activities.
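This per-stage effectiveness ratio can be sketched as follows; the stage and the counts are illustrative, not from this text:

```python
def stage_effectiveness(defects_found_in_stage, total_lifetime_defects):
    """Fraction of all lifetime defects caught by one development or testing stage."""
    if total_lifetime_defects <= 0:
        raise ValueError("total defect count must be positive")
    return defects_found_in_stage / total_lifetime_defects

# Example: 40 defects found in system test out of 50 found over the
# product's whole life (invented figures).
print(stage_effectiveness(40, 50))  # 0.8
```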
Rework costs.

Rework is defined as any additional effort required to find and fix problems after documents
and code are formally signed off as part of configuration management. Thus, end-phase
verification and validation are usually excluded, but debugging effort during integration and
system testing is included. To compare different products, rework effort is sometimes
"normalized" by being calculated as a percentage of development effort. Because we want to
capture the cost of nonconformance, we must be sure to distinguish effort spent on
enhancements from effort spent on maintenance; only defect correction should count as rework.
It is also important to separate pre- and post-release rework. Post-release rework effort is a
measure of delivered quality; pre-release rework effort is a measure of manufacturing
efficiency. If we can attribute the pre-release rework effort to specific phases, we can use it to
identify areas for process improvement. Developers and customers alike are interested in
knowing as early as possible the likely quality of the delivered product.
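As a sketch of the normalization described above, with pre- and post-release rework kept separate; all effort figures are invented for illustration:

```python
def rework_percentage(rework_effort, development_effort):
    """Rework effort normalized as a percentage of total development effort."""
    if development_effort <= 0:
        raise ValueError("development effort must be positive")
    return 100.0 * rework_effort / development_effort

# Invented effort figures, in person-hours.
pre_release_hours = 300.0    # indicates manufacturing (in)efficiency
post_release_hours = 120.0   # indicates delivered quality
total_dev_hours = 2400.0

print(rework_percentage(pre_release_hours, total_dev_hours))   # 12.5
print(rework_percentage(post_release_hours, total_dev_hours))  # 5.0
```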
