
UNIT – III

INCORPORATING SPECIALIZED TESTING RESPONSIBILITIES


Testing Client/Server Systems – Rapid Application Development Testing – Testing in a
Multiplatform Environment – Testing Software System Security - Testing Object-Oriented
Software – Object Oriented Testing – Testing Web based systems – Web based system – Web
Technology Evolution – Traditional Software and Web based Software – Challenges in Testing
for Web-based Software –Testing a Data Warehouse - Case Study for Web Application Testing.
----------------------------------------------------------------------------------------------------
TESTING CLIENT-SERVER SYSTEMS
The term client-server system describes communication between a front end and a back end: the client acts as the front end and the server acts as the back end. The client requests data from the server, and the server responds to that request.
Most client-server testing is carried out against the 2-tier client-server architecture.
OVERVIEW:
This type of testing is usually done for 2-tier architecture. The application launched on the front end contains forms and reports that monitor and manipulate data.
E.g., applications developed in VB, VC++, Core Java, C, C++
Back end: MS Access, SQL, Oracle, Sybase, MySQL
TESTS FOR CHECKING THE BACK END AND FRONT END:
 User Interface Testing
 Manual support Testing
 Compatibility Testing
 Configuration Testing
 Functionality Testing
 Inter system Testing
APPLICATION SOFTWARE:
The client system requests data/information or asks the server to perform an operation. The application server receives the request, performs the operation, and responds to the client.
CONCERNS:
- Organizational readiness, i.e., for client support and installation
- Client installation
- Security: for the protection of hardware and software components
- Client data
- Client-server standards
i) Testers need to determine adequate controls.
ii) Adequate controls help in correctly retrieving the information needed from the database.
INPUT:
It includes the server technology, the capability of communication through the network, and the client workstations.
CLIENT-SERVER ARCHITECTURE:

WORK BENCH:

DO PROCEDURE:
TASK 1: ASSESS READINESS
It is the sponsors' responsibility to ensure that the organization is ready for client-server technology.
8 dimensions:
1. Motivation: to drive improvements in quality
2. Investment: approval of the budget
3. Client-server skills: ability of the client-server installation team to incorporate client-server technology concepts
4. User education: awareness by the individuals
5. Culture: new concepts/new approach
6. Client-server support staff: required number of staff for developing the front-end/back-end environment
7. Client-server aids and tools: availability of aids and tools to support the environment
8. S/w development process maturity: ability to produce high-quality s/w on a consistent basis
Software development maturity model:
Ad hoc - SDLC process is loosely defined (not in sequence)
Repeatable - stable process (one step after another)
Consistent - consistent implementation
Measured - process measurement and analysis
Optimized - continuous optimization of the process
Conducting the client-server readiness assessment:
Rate each readiness dimension as one of the following:
1. High 2. Medium 3. Low 4. None
Preparing the client-server readiness footprint chart:
- A footprint chart graphically illustrates readiness.
- The chart is completed in 2 steps:
1. Record a point on each dimension line (corresponding to its rating).
2. Connect all the points.
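To make the footprint chart concrete, here is a minimal Python sketch (assuming numpy and matplotlib are available). The dimension names and ratings are hypothetical, and the book's 1 = High ... 4 = None scale is recoded so that larger plotted values mean greater readiness:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical ratings, recoded as 4 = High, 3 = Medium, 2 = Low, 1 = None
dimensions = ["Motivation", "Investment", "C/S skills", "User education",
              "Culture", "Support staff", "Aids & tools", "Process maturity"]
ratings = [4, 3, 2, 3, 2, 3, 1, 2]

# Step 1: record a point on each dimension line; step 2: connect all points.
angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False).tolist()
values = ratings + ratings[:1]      # repeat the first point to close the polygon
angles = angles + angles[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.2)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_yticks([1, 2, 3, 4])
plt.show()
```

The filled polygon is the "footprint": small, lopsided footprints show at a glance which readiness dimensions need attention before client-server work begins.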
TASK 2: ASSESS KEY COMPONENTS
Key components of the technology:
- Client installations are done correctly
- Adequate security is provided for the client-server system
- Client data is adequately protected
- Client-server standards are in place and working

TASK 3: TEST THE SYSTEM / ASSESS THE CLIENT'S NEEDS
i) Can the system do what the client needs?
ii) Can the client produce results consistent with other clients and other systems?

Client-server characteristics:

1. Data format
2. Completeness of data
3. Understandable documentation
4. Ease of use
5. Sufficient help routines
OUTPUT:
Finally, based on this testing process a test report is generated.

RAPID APPLICATION DEVELOPMENT TESTING

Rapid application development (RAD) is an effective s/w development paradigm that provides a systematic and automatable means of developing a s/w system under circumstances where initial requirements are not well known.

OVERVIEW:
The testing strategy for RAD is spiral testing. This strategy assumes that the RAD system:
- Is iterative
- Is evolutionary
- Contains a RAD language with a defined grammar
- Provides reusable-component capability
- Uses implementation code from reusable components written in a high-level language
- Contains a sophisticated support environment
OBJECTIVES:
The RAD testing methodology described in this testing process is designed to take maximum advantage of the iterative nature of RAD.
It should also focus on capturing requirements as development proceeds.
RAD-based testing must provide tools and methods to analyze system requirements and capture requirements changes.
CONCERNS:
Testers should have four concerns about RAD-based testing:
i) Testing iteration:
- The iterative nature of s/w development implies that the system must track revision histories and maintain version control of alternate RAD versions.
- Requirements may be added, deleted, or changed.
- Test goals must easily change to fit modified requirements.
ii) Testing components:
- The use of reusable components raises reliability concerns.
- The testing methodology must consider how information on past component testing
can be recorded and referenced to determine what unit testing might still be needed
and what integration testing strategies might best check the components in their
instantiated context.
iii) Testing performance:
- One necessary testing component is a set of test conditions.
- Requirements –based and functional testing base test conditions on some stated
form of behavior or required performance standard, such as formal specifications or a
requirements document.
- The testing methodology must establish an objective standard of the intended
behavior of the RAD under consideration.
iv) Recording test information:
- The software development environment not only should capture requirements,
assumptions and design decisions, but ideally should map these into the RAD in a
way useful to both rapid application development and testing.
- These mappings need to be rapidly revisable to quickly produce the next RAD iteration.
WORK BENCH:

INPUT:
The only input to this test process is the RAD requirements. Because of the nature of RAD, the requirements are incomplete when development begins. The requirements will be continually developed throughout the various iterations.
DO PROCEDURE:
This procedure describes iterative RAD and spiral testing, and then integrates those topics into the task strategy that follows.
Testing within iterative RAD:
The most obvious approach to testing during RAD would be to treat each development iteration as one s/w life cycle.
The process's complexity is compounded by the need to conduct a full cycle of testing for each iteration, even though the early iterations will almost never contain detailed or unchanging requirements.
An alternative test approach is to iterate test planning in parallel with the developing
iterations. This will simplify testing and reduce overall testing costs when compared to the
preceding approach.
As RAD iterations proceed, the test plan would expand to incorporate the latest iteration
modification.

Spiral testing:
The proposed RAD testing strategy, termed spiral testing, remains iterative and parallels the RAD process. Spiral testing distinguishes between the initial few RAD testing iterations, subsequent iterations, and the final few iterations.
The framework for intermediate testing activity and final acceptance testing, including test derivation, is laid in the initial iterations.
Unit and integration testing will likely be confined to subsequent and final testing
iteration.
Subsequent testing iteration will have less framework-related activity and more
acceptance test oracle derivation activity.
Final iterations are where developers return to their RAD to fix identified errors and testers
conduct final integration, acceptance and regression testing.
TASK 1: DETERMINE APPROPRIATENESS OF RAD
There are strength and weakness to using the RAD concept to build s/w. RAD
development offers the following strengths:
- System users get a quick view of system deliverables.
- The cost of risk associated with development is reduced.
- Customer satisfaction can be improved.
- Using powerful development tools can reduce the time.
Problems associated with using the RAD model:
- Users and developers must be committed to rapid-fire activities.
- Problems arise if users are not continuously involved throughout the RAD cycles.
TASK 2: TEST PLANNING ITERATIONS
To mirror this process for test planning purposes, the initial test-plan iterations consider the development results and frame the testing process.
This is a logical point for testers to determine the most critical portions of the RAD and establish test priorities.
In the initial iterations, the test team will forecast the most important portions of the system to test. It is recommended that the documents from each iteration of the RAD process be subject to an inspection process.
TASK 3: TEST SUBSEQUENT PLANNING ITERATIONS
The subsequent testing iterations will be characterized by unit and integration testing and by continued requirements review to determine whether any requirements are missing.
Reusable components are instantiated to provide the required functions to test, and the design team performs unit testing with these modules. The test team can then commence integration test planning for those components.
TASK 4: TEST THE FINAL PLANNING ITERATION
Once developers establish all requirements, the final few iterations of development are devoted to implementing the remaining functionality, followed by error correction.
Therefore, testers can devote their work to completing the preparations for acceptance testing and to the remaining unit and integration testing.
The final test planning iterations commence with the completion of the operational RAD and prior to final user acceptance.
OUTPUT:
After performing RAD testing, a RAD test report is generated as output.
---------------------------------------------------------------------------------------------------------------------
TESTING IN A MULTIPLATFORM ENVIRONMENT
Testing in a multiplatform environment is conducted to verify that the software produces the same results on different platforms/runtime environments and in different situations.
Software designed to run on different platforms must undergo this test. It checks the intended functionality, verifying that the software works in the same manner and produces correct output on every platform. Validation testing is used for this.
OVERVIEW:
Test the software that is required to run on different platforms. There can be slight variations while working on various platforms, but the test determines whether the software produces correct results on each of them.
OBJECTIVE:
Six activities are performed to obtain correct results on different platforms.
CONCERN:
i) The platforms in the test lab will not be representative of the platform in the real
world.
ii) The s/w will be expected to work on platforms not included in the test lab.
iii) The supporting s/w on various platforms is not comprehensive.
WORK BENCH:

DO PROCEDURE:
TASK 1: DEFINE PLATFORM CONFIGURATION CONCERN
1. Develop a list of potential concerns about the environment.
2. Use error guessing to anticipate problems.
3. Error guessing requires 2 prerequisites:
i) The error-guessing group understands how the platform works.
ii) The error-guessing group knows how to handle the problems that arise.
TASK 2: LIST NEEDED PLATFORM CONFIGURATION
- Testers must identify the platforms that must be tested. This list is the input to the test process.
- Determine if those platforms are available in the test lab.
- If the exact platform is not available, the testers need to determine whether an existing platform is acceptable.
TASK 3: ASSESS TEST ROOM CONFIGURATION CONCERNS
Testers need to determine whether the platforms available in the test room are acceptable for testing.
2 steps:
1. List the needed platforms.
2. Determine whether the available platforms are acceptable for testing.
TASK 4: LIST STRUCTURAL COMPONENTS AFFECTED BY THE PLATFORM
- Structural testing deals with the architecture of the system.
- Architecture describes how the system is put together.
Some architectural problems:
1. Maximum size of fields.
2. Disk storage limitations.
3. Performance limitations.
TASK 5: LIST INTERFACES THE PLATFORM AFFECTS
- An interface is the point at which control passes from one processing component to another.
- The task is to identify those interfaces so that they can be tested.
2 important tasks:
1. Identify the interfaces.
2. Determine whether those interfaces are affected by the specific platform.
TASK 6: EXECUTE THE TESTS
Finally, the tests are executed.
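As an illustration of executing the same functional test on each listed platform, here is a minimal pytest sketch. The function under test, the expected value, and the platform list are hypothetical; the platform names follow Python's platform.system() strings:

```python
import platform
import pytest

# Hypothetical "needed platform configurations" list from Task 2
NEEDED_PLATFORMS = {"Windows", "Linux", "Darwin"}

def checksum(data: bytes) -> int:
    # Structural component under test: must give identical results everywhere.
    return sum(data) % 256

@pytest.mark.skipif(platform.system() not in NEEDED_PLATFORMS,
                    reason="platform is not in the needed-configuration list")
def test_checksum_is_platform_independent():
    # The expected value is precomputed and fixed, so running this same suite
    # on every platform in the test room checks "same result on each platform".
    assert checksum(b"client-server") == 67  # precomputed expected value
```

Running the identical suite on each test-room platform and comparing the pass/fail reports gives the "works/does not work per platform" data the output report needs.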
OUTPUT:
The test report can list:
- Structural components that work/do not work.
- Interfaces that work/do not work.
- Multiplatform operational concerns that are/are not addressed.
- Platforms on which the s/w does/does not operate correctly.

TESTING SOFTWARE SYSTEM SECURITY


In today’s environment, security is becoming an important organizational strategy.
Physical security is effective, but one of the greatest risks organizations now face is software
security.
This risk occurs both internally and externally. Testing software system security is a
complex and costly activity.
Penetration point matrix: effectiveness of security testing can be improved by focusing on the
points where security has the highest probability of being compromised.
A testing tool that has proved effective in identifying these points is the penetration point
matrix (PPM).
OVERVIEW:
This test process provides two resources:
i) A security baseline
ii) An identification of the points in an information system that have a high risk of being
penetrated.
The penetration point tool involves building a matrix:
- One dimension is the activities that may need security controls.
- The other dimension is the potential points of penetration.
OBJECTIVE:
The objective of security baseline is to determine the current level of security. The
objective of penetration point matrix is to enable organizations to focus security measures on the
points of highest risk.
CONCERNS:
There are two major concerns:
- Security risks must be identified.
- Adequate controls must be installed to minimize these risks.
WORK BENCH:
This workbench assumes a team knowledgeable about the information system to be
secured.
- Communication networks in use
- Who has access to those networks
- Data/processes that require protection
- Processing flow of software system
- Security systems and concepts
- Security penetration methods
INPUT:
- The input to this process is a team that is knowledgeable about the information system to
be protected and about how to achieve security for a s/w system.
DO PROCEDURE:
TASK 1: ESTABLISH A SECURITY BASELINE
A baseline is a snapshot of the organization’s security program at a certain time. There
are two categories of baseline information.
1. Related to security process ie, policies, procedures, methods, tools
2. Related to security act.
Creating the baseline involves 3 key aspects:
- What information to collect?
- From whom will the information be collected?
- The precision of the information collected.
Six- step baseline procedure:
1. Establish the baseline team:
Team members must exhibit the following characteristics:
- Be representative of their groups.
- Believe they are responsible for performance.
- Be responsible for using the results of the baseline.
2. Set baseline requirements and objectives:
To record information regarding:
i) Resources produced
ii) Resources used
iii) Methods
iv) Training
v) Awareness
vi) support
3. Design baseline data collection methods:
- Collect the feedback about methods.
4. Train baseline participants:
- Baseline awareness
- Collection methods
- Forms completion
5. Collect baseline data:
- Collect the adequate data from the system.
6. Analyze & report computer security status:
- 2-3=> poor
- 3-4=> effective
- 4-5=> excellent
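A minimal sketch of the step-6 interpretation, assuming the category ratings are averaged on the 1-to-5 scale used above:

```python
def security_status(avg_rating: float) -> str:
    # Thresholds from step 6 of the baseline procedure (ratings on a 1-5 scale).
    if avg_rating >= 4:
        return "excellent"
    if avg_rating >= 3:
        return "effective"
    return "poor"

print(security_status(3.6))  # effective
```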
TASK 2: BUILDING A PENETRATION POINT MATRIX
This task identifies the activities that need control, as well as the data-flow points where penetration is most likely to occur.
Controlling people by controlling activities:
The only effective way to control people is to continually monitor their activities.
Selecting security activities:
It is based on 2 factors.
1. Type and magnitude of risks.
2. Type and extent of security controls.
The activities are categorized into 3 subsections:
1. Interface activities
2. Development activities
3. Operations activities
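A minimal sketch of the matrix's two-dimensional structure (the book builds it on a work paper). The penetration-point names and scores here are hypothetical team assessments, from 0 (no exposure) to 3 (high exposure):

```python
# Rows: activities that may need security controls (first dimension).
# Columns: potential penetration points (second dimension).
activities = ["Interface activities", "Development activities", "Operations activities"]
points = ["Data input", "Transmission", "Storage", "Processing", "Output"]

ppm = {a: dict.fromkeys(points, 0) for a in activities}
ppm["Interface activities"]["Transmission"] = 3
ppm["Operations activities"]["Storage"] = 2

# Focus testing on the cells with the highest probability of compromise.
for activity, row in ppm.items():
    for point, score in row.items():
        if score >= 2:
            print(f"high-risk penetration point: {activity} x {point}")
```

The high-scoring cells are where security testing effort should be concentrated, which is the whole purpose of the PPM.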
TASK 3: ANALYZE THE RESULTS:
- Testers analyze the results from testing computer security.
- The analysis provides a baseline of the points at which security could most easily be penetrated, showing:
- Whether adequate controls exist at the points of highest probability.
- The adequacy of the controls used to protect the system.
- Strengths and weaknesses in the baseline assessment.
- Penetration points for which there are no controls.
- An evaluation of the overall adequacy of security.
OUTPUT:

This testing will produce three outputs:
- Security baseline
- Penetration point matrix
- Security analysis
TESTING WEB-BASED SYSTEMS
Web-based systems are those systems that use the internet, intranets, and extranets. The internet is a worldwide collection of interconnected networks.
An intranet is a private network inside a company that uses web-based applications, but only within the organization.
An extranet is a private network that allows external access to customers and suppliers using web-based applications.
OVERVIEW:
Web-based architecture is an extension of client/server architecture.
For web-based systems, the browsers reside on client workstations. These client
workstations are networked to a web server, either through a remote connection or through a
network such as a local area network (LAN) or wide area network (WAN).
As the web server receives and processes requests from the client workstation, requests may be sent to the application server to perform actions such as data queries, electronic commerce transactions, and so forth.
CONCERNS:
- Browser compatibility
- Functional correctness
- Integration
- Usability
- Security
- Performance
- Verification of code
WORKBENCH:
INPUT:
The input to this test process is the description of web-based technology used in the
system being tested. The following list shows how web-based systems differ from other
technologies. The description of the web-based systems under testing should address these
differences:
- Uncontrolled user interface(browser)
- Complex distributed systems
- Security issues
- Multiple layers in architecture
- New terminology and skill sets
- Object-oriented
DO PROCEDURE:
TASK 1: SELECT WEB-BASED RISKS TO INCLUDE IN THE TEST PLAN
Risks are important to understand because they reveal what to test. The degree of testing
should be based on risk. The detailed descriptions of the concerns associated with each risk are:
- Security concern
- Performance concern
- Correctness concern
- Compatibility concern
- Reliability concern
- Data integrity concern
- Usability concern
- Recoverability concern
Security concerns:
To validate that the application and data are protected from outside intrusion and unauthorized access. Some of the detailed security risks are:
a) External intrusion: this can include intrusion from people who are trying to gain access
to sensitive information.
b) Protection of secured transactions: this is especially true in dealing with electronic
commerce transactions. Many consumers are reluctant to give credit card information
over the internet for fear that information will be intercepted and used for fraudulent
purposes.
c) Viruses: viruses are contained in downloaded files that can be distributed from web sites
and e-mail.
d) Access control: ensures that only authorized users have security access to a particular application or portion of an application. This access is typically granted with a user ID and password.
e) Authorization level: authorization level refers to the ability of the application to restrict
certain transactions only to those users who have a certain level of authorization.
Performance concerns:
The most common kind of performance testing for internet applications is load testing.
Load testing seeks to determine how the application performs under expected and greater-than-
expected levels of activity. Application load can be assessed in a variety of ways:
a) Concurrency: to validate the performance of an application with a given number of
concurrent interactive users.
b) Stress: to validate the performance of an application when certain aspects of the
application are stretched to their maximum limits.
c) Throughput: to validate the number of transactions to be processed by an application
during a given period of time. For example, one type of throughput test might be to
attempt to process 100,000 transactions in one hour.
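A scaled-down sketch of such a throughput test. The transaction function is a stand-in (a real test would drive the application server), and the 100,000-per-hour goal from the example is used to derive the required rate:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction(i: int) -> bool:
    # Stand-in for one business transaction (hypothetical; a real load test
    # would submit a request to the application server here).
    time.sleep(0.001)
    return True

TARGET_TX, WINDOW_SECONDS = 100_000, 3_600   # goal: 100,000 transactions/hour
SAMPLE = 1_000                               # scaled-down sample run

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(transaction, range(SAMPLE)))
elapsed = time.perf_counter() - start

observed_rate = len(results) / elapsed
required_rate = TARGET_TX / WINDOW_SECONDS   # about 27.8 tx/s
print(f"observed {observed_rate:.1f} tx/s, required {required_rate:.1f} tx/s")
```

Comparing the observed rate against the required rate tells the tester whether the throughput goal is plausible before committing to a full one-hour run.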
Correctness concerns:
The application must function correctly. This includes not only the functionality of buttons and "behind the scenes" instructions but also calculations and navigation of the application.
a) Functionality: application performs its intended tasks as defined by a stated set of
specifications.
b) Calculations: these calculations must be tested to ensure correctness and to find defects.
c) Navigation: includes testing links, buttons, and general navigation through a web site or web-based application.
Compatibility concerns:
A web-based application must be able to work correctly on a wide variety of system
configurations including browsers, operating systems, and hardware systems.
Reliability concerns:
An internet application must have a high level of availability and the information
provided from the application must be consistent and reliable to the user.
Data integrity concerns:
- Ensuring only the correct data is accepted
- Ensuring data stays in a correct state
Usability concerns:
- Ensuring the application is easy to use and understand
- Ensuring that users know how to interpret and use the information delivered from the
application.
- Ensuring that navigation is clear and correct
Recoverability concerns:
The remote accessibility of internet application makes the following recoverability
concerns important:
- Lost connections
- Timeouts
- Dropped lines
- Client system crashes
- Server system crashes or other application problems
TASK 2: SELECT WEB-BASED TESTS
The following are the types and phases of testing needed to validate web-based systems.
Unit or component:
This includes testing at the object, component, page, or applet level. Edits and
calculations can also be tested at the unit level.
Integration:
Integration is the passing of data and/or control between units or components, which
includes testing navigation (i.e., the path the test data will follow)
System:
System testing examines the web application as a whole and with other systems. The
classic definition of system testing is that it validates that a computing system functions
according to written requirements and specifications. System testing typically includes hardware,
software, data, procedures and people.
User acceptance:
This includes testing that the web application supports business needs and processes, to
ensure that the end product supports the user’s needs.
Performance:
Testing that the system will perform as specified at predetermined levels, including wait
times, static processes, dynamic processes, and transaction processes.
Load/stress:
This type of testing checks to see that the server performs as specified at peak concurrent
loads or transactions throughput. It includes stressing servers, networks, and databases.
Regression:
Regression testing checks that unchanged parts of the application work correctly after a
change has been made.
Usability:
This type of testing assesses the ease of use of an application.
Compatibility:
Compatibility testing ensures that the application functions correctly on multiple
browsers and system configurations.
TASK 3: SELECT WEB-BASED TEST TOOLS
The more common web-based test tools follow:
- HTML tools
- Site validation tools
- Load/stress testing tools
- Test case generators.
TASK 4: TEST WEB-BASED SYSTEM
The tests to be performed for web-based testing will be the types of testing described in
the seven-step testing process.
CHECK PROCEDURE:
The web-based test team should use work paper 22-4 to verify that the web-based test
planning has been conducted effectively.
OUTPUT:
The only output from this test process is a report on the web-based system.
- A brief description of the web-based system
- The types of testing performed, and the types of testing not performed
- The tools used
- The web-based functionality and structure tested, noting what performed correctly and what did not
TESTING A DATA WAREHOUSE
OVERVIEW:
A data warehouse is a central repository of data made available to users. The centralized
storage of data provides significant processing advantages but at the same time raises concerns of
the data’s security, accessibility, and integrity.
CONCERNS:
- Inadequate assignment of responsibilities
- Inadequate service levels
- Losing an update to a single data item
- Improper use of data
- Inaccurate or incomplete data in the data warehouse
- Inadequate audit trail to reconstruct transactions
- Inadequate documentation
- Unauthorized access to data in the data warehouse
- Loss of continuity of processing
- Lack of management support

WORKBENCH:

INPUT:
Organizations implementing the data warehouse activity need to establish processes to manage, operate, and control that activity. Enterprise-wide requirements applicable to the data warehouse include:
Data accessibility: who has access to the data warehouse
Update controls: who can change data within the data warehouse
Date controls: the date that the data is applicable for different types of process
Usage controls: how data can be used by the users of the data warehouse
Documentation controls: how the data within the data warehouse is to be described to users
DO PROCEDURE:
TASK 1: MEASURE THE MAGNITUDE OF DATA WAREHOUSE CONCERNS
This task involves two activities.
i) The first activity is to confirm that the 14 data warehouse concerns described earlier are appropriate for the organization. The list of concerns can be expanded or reduced.
ii) Once the list of potential data warehouse concerns has been finalized, the magnitude of those concerns must be determined.
A team of testers knowledgeable in both testing and the data warehouse activity should be assembled. For each concern, the work paper lists several criteria. Each criterion should be answered with a yes or no response.
Yes response - the criterion has been met
No response - the criterion has not been established
Comments column - used to clarify the yes and no responses
TASK 2: IDENTIFY DATA WAREHOUSE ACTIVITIES TO TEST
The more common processes associated with the data warehouse activity are described below.
Organizational process:
The data warehouse administration function normally has baseline responsibilities for
data documentation, system development procedures and standards for those applications using
data warehousing technology. The database administrator (DBA) function also has indirect or
dotted-line responsibilities to computer operations and users of data warehouse technology
through providing advice and directions.
Data documentation process:
Many existing systems are process-driven, whereas data warehouse technology involves data-driven systems. This change in emphasis necessitates better data documentation. The data dictionary can be used as a standalone automated documentation tool or integrated into the processing environment.
System development process:
The system development process in the data warehouse technology has the following
three objectives:
1. To familiarize the system’s development people with the resources and capabilities
available
2. To ensure that the proposed application system can be integrated into the existing data
warehouse structure, and if not, to modify the application and/or the data warehouse
structure
3. To ensure that application processing will preserve the consistency, reliability and
integrity of the data warehouse.
Access control process:
The access control function has two primary purposes.
- The first is to identify the resources requiring control and determine who should be given
access to those resources.
- The second is to define and enforce the control specifications
The access control function can be performed by the data warehouse administration function
or an independent security officer.
Data integrity process:
The following tasks need to be performed:
1. Identify the method of ensuring the completeness of the physical records in the data
warehouse.
2. Determine the method of ensuring the completeness of the logical structure of the data
warehouse (i.e., schema)
3. Determine which users are responsible for the integrity of which segments of the data
warehouse.
4. Develop methods to enable those users to perform their data integrity responsibilities.
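A minimal sketch of task 1 (checking completeness of the physical records), using sqlite3 with hypothetical staging and warehouse table names; record counts and a control total are compared after the load:

```python
import sqlite3

# Assumed layout: a staging table "sales_stage" feeds a warehouse table "sales".
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales_stage (id INTEGER PRIMARY KEY, amount REAL NOT NULL);
    CREATE TABLE sales       (id INTEGER PRIMARY KEY, amount REAL NOT NULL);
    INSERT INTO sales_stage VALUES (1, 10.0), (2, 20.5), (3, 7.25);
    INSERT INTO sales SELECT * FROM sales_stage;   -- the load step under test
""")

src = con.execute("SELECT COUNT(*), SUM(amount) FROM sales_stage").fetchone()
tgt = con.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
assert src == tgt, f"load incomplete: staging {src} vs warehouse {tgt}"
print("record counts and control totals match:", tgt)
```

The same count-and-control-total pattern extends to the other integrity tasks, for example verifying the logical structure by checking foreign-key relationships between schema segments.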
Operation process:
Computer operators face the following challenges when operating data warehouse
technology:
1. Monitoring space allocation to ensure minimal disruptions because of space management
problems.
2. Understanding and using data warehouse software operating procedures and messages
3. Monitoring service levels to ensure adequate resources for users
4. Maintaining operating statistics so that the data warehouse performance can be monitored
5. Reorganizing the data warehouse as necessary
Backup/recovery process:
Recovery can occur only if adequate backup data is maintained. The recovery procedure
involves the following four major challenges:
1. Verifying that the integrity of the data warehouse has been lost
2. Notifying users that the data warehouse is inoperable and providing them with alternate
processing means
3. Ensuring and having adequate backup data ready
4. Performing the necessary procedures to recover the integrity of the data warehouse
TASK 3: TEST THE ADEQUACY OF DATA WAREHOUSE ACTIVITY PROCESS
- This task is to evaluate whether each of the seven identified processes contains controls that are adequate to reduce the concerns identified.
- A control is any means used to reduce the probability of a failure.
- The tests focus on determining that specific controls exist; if they do, the testers can assume that the process is adequately controlled and that the probability of failure is minimized.
CHECK PROCEDURE:
A quality control checklist for testing a data warehouse:
- Yes response : good test practices
- No response : warrant additional investigation
- Comments column : to explain No response and to record results of investigation
- The N/A response : the checklist item is not applicable to the test situation
OUTPUT:
- The output from data warehouse test process is an assessment of the adequacy of the data
warehouse activity processes.
- The assessment report should indicate the concerns addressed by the test team, the
processes in place in the data warehouse activity, and the adequacy of those processes.
Debugging:

 It is the systematic process of spotting and fixing bugs, or defects, in a piece of software so that the software behaves as expected.
 Debugging is harder for complex systems, in particular when various subsystems are tightly coupled, as changes in one subsystem or interface may cause bugs to emerge in another.
 Debugging is a developer activity, and effective debugging is very important before testing begins to increase the quality of the system.
 Debugging will not give confidence that the system meets its requirements completely; testing gives that confidence.

Defect Logging and Tracking:


 Defect logging is the process of finding defects in the application under test or product, by testing or by recording feedback from customers, and making new versions of the product that fix the defects or address the client's feedback.
 Defect tracking is an important process in software engineering, as complex and business-critical systems have hundreds of defects. One of the challenging factors is managing, evaluating, and prioritizing these defects. The number of defects multiplies over a period of time, and to manage them effectively, a defect tracking system is used to make the job easier. (Example: HP Quality Center)

Defect tracking parameters:


 Defects are tracked based on various parameters, such as:
 Defect id
 Priority
 Severity
 Created by
 Created date
 Assigned to
 Resolved date
 Resolved by
 Status
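A minimal sketch of a defect record carrying these parameters (the field types, example values, and class name are illustrative, not tied to any particular tracking tool):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Defect:
    """One tracked defect; fields mirror the parameters listed above."""
    defect_id: str
    priority: str                        # e.g. "P1".."P4"
    severity: str                        # e.g. "Critical", "Major", "Minor"
    created_by: str
    created_date: date
    assigned_to: Optional[str] = None
    resolved_by: Optional[str] = None
    resolved_date: Optional[date] = None
    status: str = "New"                  # see the defect life cycle below

bug = Defect("DEF-101", "P2", "Major", "tester1", date(2024, 1, 15))
```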
Defect life cycle:
 Defect life cycle, also known as the "bug life cycle", is the journey a defect goes through during its lifetime.
 It varies from organization to organization and from project to project, as it is governed by the software testing process and also depends upon the tools used.

Defect life cycle states:
New – Potential defect that is raised and yet to be validated.
Assigned – Assigned against a development team to address it but not yet resolved.
Active – The defect is being addressed by the developer and investigation is under progress. At this stage there are two other possible outcomes: Deferred or Rejected.
Test – The defect is fixed and ready for testing.
Verified – The defect that is retested and the test has been verified by Quality Assurance (QA).
Closed – The final state of the defect that can be closed after the QA retesting or can be closed if
the defect is duplicate or considered as NOT a defect.
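The life cycle can be checked mechanically as a state machine. A minimal sketch, with transitions inferred from the states above (exact flows vary by organization and tool, as noted earlier):

```python
# Allowed transitions inferred from the states listed above; Deferred and
# Rejected branch off the Active state, and a Reopened defect re-enters the cycle.
TRANSITIONS = {
    "New":      {"Assigned"},
    "Assigned": {"Active"},
    "Active":   {"Test", "Deferred", "Rejected"},
    "Test":     {"Verified", "Reopened"},
    "Verified": {"Closed"},
    "Reopened": {"Assigned"},
    "Deferred": set(),
    "Rejected": set(),
    "Closed":   set(),
}

def advance(state: str, new_state: str) -> str:
    # Reject any transition the life cycle does not allow.
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = "New"
for step in ("Assigned", "Active", "Test", "Verified", "Closed"):
    state = advance(state, step)
print(state)  # Closed
```

Encoding the transitions this way is how tracking tools such as HP Quality Center enforce that, for example, a defect cannot jump from New straight to Closed.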

Defect life cycle work flow:
(workflow diagram not reproduced; it traces the states above, including a Reopened path back into the cycle)

Dependency testing:

 Dependency testing is a testing technique in which an application's requirements are pre-examined against existing software and initial states in order to test the proper functionality.
 The impacted areas of the application are also tested when testing new features or existing features.

Destructive testing:

 Destructive testing is a testing technique in which the application is made to fail in an uncontrolled manner, to test the robustness of the application and to find the point of failure.
 Destructive testing is performed under the most severe operating conditions, and it is continued until the application breaks.
 The main purpose of destructive testing is not only to determine design weaknesses, if any, which may not show up under normal working conditions, but also to determine the service life of the product.
Depth testing:

 Depth testing is a testing technique in which a feature of a product is tested in full detail. Each feature is tested exhaustively during the integration phase, and the defects logged are captured across all parameters, functional and non-functional.
Documentation testing:
 Documentation Testing involves testing of the documented artifacts that are
usually developed before or during the testing of Software.
 Documentation for Software testing helps in estimating the testing effort required,
test coverage, requirement tracking/tracing, etc.
 This section includes the description of some commonly used documented
artifacts related to Software development and testing, such as:
 Test Plan
 Requirements
 Test Cases
 Traceability Matrix
Domain testing:
 Domain testing is a software testing technique in which a small number of test cases is selected from a nearly infinite group of candidate test cases. For some applications, domain-specific knowledge plays a very crucial role.
 Domain testing is a type of functional testing and tests the application by feeding
interesting inputs and evaluating its outputs.
 Equivalence class partitioning carries its own significance when performing domain testing. Different ways of forming equivalence classes are:
 Intuitive equivalence
 Specified equivalence
 Subjective equivalence
 Risk-based equivalence
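A minimal sketch of domain testing with equivalence classes and boundary values, for a hypothetical age field that accepts 18 to 65 inclusive:

```python
import pytest

# Hypothetical input domain: an age field that accepts 18..65 inclusive.
def accept_age(age: int) -> bool:
    return 18 <= age <= 65

# One representative per equivalence class, plus the boundary values.
@pytest.mark.parametrize("age, expected", [
    (17, False),   # below the valid class
    (18, True),    # lower boundary
    (40, True),    # interior of the valid class
    (65, True),    # upper boundary
    (66, False),   # above the valid class
])
def test_age_domain(age, expected):
    assert accept_age(age) == expected
```

Five cases stand in for the nearly infinite set of possible ages, which is exactly the economy domain testing is after.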

Durability testing
 Durability Testing is a Performance testing technique used to determine the
characteristics of a system under various load conditions over time.
 This testing helps us to identify the stability of transaction response times over the
duration of the test.
 The following parameters are measured while testing for durability:
 Memory leaks
 I/O activity levels
 Database resource consumption

Use of Object-Oriented Testing

Testing is a continuous activity; object-oriented testing encompasses level-wise testing such as unit testing, subsystem testing, and system testing.
The main advantages of object-oriented testing are:
 Reusability
 Reliability
 Interoperability
 Extendibility
Comparison between traditional software and web-based software:

FEATURE                  TRADITIONAL SOFTWARE                       WEB-BASED SOFTWARE
Cost                     High compared to web-based software        Mostly free of cost
File storage & backup    Files are stored locally                   Files are stored globally
Availability             Less available than web-based software    Available anywhere
UNIT V
SOFTWARE TESTING AND QUALITY METRICS
Testing Software System Security - Six-Sigma – TQM - Complexity Metrics and Models –
Quality Management Metrics - Availability Metrics - Defect Removal Effectiveness - FMEA -
Quality Function Deployment – Taguchi Quality Loss Function – Cost of Quality. Case Study
for Complexity and Object Oriented Metrics.

SIX SIGMA: DEFINITION:
Six sigma means a measure of quality that strives for near perfection. Six sigma is a
disciplined, data-driven approach and methodology for eliminating defects (driving towards six
standard deviations between the mean and the nearest specification limit) in any process from
production to transactional and from product to service.
The statistical representation of six sigma describes quantitatively how a process is
performing. The term sigma means standard deviation. Standard deviation measures how much
variation exists in a distribution of data.
The term “six sigma” refers to the ability of highly capable processes to produce output
within specification. In particular, processes that operate with six sigma quality produce at defect
levels below 3.4 defects per (one) million opportunities (DPMO). Six sigma’s implicit goal is to
improve all processes to that level of quality or better.
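A worked example of the DPMO calculation (the defect, unit, and opportunity figures are hypothetical):

```python
# DPMO = defects / (units x opportunities per unit) x 1,000,000
defects, units, opportunities = 17, 5_000, 10
dpmo = defects / (units * opportunities) * 1_000_000
print(f"{dpmo:.1f} DPMO")   # 340.0 -- far above the 3.4 DPMO six sigma level
```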
METHODOLOGY:
The DMADV model includes the following five steps:
- Define: determine the project goals and the requirements of customers (external and
internal).
- Measure: assess customer needs and specifications.
- Analyze: examine process options to meet customer requirements.
- Design: develop the process to meet the customer requirements.
- Verify: check the design to ensure that it’s meeting customer requirements.
The DMAIC model includes the following five steps:
- Define the projects, goals, and deliverables to customers (internal and external). Describe and quantify both the defects and the expected improvements.
- Measure the current performance of the process. Validate data to make sure it is credible
and set the baselines.
- Analyze and determine the root causes of the defects. Narrow the causal factors to the vital few.
- Improve the process to eliminate defects. Optimize the vital few and their
interrelationships.
- Control the performance of the process.
Fish Bone/Ishikawa Diagram:
- The fish bone or Ishikawa diagram is one important concept which can help to find the
root cause of the problem.
- Fish bone was conceptualized by Ishikawa, so in honor of its inventor, this concept was
named the Ishikawa diagram.
- Inputs to conduct a fish bone diagram come from discussion and brainstorming with
people involved in the project. The following figure shows the structure of the Ishikawa
diagram.

To know the cause we have taken four main bones as input: finance, process, people and
tools.
Different Kinds Of Variations Used In Six Sigma:
There are four basic ways of measuring variations: Mean, Median, Mode, and Range.

Standard Deviation:
The most accurate method of quantifying variations is by using standard deviation. It
indicates the degree of variation in a set of measurements or a process by measuring the average
spread of data around the mean. It’s more complicated than the deviation process discussed in
the previous question, but it does give accurate information.
Roles and Responsibilities:
Quality Leader (QL)/Quality Manager (QM):
The leader represents the expectations of the customer and takes necessary actions to
improve the operational effectiveness of the company. Generally the quality function is separate
from the production or other transaction processing function just to exercise impartiality. The
quality manager is one of the senior-most executives, directly reporting to the CEO of the company.
Master Black Belt (MBB):
Master Black Belts are the senior executives responsible to handle some specific
important function of the company. These functions can be HR, or legal, or some other process
specific areas. Master Black Belts work in tandem with the process owners & are responsible to
ensure that quality objectives and targets are fixed, plans are set, process is continuously tracked,
and training is provided to the concerned. MBB closely interact with the process owners and
share information on daily basis.
Process Owner (PO):
Process owners are the real doers and are responsible for specific processes. For example, in the Code Development department there shall be one head, maybe the PM/GM, who becomes the process owner. Depending on the size of the company and its major activities, there can be process owners at junior levels of the company structure.
Black Belt (BB):
Black Belt executives are the heart and soul of Six Sigma companies. The objective of having BBs in the company is to provide effective leadership to the quality projects, and these BBs work full time until the project gets completed. Black Belts are responsible for providing necessary training to their Green Belts working on the project.
Green Belts (GB):
Green Belts are junior employees specially trained in Six Sigma. GBs spend a portion of their time completing various projects assigned to them in addition to their regular work roles and responsibilities. Depending on their workload, they usually spend around 10% to 50% of their time on their projects. As the Six Sigma quality program evolves in the company, all employees start adopting the Six Sigma methodology in their daily routine, and a time comes when they don't need any percentage limit on their time. They get so used to the new style that they start devoting 100% of their time the new way.
TOTAL QUALITY MANAGEMENT (TQM)

INTRODUCTION:
Quality Definition:
According to Deming, "quality may be defined as an excellent product or service that fulfills or exceeds customer expectations".
Quality can be quantified as follows:
Q = P/E
where Q = quality, P = performance, E = expectations.
Definition of TQM:
- Total Quality Management (TQM) is an enhancement to the traditional way of doing
business.
Total – Made up of the whole
Quality – Degree of Excellence a Product or Service provides.
Management – Art of handling, controlling, directing etc.
- TQM is the application of quantitative methods and human resources to improve all the
processes within an organization and exceed customer needs now and in the future.
- Total quality management is defined as philosophy and a set of guiding principles that
represent the foundation of a continuously improving organization.
Basic approach:
TQM requires six basic concepts:
1. A committed and involved management to provide long-term, top-to-bottom organizational support.
2. An unwavering focus on the customer, both internally and externally.
3. Effective involvement and utilization of the entire work force.
4. Continuous improvement of the business and production process.
5. Treating suppliers as partners.
6. Establishing performance measures, such as uptime, percent nonconforming, absenteeism, and customer satisfaction, for the process. Quantitative data are necessary to measure the continuous quality improvement activity.
EVOLUTION OF TQM:
Inspection - inspect the finished product and eliminate defective products.
Quality control - also to determine the cause of defects and correcting it.
Quality assurance - products and processes are good.
TQM - continuous process improvements through measurements.
PRINCIPLES OF TQM:
- Quality is everyone’s business
- Customer emphasis
- Quality must be built into the product
- TQM requires management commitment and involvement at all levels
- TQM accomplishment involves continual training
- Long term emphasis on measurable processes and productivity improvements
- Understand the current process before improvement begins
- Cross functional orientation and teamwork
- Effective use of statistical methods & quality control
- Information sharing
- Eliminate communication barriers
FRAMEWORK OF TQM:

TQM GURUS (Philosophies of TQM):


(i) Deming’s Cycle or PDCA cycle:
P – PLAN (plan the improvement)
D – DO (implement the plan)
C – CHECK (see how closely result meets goals)
A – ACT (use the improvement process as standard practice)
(ii) JURAN PHILOSOPHY:
Juran’s quality trilogy:
Quality planning - Identify who the customers are.
- Determine the needs of those customers.
- Translate those needs into our language.
- Develop a product that can respond to those needs.
- Optimize the product features so as to meet our needs and customer needs.
Quality improvement - Develop a process which is able to produce the product.
- Optimize the process.
Quality control - Prove that the process can produce the product under operating conditions with minimal inspection.
- Transfer the process to operations.
Juran’s 10 steps for quality improvement:
According to him, quality means – fitness for use
1. Build awareness for the need and opportunity for improvement.
2. Set goals for improvement.
3. Organize people to reach the goals.
4. Provide training throughout the organization.
5. Carry out projects to solve the problems.
6. Report progress.
7. Give recognition.
8. Communicate results.
9. Keep score.
10. Maintain momentum by making annual improvement part of the regular system.
(iii) Crosby philosophy:
He developed four absolutes of quality management.
- Quality means conformance to requirements, not goodness.
- Quality is reached by prevention, not appraisal.
- Quality has a performance standard of zero defects, not acceptable quality levels.
- Quality is measured by the price of nonconformance, not indexes.
(iv) Contribution of Ishikawa:
- Ishikawa sees the cause-and-effect diagram, or Ishikawa diagram, like other tools, as a device to assist groups or quality circles in quality improvement.
- Beyond his technical contributions to quality, Ishikawa is associated with the Company-Wide Quality Control (CWQC) movement, which implies that quality does not only mean the quality of the product, but also of after-sales service, quality of management, the company itself, and human life.
Ishikawa’s PDCA model:
Plan
- determine goals and targets
- Determine methods of reaching goals
Do
- Engage in education and training
- Implement work
Check
- Check the effects of implementation
Act
- Take the appropriate action
TQM TOOLS AND TECHNIQUES:
(i) BENCH MARKING
Bench marking is a systematic method by which organizations can measure
themselves against the best industry practices.
Reasons to benchmark:
- It is a tool to achieve business and competitive objectives
- It can inspire managers (and organizations) to compete
- It is time- and cost-effective
- It constantly scans the external environment to improve the process
- Potential and useful technological breakthroughs can be located and adopted early
Types of benchmarking:
a. Strategic benchmarking
Used where organizations seek to improve their overall performance by examining the long-term strategies and general approaches that have enabled high performers to succeed.
b. Performance benchmarking
Or competitive benchmarking, is used where organizations consider their position in relation to the performance characteristics of key products and services.
c. Process benchmarking
Used when the focus is on improving specific critical processes and operations.
d. Functional benchmarking
Or generic benchmarking is used when organizations look to benchmark with partners
drawn from different business sectors and areas of activity to find ways of improving
similar functions or work processes.
e. Internal benchmarking
Involves seeking partners from within the same organization, i.e., from business units
located in different areas.
f. External benchmarking
Involves seeking outside organizations that are known to be best in class.
g. International benchmarking
Used where partners are sought from other countries because best practitioners are
located elsewhere in the world and/or there are too few benchmarking partners within the
same country to produce valid results.
Process of benchmarking:
The following six steps contain the core techniques of benchmarking
1. Decide what to benchmark
2. Understand current performance
3. Plan
4. Study others
5. Learn from the data
6. Using the findings
Benefits from benchmarking
1. Step changes in performance and innovation
2. Improving quality and productivity
3. Improving performance measurement
(ii) Seven new management tools:
- Affinity diagram
- Inter-relationship diagram
- Tree diagram
- Matrix diagram
- Matrix data diagram
- Process decision program chart
- Arrow diagram
(iii) Seven quality tools:
- Histograms
It represents variation in sets of data through bar charts, thus demonstrating
“distribution” in the level of variation.
- Scatter diagram and stratification
In a scatter diagram, three types of correlation exist:
1. Positive correlation.
2. Negative correlation.
3. No correlation.
The purpose of the scatter diagram is thus to display what happens to one variable when another variable is changed.
The diagram is used to test the theory that the two variables are related.
- Pareto diagram
 Joseph Juran observed that most quality problems are generally created by only a few causes.
 For example: 80% of all internal failures are due to one or two manufacturing problems.
 Identifying these "vital few" and ignoring the "trivial many" will make the corrective action give a high return for a low money input (see the sketch after this list).
- Check sheet
- Cause and effect diagram
- Flow chart
- Control chart
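A minimal sketch of the Pareto ranking behind the diagram, with hypothetical failure counts, separating the vital few from the trivial many at the 80% line:

```python
# Hypothetical internal-failure counts by cause.
counts = {"solder defects": 120, "misalignment": 90, "scratches": 20,
          "wiring": 15, "other": 5}
total = sum(counts.values())

cumulative = 0
for cause, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += n
    share = 100 * cumulative / total
    print(f"{cause:15s} {n:4d}  cumulative {share:5.1f}%")
    if share >= 80:
        print("-- vital few end here; the rest are the trivial many --")
        break
```

Here two causes account for 84% of all failures, so corrective action concentrates there first.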
(iv) Statistical process control:
Statistics is defined as the science that deals with the collection, tabulation, analysis, interpretation, and presentation of quantitative data.
Data collected for quality purposes are obtained by direct observation and are classified as:
1. Variables (measurable quality characteristics, such as length measured in meters)
2. Attributes (quality characteristics that are classified as either conforming or nonconforming to specifications, such as a "go/no-go" gauge)
Measures of central tendency and dispersion
There are two important analytical methods of describing a collection of data as
1. Measures of central tendency.
2. Measures of dispersion.
i) A measure of central tendency of a distribution is a numerical value that describes how the data tend to build up in the centre. The three measures used in quality are:
a. Average
b. Median
c. Mode
a) Average is the sum of the observations divided by the number of observations:
Average = (ΣXi)/n
where n = number of observations, Xi = observed value.
b) Median is the value which divides a series of ordered observations so that the number of items above it is equal to the number of items below it.
c) Mode is the value which occurs with the greatest frequency in a set of numbers.
ii) Measure of dispersion describes how the data are spread out on each side of the
central value. The two measures of dispersion are
a. Range
b. Standard deviation
a) Range is the difference between the largest and smallest values of observations in a series
of numbers.
Range: R = Xh - Xl
where R = range, Xh = highest observation in the series, Xl = lowest observation in the series.
b) Standard deviation measures the spreading tendency of the data. The larger the standard deviation, the greater the variability of the data.
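A quick check of all five measures with Python's statistics module (the data set is hypothetical):

```python
import statistics

data = [12, 15, 15, 17, 20, 22, 25]
print("average:", statistics.mean(data))     # (ΣXi)/n = 126/7 = 18
print("median :", statistics.median(data))   # middle ordered value = 17
print("mode   :", statistics.mode(data))     # most frequent value = 15
print("range  :", max(data) - min(data))     # R = Xh - Xl = 25 - 12 = 13
print("stdev  :", statistics.stdev(data))    # sample standard deviation
```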
BARRIERS TO TQM IMPLEMENTATION (Obstacles)
The barriers to TQM implementation are:
- Lack of management commitment
- Inability to change organizational culture
- Improper planning
- Lack of continuous training and education
- Incompatible organization structure and isolated individuals and departments
- Ineffective measurement techniques and lack of access to data and results
- Paying inadequate attention to internal and external customer
- Inadequate use of empowerment and teamwork
- Failure to continually improve
BENEFITS OF TQM
- Improved quality
- Employee participation
- Teamwork
- Working relationship
- Customer satisfaction
- Employee satisfaction
- Productivity
- Communication
- Profitability
- Market share
QUALITY STATEMENTS
Quality statements include the following:
Vision statement: a clear declaration of what an organization aspires to be in the future.
Mission statement: it provides a clear statement of the purpose of all those involved in the business.
Quality policy statement: this statement serves as a guide for everyone in the
organization. It clarifies the employee about how the product and services must be provided to
the customer.
DIMENSIONS OF QUALITY:
Performance : primary product characteristics, such as the brightness of the picture.
Features : secondary characteristics, added features, such as remote control.
Conformance : meeting specifications or industry standards, workmanship.
Reliability : consistency of performance over time, average time for the unit to fail.
Durability : useful life, includes repair.
Service : resolution of problems and complaints, ease of repair
Reputation : human-to-human interface, such as the courtesy of the dealer
Aesthetics : sensory characteristics, such as exterior finish.
Response : past performance and other intangible, such as being ranked first.
COST OF QUALITY:
Cost of quality is the amount of money a business loses because its product or service was not done right the first time; in other words, the cost associated with providing a poor-quality product or service.
Cost of quality is also defined as those costs associated with the non-achievement of product or service quality as defined by the requirements established by the organization and its contracts with customers and society.
The four categories of quality cost includes:
1. Internal failure cost- the cost associated with defects that are found prior to transfer of
the product to the customer.
2. External failure cost- the cost associated with defects that are found after product is
shipped to the customer.
3. Appraisal cost- the cost incurred in determining the degree of conformance to quality
requirement.
4. Prevention cost- the cost incurred in keeping failure and appraisal costs to a minimum.
Companies estimate quality costs for the following reasons:
- To improve communication between middle managers and upper managers
- To identify major opportunities for cost reduction
- To identify opportunities for reducing customer dissatisfaction and the associated threats to product salability
Objectives:
- Establish baseline measures.
- Determine which processes need to be improved.
- Indicate process gains and losses.
- Compare goals with actual performance.
- Provide information for individual and team evaluation.
- Provide information to make informed decisions.
- Determine the overall performance of the organization.
Typical measurements:
1. Human resources
2. Customers
3. Productions
4. Research + development
5. Suppliers
6. Marketing sales
7. Administration
FAILURE MODE EFFECT ANALYSIS (FMEA)
FMEA is an analytical technique that combines the technology and experience of people in identifying failure modes of a product or process and planning for their elimination.
It is a group of activities comprising the following:
1. Recognize the potential failure of a product or process.
2. Identify actions that eliminate/reduce the potential failure
3. Document the process
Intent of FMEA:
- Continually measuring the reliability of a machine, product or process.
- To detect the potential product- related failure mode
- FMEA evaluation to be conducted immediately following the design phase
Two important types of FMEA are
1. Design FMEA
2. Process FMEA
Design FMEA (DFMEA)
- The design FMEA is used to analyze products before they are released to production
- It focuses on potential failure modes of products caused by design deficiencies.
- Design FMEAs are normally done at three levels
a) System
b) Subsystem
c) Component levels
- This type of FMEA is used to analyze hardware, functions or a combination.
Process FMEA (PFMEA)
- The process FMEA is normally used to analyze manufacturing and assembling processes
at the system, subsystem or component levels.
- This types of FMEA focuses on potential failure modes of the process that are caused by
manufacturing or assembling process deficiencies.
FMEA team
Engineers from
- Assembly
- Manufacturing
- Materials
- Quality
- Services
- Suppliers
- Customers
Benefits of FMEA:
- Having a systematic review of components failure modes to ensure that any failure
produces minimal damage.
- Determining the effects of any failure on other items
- Providing input data for exchange studies
- Determining how the high-failure rate components can be adapted to high-reliability
components
- Eliminating/ minimizing the adverse effects that failures could generate
- Helping uncover the misjudgments, errors etc.
- Reducing development time and manufacturing cost
FMEA documentation:
The purpose of FMEA documentation is
- To allow all involved engineers to have access to others thoughts
- To design and manufacture using these collective thoughts (promotes team approach)

QUALITY FUNCTION DEPLOYMENT (QFD)


- Quality Function Deployment is a planning tool used to fulfill customer expectations.
- Quality Function Deployment focuses on customer expectations or requirements, often
referred to as voice of the customer.
- Quality function deployment is a team-based management tool in which customer
expectations are used to drive the product development process. Conflicting
characteristics or requirements are identified early in the QFD process and can be
resolved before production.
QFD TEAM:
There are two types of teams namely
1. Team for designing a new product
2. Team for improving an existing product
BENEFITS OF QFD:
1. Improves customer satisfaction
- Creates focus on customer requirements
- Uses competitive information effectively
- Prioritizes resources
- Identifies items that can be acted upon
2. Reduces implementation time
- Decreases midstream design changes
- Limits post introduction problems
- Avoids future development redundancies
3. Promotes team work
- Based on consensus
- Creates communication
- Identifies actions
4. Provides documentation
- Documents rationale for design
- Adds structure to the information
- Adapts to changes (a living document)
5. The voice of the customer
- Customer satisfaction, like quality, is defined as meeting or exceeding customer
expectations. Words used by the customers to describe their expectations are often
referred to as the voice of the customer.
- Sources for determining customer expectations are focus groups, surveys, complaints,
consultants, standards and federal regulations. Quality function deployment begins with
marketing to determine what exactly the customer desires from a product.
- There are many different types of customer information and ways that an organization
can use to collect data, as shown in figure.
- The organization can search (solicited) for the information, or the information can be
volunteered (unsolicited) to the organization. Solicited and unsolicited information can be
further categorized into measurable (quantitative) or subjective (qualitative) data.
Furthermore, qualitative information can be found in a routine (structured) manner or
haphazard (random) manner.
HOUSE OF QUALITY (HOQ):
- The primary planning tool used in QFD.
- It translates the voice of customer into design requirements that meet specific target
values and matches those against how an organization will meet those requirements.
- Many engineers and managers consider the HOQ to be the primary chart in quality
planning.
THE STEPS IN BUILDING A HOQ ARE:
1. List customer requirements (WHAT’s)
Quality function deployment starts with a list of goals/objectives. This list is often
referred to as the WHAT’s that a customer needs or expects in a particular product. This list of
primary customer requirements is usually vague and very general in nature.
2. List technical descriptors (HOW’s)
The QFD team should come up with engineering characteristics or technical descriptors
(HOW’s) that will affect one or more of the customer requirements. These technical descriptors
make up the ceiling, or second floor, of the house of quality.
3. Develop a relationship matrix between WHAT’s and HOW’s
The next step in building a house of quality is to compare the customer requirements and
technical descriptors and determine their respective relationships. The relationship matrix
represents graphically the degree of influence between each technical descriptor and each
customer requirement.
4. Develop an interrelationship matrix between HOW’s
The roof of the house of quality, called the correlation matrix, is used to identify any
interrelationships between the technical descriptors themselves, so that conflicting or mutually
supporting descriptors are visible.
5. Competitive assessments
The competitive assessments are a pair of weighted tables (or graphs) that depict item by
item how competitive products compare with the current organization's products. The
competitive assessment tables are separated into two categories:
a. Customer competitive assessments
b. Technical competitive assessments
6. Develop prioritized customer requirements
The prioritized customer requirements make up a block of columns corresponding to each
customer requirement in the house of quality, on the right side of the customer competitive
assessment. These prioritized customer requirements contain columns for importance to
customer, target value, scale-up factor, sales point, and an absolute weight.
7. Develop prioritized technical descriptors
These prioritized technical descriptors contain a degree of technical difficulty, target
value and absolute and relative weights. The QFD team identifies technical descriptors that are
the most needed to fulfill customer requirements and need improvement.
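A common way to carry out step 7 is to compute the absolute weight of each technical
descriptor as the sum, over all customer requirements, of the relationship value times the
customer importance rating. A minimal sketch using the customary 9/3/1 (strong/medium/weak)
relationship scale; all ratings below are hypothetical:

# House of quality: prioritizing technical descriptors (hypothetical data).
# Relationship values use the customary 9 (strong), 3 (medium), 1 (weak), 0 (none).
customer_importance = [5, 3, 4]   # one rating per customer requirement (WHAT)
relationship = [                  # rows = WHATs, columns = HOWs
    [9, 3, 0],
    [1, 9, 3],
    [0, 3, 9],
]

num_hows = len(relationship[0])
absolute_weight = [
    sum(relationship[i][j] * customer_importance[i]
        for i in range(len(customer_importance)))
    for j in range(num_hows)
]

total = sum(absolute_weight)
for j, w in enumerate(absolute_weight):
    print(f"HOW {j + 1}: absolute weight {w}, relative weight {w / total:.0%}")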

Phases:
1. Product planning : customer specification turned into requirements
2. Part deployment : requirements turned into parts requirements
3. Process planning : process selected to meet part requirements
4. Process control : process control, inspection and test methods developed
TAGUCHI QUALITY LOSS FUNCTION
INTRODUCTION:
- Genichi Taguchi, a Japanese engineer.
- He realized the importance of cost associated with poor quality and its impact on
corporate profitability.
- Taguchi did not confine himself to corporate losses alone but also took into consideration
the losses (due to poor quality) to society.
- Using the Taylor expansion series, Taguchi developed a mathematical model in which
loss is a quadratic function of the deviation of the quality characteristic of interest from its
target value:

L(y) = k(y - m)^2

where L(y) is the loss in monetary units, y is the measured value of the quality
characteristic, m is its target value, and k is a cost coefficient.
- In the Taguchi philosophy, the definition of quality is changed from “achieving conformance
to specifications” to “minimizing the variability while achieving the target”.
- Taguchi advocated DOE (design of experiments) in order to identify the factors
responsible for the variation, to find the relative impact of each factor on the variation,
and hence to select a combination of input parameters that achieves the desired result.
DOE (DESIGN OF EXPERIMENTS)
- DOE is a structured method; it is not hit-or-miss experimentation where input
parameters are adjusted randomly in the hope of achieving process improvement.
- The Taguchi method uses orthogonal arrays in order to express the relationships among
the factors under investigation.
- The design of an orthogonal array does not require that all combinations of factors be
tested, so the experiment is cost effective.
- For example, if the number of factors is 3 and the number of levels within each factor is
2, a full factorial design requires (levels)^(factors) = 2^3 = 8 experiments (runs); an
orthogonal array needs far fewer.
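To see why this is cost effective, compare a full factorial design with the standard Taguchi
L4 orthogonal array for 3 factors at 2 levels: the full factorial needs all 8 runs, the L4 array only
4. A minimal sketch; the factor names are invented for illustration:

from itertools import product

# Three hypothetical process factors, each at two levels.
factors = {
    "temperature": ["low", "high"],
    "pressure":    ["low", "high"],
    "speed":       ["low", "high"],
}

# Full factorial: every combination -> (levels)^(factors) = 2**3 = 8 runs.
full_factorial = list(product(*factors.values()))
print(f"Full factorial runs: {len(full_factorial)}")

# Taguchi L4 orthogonal array (4 runs, 3 two-level factors): each pair of
# columns contains every level combination equally often.
l4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
names = list(factors)
for run in l4:
    settings = {names[i]: factors[names[i]][level] for i, level in enumerate(run)}
    print(settings)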
DEFECT REMOVAL EFFECTIVENESS
The phase-based defect removal pattern is an extension of the test defect density metric.
In addition to testing, it requires the tracking of defects at all phases of the development cycle,
including design reviews, code inspections, and formal verifications before testing.
Defect removal effectiveness (or efficiency, as used by some writers) can be defined as
follows:

Defect removal effectiveness = (defects removed during a development phase /
defects latent in the product at that phase) x 100%

where the latent defects at a phase are estimated as:

Latent defects in the product = defects removed during the phase + defects found later
- The metric can be calculated for the entire development process, for the front end (before
code integration), and for each phase.
- It is called early defect removal and phase effectiveness when used for the front end and
for specific phases.
- The higher the value of the metric, the more effective the development process.

The various phases of defect removal are:


- High-level design review (I0),
- Low-level design review (I1),
- Code inspection (I2),
- Unit Test (UT),
- Component Test (CT),
- System Test (ST).
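Applying the formula above to per-phase defect counts gives the phase effectiveness
directly. A minimal sketch; all defect counts below are hypothetical:

# Defect removal effectiveness per phase (all counts hypothetical).
# For each phase: removed = defects found in that phase;
# escaped = defects present at that phase but only found later.
phases = [
    # (phase, defects removed, defects found later that were present here)
    ("I0 high-level design review", 40, 60),
    ("I1 low-level design review",  55, 45),
    ("I2 code inspection",          70, 30),
    ("Unit test",                   20, 15),
    ("Component test",              10,  8),
    ("System test",                  6,  4),
]

for phase, removed, escaped in phases:
    latent = removed + escaped          # latent defects at the phase
    effectiveness = removed / latent * 100
    print(f"{phase:30s} effectiveness = {effectiveness:5.1f}%")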

SOFTWARE QUALITY METRICS


- Software quality metrics are a subset of software metrics that focus on the quality aspect
of the product, process, and project. In general, software quality metrics are more closely
associated with process and product metrics than with project metrics.
- Software metrics can be classified into three categories:
Product metrics,
Process metrics, and
Project metrics.
Product metrics describe the characteristics of the product, such as size, complexity,
design features, performance, and quality level.
Process metrics can be used to improve software development and maintenance.
Examples include the effectiveness of defect removal during development, the pattern of testing
defect arrival, and the response time of the fix process.
Project metrics describe the project characteristics and execution. Examples include the
number of software developers, the staffing pattern over the life cycle of the software, cost,
schedule, and productivity.
PRODUCT QUALITY METRICS
1. Mean time to failure (MTTF)
MTTF measures the average time between successive failures of the software. The
probability of failure associated with a latent defect is called its size, or “bug size”.
2. Defect density
Defect rate is the number of defects over the opportunities for error (OFE) during a
specific time frame.
a) Lines of code (LOC)
A line of code is defined as any line of program text that is not a comment or blank
line, regardless of the number of statements or fragments of statements on the line. This
specifically includes all lines containing program headers, declarations, and executable and
non-executable statements.
Jones (1986) describes several variations:
- Count only executable lines
- Count executable lines plus data definitions
- Count executable lines, data definitions, and comments
- Count executable lines, data definitions, comments and job control language
- Count lines as physical lines on an input screen
- Count lines as terminated by logical delimiters
To calculate defect rate for the new and changed code, the following must be available:
- LOC count: the entire software product as well as the new and changed code of the
release must be available.
- Defect tracking: defects must be tracked to the release origin- the portion of the code
that contains the defects and at what release the portion was added, changed, or
enhanced.
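Given those two inputs, the defect rate of a release is a straightforward ratio, usually
normalized per thousand lines of new and changed code (KLOC). A minimal sketch with
hypothetical numbers:

# Defect rate for new and changed code (hypothetical release data).
new_and_changed_loc = 85_000       # from the LOC count of the release
defects_traced_to_release = 212    # from defect tracking, by release origin

defects_per_kloc = defects_traced_to_release / (new_and_changed_loc / 1000)
print(f"Defect rate: {defects_per_kloc:.2f} defects per KLOC")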
b) Function points:
A function can be defined as a collection of executable statements that performs a
certain task, together with declarations of the formal parameters and local variables
manipulated by those statements.
The ultimate measure of software productivity is the number of functions a
development team can produce given a certain amount of resource, regardless of the size
of the software in lines of code.
It is a weighted total of five major components that comprise an application:
- Number of external inputs (e.g., transaction types) x 4
- Number of external outputs (e.g., report types) x 5
- Number of logical internal files (files as the user might conceive them, not physical files)
x 10
- Number of external interface files (files accessed by the application but not maintained
by it) x 7
- Number of external inquiries (types of online inquiries supported) x 4
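Multiplying each count by its weight and summing the five products gives the (unadjusted)
function point total. A minimal sketch using the weights listed above; the component counts are
hypothetical:

# Unadjusted function point count using the weights listed above.
# All component counts are hypothetical.
weights = {"external_inputs": 4, "external_outputs": 5,
           "logical_internal_files": 10, "external_interface_files": 7,
           "external_inquiries": 4}
counts = {"external_inputs": 30, "external_outputs": 18,
          "logical_internal_files": 12, "external_interface_files": 5,
          "external_inquiries": 22}

function_points = sum(weights[k] * counts[k] for k in weights)
print(f"Unadjusted function points: {function_points}")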
3. Customer problems
PUM = total problems that customers reported (true defects and non-defect-oriented
problems) for a time period ÷ total number of license-months of the software during
the period, where license-months = number of installed licenses of the software x
number of months in the calculation period.
4. Customer satisfaction
Customer satisfaction is often measured by customer survey data via the five-point scale:
Very satisfied
Satisfied
Neutral
Dissatisfied
Very dissatisfied
UNIT – IV
TEST AUTOMATION
Selecting and Installing Software Testing Tools - Software Test Automation – Skills needed for
Automation – Scope of Automation – Design and Architecture for Automation – Requirements
for a Test Tool – Challenges in Automation – Tracking the Bug – Debugging – Case study using
Bug Tracking Tool.

SELECTING AND INSTALLING SOFTWARE TESTING TOOL


Selecting the test tool is an important aspect of test automation for several reasons as given
below:
- Free tools are not well supported and get phased out soon
- Developing in-house tools takes time
- Test tools sold by vendors are expensive
- Test tools require strong training
- Test tools generally do not meet all the requirements for automation
- Not all test tools run on all platforms.
CRITERIA FOR SELECTING TEST TOOLS
The criteria fall into four broad categories:
1. Meeting requirements
2. Technology expectations
3. Training/skills
4. Management aspects
1. Meeting requirements:
i) There are plenty of tools available in the market but rarely do they meet all the
requirements of a given product or a given organization. Evaluating different tools
for different requirements involves significant effort, money and time.
ii) Test tools are usually one generation behind and may not provide backward or
forward compatibility with the product under test.
iii) Test tools may not go through the same amount of evaluation for new
requirements.
iv) A number of test tools cannot differentiate between a product failure and a test
failure. The test tools may not provide the required amount of
troubleshooting/debug/error messages to help in analysis. This can result in
increased log messages and may force the tester to go through the test manually.
2. Technology expectations:
i) Test tools in general may not allow test developers to extend/modify the
functionality of the framework. So extending the functionality requires going
back to the tool vendor and involves additional cost & effort.
ii) A good number of test tools require their libraries to be linked with product
binaries. When these libraries are linked with the source code of the product, it is
called “instrumented code”.
iii) Test tools are not 100% cross-platform. They are supported only on some operating
system platforms, and the scripts generated by the tools may not be compatible with
other platforms.
3. Training/skills: While test tools require plenty of training, very few vendors provide
training to the required level. Test tools expect the users to learn a new language/script
and may not use a standard language/script.
This increases the skill requirements for automation and steepens the learning curve
inside the organization.
4. Management aspects:
A test tool may increase the system requirements and require the hardware and software
to be upgraded. When selecting a test tool, the cost involved in upgrading the software and
hardware needs to be included with the cost of the tool.
Deploying a test tool requires as much effort as deploying a product in a company.
STEPS FOR TOOL SELECTION AND DEPLOYMENT:
1. Identify your test suite requirements among the generic requirements discussed. Add
other requirements (if any).
2. Make sure experiences discussed in previous sections are taken care of.
3. Collect the experiences of other organizations which used similar test tools.
4. Keep a checklist of questions to be asked to the vendors on cost/effort/support.
5. Identify a list of tools that meet the above requirements
6. Evaluate and shortlist one/set of tools and train all test developers on the tool.
7. Deploy the tools across test teams after training all potential users of the tool.

SOFTWARE TEST AUTOMATION


DEFINITION:
“Developing software to test the software is called test automation”. Test automation can
help address several problems.
Automation saves time, as software can execute test cases faster than humans do. This
helps in running the tests overnight or unattended. The time thus saved can be used effectively
by the test engineers to
1. Develop additional test cases to achieve better coverage.
2. Perform some esoteric or specialized tests like ad hoc testing.
3. Perform some extra manual testing.
Thus, with test automation, the saved time can be used to develop additional test cases,
improve the coverage of testing, and enable the product to be released more frequently.
- Test automation can free the test engineers from mundane tasks and make them focus on
more creative tasks.
- Automated tests can be more reliable: when an engineer executes a particular test case
many times manually, there is a chance for human error. With machine-oriented
activities, automation can be expected to produce more reliable results every time.
- Automation helps in immediate testing: automation reduces the time gap between
development and testing as script can be executed as soon as the product build is ready.
- Automation can protect an organization against attrition of test engineers:
automation can also be used as a knowledge transfer tool to train test engineers on the
product as it has a repository of different tests for the product.
- Test automation opens up opportunities for better utilization of global resources:
manual testing requires the presence of test engineers, but automated tests can be run
round the clock, 24 hours a day, 7 days a week.
- Certain types of testing cannot be executed without automation: test cases for certain
testing such as reliability testing, stress testing, load and performance testing cannot be
executed without automation.
- Automation means end-to-end, not test execution alone: automation does not end with
developing programs for test cases. It should also consider all activities such as picking
up the right product build, choosing right configuration, running the tests, etc.
TERMS USED IN AUTOMATION:
A test case is a set of sequential steps to execute a test operating on a set of predefined
inputs to produce certain expected outputs. There are two types of test cases:
i) Manual test case: is executed manually
ii) Automated test case: is executed using automation
- A test case should always have an expected result associated with it when executed.
- A test case can be represented in many forms. It can be documented as a set of simple
steps, or it could be an assertion statement or a set of assertions (see the sketch at the
end of this section).
- Testing involves several phases and several types of testing. Some test cases are repeated
several times during a product release because the product is built several times.
S.No  Test cases for testing                              Belongs to what type of testing
1     Check whether log in works                          Functionality
2     Repeat log in operation in a loop for 48 hours      Reliability
3     Perform log in from 10000 clients                   Load/stress testing
4     Measure time taken for log in operations            Performance
      in different conditions
5     Run log in operation from a machine running         Internationalization
      the Japanese language

From the table observe that there are two important dimensions:
i) What operations have to be tested
ii) How the operations have to be tested

- The “how” portion of the test case is called a scenario.
- “What an operation has to do” is a product-specific feature; “how it is to be run” is a
framework-specific requirement.
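As an illustration of a test case written as a set of assertions, the log-in check from the
table above might be automated as two small test functions in the style of pytest. The `myapp`
module and its `login` function are hypothetical stand-ins for the product under test:

# A minimal automated test case written as assertions (pytest style).
# `myapp.login` is hypothetical: it is assumed to return a session
# object on success and None on failure.
from myapp import login  # hypothetical module under test

def test_login_with_valid_credentials():
    session = login(user="alice", password="correct-password")
    assert session is not None, "valid credentials must yield a session"

def test_login_with_invalid_credentials():
    session = login(user="alice", password="wrong-password")
    assert session is None, "invalid credentials must be rejected"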

SKILLS NEEDED FOR AUTOMATION


There are different “generations of automation”. The skills required for automation
depend on what generation of automation the company is in or desires to be in the near future.
The automation of testing is broadly classified into three generations.
1. First generation – record and playback
2. Second generation – data-driven
3. Third generation – action-driven
First generation- record and playback:
Record and playback avoids the repetitive nature of executing tests. Almost all the test
tools available in the market have the record and playback feature.
A test engineer records the sequence of actions (keyboard characters or mouse clicks),
and the recorded scripts are played back later in the same order they were recorded.
Besides avoiding repetitive work, it is also simple to record and save the script.
Second generation- data-driven:
This method helps in developing test scripts that generate the set of input conditions and
the corresponding expected outputs.
This enables the tests to be repeated for different input and output conditions.
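A minimal sketch of a data-driven test: the script stays fixed while input and expected-
output rows are fed from a data table. The data would normally live in an external CSV file but
is in-lined here so the sketch is self-contained; `login_succeeds` is a hypothetical stand-in for
the operation under test:

import csv, io

# Data-driven testing: one script, many input/expected-output rows.
def login_succeeds(user, password):
    # Hypothetical stand-in for the real login operation.
    return (user, password) == ("alice", "correct-password")

TEST_DATA = """user,password,expected
alice,correct-password,True
alice,wrong-password,False
bob,correct-password,False
"""

for row in csv.DictReader(io.StringIO(TEST_DATA)):
    expected = row["expected"] == "True"
    actual = login_succeeds(row["user"], row["password"])
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: login({row['user']!r}, {row['password']!r})")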
Third generation- action-driven:
This technique enables a layman to create automated tests; no input and expected output
conditions are required for running the tests.
All actions that appear in the application are automatically tested, based on generic
controls defined for automation.
The set of actions is represented as objects, and those objects are reused.
The input and output conditions are automatically generated and used.
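A minimal sketch of the action-driven idea: each application action is a reusable
object/function, and a test is composed as a plain list of action names and arguments, so no
programming is needed to write one. All action implementations below are hypothetical stubs:

# Action-driven testing sketch: tests are written as data (action name
# plus arguments); the framework maps names to reusable actions.
def open_application(name):
    print(f"opening {name}")

def click_button(label):
    print(f"clicking button {label!r}")

def type_text(field, text):
    print(f"typing {text!r} into {field!r}")

ACTIONS = {"open": open_application, "click": click_button, "type": type_text}

# A layman-readable test: no programming, just action rows.
test_steps = [
    ("open", "billing-app"),
    ("type", "username", "alice"),
    ("type", "password", "secret"),
    ("click", "Log in"),
]

for name, *args in test_steps:
    ACTIONS[name](*args)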
CLASSIFICATION OF SKILLS FOR AUTOMATION:
First generation - skills for test case automation:
- Scripting language
- Record-playback tools usage
Second generation - skills for test case automation:
- Scripting language
- Programming language
- Knowledge of data generation techniques
- Usage of the product under test
Third generation - skills for test case automation:
- Scripting language
- Programming language
- Design and architecture of the product under test
Third generation - skills for framework automation:
- Programming language
- Design and architecture skills for framework creation
- Generic test requirements for multiple products
- Usage of framework

SCOPE OF AUTOMATION
The automation requirements define what needs to be automated looking into various
aspects. The specific requirements can vary from product to product, from situation to situation,
from time to time.
1. Identify the types of testing amenable to automation
Certain types of testing automatically lend themselves to automation.
a) Stress, reliability, scalability and performance testing: these types of testing require the
test cases to be run from a large number of different machines for an extended period of
time, such as 24 hours, 48 hours, and so on.
- Test cases belonging to these testing types become the first candidates of automation.
b) Regression tests: This test is repetitive in nature; these test cases are executed multiple
times during the product development phases. Here the automation will save significant
time and effort in the long run.
c) Functional tests: these kinds of tests may require a complex set up and thus require
specialized skill. Automating these once, using the expert skills sets, can enable using
less-skilled people to run these tests on an ongoing basis.
2. Automating area less prone to change
In a product scenario, changes in requirements are quite common. In such situations,
automation should consider those areas where requirements go through little or no
change.
Normally changes in requirements cause scenarios and new features to be impacted, not
the basic functionality of the product.
To avoid rework on automation test cases, proper analysis has to be done to find out the
areas of changes to user interfaces and automate only those areas that will go through
relatively less change.
3. Automate tests that pertain to standards
One kind of testing a product may have to undergo is compliance to standards. For
example, a product providing a JDBC interface should satisfy the standard JDBC tests.
Automating for standards provides a dual advantage: test suites developed for standards
are not only used for the product but can also be sold as test tools in the market.
To certify software or hardware, a test suite is developed and handed over to different
companies.
4. Management aspects in automation
Prior to starting automation, adequate effort has to be spent to obtain management
commitments.
Automation generally is a phase involving a large amount of effort and is not necessarily
a one-time activity.
The automated test cases need to be maintained till the product reaches obsolescence. It
involves significant effort to develop and maintain automated tools.

DESIGN AND ARCHITECTURE FOR AUTOMATION


Design and architecture is an important aspect of automation. As in product development,
the design has to represent all requirements in modules and in the interactions between modules.
As with integration testing, both internal interfaces and external interfaces have to be
captured by the design and architecture. In the architecture diagram, the thin arrows
represent the internal interfaces and the direction of flow, and the thick arrows show the
external interfaces. All the modules, their purpose, and the interactions between them are
described in the subsequent sections.

Architecture for test automation involves two major heads:


i) A test infrastructure, which covers a test case database (TCDB) and a defect
database or defect repository.
ii) A test framework, which provides the backbone that ties together the selection
and execution of the test cases.
EXTERNAL MODULE:
There are two external modules in automation:
1. TCDB
2. Defect DB
TCDB:
All the test cases, the steps to execute them and the history of their execution such as
when a particular test case was run and whether it passed/failed are stored in the TCDB.
The test cases in TCDB can be manual or automated.
The interface shown by thick arrows represents the interaction between TCDB and the
automation framework only for automated test cases.
The manual test cases do not need any interaction between the framework and TCDB.
Defect DB:
Defect database/defect repository contains details of all the defects that are found in
various products that are tested in a particular organization.
It contains all the related information about defects: when the defect was found, to whom
it is assigned, what its current status is, the type of defect, its impact, and so on.
Test engineers submit the defects for manual test cases.
For automated test cases, the framework can automatically submit the defects to the
defect DB during execution.
SCENARIO AND CONFIGURATION FILE MODULES
Scenarios are nothing but information on “how to execute a particular test case”.
A configuration file contains a set of variables that are used in automation. The variables
could be for the test framework or for other modules in automation such as tools and metrics or
for set of test cases or for a particular test case.
A configuration file is important for running the test cases for various execution
conditions and for running the tests for various input and output conditions and states.
The values of variables in this configuration file can be changed dynamically to achieve
different execution, input, output and state conditions.
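A minimal sketch of such a configuration file and how a framework might read it; the INI
format and every variable name below are assumptions chosen for illustration:

import configparser

# Hypothetical configuration for a test run; in practice this would be
# a file on disk whose values are edited (or overridden) per execution
# to vary input, output, and state conditions.
CONFIG_TEXT = """
[framework]
build_server = build.example.com
result_dir = /var/results

[testcase.login]
clients = 10000
duration_hours = 48
locale = ja_JP
"""

config = configparser.ConfigParser()
config.read_string(CONFIG_TEXT)

clients = config.getint("testcase.login", "clients")
locale = config.get("testcase.login", "locale")
print(f"running login test with {clients} clients, locale {locale}")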
TEST CASES AND TEST FRAMEWORK MODULES
A test case here means an automated test case taken from the TCDB and executed by
the framework. A test case is an object of execution for the other modules in the architecture
and does not represent any interaction by itself.
A test framework is a module that combines “what to execute” and “how they have to be
executed”.
It picks up the specific test cases that are automated from TCDB and picks up the
scenarios and executes them.
The variables and their defined values are picked up by the test framework and the test
cases are executed for those values.
The test framework is considered the core of automation design. It subjects the test cases
to different scenarios.
A test framework can be developed by the organization internally or can be bought from
the vendor.
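A minimal sketch of the framework's core loop: pick the automated test cases from the
TCDB, pair each with its scenarios, merge in the configuration values, execute, and record the
results. The TCDB, scenarios, and configuration are faked as in-memory structures, and every
name below is hypothetical:

# Skeletal test framework: combines "what to execute" (test cases from
# the TCDB) with "how to execute" (scenarios) under configuration
# values. All data stores are faked in memory for illustration.
def login_test(**params):
    return True  # stand-in for a real automated test body

TCDB = [{"id": "TC-1", "automated": True, "body": login_test}]
SCENARIOS = {"TC-1": [{"clients": 1}, {"clients": 10000}]}
CONFIG = {"duration_hours": 48}

results = []
for case in TCDB:
    if not case["automated"]:
        continue  # manual test cases never reach the framework
    for scenario in SCENARIOS.get(case["id"], [{}]):
        params = {**CONFIG, **scenario}
        passed = case["body"](**params)
        results.append((case["id"], params, passed))

for case_id, params, passed in results:
    print(f"{case_id} {params} -> {'PASS' if passed else 'FAIL'}")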
TOOLS AND RESULTS MODULES
When a test framework performs its operations, there are a set of tools that may be
required. For example: when test cases are stored as source code files in TCDB, they need to be
extracted and compiled by build tools.
In order to run the compiled code, certain runtime tools and utilities may be required. In
this case, the test framework involves all these different tools and utilities.
When a test framework executes a set of test cases with a set of scenarios for the different
values provided by the configuration file, the results for each test case along with scenarios and
variable values have to be stored for future analysis and action.
The history of all the previous tests run should be recorded and kept as archives.
REPORT GENERATORS AND REPORTS/METRICS MODULES
Once the results of a test run are available, the next step is to prepare the test reports and
metrics.
Preparing reports is a complex and time-consuming effort and hence it should be part of
the automation design.
There should be customized reports such as an executive report, which gives very high
level status; technical reports, which give a moderate level of details of test run and detailed or
debug reports which are generated for developers to debug the failed test cases and the product.
The periodicity of the reports is different, such as daily, weekly, monthly, and milestone
reports.
“The module that takes the necessary inputs and prepares a formatted report is called a
report generator”. Once the results are available, the report generator can generate “metrics”.
All the reports and metrics that are generated are stored in the reports/metrics module of
automation for future use and analysis.

REQUIREMENTS FOR TEST TOOL


1. No hard coding in the test suite
2. Test case/suite expandability
3. Reuse of code for different types of testing, test cases
4. Automatic setup and cleanup
5. Independent test cases
6. Test case dependency
7. Insulating test cases during execution
8. Coding standards and directory structure
9. Selective execution of test cases
10. Random execution of test cases
11. Parallel execution of test cases
12. Looping the test cases
13. Grouping the test scenarios
14. Test case execution based on previous results
15. Remote execution of test cases
16. Automatic archival of test data
17. Reporting scheme
18. Independent of languages
19. Portability to different platforms

CHALLENGES IN AUTOMATION
Test automation presents some very unique challenges.
- The most important of these challenges is management commitment.
- Automation takes time and effort and pays off only in the long run.
- Automation requires a significant initial outlay of money as well as a steep learning
curve for the test engineers before it can start paying off.
- Management should have patience and persist with automation.
- The main challenge is that, because of the heavy front-loading of test automation costs,
management starts to look for an early payback.
- Successful test automation efforts are characterized by unflinching management
commitment, a clear vision of the goals, and the ability to set realistic short-term goals
that track progress with respect to the long-term vision.
