
1. Quality Principles
A. Before an organization can begin to assess the quality of its products and services and identify
opportunities for improvement, it first must have a working knowledge of quality principles. This category
will test the CSQA candidate’s understanding of these principles and the ability to apply them.

B. Definitions of Quality:
1. Quality
i. Totality of characteristics of an entity that bear on its ability to satisfy stated and implied needs.
The term "quality" should not be used as a single term to express a degree of excellence in a
comparative sense, nor should it be used in a quantitative sense for technical evaluations. To
express these meanings, a qualifying adjective should be used.
ii. QUALITY - The degree to which a system, component, or process meets specified
requirements, or customer or user needs or expectations.
iii. QUALITY FACTORS - The characteristics used to formulate measures of assessing information
system quality.
iv. The New 2000 ISO 9000 Standards - The four primary standards are as follows:
 ISO 9000: Quality management systems - Fundamentals and vocabulary
 ISO 9001: Quality management systems - Requirements
 ISO 9004: Quality management systems - Guidance for Performance Improvement
 ISO 19011: Guidelines on Quality and Environmental Auditing
v. Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable. However, quality is obviously a
subjective term. It will depend on who the 'customer' is and their overall influence in the scheme
of things. A wide-angle view of the 'customers' of a software development project might include
end-users, customer acceptance testers, customer contract officers, customer management, the
development organization's management/accountants/testers/salespeople, future software
maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will
have their own slant on 'quality' - the accounting department might define quality in terms of
profits while an end-user might define quality as user-friendly and bug-free.
vi. In Quality Is Free, Phil Crosby describes quality as "conformance to requirements.”
vii. J.M. Juran’s definition of quality: he spends a good portion of an early chapter in Juran on
Planning for Quality discussing the meaning of quality, but he also offers a pithy definition:
fitness for use. In other words, quality exists in a product—a coffee maker, a car, or a software
system—when that product is fit for the uses for which the customers buy it and to which the
users set it. A product will be fit for use when it exhibits the predominant presence of customer-
satisfying behaviors and a relative absence of customer-dissatisfying behaviors.

2. Producer’s View of Quality


i. A more objective view
ii. Conformance requirements
iii. Costs of quality (prevention, appraisal, scrap & rework, warranty costs)
 Prevention costs: training, writing quality procedures
 Appraisal costs: inspecting and measuring product characteristics
 Scrap and Rework costs: internal costs of defective products
 Warranty costs: external costs for product failures in the field

3. Customer’s View of Quality


i. A more subjective view
ii. Quality of the design (look, feel, function)
iii. Consider both feature and performance measures to assess value
____________________________________________________________________________________________________________________________
CSQA Exam Notes Revised: 08/19/2002
Page: 1
 Value = quality / price (determined by individual customers)
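The value relationship above can be illustrated with a short sketch; the products, quality scores, and prices below are hypothetical, chosen only to show how individual customers weigh quality against price:

```python
def value(quality_score, price):
    """Value as perceived by an individual customer: quality relative to price."""
    return quality_score / price

# Hypothetical products, for illustration only.
product_a = value(quality_score=80, price=20)  # 4.0
product_b = value(quality_score=90, price=45)  # 2.0
```

Here the cheaper product delivers higher value despite lower absolute quality, which is why individual customers can rank the same products differently.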
4. International Organization for Standardization (ISO) document ISO 9126.
i. This standard proposes that the quality of a software system can be measured along six major
characteristics:
 Functionality: Does the system provide the required capabilities?
 Reliability: Does the system work as needed when needed?
 Usability: Is the system intuitive, comprehensible, and handy to the users?
 Efficiency: Is the system sparing in its use of resources?
 Maintainability: Can operators, programmers, and customers upgrade the system as
needed?
 Portability: Can the system be transferred from one environment to another as needed?

C. Quality Concepts:
1. Cost of Quality
i. Prevention costs – maintaining a quality system
ii. Appraisal costs – maintaining a quality assurance system
iii. Internal failures – manufacturing losses, scrap, rework
iv. External failures – warranty, repair, customer, product service
v. Jim Campenella illustrates a technique for analyzing the costs of quality in Principles of Quality
Costs. Campenella breaks down those costs as follows:
 C(quality) = C(conformance) + C(nonconformance)
vi. Conformance costs include prevention costs and appraisal costs. Prevention costs include money
spent on quality assurance—tasks like training, requirements and code reviews, and other
activities that promote good software. Appraisal costs include money spent on planning test
activities, developing test cases and data, and executing those test cases once. Nonconformance
costs come in two flavors: internal failures and external failures. The costs of internal failure
include all expenses that arise when test cases fail the first time they’re run, as they often do. A
programmer incurs a cost of internal failure while debugging problems found during her own
unit and component testing
vii. The costs of external failure are those incurred when, rather than a tester finding a bug, the
customer does. These costs will be even higher than those associated with either kind of internal
failure, programmer-found or tester-found. In these cases, not only does the same process
described for tester-found bugs occur, but you also incur the technical support overhead and the
more expensive process of releasing a fix to the field rather than to the test lab. In addition,
consider the intangible costs: angry customers, damage to the company image, lost business, and
maybe even lawsuits.
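The conformance/nonconformance breakdown described above can be sketched as a simple calculation; all dollar figures below are hypothetical, chosen only to show how the categories roll up:

```python
# Hypothetical cost-of-quality figures, for illustration only (not from the text).
prevention = 10_000        # training, reviews, quality procedures
appraisal = 15_000         # test planning, test development, first test execution
internal_failure = 25_000  # debugging and rework for bugs found in-house
external_failure = 50_000  # support, field fixes, fallout from customer-found bugs

conformance = prevention + appraisal
nonconformance = internal_failure + external_failure
cost_of_quality = conformance + nonconformance

print(f"C(quality) = {conformance} + {nonconformance} = {cost_of_quality}")
```

On these invented numbers, nonconformance dominates the total, which is the usual argument for investing more in prevention and appraisal.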
viii. The flip side of the quality approach is Philip Crosby's “quality is free" idea. Basically, Crosby's
thesis is that bad quality is very expensive. If you add up the costs of scrap, rework, delays in
scheduling, the need for extra inventories to compensate for schedule changes, field service
costs, product warranty expenses, and most of all customer dissatisfaction with your product,
that can cost one heck of a lot of money. Companies need to get their arms around these costs of
quality and quantify their impact. Most companies have estimated that their cost of quality is
25% to 35% of product cost. With that as the incentive, a firm can start to go to work and attack
the root causes that result in those bad quality costs, reduce them, and end up at the same place
as the Deming approach which is to produce high quality goods and eliminate screw-ups in the
manufacturing process. They both have the same objective, to get the quality up throughout the
whole process instead of waiting and inspecting the product at the end.

2. Plan-Do-Check-Act
i. A Problem Solving Process
ii. The well known Deming cycle instructs us to Plan, Do, Study and then Act upon our findings, in
order to obtain continuous improvement. For a software organization this might be rephrased as
"Plan the project, Develop the system, Scrutinise its implementation and Amend the process".

iii. The continuous improvement uses a process that follows the plan-do-check-act cycle. The
situation is analyzed and the improvement is planned (Plan). The improvement is tried (Do).
Then data is gathered to see how the new approach works (Check or study) and then the
improvement is either implemented or a decision is made to try something else (Act). This
process of continuous improvement makes it possible to reduce variations and lower defects to
near zero.
 Plan – The Change
1) Step 1: Identify the Problem
i. Select the problem to be analyzed
ii. Clearly define the problem and establish a precise problem statement
iii. Set a measurable goal for the problem solving effort
iv. Establish a process for coordinating with and gaining approval of
leadership
2) Step 2: Analyze the Problem
i. Identify the processes that impact the problem and select one
ii. List the steps in the process as it currently exists
iii. Map the Process
iv. Validate the map of the process
v. Identify potential cause of the problem
vi. Collect and analyze data related to the problem
vii. Verify or revise the original problem statement
viii. Identify root causes of the problem
ix. Collect additional data if needed to verify root causes

 Do – Implement The Change


1) Step 3: Develop Solutions
i. Establish criteria for selecting a solution
ii. Generate potential solutions that will address the root causes of the
problem
iii. Select a solution
iv. Gain approval and supporters of the chosen solution
v. Plan the solution
2) Step 4: Implement a Solution
i. Implement the chosen solution on a trial or pilot basis
ii. If the Problem Solving Process is being used in conjunction with the
Continuous Improvement Process, return to Step 6 of the Continuous
Improvement Process
iii. If the Problem Solving Process is being used as a standalone,
continue to Step 5

 Check – Monitor and Review The Change


1) Step 5: Evaluate The Results
i. Gather data on the solution
ii. Analyze the data on the solution

 Act - Revise and plan how to use the learnings


1) Step 6: Standardize The Solution (and Capitalize on New Opportunities)
i. Identify systemic changes and training needs for full implementation
ii. Adopt the solution
iii. Plan ongoing monitoring of the solution
iv. Continue to look for incremental improvements to refine the solution
v. Look for another improvement opportunity
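As a rough illustration, the six-step flow above can be modeled as a loop that pilots an improvement, checks the measured defect rate, and either adopts the change or plans another attempt. The starting defect rate, target, and fixed improvement factor below are all hypothetical:

```python
def pdca(defect_rate, improvement_factor=0.5, target=0.00034, max_cycles=10):
    """Toy model of the Plan-Do-Check-Act loop: each cycle pilots an
    improvement (Do), measures the resulting defect rate (Check), and
    either adopts it or plans another attempt (Act)."""
    cycles = 0
    while defect_rate > target and cycles < max_cycles:
        piloted_rate = defect_rate * improvement_factor  # Plan + Do: pilot a change
        if piloted_rate < defect_rate:                   # Check: did it help?
            defect_rate = piloted_rate                   # Act: standardize the change
        cycles += 1
    return defect_rate, cycles

rate, cycles = pdca(0.0668)  # start from a hypothetical defect rate
```

Each pass lowers the defect rate; the point is that near-zero defects are reached through repeated improvement cycles rather than a single pass.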

(Figure: Plan-Do-Check-Act Cycle)

3. Six Sigma
i. Six Sigma is a highly disciplined process that helps us focus on developing and delivering near-
perfect products and services. Why "Sigma"? The word is a statistical term that measures how
far a given process deviates from perfection. The central idea behind Six Sigma is that if you
can measure how many "defects" you have in a process, you can systematically figure out
how to eliminate them and get as close to "zero defects" as possible.
ii. Six Sigma is a disciplined, data-driven approach and methodology for eliminating defects
(driving towards six standard deviations between the mean and the nearest specification limit) in
any process -- from manufacturing to transactional and from product to service.
iii. The objective of Six Sigma Quality is to reduce process output variation so that six standard
deviations lie between the mean and the nearest specification limit. This will allow no more than
3.4 Defects Per Million Opportunities (DPMO), also expressed as 3.4 defective Parts Per
Million (PPM), to be produced.

As the process sigma value increases from zero to six, the variation of the process around the
mean value decreases. With a high enough value of process sigma, the process approaches zero
variation and is known as 'zero defects.’

Decrease your process variation (remember variance is the square of your process standard
deviation) in order to increase your process sigma. The end result is greater customer satisfaction
and lower costs.

iv. The statistical representation of Six Sigma describes quantitatively how a process is performing.
To achieve Six Sigma, a process must not produce more than 3.4 defects per million
opportunities. A Six Sigma defect is defined as anything outside of customer specifications. A
Six Sigma opportunity is then the total quantity of chances for a defect. Process sigma can easily
be calculated using a Six Sigma calculator.
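In the absence of a calculator, process sigma can be computed directly from the defect count; a minimal sketch, assuming the conventional 1.5-sigma long-term shift:

```python
from statistics import NormalDist

def process_sigma(defects, opportunities, shift=1.5):
    """Process sigma from an observed defect count, using the conventional
    1.5-sigma long-term shift (assumes defects < opportunities)."""
    yield_rate = 1 - defects / opportunities
    return NormalDist().inv_cdf(yield_rate) + shift

# 3.4 defects per million opportunities corresponds to Six Sigma performance:
print(round(process_sigma(3.4, 1_000_000), 2))  # prints 6.0
```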

v. What’s Involved In a Six Sigma Initiative?

 Six Sigma is a philosophy to eliminate variation in process. It can be applied to all
disciplines: production, sales, marketing, service, quality. Sigma is a metric that
indicates how well the process is performing. Three sigma indicates a level of 66,807
defects per one million opportunities, whereas Six Sigma brings the defect level down to only
3.4 per one million opportunities. Considering the enormous reduction in defect level and
variations, the philosophy is called the Six Sigma philosophy. However, one must note
that we cannot achieve a Six Sigma level in one go. Every time we improve the process,
the sigma level goes up. This process is to be repeated until we reach a defect level of
3.4 or lower for every million opportunities.
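The sigma-to-DPMO figures quoted above (66,807 at three sigma, 3.4 at six sigma) follow from the normal distribution under the conventional 1.5-sigma shift; a minimal sketch:

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a given process sigma,
    using the conventional 1.5-sigma long-term shift."""
    tail = 0.5 * math.erfc((sigma_level - shift) / math.sqrt(2))  # one-sided tail
    return tail * 1_000_000

print(round(dpmo(3)))     # prints 66807
print(round(dpmo(6), 1))  # prints 3.4
```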
 Six Sigma is the disciplined application of statistical problem-solving tools that show
you where wasteful costs are and points you toward precise steps to take for
improvement. These tools apply a refined methodology of measurement and discovery
to gain a comprehensive understanding of
performance and key variables affecting the quality of a company’s products and
services. A level of Six Sigma represents the peak of quality — the virtual elimination
of defects from every product and process within an organization. As sigma increases,
customer satisfaction goes up while at the same time cycle time goes down and costs
plummet.
 Listed below is a high level overview of the Six Sigma improvement methodology that
various companies have used to practice its process improvement initiative.
 The Six Sigma Revolution, George Eckes, Pg 34.
1) Define. Define the customers, their requirements, the team charter, and the key
process that affects that customer.
2) Measure. Identify the key measures, the Data Collection Plan for the process
in question, and execute the plan for data collection.
3) Analyze. Analyze the data collected as well as the process to determine the
root causes for why the process is not performing as desired.
4) Improve. Generate and determine potential solutions and pilot them on a small
scale to determine if they positively improve process performance.
5) Control. Develop, document, and implement a plan to ensure that performance
improvement remains at the desired level.
 The essence of this method centers on identifying problems, determining their root
causes, formulating ideas around what would result in improvement, testing those
improvements, and maintaining improvement.
 Achieving Six Sigma performance across an organization is an enormous challenge.
Going from Four to Six Sigma is almost a 2,000-fold improvement! No one person
and no one area can accomplish this alone. The challenge to leadership is to harness the
ideas and energy of many people across functions, sites, and even business groups.
 Implementing Six Sigma requires mobilizing people resources and arming them with
the tools they need to accomplish the goal of quality improvement and impressive
financial results. Training these individuals to become Black Belts and “change agents”
is critical to successful implementation of Six Sigma Problem Solving Technology. To
become a Six Sigma company, it takes more than technology, knowledge and
organization. This quantum leap in quality needs people to make it happen. While all
employees need to understand the vision of Six Sigma and use some of its tools to
improve their work, there are six distinctive roles in the implementation process.
 Six Sigma Champions: As a group, executive managers provide overall leadership and
must own and drive Six Sigma. From within this group, a senior management leader or
leaders is assigned to provide day-to-day top management leadership during
implementation. These individuals are referred to as Champions.
 Supervisory-Level Management: These managers play a pivotal role because they own
the processes of the business and must ensure that improvements to the process are
captured and sustained. They typically also manage the individuals who are selected for
Black Belt training, and must understand the challenges facing them as well as be
willing and empowered to remove any roadblocks to progress.
 Master Black Belts: These are the full-time trainers for a company’s Six Sigma efforts.
They act as coaches and mentors for Black Belts. They will grow them from the ranks
____________________________________________________________________________________________________________________________
CSQA Exam Notes Revised: 08/19/2002
Page: 5
of the Black Belts with the help of the Six Sigma partner’s team of experts. To sustain a
program, the best Black Belts become the Master Black Belts.
 Black Belts: Full-time employees who are 100 percent focused on identifying, leading
and facilitating the completion of Six Sigma projects.
 Green Belts: As part-time resources, they help Black Belts complete projects and
extend the reach of Black Belts. When a Black Belt has access to the time and expertise
of Green Belts, it allows the Black Belt to work on overlapping projects, thus
completing more projects in a given period of time. Green Belts also work on smaller
projects inside their functional areas.
 Project Team Members: These are the project-specific, part-time people resources, “the
teams”, that provide process and cross-functional knowledge. They help sustain the
gains achieved by Six Sigma projects, and eventually take 100 percent ownership of a
Black Belt project.
 In summary, Six Sigma is:
1) A measure of variation that achieves 3.4 defects per million opportunities.
2) A cultural value or philosophy toward your work.
3) A measurement system.
4) A goal.

4. Benchmarking
i. The process of identifying, sharing, and using knowledge and best practices. It focuses on how
to improve any given business process by exploiting top-notch approaches rather than merely
measuring the best performance. Finding, studying and implementing best practices provides the
greatest opportunity for gaining a strategic, operational, and financial advantage.
ii. BENCHMARK - A standard against which measurements or comparisons can be made. (SW-CMM,
IEEE-STD-610)

5. Continuous Improvement
i. Continuous improvement, in regard to organizational quality and performance, focuses on
improving customer satisfaction through continuous and incremental improvements to
processes, including the removal of unnecessary activities and variations.
ii. (ISO 9001) A complete cycle to improve the effectiveness of the quality management system.

6. Best Practices
i. The revisions of ISO 9001 and 9004 are based on eight quality management principles that
reflect best management practices.
ii. These eight principles are:
 Customer focused organization
 Leadership
 Involvement of people
 Process approach
 System approach to management
 Continual improvement
 Factual approach to decision making
 Mutually beneficial supplier relationships

D. Quality Objectives:
1. For the corporate quality policy, management should define objectives pertaining to key elements of
quality, such as fitness for use, performance, safety and reliability. (Source ISO 9004: 1987, 4.3.1.)
i. Improve Customer Satisfaction
ii. Reduce development costs/improve time-to-market capability
iii. Improve Processes

E. Quality Attributes:
1. Reliability
i. Extent to which a system or release can be expected to perform its intended function with
required precision and without interruption to execution and delivered functionality.
ii. RELIABILITY - Automated applications are not run in a sterile environment. Busy people
prepare input and make mistakes in input preparation. Forms are misinterpreted, instructions are
unknown, and users of systems experiment with input transactions. People operate the system
and make mistakes, such as using wrong program versions, not including all of the input or
adding input which should not be included, etc. Outputs may be lost, mislaid, misdirected, and
misinterpreted by the people that receive them. All of these affect the correctness of the
application results. The reliability factor measures the consistency with which the system can
produce correct results. For example, if an input transaction is entered perfectly, and the system
can produce the desired result correctly, then the correctness quality factor would be rated
perfect. On the other hand, that same system which processed using imperfect input may fail to
produce correct results. Thus, while correctness would score high, reliability may score low.

2. Maintainability
i. Effort required to learn, operate, maintain, and test the system or project enhancement from user,
production control, and application support personnel perspectives. Effort required to implement
new enhancements or fix operational errors.

3. Correctness
i. Effort required to implement zero-defect functionality.

4. Flexibility
i. Effort and response time required to enhance an operational system or program.
ii. The characteristics of software that allow or enable adjustments or other changes to the business
process.
 System adaptability is the capability to modify the system to cope with major changes
in business processes with little or no interruption to business operations.
 System versatility (or system robustness) is the capability of the system to allow flexible
procedures to deal with exceptions in processes and procedures.

5. Interoperability
i. Effort required to couple or interface one system with another.
ii. The ability of software and hardware on different machines from different vendors to share data.

6. Standardization
i. Conformance to accepted software standards, including any additional enhancements to those
standards.
ii. Establishing standards and procedures for software development is critical, since these provide
the framework from which the software evolves. Standards are the established criteria to which
the software products are compared. Procedures are the established criteria to which the
development and control processes are compared. Standards and procedures establish the
prescribed methods for developing software; the SQA role is to ensure their existence and
adequacy. Proper documentation of standards and procedures is necessary since the SQA
activities of process monitoring, product evaluation, and auditing rely upon unequivocal
definitions to measure project compliance.
 Documentation Standards specify form and content for planning, control, and product
documentation and provide consistency throughout a project. The NASA Data Item
Descriptions (DIDs) are a documentation standard.
 Design Standards specify the form and content of the design product. They provide
rules and methods for translating the software requirements into the software design
and for representing it in the design documentation.
 Code Standards specify the language in which the code is to be written and define any
restrictions on use of language features. They define legal language structures, style
conventions, rules for data structures and interfaces, and internal code documentation

7. Testability
i. Effort required to prepare for and test a system or program to assure it performs its intended
functionality without degraded operational performance.
ii. The resources that need to be utilized to test the system to ensure the specified quality has been
achieved. Testing, like the other quality factors, should be discussed and negotiated with the user
responsible for the application. However, the amount of resources allocated to testing can vary
based on the degree of reliability that the user demands from the project.
 (1) The degree to which a system or component facilitates the establishment of test
criteria and the performance of tests to determine whether those criteria have been met.
 (2) The degree to which a requirement is stated in terms that permit establishment of
test criteria and performance of tests to determine whether those criteria have been met
(IEEE-STD-610).

8. Performance
i. That attribute of a computer system that characterizes the timeliness of the service delivered by
the system.

9. Usability
i. Usability is the measure of the quality of a user's experience when interacting with a product or
system — whether a Web site, a software application, mobile technology, or any user-operated
device. Usability is a combination of factors that affect the user's experience with the product or
system.
ii. Usability: To create good user interfaces, attention must focus on clarity, comprehension, and
consistency.
 CLARITY: The screen layout needs to be clear and uncluttered. The wording should be
considered carefully.
 COMPREHENSION: Developing on-line documentation, HELP, and tutorial
explanations requires the ability to write clear English and avoid jargon. Pictures may
also be extremely helpful. They should be used where appropriate to make applications
easy to learn and use. Hyper documents, an extension of hypertext, are also an
appropriate form of on-line documentation.
 CONSISTENCY: A new application should look and feel as familiar as possible to
users. There should be consistent use of screen layouts, function keys, pull-down menus,
multiple-choice mechanisms, and so on. The choice of words and colors should be
consistent.

10. Portability
i. An application is portable across a class of environments to the degree that the effort required to
transport and adapt it to a new environment in the class is less than the effort of redevelopment.

11. Scalability
i. It is the ability of a computer application or product (hardware or software) to continue to
function well as it (or its context) is changed in size or volume in order to meet a user need.
ii. It is the ability not only to function well in the rescaled situation, but to actually take full
advantage of it.

12. Availability
i. The degree to which a system or component is operational and accessible when required for use
(IEEE-STD-610).

13. Security
i. The extent to which access to software, data, and the system is controlled and protected against
unauthorized access or modification.
14. Quality Metric Criteria:

Criteria – Quality Measurement

Accuracy – Those attributes of the software products that provide the required precision in calculations and
output products and fully meet the functional, performance, and operational requirements.
Clarity – Those attributes of the software that provide for useful inputs and outputs which are readily
assimilated.
Communications – Those attributes of the software that provide for the use of common, standard protocols and
interface routines.
Communicativeness – Those attributes of the software products that provide useful inputs and outputs that can be
assimilated.
Conciseness – Those attributes of the software products that provide for simplicity and completeness in
presentation and for implementation of a function with a minimum amount of code, lending itself to simplicity
and modularity.
Consistency – Those attributes of the software products that provide uniform design and implementation
techniques. Consistency is measured for report and screen formats, programming techniques, JCL coding, etc.
Data commonality – Those attributes of the software that provide the use of standard data representations and
structures.
Encapsulation – Software objects that protect themselves and their associated data, eliminating random effects
on other objects in the same system when encountering error conditions.
Error Tolerance – Those attributes of the software products that provide continuity of operation under
non-nominal conditions.
Execution efficiency – Those attributes of the software that provide for minimum execution processing time
without decrease in functionality.
Expandability – Those attributes of the software products that provide for increasing, changing, and customizing
functionality.
Modularity – Those attributes of the software products that provide a structure of highly independent modules,
each serving a particular function and accordingly lending itself to simplicity, ease of maintenance, and future
expansion.
Self-descriptiveness – Those attributes of the software that provide explanation of the maintenance and
implementation of a function.
Simplicity – Those attributes of the software products that provide maintenance and implementation of the
functions in the most understandable manner.
Timeliness – Those attributes of the software products that are delivered on time or run on schedule. For Year
2000 projects, also those attributes of the software products that can be delivered before the event horizon.

F. Quality Assurance vs. Quality Control:


1. ISO 9000 Definitions:
i. Quality Control
 The operational techniques and activities that are used to fulfill
requirements for quality
1) Problem Identification → Problem Analysis → Problem
Correction & Feedback to QA
2) Focus on Product, Reactive, Line Function, Find Defects
ii. Quality Assurance
 All those planned and systematic activities implemented to provide
adequate confidence that a software package will fulfill
requirements for quality
1) Data Gathering → Problem Trend Analysis → Process
Identification → Process Analysis → Process Improvement
2) Focus on Process, Proactive, Staff Function, Prevent Defects
iii. SOFTWARE QUALITY ASSURANCE - A planned and systematic pattern of all actions
necessary to provide adequate confidence that a software work product conforms to established
technical requirements

G. Quality Pioneers:

1. Walter A. Shewhart
i. Pioneer of Modern Quality Control.
ii. Recognized the need to separate variation into assignable and unassignable causes.
iii. Founder of the control chart.
iv. Originator of the plan-do-check-act cycle.
v. Perhaps the first to successfully integrate statistics, engineering, and economics.
vi. Defined quality in terms of objective and subjective quality.
 objective quality: the quality of a thing independent of how people perceive it.
 subjective quality: quality relative to how people perceive it.

2. W. Edwards Deming
i. Studied under Shewhart at Bell Laboratories
ii. Contributions:
 Well known for helping Japanese companies apply Shewhart’s statistical process control.
 Main contribution is his Fourteen Points to Quality. The 14 points are:
1) Create constancy of purpose toward improvement of product and service.
2) Adopt the new philosophy. We are in a new economic age.
3) Cease dependence on mass inspection to achieve quality.
4) Constantly and forever improve the system.
5) Remove barriers.
6) Drive out fear.
7) Break down barriers between departments.
8) Eliminate numerical goals.
9) Eliminate work standards (quotas).
10) Institute modern methods of supervision.
11) Institute modern methods of training.
12) Institute a program of education and retraining.
13) End the practice of awarding business on price tag.
14) Put everybody in the company to work to accomplish the transformation.

3. Joseph Juran
i. Contributions:
 Also well known for helping improve Japanese quality.
 Directed most of his work at executives and the field of quality management.
 Developed the “Juran Trilogy” for managing quality:
1) Quality planning, quality control, and quality improvement.
 Enlightened the world on the concept of the “vital few, trivial many” which is the
foundation of Pareto charts.

4. Philip Crosby
i. Quality management
 The four absolutes of quality:
1) Quality is defined by conformance to requirements.
2) The system for causing quality is prevention, not appraisal.
3) The performance standard is zero defects, not “close enough.”
4) Measurement of quality is the cost of nonconformance.

5. Armand Feigenbaum
i. Stressed a systems approach to quality (all organizations must be focused on quality)
ii. Costs of quality may be separated into costs for prevention, appraisal, and failures (scrap,
warranty, etc.)

6. Kaoru Ishikawa
i. Developed concept of true and substitute quality characteristics
 True characteristics are the customer’s view
 Substitute characteristics are the producer’s view
 Degree of match between true and substitute ultimately determines customer
satisfaction
ii. Advocate of the use of the seven basic quality tools
iii. Advanced the use of quality circles (worker quality teams)
iv. Developed the concept of Japanese Total Quality Control
 Quality first – not short term profits.
 Next process is your customer.
 Use facts and data to make presentations.
 Respect for humanity as a management philosophy – full participation

7. Genichi Taguchi
i. 1960s – 1980s
ii. Quality loss function (deviation from target is a loss to society)
iii. Promoted the use of parameter design (application of Design of experiments) or robust
engineering
 Goal: develop products and processes that perform on target with the smallest
variation and are insensitive to environmental conditions.
 focus is on “engineering design”
 robust design/parameter design
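Taguchi's loss function is quadratic: L(y) = k(y − T)², where T is the target value, so any deviation from target imposes a loss that grows with the square of the deviation. The constant k is typically calibrated from a known cost at the specification limit. A sketch with hypothetical numbers:

```python
# Taguchi quadratic loss function: L(y) = k * (y - T)^2, where T is the
# target. k is calibrated so the loss at the specification limit equals
# the known repair cost there. All numbers below are hypothetical.

def loss_constant(cost_at_limit, spec_limit, target):
    return cost_at_limit / (spec_limit - target) ** 2

def taguchi_loss(y, k, target):
    return k * (y - target) ** 2

# Hypothetical: target 10.0 mm, spec limit 10.5 mm, $50 repair at the limit.
k = loss_constant(50.0, 10.5, 10.0)    # k = 200.0
print(taguchi_loss(10.25, k, 10.0))    # → 12.5 (halfway to the limit)
print(taguchi_loss(10.0, k, 10.0))     # → 0.0 (on target: no loss)
```

Note the contrast with a goalpost view of specifications: a unit just inside the limit still carries nearly the full loss, which is why Taguchi pushes variation reduction around the target rather than mere conformance.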

H. Quality Vocabulary:
1. TQM (Total Quality Management)
i. A management philosophy which seeks to integrate all organizational functions (marketing,
finance, design, engineering, production, customer service …) to focus on meeting customer
needs and organizational objectives. It views organizations as a collection of processes. It
maintains that organizations must strive to continuously improve these processes by
incorporating the knowledge and experiences of workers.
ii. Total quality management is the management approach of an organization, centered on quality,
based on the participation of all of its members, and aiming at long-term success through
customer satisfaction and benefits to all members of the organization and to society.
iii. Total Quality Management is a structured system for satisfying internal and external customers
and suppliers by integrating the business environment, continuous improvement, and
breakthroughs with development, improvement, and maintenance cycles while changing
organizational culture.

2. Quality Improvement Cycle


i. A quality improvement cycle is a planned sequence of systematic and documented activities
aimed at improving a process.
ii. Improvements can be effected in two ways:
 By improving the process itself
 By improving the outcomes of the process.

3. Quality Management
i. All activities of the overall management function that determine the quality policy, objectives,
and responsibilities, and implement them by means such as quality planning, quality control,
quality assurance, and quality improvement within the quality system.
ii. A comprehensive and fundamental rule or belief, for leading and operating an organization,
aimed at continually improving performance over the long term by focusing on customers while
addressing the needs of all stakeholders.

4. Quality Planning
i. The activities that establish the objectives and requirements for quality and for the application of
quality system elements. Quality planning covers product planning, managerial and operational
planning, and the preparation of quality plans.
ii. Quality planning embodies the concepts of defect prevention and continuous improvement as
contrasted with defect detection.

iii. Advanced (Product) Quality Planning (AQP / APQP) is a structured process for defining key
characteristics important for compliance with regulatory requirements and achieving customer
satisfaction. AQP includes the methods and controls (i.e., measurements, tests) that will be used
in the design and production of a specific product or family of products (i.e., parts, materials).

5. Quality Control
i. Operational techniques and activities that are used to fulfill requirements for quality. It involves
techniques that monitor a process and eliminate causes of unsatisfactory performance at all
stages of the quality loop.
ii. Quality control describes the directed use of testing to measure the achievement of a specified
standard. Quality control is a formal (as in structured) use of testing. Quality control is a superset
of testing, although it is often used synonymously with testing. Roughly, you test to see if
something is broken, and with quality control you set limits that say, in effect, if this particular
stuff is broken then whatever you're testing fails.
iii. The concept of quality control in manufacturing was first advanced by Walter Shewhart.

6. Quality Assurance
i. The planned and systematic activities implemented within the quality system and demonstrated
as needed to provide adequate confidence that an entity will fulfill requirements for quality.
ii. Software QA involves the entire software development PROCESS - monitoring and improving
the process, making sure that any agreed-upon standards and procedures are followed, and
ensuring that problems are found and dealt with. It is oriented to 'prevention'.
iii. In developing products and services, quality assurance is any systematic process of checking to
see whether a product or service being developed is meeting specified requirements. Many
companies have a separate department devoted to quality assurance. A quality assurance system
is said to increase customer confidence and a company's credibility, to improve work processes
and efficiency, and to enable a company to better compete with others. Quality assurance was
initially introduced in World War II when munitions were inspected and tested for defects after
they were made. Today's quality assurance systems emphasize catching defects before they get
into the final product.
 ISO 9000 is an international standard that many companies use to ensure that their
quality assurance system is in place and effective. Conformance to ISO 9000 is said to
guarantee that a company delivers quality products and services. To follow ISO 9000, a
company's management team decides quality assurance policies and objectives. Next,
the company or an external consultant formally writes down the company's policies and
requirements and how the staff can implement the quality assurance system. Once this
guideline is in place and the quality assurance procedures are implemented, an outside
assessor examines the company's quality assurance system to make sure it complies
with ISO 9000. A detailed report describes the parts of the standard the company
missed, and the company agrees to correct any problems within a specific time. Once
the problems are corrected, the company is certified as in conformance with the
standard.

7. Quality System
i. The organizational structure, procedures, processes, and resources needed to implement quality
management.
ii. A system of management which assures that planning is carried out such that ALL staff know
what is expected and how to achieve the specified results.
iii. Quality Function Deployment (QFD) - a planning tool for incorporating customer quality
requirements through all phases of the product development cycle. Key benefits to this approach
are product improvement, increased customer satisfaction, reduction in the total product
development cycle, and increased market share.

2. Software Development, Acquisition and Operation Processes


1. The CSQA candidate must understand how quality software is built to be effective in assuring
and controlling quality throughout the software life cycle

2. Process Knowledge
a. Software Development, Operation and Maintenance Processes
i. Understanding the processes used in the organization to develop, operate and maintain
software systems.
ii. ISO 12207 – Software Life-Cycle standard

iii. Development Process (ISO 12207)


1. This life cycle process contains the activities and tasks of the developer of
software. The term development denotes both development of new software
and modification to an existing software. The development process is intended
to be employed in at least two ways: (1) As a methodology for developing
prototypes or for studying the requirements and design of a product or (2) As a
process to produce products. This process provides for developing software as
a stand-alone entity or as an integral part of a larger, total system. The
development process consists of the following activities along with their
specific tasks: Process implementation; System requirements analysis; System
design; Software requirements analysis; Software architectural design;
Software detailed design; Software coding and testing; Software integration;
Software qualification testing; System integration; System qualification
testing; Software installation; and Software acceptance support. The
positional sequence of these activities does not necessarily imply a time order.
These activities may be iterated and overlapped, or an activity may be recursed
to offset any implied or default Waterfall sequence. All the tasks in an activity
need not be completed in the first or any given iteration, but these tasks should
have been completed as the final iteration comes to an end. These activities
and tasks may be used to construct one or more developmental models (such
as the Waterfall, incremental, evolutionary, the Spiral, or other, or a
combination of these) for a project or an organization.

iv. Operation Process (ISO 12207)


1. This life cycle process contains the activities and tasks of the operator of a
software system. The operation of the software is integrated into the operation
of the total system. The process covers the operation of the software and
operational support to users. This process consists of the following activities
along with their specific tasks: Process implementation; Operational testing;
System operation; and User support.

v. Maintenance Process (ISO 12207)


1. The maintenance process contains the activities and tasks of the maintainer.
This process is activated when a system undergoes modifications to code and
associated documentation due to an error, a deficiency, a problem, or the need
for an improvement or adaptation. The objective is to modify an existing
system while preserving its integrity. Whenever a software product needs
modifications, the development process is invoked to effect and complete the
modifications properly. The process ends with the retirement of the system.
This process consists of the following activities along with their specific tasks:
Process implementation; Problem and modification analysis; Modification
implementation; Maintenance review/acceptance; migration; and Software
retirement.

vi. Supporting Processes (ISO 12207)


1. A supporting process supports any other process as an integral part with a
distinct purpose and contributes to the success and quality of the project. A
supporting process is invoked, as needed, by the acquisition, supply,
development, operation or maintenance process, or another supporting
process.
 Documentation Process - This is a process for recording information
produced by a life cycle process. The process defines the activities
that plan, design, develop, edit, distribute, and maintain those
documents needed by all concerned such as managers, engineers and
users of the system. The four activities along with their tasks are:
Process implementation; Design and development; Production; and
Maintenance.
 Configuration Management Process - This process is employed to
identify, define, and baseline software items in a system; to control
modifications and releases of the items; to record and report the status
of the items and modification requests; to ensure the completeness
and correctness of the items; and to control storage, handling and
delivery of the items. This process consists of: Process
implementation; Configuration identification; Configuration control;
Configuration status accounting; Configuration evaluation; and
Release management and delivery.
 Quality Assurance Process - This process provides the framework for
independently and objectively assuring (the acquirer or the customer)
of compliance of products or services with their contractual
requirements and adherence to their established plans. To be
unbiased, software quality assurance is provided with the
organizational freedom from persons directly responsible for
developing the products or providing the services. This process
consists of: Process implementation; Product assurance; Process
assurance; and Assurance of quality systems.
 Verification Process - This process provides the evaluations related to
verification of a product or service of a given activity. Verification
determines whether the requirements for a system are complete and
correct and that the outputs of an activity fulfill the requirements or
conditions imposed on them in the previous activities. The process
covers verification of process, requirements, design, code,
integration, and documentation. Verification does not alleviate the
evaluations assigned to a process; on the contrary, it supplements
them.
 Validation Process - Validation determines whether the final, as-built
system fulfills its specific intended use. The extent of validation
depends upon the project's criticality. Validation does not replace
other evaluations, but supplements them.
 Joint Review Process - This process provides the framework for
interactions between the reviewer and the reviewee; these may be, for example,
the acquirer and the supplier, respectively. At a joint review, the
reviewee presents the status and products of a life cycle activity of a
project to the reviewer for comment (or approval). The reviews are at
both management and technical levels.
 Audit Process - This process provides the framework for formal,
contractually established audits of a supplier's products or services.
At an audit, the auditor assesses the auditee's products and activities
with emphasis on compliance to requirements and plans. An audit
may well be conducted by the acquirer on the supplier.
 Problem Resolution Process - This process provides the mechanism
for instituting a closed-loop process for resolving problems and
taking corrective actions to remove problems as they are detected. In
addition, the process requires identification and analysis of causes
and reversal of trends in the reported problems. The term "problem"
includes non-conformance.

vii. Organizational Processes (ISO 12207)


1. This standard contains a set of four organizational processes. An organization
employs an organizational process to perform functions at the organizational,
corporate level, typically beyond or across projects. An organizational process
may support any other process as well. These processes help in establishing,
controlling, and improving other processes.
 Management Process - This process defines the generic activities and
tasks of the manager of a software life cycle process, such as the
acquisition process, supply process, operation process, maintenance
process, or supporting process. The activities cover: Initiation and
scope definition; Planning; Execution and control; Review and
evaluation; and Closure. Even though the primary processes, in
general, have similar management activities, they are sufficiently
different at the detailed level because of their different goals,
objectives, and methods of operation. Therefore, each primary process is an
instantiation (a specific implementation) of the management process.
 Infrastructure Process - This process defines the activities needed for
establishing and maintaining an underlying infrastructure for a life
cycle process. This process has the following activities: process
implementation; Establishment of the infrastructure; and Maintenance
of the infrastructure. The infrastructure may include hardware,
software, standards, tools, techniques, and facilities.
 Improvement Process - The standard provides the basic, top-level
activities that an organization (that is, acquisition, supply,
development, operation, maintenance, or a supporting process) needs
to assess, measure, control, and improve its life cycle process. The
activities cover: Process establishment; Process assessment; and
Process improvement. The organization establishes these activities at
the organizational level. Experiences from application of the life
cycle processes on projects are used to improve the processes. The
objectives are to improve the processes organization-wide for the
benefit of the organization as a whole and the current and future
projects and for advancing software technologies.
 Training Process - This process may be used for identifying and
making timely provision for acquiring or developing personnel
resources and skills at the management and technical levels. The
process requires that a training plan be developed, training material
be generated, and training be provided to the personnel in a timely
manner.

viii. Tailoring Process (ISO 12207)


1. Tailoring in the standard is the deletion of non-applicable or ineffective
processes, activities, and tasks. A process, an activity, or a task, that is not
contained in the standard but is pertinent to a project, may be included in the
agreement or contract. The standard requires that all the parties that will be
affected by the application of the standard be included in the tailoring
decisions. It should be noted that this process itself, however, cannot be
tailored.

b. Tools
i. Application of tools and methods that aid in planning, analysis, development, operation,
and maintenance for increasing productivity. For example, configuration management,
estimating, and associated tools.
ii. CONFIGURATION MANAGEMENT (CM) - Configuration management consists of
four separate tasks: identification, control, status accounting, and auditing. For every
change that is made to an automated data processing (ADP) system, the design and
requirements of the changed version of the system should be identified. The control
task of configuration management is performed by subjecting every change to
documentation, hardware, and software/firmware to review and approval by an
authorized authority. Configuration status accounting is responsible for recording and
reporting on the configuration of the product throughout the change. Finally, through
the process of a configuration audit, the completed change can be verified to be
functionally correct, and for trusted systems, consistent with the security policy of the
system. Configuration management is a sound engineering practice that provides
assurance that the system in operation is the system that is supposed to be in use. The
assurance control objective as it relates to configuration management of trusted systems
is to "guarantee that the trusted portion of the system works only as intended."[1]
Procedures should be established and documented by a configuration management plan
to ensure that configuration management is performed in a specified manner. Any
deviation from the configuration management plan could contribute to the failure of the
configuration management of a system entirely, as well as the trust placed in a trusted
system.
1. The purpose of configuration management is to ensure that these changes take
place in an identifiable and controlled environment and that they do not
adversely affect any properties of the system, or in the case of trusted systems,
do not adversely affect the implementation of the security policy of the
Trusted Computing Base (TCB). Configuration management provides
assurance that additions, deletions, or changes made to the TCB do not
compromise the trust of the originally evaluated system. It accomplishes this
by providing procedures to ensure that the TCB and all documentation are
updated properly.
2. Software Quality Assurance (SQA) assures that software Configuration
Management (CM) activities are performed in accordance with the CM plans,
standards, and procedures. SQA reviews the CM plans for compliance with
software CM policies and requirements and provides follow-up for
nonconformances. SQA audits the CM functions for adherence to standards
and procedures and prepares reports of its findings.
3. The CM activities monitored and audited by SQA include baseline control,
configuration identification, configuration control, configuration status
accounting, and configuration authentication. SQA also monitors and audits
the software library. SQA assures that:
 Baselines are established and consistently maintained for use in
subsequent baseline development and control.
 Software configuration identification is consistent and accurate with
respect to the numbering or naming of computer programs, software
modules, software units, and associated software documents.
 Configuration control is maintained such that the software
configuration used in critical phases of testing, acceptance, and
delivery is compatible with the associated documentation.
 Configuration status accounting is performed accurately including
the recording and reporting of data reflecting the software's
configuration identification, proposed changes to the configuration
identification, and the implementation status of approved changes.
 Software configuration authentication is established by a series of
configuration reviews and audits that exhibit the performance
required by the software requirements specification and the
configuration of the software is accurately reflected in the software
design documents.
 Software development libraries provide for proper handling of
software code, documentation, media, and related data in their
various forms and versions from the time of their initial approval or
acceptance until they have been incorporated into the final media.
 Approved changes to baselined software are made properly and
consistently in all products, and no unauthorized changes are made.
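At its simplest, configuration status accounting is a queryable log of change requests against baselined items, supporting the recording and reporting described above. A toy sketch (item names and CR numbers hypothetical):

```python
# Toy configuration status accounting log (names and IDs hypothetical):
# record change requests against baselined items and report their status.

class StatusAccount:
    def __init__(self):
        self.records = []

    def record(self, item, change_id, status):
        # One status-accounting entry per change request.
        self.records.append({"item": item, "change": change_id, "status": status})

    def report(self, item):
        # All recorded changes (proposed and approved) for a baselined item.
        return [r for r in self.records if r["item"] == item]

log = StatusAccount()
log.record("module_a v1.2", "CR-101", "approved")
log.record("module_a v1.2", "CR-102", "proposed")
print(len(log.report("module_a v1.2")))  # → 2
```

A real CM system adds far more (baselines, audits, access control), but the core accounting function is just this traceable record of who changed what, and with what status.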

c. CM Tool Examples:
i. Rational ClearCase®, a robust software artifact management tool, combined with
Rational ClearQuest®, the most flexible defect and change tracking tool on the market,
creates a software configuration management (SCM) solution that helps your team
handle the rigors of software development. Rational's SCM solution helps you manage
complex change throughout the development lifecycle, freeing your team from tedious
tasks that inhibit productivity.
1. Share code effortlessly and automate error-prone processes
Rational's SCM solution offers the essential functions of transparent code
sharing, version control, and advanced workspace and build management. By
automating many of the necessary, yet error-prone tasks associated with
software development, Rational's SCM solution frees teams of all sizes to
build better software faster.
2. Unite your team with a process that optimizes efficiency
Process is critical to streamlining software development. A sound process will
improve quality, increase development speed and ultimately enhance overall
team collaboration and productivity. Rational's SCM solution offers Unified
Change Management (UCM), a best practices process for managing change at
the activity level and controlling workflow.
3. Choose a solution that scales and make it your last SCM decision
Rational has an SCM solution that meets the needs of development teams of all
sizes. From small project teams to the global enterprise, Rational has the
right-size solution for your team. Using the same proven technology, processes and
protocols, you'll be able to select the right product today and seamlessly grow
with the product tomorrow – no conversion headaches, data disasters, or
process changes. Just smooth scalability.

iii. ESTIMATING - Long before software process improvement and the CMM were
common vocabulary in the software world, there was wide spread recognition that
software project managers needed better ways to estimate the costs and schedules of
software development projects. In the early 70’s two concurrent research efforts
resulted in two parametric software cost-estimating models available to the software
development community (COCOMO and PRICE S). Software cost-estimating tools
solicit input from the users describing their software project and from these inputs the
tool will derive a cost (and usually schedule) estimate for that project. The process that
drives inputs to outputs is either cost estimating relationships derived from regression
of actual data, analogies comparing input parameters to existing knowledge bases,
algorithms derived from theoretical research, or some combination of these
methodologies. At a minimum, the cost-estimating tools ask the user to describe:
1. The size of the software (either in source lines of code (SLOC), Function
Points (FPs), or some other sizing metric)
2. The anticipated amount of reuse
3. The type of software being developed (real time, operating systems, web
development, IS, etc.)
4. The operating platform of the software (commercial or military; ground, air,
space, or desktop)
5. A quantification of the organization’s software development productivity
Although cost and schedule estimates are the main deliverable of the software-
estimating tool, there are many other needs the right estimating tool can address for
your organization. Software project planning is really a balancing act between cost,
schedule, quality and content and the right software-estimating tool can help optimize
this balance. Many tools offer the capability of estimating latent defects in the delivered
product and then use this information to predict maintenance costs. This offers the
project manager the capability to make trade-offs based on the total cost of ownership,
rather than just development costs. Most tools have other trade-off and analysis features
as well – allowing the user to set a baseline and vary different parameters to optimize
cost and schedule. Another important feature that most cost-estimating tools deliver is
the ability to perform a Risk Analysis, so that a confidence level can accompany your
estimate.
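The risk-analysis feature typically produces a confidence level by sampling the uncertainty in the inputs. A crude Monte Carlo sketch, which is illustrative only and not any vendor's actual method; the size range and unit cost are hypothetical:

```python
import random

# Crude Monte Carlo risk analysis (illustrative only, not any vendor's
# method): sample size uncertainty with a triangular distribution,
# re-cost each sample, and report the cost at a chosen confidence level.

def cost_at_confidence(size_low, size_likely, size_high, cost_per_unit,
                       confidence=0.80, trials=10_000, seed=42):
    random.seed(seed)  # fixed seed so the sketch is reproducible
    costs = sorted(
        random.triangular(size_low, size_high, size_likely) * cost_per_unit
        for _ in range(trials)
    )
    return costs[int(confidence * trials)]

# Hypothetical project: 20-40 KLOC (most likely 28) at $10,000 per KLOC.
print(round(cost_at_confidence(20, 40, 28, 10_000)))
```

The 80%-confidence figure sits above the most-likely cost, which is exactly the cushion a project manager wants when quoting a budget.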
iv. Constructive Cost Model (COCOMO) - The original COCOMO model was first
published by Dr. Barry Boehm in 1981, and reflected the software development
practices of the day. In the ensuing decade and a half, software development techniques
changed dramatically. These changes included a move away from mainframe overnight
batch processing to desktop-based real-time turnaround; a greatly increased emphasis
on reusing existing software and building new systems using off-the-shelf software
components; and spending as much effort to design and manage the software
development process as was once spent creating the software product. These changes
and others began to make applying the original COCOMO model problematic. The
solution to the problem was to reinvent the model for the 1990s. After several years and
the combined efforts of USC-CSE, IRUS at UC Irvine, and the COCOMO II Project
Affiliate Organizations, the result is COCOMO II, a revised cost estimation model
reflecting the changes in professional software development practice that have come
about since the 1970s. This new, improved COCOMO is now ready to assist
professional software cost estimators for many years to come.

1. COCOMO II is a model that allows one to estimate the cost, effort, and
schedule when planning a new software development activity. It consists of
three submodels, each one offering increased fidelity the further along one is
in the project planning and design process. Listed in increasing fidelity, these
submodels are called the Applications Composition, Early Design, and Post-
architecture models. Until recently, only the last and most detailed submodel,
Post-architecture, had been implemented in a calibrated software tool. As
such, unless otherwise explicitly indicated, all further references on these web
pages to "COCOMO II" or "USC COCOMO II" can be assumed to be in
regard to the Post-architecture model.
2. The implemented tool provides a range on its cost, effort, and schedule
estimates, from best case to most likely to worst case outcomes. It also allows
a planner to easily perform "what if" scenario exploration, by quickly
demonstrating the effect adjusting requirements, resources, and staffing might
have on predicted costs and schedules (e.g., for risk management or job
bidding purposes).
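The flavor of parametric estimation is easiest to see in the original 1981 Basic COCOMO equations; note this is the simple 1981 model, not the COCOMO II Post-architecture model described above, whose coefficients and cost drivers differ:

```python
# Basic COCOMO (Boehm, 1981), "organic" mode -- a sketch of parametric
# estimation, NOT the COCOMO II Post-architecture model described above.

def basic_cocomo_organic(kloc):
    effort = 2.4 * kloc ** 1.05       # effort in person-months
    schedule = 2.5 * effort ** 0.38   # schedule in calendar months
    staff = effort / schedule         # average staffing level
    return effort, schedule, staff

effort, schedule, staff = basic_cocomo_organic(32)  # a 32 KLOC project
print(f"{effort:.1f} PM over {schedule:.1f} months with ~{staff:.0f} people")
```

The nonlinear exponents are the point: doubling the size more than doubles the effort, and schedule grows much more slowly than effort, which is why adding staff cannot compress a schedule arbitrarily.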

v. PRICE S – A tool distributed by Lockheed Martin PRICE Systems. This tool was
first developed in 1977, and is considered one of the first complex commercially
available tools used for software estimation. The equations used by this tool are
proprietary. However, descriptions of the methodology algorithms used can be found in
papers published by PRICE Systems. The PRICE S tool is based on Cost Estimation
Relationships (CERs) that make use of product characteristics in order to generate
estimates. CERs were determined by statistically analyzing completed projects where
product characteristics and project information were known, or developed with expert
judgment. A major input to PRICE S is Source Lines of Code (SLOC). Software size
may be input directly, or automatically calculated from quantitative descriptions
(function point sizing). Other inputs include software function, operating environment,
software reuse, complexity factors, productivity factors, and risk analysis factors.
Successful use of the PRICE S tool depends on the ability of the user to define inputs
correctly. It can be customized and calibrated to the needs of the user.

d. Project Management
i. Performing tasks to manage and steer a project toward a successful conclusion.
Understanding the documents developed in the tester’s organization to design,
document, implement, test, support, and maintain software systems.
ii. Project management is the discipline (art and science) of defining and managing the
vision, tasks, and resources required to complete a project. It's really the management
acumen that oversees the conversion of "vision" into "reality". Project management,
while traditionally applied to the management of projects, is now being deployed to
help organizations manage all types of change.

e. Acquisition
i. Acquisition Process (ISO 12207) - This life cycle process defines the activities and
tasks of the acquirer, who contractually acquires a software product or service. The
organization having the need for a product or service may be the owner. The owner
may contract all or parts of the acquisition tasks to an agent. The acquirer represents the
needs and requirements of the users. The acquisition process begins with the definition
of the need to acquire a software product or service. The process continues with the
preparation and issuance of a request for proposal, selection of a supplier, and
management of the acquisition process through the acceptance of the system. This
process consists of the following activities along with their specific tasks: Initiation;
Request-for-Proposal preparation; Contract preparation and update; Supplier
monitoring; and Acceptance and completion. The first three activities occur prior to the
agreement, the last two after the agreement.
ii. Obtaining software through purchase or contract

f. Supply Process (ISO 12207)


i. This life cycle process contains the activities and tasks of the supplier. The process may
be initiated either by a decision to prepare a proposal to answer an acquirer's request for
proposal or by signing and entering into a contract or an agreement with the acquirer to
provide a software service. The service may be the development of a software product
or a system containing software, the operation of a system with software, or the
maintenance of a software product. The process continues with the identification of
procedures and resources needed to manage and assure the service, including
development and execution of plans through delivery of the service to the acquirer.
The supply process consists of the following activities along with their specific tasks:
Initiation; Preparation of response; Contract; Planning; Execution and control; Review
and evaluation; and Delivery and completion. The first two activities occur prior to the
agreement, the last five after the agreement.
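The acquisition and supply activity sequences above, with their pre-/post-agreement split, can be sketched as data; the helper function is hypothetical, for illustration only:

```python
# ISO 12207 acquisition and supply activities in order, with the
# agreement point splitting each list as described above
# (acquisition: first 3 before the agreement; supply: first 2).

ACQUISITION = ["Initiation", "Request-for-Proposal preparation",
               "Contract preparation and update", "Supplier monitoring",
               "Acceptance and completion"]
SUPPLY = ["Initiation", "Preparation of response", "Contract", "Planning",
          "Execution and control", "Review and evaluation",
          "Delivery and completion"]

def split_by_agreement(activities, before_count):
    """Partition a process's activities around the agreement point."""
    return {"before": activities[:before_count],
            "after": activities[before_count:]}

acq = split_by_agreement(ACQUISITION, 3)
sup = split_by_agreement(SUPPLY, 2)
print(len(acq["after"]), len(sup["after"]))  # 2 5
```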

3. Roles/Responsibilities
a. Requirements
i. Tasks performed, techniques used, and documentation prepared in identifying,
prioritizing, and recording the business needs and problems to be resolved by the new
or enhanced system. Also, to assess the testability of requirements.

b. Design
i. Tasks performed, techniques used, and documentation prepared in defining the
automated solution to satisfy the business requirements and interfaces.
1. Person/Machine Interfaces
 Interfaces that include the operating system and the development
languages that are available, as well as the input/output facilities.
 The presentation of machine information to the human,
and the human interaction with the machine.
2. Communications Interfaces
 That include transmission of information between computers and
remote equipment (e.g., transmission of computer data over
networks).
3. Program Interfaces
 Interfaces for the exchange of information, whether on the same
computer, or distributed across multiple tiers of the application
architecture.
4. Build and Install
 Tasks performed, techniques used, and documentation prepared in
building the automated solution to satisfy the business requirements;
including installation of software.
5. Maintenance
 Software modification activities performed on an operational system
to resolve problems (correction), increase functionality
(enhancement), meet changing operating environment conditions
(adaptation), or improve operational efficiency or speed.

c. Quality Principles
i. Understanding the tenets of quality and their application in the enterprise’s quality
program.
ii. This document introduces the eight quality management principles on which the quality
management system standards of the revised ISO 9000:2000 series are based.
iii. These eight quality management principles are defined in ISO 9000:2000, Quality
management systems - Fundamentals and vocabulary, and in ISO 9004:2000, Quality
management systems - Guidelines for performance improvements:
1. Principle 1 Customer focus
 Organizations depend on their customers and therefore should
understand current and future customer needs, should meet customer
requirements and strive to exceed customer expectations.
2. Principle 2 Leadership
 Leaders establish unity of purpose and direction of the organization.
They should create and maintain the internal environment in which
people can become fully involved in achieving the organization's
objectives.
3. Principle 3 Involvement of people
 People at all levels are the essence of an organization and their full
involvement enables their abilities to be used for the organization's
benefit.
4. Principle 4 Process approach
 A desired result is achieved more efficiently when activities and
related resources are managed as a process.
5. Principle 5 System approach to management
 Identifying, understanding and managing interrelated processes as a
system contributes to the organization's effectiveness and efficiency
in achieving its objectives.
6. Principle 6 Continual improvement
 Continual improvement of the organization's overall performance
should be a permanent objective of the organization.
 Through management review, internal/external audits and
corrective/preventive actions, continually improve the effectiveness
of the Quality Management System.
7. Principle 7 Factual approach to decision making
 Effective decisions are based on the analysis of data and information.
8. Principle 8 Mutually beneficial supplier relationships
 An organization and its suppliers are interdependent and a mutually
beneficial relationship enhances the ability of both to create value.

iv. In their book, Software Quality: A framework for success in software development and
support, Curran and Sanders indicate that this quality process must adhere to four basic
principles:
1. Prevent defects from being introduced. At least as much effort should be
placed on keeping defects out of the code as on detecting their presence in the
code. Methods for doing this include the use of appropriate software
engineering standards and procedures; independent quality auditing to ensure
standards and procedures are followed; a formal method of
accumulating and disseminating lessons learned from past experiences and
mistakes; and high-quality inputs such as software tools and subcontracted
software.
2. Ensure that defects are detected and corrected as early as possible, as the
longer the errors go undetected, the more expensive they are to correct.
Therefore, quality controls must be put in place during all stages of the
development life cycle, and to all key development products such as
requirements, designs, documentation and code. These should all be subjected
to rigorous review methods such as inspections, walkthroughs, and technical
reviews.
3. Eliminate the causes as well as the symptoms of the defects. This is an
extension of the previous principle: removal of the defect without eliminating
the cause is not a satisfactory way to solve the problem. By removing the
cause, you have in effect improved the process (and recall that continuous
process improvement is another key tenet in how Total Quality Management
principles are applied to quality software); and lastly…
4. Independently audit the work for compliance with standards and procedures.
This is a two-part audit: at the process level, using SEI or SPR
assessment methodologies, and at the project level, to
determine whether project activities were carried out in accordance with the standards
and procedures established in the quality process, and whether those standards
and procedures are adequate to ensure the quality of the project in general.
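The second principle, that defects grow more expensive the later they are found, can be illustrated with a small calculation. The phase multipliers below are the commonly cited rule-of-thumb figures, not measurements from any particular project:

```python
# Illustration of principle 2: the later a defect is found, the more it
# costs to fix. Relative-cost multipliers are hypothetical rule-of-thumb
# values, not data from any specific study.

relative_fix_cost = {
    "requirements": 1,
    "design":       5,
    "coding":       10,
    "testing":      20,
    "production":   100,
}

def correction_cost(defects_found, base_cost=100):
    """Total correction cost, given a count of defects found per phase."""
    return sum(count * relative_fix_cost[phase] * base_cost
               for phase, count in defects_found.items())

# The same 10 defects, caught early versus late:
early = correction_cost({"requirements": 8, "design": 2})
late = correction_cost({"testing": 8, "production": 2})
print(early, late)  # 1800 36000
```

Even with modest multipliers, catching the same defects early is an order of magnitude cheaper, which is why quality controls belong in every stage of the life cycle.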

d. The "V" Concept of Software Development:


i. The "V" concept relates the build components of development to the test components
that occur during that build phase.
ii. Many of the process models currently used can be more generally connected by the 'V'
model where the 'V' describes the graphical arrangement of the individual phases. The
'V' is also a synonym for Verification and Validation. By the ordering of activities in
time sequence and with abstraction levels the connection between development and test
activities becomes clear. Activities lying opposite one another complement each other,
i.e., they serve as the basis for the test activities. For example, the system test is carried out on the
basis of the results of the specification phase.

“V” Model

Requirements --------------------- Acceptance Testing   (validate requirements)
  Specifications ----------------- System Testing
    Architectural Design --------- Integration Testing  (verify design)
      Detail Design -------------- Unit Testing
                  Coding

iii. The 'V' Model as proposed by William E Perry – William E Perry in his book,
Effective Methods of Software Testing proposes the 11 Step Software Testing Process,
also known as the Testing 'V' Model. The following figure depicts the same:

Development activities (left side of the 'V'): Define Software Requirements;
Build Software; Install Software; Operate and Maintain Software.

Test activities (right side of the 'V'), the 11 steps:
1. Assess Development Plan and Status
2. Develop the Test Plan
3. Test Software Requirements
4. Test Software Design
5. Program Phase Testing
6. Execute and Record Results
7. Acceptance Testing
8. Report Test Results
9. Test Software Installation
10. Test Software Changes
11. Evaluate Test Effectiveness

3. Quality Models and Quality Assessment
A. There are many quality models and standards. Most notably are the Software Engineering
Institute’s Capability Maturity Model (CMM), the Malcolm Baldrige National Quality Award, ISO
9000, SPICE (ISO 15504), ISO 12207 Standard for Information Technology - Software Life Cycle
Processes, The Institute of Electrical and Electronics Engineers, Inc. (IEEE) standards, and the Quality
Assurance Institute’s Approach to Implementing Quality. This category will test the CSQA candidate’s
understanding of model objectives, structure, pros and cons, and how assessments and baselines
can be developed using a quality model.

1. Purpose of a Quality Model


A. To satisfy business goals and objectives
B. Requirements are imposed by a customer
C. For competitive reasons
D. As a guide (roadmap) to continuous improvement

2. There are many models of software product quality that define software quality attributes. Three often
used models are discussed here as examples. McCall's Model of Software Quality (The GE Model, 1977)
incorporates 11 criteria encompassing product operation, product revision, and product transition.
Boehm's Model (1978) is based on a wider range of characteristics and incorporates 19 criteria.[2] The
criteria in these models are not independent; they interact with each other and often cause conflict,
especially when software providers try to incorporate them into the software development process. ISO
9126 incorporates six quality goals, each goal having a large number of attributes.

A. The criteria and goals defined in each of these models are listed below:

B. These three models and other references to software quality use the terms criteria, goals and
attributes interchangeably. To avoid confusion, we will use the terminology in ISO 9126 - goal,
attribute, metric.

B. Industry Quality Models


1. Malcolm Baldrige National Quality Award
A. In 1987, jumpstarting a small, slowly growing U.S. quality movement, Congress established the
Malcolm Baldrige National Quality Award to promote quality awareness, to recognize quality
and business achievements of U.S. organizations, and to publicize these organizations’
successful performance strategies. Now considered America’s highest honor for performance
excellence, the Baldrige Award is presented annually to U.S. organizations by the President of
the United States. Awards are given in manufacturing, service, small business, and, starting in
1999, education and health care. In conjunction with the private sector, the National Institute of
Standards and Technology designed and manages the award and the Baldrige National Quality
Program.
B. The Baldrige Award is given by the President of the United States to businesses—manufacturing
and service, small and large—and to education and health care organizations that apply and are
judged to be outstanding in seven areas: leadership, strategic planning, customer and market
focus, information and analysis, human resource focus, process management, and business
results.
C. Malcolm Baldrige was Secretary of Commerce from 1981 until his death in a rodeo accident in
July 1987. Baldrige was a proponent of quality management as a key to this country’s prosperity
and long-term strength. He took a personal interest in the quality improvement act that was
eventually named after him and helped draft one of the early versions. In recognition of his
contributions, Congress named the award in his honor.

2. Software Engineering Institute’s Capability Maturity Model


A. The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the
maturity of the software processes of an organization and for identifying the key practices that
are required to increase the maturity of these processes.

B. The CMM is designed to provide organizations with guidance on how to gain control of their
process for developing and maintaining software and how to evolve toward a culture of software
excellence. It does this by serving as a model against which an organization can determine its
current process maturity and by identifying the few issues most critical to software quality and
process improvement.

C. SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S.
Defense Department to help improve software development processes.

D. The Software CMM has become a de facto standard for assessing and improving software
processes. Through the SW-CMM, the SEI and community have put in place an effective means
for modeling, defining, and measuring the maturity of the processes used by software
professionals.
1) The Capability Maturity Model for Software describes the principles and practices
underlying software process maturity and is intended to help software organizations
improve the maturity of their software processes in terms of an evolutionary path from
ad hoc, chaotic processes to mature, disciplined software processes. The CMM is
organized into five maturity levels:
i. Initial (Level 1) - The software process is characterized as ad hoc, and
occasionally even chaotic. Few processes are defined, and success depends on
individual effort and heroics.
A. Characterized by chaos, periodic panics, and heroic efforts required
by individuals to successfully complete projects. Few if any
processes in place; successes may not be repeatable.

ii. Repeatable (Level 2) - Basic project management processes are established to
track cost, schedule, and functionality. The necessary process discipline is in
place to repeat earlier successes on projects with similar applications.
A. The Key Process Areas (KPA) at Level 2 focus on the software
project's concerns related to establishing basic project management
controls. They are Requirements Management, Software Project
Planning, Software Project Tracking and Oversight, Software
Subcontract Management, Software Quality Assurance, and Software
Configuration Management.
B. Software project tracking, requirements management, realistic
planning, and configuration management processes are in place;
successful practices can be repeated.
iii. Defined (Level 3) - The software process for both management and
engineering activities is documented, standardized, and integrated into a
standard software process for the organization. All projects use an approved,
tailored version of the organization's standard software process for developing
and maintaining software.
A. The key process areas at Level 3 address both project and
organizational issues, as the organization establishes an infrastructure
that institutionalizes effective software engineering and management
processes across all projects. They are Organization Process Focus,
Organization Process Definition, Training Program, Integrated
Software Management, Software Product Engineering, Intergroup
Coordination, and Peer Reviews.
B. Standard software development and maintenance processes are
integrated throughout an organization; a Software Engineering
Process Group is in place to oversee software processes, and
training programs are used to ensure understanding and compliance.
iv. Managed (Level 4) - Detailed measures of the software process and product
quality are collected. Both the software process and products are quantitatively
understood and controlled.
A. The key process areas at Level 4 focus on establishing a quantitative
understanding of both the software process and the software work
products being built. They are Quantitative Process Management and
Software Quality Management.
B. Metrics are used to track productivity, processes, and products.
Project performance is predictable, and quality is consistently high.
v. Optimizing (Level 5) - Continuous process improvement is enabled by
quantitative feedback from the process and from piloting innovative ideas and
technologies.
A. The key process areas at Level 5 cover the issues that both the
organization and the projects must address to implement continual,
measurable software process improvement. They are Defect
Prevention, Technology Change Management, and Process Change
Management.
B. The focus is on continuous process improvement. The impact of
new processes and technologies can be predicted and effectively
implemented when required.
vi. Predictability, effectiveness, and control of an organization's software
processes are believed to improve as the organization moves up these five
levels. While not rigorous, the empirical evidence to date supports this belief.
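The five maturity levels and their Key Process Areas, as listed above, lend themselves to a simple lookup structure. This sketch just restates those lists in code form:

```python
# The five SW-CMM maturity levels and their Key Process Areas (KPAs).
# Level 1 (Initial) has no KPAs.

CMM_LEVELS = {
    1: ("Initial", []),
    2: ("Repeatable", [
        "Requirements Management",
        "Software Project Planning",
        "Software Project Tracking and Oversight",
        "Software Subcontract Management",
        "Software Quality Assurance",
        "Software Configuration Management",
    ]),
    3: ("Defined", [
        "Organization Process Focus",
        "Organization Process Definition",
        "Training Program",
        "Integrated Software Management",
        "Software Product Engineering",
        "Intergroup Coordination",
        "Peer Reviews",
    ]),
    4: ("Managed", [
        "Quantitative Process Management",
        "Software Quality Management",
    ]),
    5: ("Optimizing", [
        "Defect Prevention",
        "Technology Change Management",
        "Process Change Management",
    ]),
}

def kpas_for(level):
    """Return (level name, KPA list) for a maturity level 1-5."""
    return CMM_LEVELS[level]

name, kpas = kpas_for(2)
print(name, len(kpas))  # Repeatable 6
```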

E. CMM = 'Capability Maturity Model', developed by the SEI. It's a model of 5 levels of
organizational 'maturity' that determine effectiveness in delivering quality software. It is geared
to large organizations such as large U.S. Defense Department contractors. However, many of the
QA processes involved are appropriate to any organization, and if reasonably applied can be

helpful. Organizations can receive CMM ratings by undergoing assessments by qualified
auditors.

F. People Capability Maturity Model (P-CMM)


The People Capability Maturity Model® (P-CMM®) adapts the maturity framework of the
Capability Maturity Model® for Software (CMM®) [Paulk 95], to managing and developing an
organization's work force. The motivation for the P-CMM is to radically improve the ability of
software organizations to attract, develop, motivate, organize, and retain the talent needed to
continuously improve software development capability. The P-CMM is designed to allow
software organizations to integrate work-force improvement with software process improvement
programs guided by the SW-CMM. The P-CMM can also be used by any kind of organization as
a guide for improving their people-related and work-force practices.

Based on the best current practices in the fields such as human resources and organizational
development, the P-CMM provides organizations with guidance on how to gain control of their
processes for managing and developing their work force. The P-CMM helps organizations to
characterize the maturity of their work-force practices, guide a program of continuous work-
force development, set priorities for immediate actions, integrate work-force development with
process improvement, and establish a culture of software engineering excellence. It describes an
evolutionary improvement path from ad hoc, inconsistently performed practices, to a mature,
disciplined development of the knowledge, skills, and motivation of the work force, just as the
CMM describes an evolutionary improvement path for the software processes within an
organization.

The P-CMM consists of five maturity levels that lay successive foundations for continuously
improving talent, developing effective teams, and successfully managing the people assets of the
organization. Each maturity level is a well-defined evolutionary plateau that institutionalizes a
level of capability for developing the talent within the organization.

Except for Level 1, each maturity level is decomposed into several key process areas that
indicate the areas an organization should focus on to improve its workforce capability. Each key
process area is described in terms of the key practices that contribute to satisfying its goals. The
key practices describe the infrastructure and activities that contribute most to the effective
implementation and institutionalization of the key process area.

The five maturity levels of the P-CMM are:


1) Initial

2) Repeatable - The key process areas at Level 2 focus on instilling basic discipline into
workforce activities. They are:
 Work Environment
 Communication
 Staffing
 Performance Management
 Training
 Compensation

3) Defined - The key process areas at Level 3 address issues surrounding the identification of
the organization's primary competencies and aligning its people management activities with
them. They are:
 Knowledge and Skills Analysis
 Workforce Planning
 Competency Development
 Career Development
 Competency-Based Practices
 Participatory Culture
4) Managed - The key process areas at Level 4 focus on quantitatively managing
organizational growth in people management capabilities and in establishing competency-based
teams. They are:
 Mentoring
 Team Building
 Team-Based Practices
 Organizational Competency Management
 Organizational Performance Alignment

5) Optimizing - The key process areas at Level 5 cover the issues that address continuous
improvement of methods for developing competency, at both the organizational and the
individual level. They are:
 Personal Competency Development
 Coaching
 Continuous Workforce Innovation

3. ISO 9000 / ISO 9004 Quality Management Principles and Guidelines on their Application
A. The ISO 9000 family of standards presents an overview of the standards and demonstrates how they
form a basis for continual improvement and business excellence.

B. ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which
replaces the previous standard of 1994) concerns quality systems that are assessed by outside
auditors, and it applies to many kinds of production and manufacturing organizations, not just
software. It covers documentation, design, development, production, testing, installation,
servicing, and other processes. The full set of standards consists of: (a)Q9001-2000 - Quality
Management Systems: Requirements; (b)Q9000-2000 - Quality Management Systems:
Fundamentals and Vocabulary; (c)Q9004-2000 - Quality Management Systems: Guidelines for
Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an
organization, and certification is typically good for about 3 years, after which a complete
reassessment is required. Note that ISO certification does not necessarily indicate quality
products - it indicates only that documented processes are followed.

C. The ISO 9000 family includes ISO 9001, ISO 9002, and ISO 9003, which were integrated into ISO
9001:2000, an international "quality management system" standard--a standard used to
assess an organization's management approach regarding quality.

D. ISO 9126 is the software product evaluation standard that defines six characteristics of software
quality:
1) Functionality is the set of attributes that bear on the existence of a set of functions and
their specified properties. The functions are those that satisfy stated or implied needs.
2) Reliability is the set of attributes that bear on the capability of software to maintain its
level of performance under stated conditions for a stated period of time.
3) Usability is the set of attributes that bear on the effort needed for use, and on the
individual assessment of such use, by a stated or implied set of users.
4) Efficiency is the set of attributes that bear on the relationship between the level of
performance of the software and the amount of resources used, under stated conditions.
5) Maintainability is the set of attributes that bear on the effort needed to make specified
modifications.
6) Portability is the set of attributes that bear on the ability of software to be transferred
from one environment to another.

E. ISO 9126 is the software product evaluation standard that serves to eliminate any
misunderstanding between purchaser and supplier.

F. Assurance of the process by which a product is developed (ISO 9001), and the evaluation of the
quality of the end product (ISO 9126) are important, and both require the presence of a system
for managing quality.

4. ISO 12207 - Standard for Information Technology - Life Cycle processes


A. ISO 12207 offers a framework for software life-cycle processes from concept through
retirement. It is especially suitable for acquisitions because it recognizes the distinct roles of
acquirer and supplier. In fact, the standard is intended for two-party use where an agreement or
contract defines the development, maintenance, or operation of a software system. It is not
applicable to the purchase of commercial-off-the-shelf (COTS) software products.

B. ISO 12207 describes five "primary processes"-- acquisition, supply, development, maintenance,
and operation. It divides the five processes into "activities," and the activities into "tasks," while
placing requirements upon their execution. It also specifies eight "supporting processes"--
documentation, configuration management, quality assurance, verification, validation, joint
review, audit, and problem resolution--as well as four "organizational processes"--management,
infrastructure, improvement, and training.
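The three process groups can be captured in a small lookup structure; this sketch simply restates the taxonomy above:

```python
# The ISO 12207 process taxonomy: five primary, eight supporting,
# and four organizational processes.

ISO_12207 = {
    "primary": ["acquisition", "supply", "development",
                "maintenance", "operation"],
    "supporting": ["documentation", "configuration management",
                   "quality assurance", "verification", "validation",
                   "joint review", "audit", "problem resolution"],
    "organizational": ["management", "infrastructure",
                       "improvement", "training"],
}

def classify(process):
    """Return the ISO 12207 group a process belongs to, or None."""
    for group, members in ISO_12207.items():
        if process in members:
            return group
    return None

print(classify("verification"))  # supporting
```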

C. Software life cycle architecture - The standard establishes a top-level architecture of the life
cycle of software. The life cycle begins with an idea or a need that can be satisfied wholly or
partly by software and ends with the retirement of the software. The architecture is built with a
set of processes and interrelationships among these processes. The derivation of the processes is
based upon two basic principles: modularity and responsibility.
1) Modularity - The processes are modular; that is, they are maximally cohesive and
minimally coupled to the practical extent feasible. An individual process is dedicated to
a unique function.
2) Responsibility - A process is considered to be the responsibility of a party in the
software life cycle. In other words, each party has certain responsibilities.
(Responsibility is one of the key principles of total quality management, as discussed
later.) This is in contrast to a "text book approach," where the life cycle functions could
be studied as topics or subjects, such as management, design, measurement, quality
control, etc.

D. SOFTWARE LIFE CYCLE - The period of time that begins when a software product is
conceived and ends when the software is no longer available for use. The software life cycle
typically includes a concept phase, requirements phase, design phase, implementation phase, test
phase, installation and checkout phase, operation and maintenance phase, and sometimes
retirement phase. (IEEE-STD-610)

5. SPICE ISO 15504 - Standard for Information Technology (Software Process Improvement and
Capability dEtermination)
A. SPICE (ISO/IEC 15504) is a major international initiative to develop a Standard for Software
Process Assessment. The project is carried out under the auspices of the International Committee
on Software Engineering Standards through its Working Group on Software Process Assessment
(WG10). Since 1993, the SPICE (ISO/IEC 15504) (Software Process Improvement and
Capability dEtermination) project, launched within the International Organization for Standardization, has
been developing a framework standard for software process assessment, bringing together the
major suppliers and users of assessment methods. Field trials of SPICE-based assessment
commenced in January 1995, and will continue until ISO/IEC 15504 is published as a full
International Standard, scheduled by 2002.
B. The project has three principal goals:
1) to develop a working draft for a standard for software process assessment
2) to conduct industry trials of the emerging standard
3) to promote the technology transfer of software process assessment into the software
industry world-wide

6. The Institute of Electrical and Electronics Engineers (IEEE) Standards
A. The IEEE is a global technical professional society serving the public interest and members in
electrical, electronics, computer, information & other technologies.

B. IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards
such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE
Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software
Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.

7. The Quality Assurance Institute’s Approach to Quality Implementation


A. QAI'S Strategic Model
1) QAI's Strategic Model contains four processes critical to your success:
i. Manage toward results
ii. Manage by process
iii. Manage by fact
iv. Manage continuous improvement
2) QAI believes that the foundation to any quality initiative is:
i. A well-defined mission statement
ii. A clearly defined vision

B. QAI has developed a customizable approach that is driven by your management's style,
customer/user needs, and a feedback system based upon metrics.

C. QAI's Approach for Managing Quality in a Changing World is designed to enable IT
organizations to restore credibility; build an environment where products are completed on time
and within budget; and provide proof that IT is operating effectively and efficiently.
1) Our Approach is a business-oriented approach. It recognizes the close working
relationship that must exist between IT management, customers/users, and staff. The
Approach shows methods for improving all of the activities within IT. The Approach
recognizes that many organizations must first reestablish credibility because of missed
schedules, overrun budgets, and failure to implement the needed requirements.

D. QAI has developed a detailed how-to approach to quality improvement. This approach is
composed of five process categories. Each category is segmented into specific how-to processes.
These five categories are:
1) Establish a quality environment within the I/S function.
2) Align information services with corporate objectives and define the desired results to
support those objectives.
3) Establish, implement, align and deploy processes to support the defined management
results.
4) Establish strategic and tactical dashboards to enable management to effectively use
quantitative data in their management processes.
5) Continuously improve the above process categories.

E. The Seven Phase Performance Implementation Framework:


1) Improving performance is not a one-time “quick fix”. It is a continuum of activity that
requires the understanding and buy-in of all employees. QAI subscribes to a
seven-phase approach:
2) Establishing A Partnership
i. The objectives of this phase are to establish and clarify the organization’s
vision; identify organizational strengths, weaknesses, constraints, and
inhibitors; capture management’s perception of “where the pain is”; identify
superior performers; set specific performance objectives; and develop a
strategic measurement dashboard.
3) Quality and Performance Analysis
i. This phase conducts an organization-wide data-gathering process to assist in
identifying strengths, weaknesses, opportunities, and threats; cataloguing the
processes that are currently performed; auditing existing standards for
applicability and status; inventorying the tools (manual or automated) in use;
and establishing performance baselines.
4) Develop a Consensus Approach
i. This phase is to prepare and deliver the overall organizational assessment
findings and gain the consensus agreement of the project sponsors to the
findings.
5) Developing Solutions
i. The aim of this phase is to apply proven industry work practices and QAI’s
improvement framework to accomplish the agreed-upon improvement goals.
6) Implementing Solutions
i. This phase implements the agreed-upon solutions within the organization’s
culture and constraints.
7) Evaluating Solutions
i. This phase ensures that the solutions are increasing performance as defined by
management’s strategic measurement dashboard.
8) Improving Solutions
i. This final phase is to work towards building a culture and environment
conducive to continuous improvement of performance enhancers.

8. ANSI = 'American National Standards Institute'


A. The primary industrial standards body in the U.S.; publishes some software-related standards in
conjunction with the IEEE and ASQ (American Society for Quality).

C. Model Selection Process


1. How an IT organization selects a model. Criteria may include
A. Applicability of model to the IT organization’s goals and objectives.
B. Management commitment to include needed:
C. Need for Baseline Assessments
D. Need for measurable goals and objectives

E. Using Models for Assessment and Baselines:


A. Product baselines are reference points in vital areas of the application that can be used to measure
development progress.

B. BASELINE
a. A specification or product that has been formally reviewed and agreed upon, that thereafter
serves as the basis for further development, and that can be changed only through formal
change control procedures. (SW-CMM (IEEE-STD-610))
b. A formally approved version of a configuration item, regardless of media, formally
designated and fixed at a specific time during the configuration item's life cycle. (IEEE/EIA
12207.0)
c. A configuration identification document or a set of documents formally designated by the
Government at a specific time during a configuration item’s life cycle. Baselines, plus
approved changes from those baselines, constitute the current configuration identification.
For configuration management, there are three baselines, which are established sequentially,
as follows:
i. Functional Baseline. The initially approved documentation describing a system’s
or configuration item’s functional characteristics and the verification tests required
to demonstrate the achievement of those specified functional characteristics.
ii. Allocated Baseline. The initially approved documentation describing a
configuration item’s interface characteristics that are allocated from those of the
higher level configuration item or those to a lower level, interface requirements
with interfacing configuration items, additional design constraints, and the
verification tests required to demonstrate the achievement of those specified
functional and interface characteristics.
iii. Product Baseline. The initially approved documentation describing all of the
necessary physical and functional characteristics of the configuration item,
including manufacturing processes and procedures, materials, any required joint
and combined operations interoperability characteristics of a configuration item
(including a complete summary of other service and allied interfacing
configuration items or systems and equipment); the selected physical
characteristics designated for production acceptance testing and tests necessary for
production and support of the configuration item. (DODD 5010.19, CM, 10/87)
d. A system life cycle documentation standard established to enhance program stability and
provide a critical reference point for measuring and reporting the status of program
implementation.
e. CMM’s Process capability baseline (PCB) - defined as “a documented characterization of
the range of expected results that would normally be achieved by following a specific
process under typical circumstances.”
f. CMM’s Process performance baseline (PPB) - defined as “a documented characterization of
the actual results achieved by following a process, which is used as a benchmark for
comparing actual process performance against expected process performance. A process
performance baseline is typically established at the project level, although the initial process
performance baseline will usually be derived from the organization’s process capability
baselines.”

4. Quality Management/Leadership
A. The most important prerequisite for a successful implementation of any major quality initiative is
commitment from executive management. It is management’s responsibility to establish strategic
objectives and build an infrastructure that is strategically aligned to those objectives. This category
will test the CSQA candidate’s understanding of the management processes used to establish the
foundation of a quality-managed environment.

B. Management’s Quality Directives

C. Mission Statement
a. Quality Vision
i. Is a clear definition of the result you are trying to achieve.
ii. Example of a Corporate Quality Vision:
A. Phelps Dodge Copper Products & Refining Corporation - Our journey is Total
Quality Management--fully satisfying our customers' requirements through a
process of continuous improvement. It's critical to understand that Total Quality
Management is not a short term program. It's a long term commitment aimed at
continuously improving the way we work, providing a safe work environment,
managing our business processes, and supplier selection/retention. It is our goal to
posture our company for market expansion, thereby providing improved job
security and quality of life for all.
b. Quality Goals
i. Explain how the vision will be achieved.
ii. Progress implies a goal. Without clear quality goals agreed upon up front, it is impossible to
determine if we have met our objectives. If we have not met our objectives, then we cannot
in good conscience say that a product is ready to ship. Unpleasant as it may be, hashing out
release criteria at the time the test plan is written is far superior to doing it at the release
checklist meeting.
iii. Examples of Quality Goals:
A. Quality Goals - To that goal the management of Savaré I.C. commits itself to carry
out a Quality Policy. This Policy aims at:
1. Assuring the training and the professional growth of its employees, providing
them with the resources that are necessary to carry out their duty to the best of their
abilities.
2. Developing its organization to foster improvements in the company services,
spotting and eliminating the causes of non-compliance.
3. Keeping under control the products of non-compliance, proposing actions of
improvement and verifying their correct application.

c. Quality Principles
i. Can be defined as procedures.
ii. Quality Management Principles provide understanding of and guidance on the application of
Quality Management.
iii. Examples of Quality Principles:
A. Mummert & Partners, Inc. - Our quality principles comprise the following
concepts:
a. Quality is a principle for our management and work
b. Best quality for our clients
c. Top-notch industry know-how and specialized expertise
d. Convincing project results
e. Maximum quality in performing projects
f. Powerful internal organization
g. Promotion of quality in human-resource work

d. Quality Values
i. Can be defined as standards
ii. Example of Quality Values:
A. Newport Corporation – To be the leading supplier of high quality optics,
instruments, micropositioning and measurement products and systems to the
Scientific and Research, Semiconductor Equipment, Computer Peripherals, and
Fiber-Optic Communications industries worldwide. Our Quality Values and
Beliefs:
a. Outstanding service to our Customers
b. Innovation, quality, and value in our products
c. Creativity and teamwork in the workplace
d. Respect for our Customers, suppliers, shareholders, employees, and
community
e. Honesty and integrity in all that we do

D. Quality Assurance Charter


a. Quality Policy
i. The statement of the enterprise’s commitment to Quality.
ii. Section 4.1.1 of the ISO 9001 standard requires that management "shall define and
document its policy for quality, including objectives for quality and commitment to quality .
. ." It goes on to say that the policy "shall be relevant to the supplier's organizational goals
and the expectations and needs of its customers."
A. Examples Of Corporate Quality Policies:
a. Zenith Software Limited, Bangalore. – “We practice continual
Improvement to achieve customer delight by providing Customer-Centric,
Cost-effective, Timely and Qualitative software solutions.”
b. Spectra-Physics Scanning Systems – “We the employees of Spectra-Physics
Scanning Systems make the personal commitment to first understand our
customers' expectations and then to meet or exceed our commitment to those
expectations by performing the correct tasks defect free, on time, every time.”
c. Argo-Tech Corporation - To meet or exceed all requirements agreed to
with our customers.

b. Quality Charter
i. The statement of the responsibilities & authorities of all Quality function
performers.
ii. A statement of corporate standards and quality of service.
iii. Examples of Quality Charters:
A. We at Temasek Polytechnic are committed to exceeding the expectations of our
stakeholders and customers in the delivery of all our courses and services. We will
do so with warmth, courtesy, grace and with typical Temasek style.
We will achieve academic and administrative excellence by encouraging and
expecting the creative involvement of all staff, by listening to our customers and
meeting their needs, and by continually improving our processes, products and
services. We will use technology innovatively and with a human touch.
We will by our own example, inspire our students and community to keep
improving themselves through continuous learning.
At Temasek Polytechnic, we will create the best environment to work and study, so
as to achieve our mission and vision for the betterment of the people of Singapore.
B. AITO is an association of independently-minded companies specialising in
particular areas or types of holiday and sharing a common dedication to high
standards of quality and personal service. AITO defines ‘quality’ as “providing a
level of satisfaction which, based upon the holiday information provided by the
tour operator, aims to meet or exceed a customer’s reasonable expectations,
regardless of the type of holiday sold or the price paid”.

c. Selling Quality
i. Educating all members in the value of quality & their responsibility for it.

5. Quality Assurance
A. Quality Assurance is a professional competency whose focus is directed at critical processes used to
build products and services. The profession is charged with the responsibility for tactical process
improvement initiatives that are strategically aligned to the goals of the organization. This category will
test the CSQA candidate’s ability to understand and apply quality assurance practices in support of the
strategic quality direction of the organization.

1. Quality Champion:
A. The spokesperson for quality within the IT organization; shares responsibility with IT
management to market and deploy quality programs.
B. Leadership: A Quality Champion has a vision of the quality organization, shares that vision with
others, and positively leads others towards that vision by example.
C. Co-operation: A Quality Champion embraces the spirit of teamwork by working in co-operation
with others in order to achieve the desired organizational, team or work group results.
D. Customer Focus: A Quality Champion actively works to make the customer a priority in their
workplace and perseveres regardless of the barriers that may be encountered. (Customer may be
internal and/or external.)
E. Process Orientation: A Quality Champion leads, supports and/or participates in the development
of processes which will lead to better organization, team, or work group performance.
F. People Oriented: A Quality Champion is an open-minded individual who is receptive to new
ideas, has a positive attitude, and encourages the participation of all individuals in the
organization, team or work group.

2. Establishing a Function to Promote and Manage Quality:


A. Build and mature a quality assurance function including staffing, planning, and plan execution.

3. Data-Gathering Techniques:
A. Identifying or developing and using problem reports, and the like, to gather the data that can be
used for the improvement of the enterprise’s information processes.
B. Common methods are:
a. Interviewing
b. Questionnaires
c. Observation
d. Repertory Grids
e. Concept Mapping
f. Joint Application Design

4. Problem Trend Analysis:


A. Examination of problem reports, incident reports, etc., to seek out error-prone products,
anticipate future error experience, and better manage error-detection resources.
B. Identifies repetitive problems and assesses how often given problems occur. It also provides a
mechanism to track progress of problem resolution. The main interest in this analysis is locating
where key problems are occurring and the frequency of occurrence.
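A trend analysis like the one described above can start from a very simple tabulation. The sketch below (with a hypothetical incident log; the field names are illustrative, not from any specific tool) counts problem reports by period and category so that repetitive problems and their frequency stand out:

```python
from collections import Counter

def problem_trend(reports):
    """Tabulate problem reports by (period, category) so that
    repetitive problems and their frequency over time stand out."""
    return Counter((period, category) for period, category in reports)

# Hypothetical incident log: (month, problem category)
log = [("Jan", "login failure"), ("Jan", "login failure"),
       ("Feb", "login failure"), ("Feb", "timeout")]
trend = problem_trend(log)
for (month, category), n in sorted(trend.items()):
    print(f"{month} {category}: {n}")
```

Sorting or filtering the resulting counts quickly shows where key problems occur and how often, which is the stated interest of this analysis.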

5. Process Identification:
A. Identifying activities that comprise a process so that analysis can be performed.

6. Process Analysis and Understanding:


A. Analysis of the gathered data to understand a process and its strengths and weaknesses, and
the ability to watch a process "in motion" so that recommendations can be made to remove
flaw-introducing actions and build upon successful flaw-avoidance and detection actions.

7. Post-Implementation Reviews (PIR):


A. Technique used to review results of projects after their completion and implementation.
Evaluation may include compliance with approved requirements, expected ROI, resource
utilization, and so forth.
B. After the implementation of any major new piece of software, it is often useful to "take stock"
and look back at how the process was managed. Not only will this highlight any issues which
may need resolving over the forthcoming months, but it will also help improve any future
implementations.
C. A PIR is an independent, objective review that is a key part of the benefits management process.
It is used to answer the questions: Did we achieve what we set out to do, in business terms? If
not, what should be done? What are the lessons learned that will improve future performance?

8. Quality Plan:
A. Develop a tactical quality plan.
B. SOFTWARE QUALITY ASSURANCE PLAN - Plan which indicates the means by which the
SQA requirements are met during the information system’s life cycle.
C. The Quality Plan describes how a developer's overall quality process guidelines will be applied
to a project. It defines what is meant by the various quality-related tasks in the Project Plan. The
Quality Plan outlines how you will build quality into the software and documentation. The dates
assigned to key tasks in the Quality Plan are entered into the project plan. The Quality Plan
describes:
i. How you control changes.
ii. How you ensure that the product meets the requirements (validation).
iii. How you ensure that the product works properly (verification).
iv. How you track multiple development builds of the software to avoid confusion
(configuration management).
v. How you plan for and execute testing, both incrementally during development
and for the entire product before delivery to EPRI.
vi. How you track and resolve defects.
vii. How and when you conduct design reviews, code reviews, walk throughs,
reviews of test scripts, reviews of test results (for example, is 100% of all code
checked, or only the most complex parts?).
viii. Definitions, methods, and criteria you use to determine whether the software
has passed each review.

9. Quality Tools:
A. Understanding, using and encouraging the use of quality tools.
B. Applying Quality Assurance to IT Technologies and IT Technical Practices
A. Management Tools:
a. Pareto Chart
i. Pareto charts are extremely useful because they can be used to identify
those factors that have the greatest cumulative effect on the system, and
thus screen out the less significant factors in an analysis. Ideally, this
allows the user to focus attention on a few important factors in a process.
They are created by plotting the cumulative frequencies of the relative
frequency data (event count data), in descending order. When this is done,
the most essential factors for the analysis are graphically apparent, and in
an orderly format.
ii. From the Pareto Chart it is possible to see that the initial focus in quality
improvement should be on reducing edge flaws. Although the print
quality is also of some concern, such defects are substantially less
numerous than the edge flaws.
iii. Pareto Charts are used for:
a. Focusing on critical issues by ranking them in terms of
importance and frequency. (ex. Which problem with Product X is
most significant to our customers?)
b. Prioritizing problems or causes to efficiently initiate problem
solving. (ex. Solution of what production problem will improve
quality most?)
c. Analyzing problems or causes by different groupings of data.
(ex. by machine, by process)
d. Analyzing the before-and-after impact of changes made in a
process. (ex. Did the initiation of a quality improvement
program reduce the number of defectives?)

iv. Pareto Principle - The phenomenon whereby a small number of concerns
is usually responsible for most quality problems. The principle is named
for Vilfredo Pareto, an Italian economist who found that a large
percentage of wealth was concentrated in a small proportion of the entire
population.
v. Steps for preparing a Pareto analysis:
a. Identify the problem area
b. Name the events/items/causes that will be analyzed
c. Count the named incidences
d. Rank the count by frequency (using bar chart)
e. Validate reasonableness of the Pareto analysis

b. Cause and Effect Diagram (Fishbone)

a. This diagram, also called an Ishikawa diagram (or fish bone diagram), is
used to associate multiple possible causes with a single effect. Thus, given
a particular effect, the diagram is constructed to identify and organize
possible causes for it.
b. The primary branch represents the effect (the quality characteristic that is
intended to be improved and controlled) and is typically labelled on the
right side of the diagram. Each major branch of the diagram corresponds
to a major cause (or class of causes) that directly relates to the effect.
Minor branches correspond to more detailed causal factors. This type of
diagram is useful in any analysis, as it illustrates the relationship between
cause and effect in a rational manner.
c. Having decided on which problem to focus on, a Cause and Effect
diagram of the related process is created to help the user see the entire
process and all of its components. In many instances, attempts to find key
problem areas in a process can be a hit-or-miss proposition. In this
instance, it was decided to collect data on the cure times of the material.
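The branch structure described above (effect, major cause branches, minor causal factors) maps naturally onto a nested mapping. This minimal text-only sketch uses hypothetical cause branches invented for the cure-time example; it renders the hierarchy as indented lines rather than the usual fish-bone drawing:

```python
def fishbone_lines(effect, causes):
    """Render a cause-and-effect hierarchy as indented text lines:
    the effect first, then each major branch with its minor causes."""
    lines = ["EFFECT: " + effect]
    for branch, minors in causes.items():
        lines.append("  +- " + branch)
        lines.extend("  |    - " + m for m in minors)
    return lines

# Hypothetical branches for the cure-time example in the text
for line in fishbone_lines("Inconsistent cure time", {
    "Material": ["supplier batch variation", "storage humidity"],
    "Machine":  ["oven temperature drift"],
    "Method":   ["no standard cure schedule"],
}):
    print(line)
```

Even in this plain form, the layout forces each suspected cause to be placed under a major category, which is the organizing value of the diagram.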

c. The tools listed above are ideally utilized in a particular methodology, which
typically involves either reducing the process variability or identifying specific
problems in the process. However, other methodologies may need to be developed
to allow for sufficient customization to a certain specific process. In any case, the
tools should be utilized to ensure that all attempts at process improvement include:

a. Discovery
b. Analysis
c. Improvement
d. Monitoring
e. Implementation
f. Verification

d. Furthermore, it is important to note that the mere use of the quality control tools
does not necessarily constitute a quality program. Thus, to achieve lasting
improvements in quality, it is essential to establish a system that will continuously
promote quality in all aspects of its operation

B. Problem Identification Tools:


i. Demonstrate an understanding of tools such as flow charts, check sheets, and
brainstorming.

a. Flowchart
i. Flowcharts are pictorial representations of a process. By breaking
the process down into its constituent steps, flowcharts can be
useful in identifying where errors are likely to be found in the
system.
ii. By breaking down the process into a series of steps, the flowchart
simplifies the analysis and gives some indication as to what event
may be adversely impacting the process.

b. Checksheet
i. The function of a checksheet is to present information in an efficient,
graphical format. This may be accomplished with a simple listing of
items. However, the utility of the checksheet may be significantly
enhanced, in some instances, by incorporating a depiction of the
system under analysis into the form.
ii. A defect location checksheet is a very simple example of how to
incorporate graphical information into data collection.
iii. Additional data collection checksheet examples demonstrate the
utility of this tool. The data collected will be used in subsequent
examples to demonstrate how the individual tools are often
interconnected
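A tally-style check sheet of the kind described above is easy to sketch. The defect categories here are hypothetical; the point is the form itself, one row per predefined category with a tick mark per occurrence:

```python
from collections import Counter

def tally_checksheet(observations, categories):
    """Build a simple tally check sheet: one row per predefined
    category with a tick mark for each observed occurrence."""
    counts = Counter(observations)
    return [(cat, "|" * counts.get(cat, 0), counts.get(cat, 0))
            for cat in categories]

# Hypothetical defect log for one inspection shift
rows = tally_checksheet(
    ["scratch", "dent", "scratch", "scratch"],
    ["scratch", "dent", "stain"])
for cat, marks, n in rows:
    print(f"{cat:10s} {marks:6s} ({n})")
```

Listing the categories up front (rather than deriving them from the data) is deliberate: a check sheet is prepared before collection starts, and a zero-count row is itself useful information.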

C. Problem Analysis Tools:


i. Demonstrate an understanding of tools such as histograms, scatter diagrams,
control charts, and force field analysis.
c. Histogram
i. Histograms provide a simple, graphical view of accumulated
data, including its dispersion and central tendency. In addition to
the ease with which they can be constructed, histograms provide
the easiest way to evaluate the distribution of data.
ii. Data for a test of cure times was collected and analyzed using a
histogram.
iii. From this chart, the cure-time distribution does not appear to be a
normal distribution as might be expected, but is bimodal instead.
Deviations from a normal distribution in a histogram suggest the
involvement of additional influences in the process.
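Binning data into a histogram needs nothing beyond fixed-width buckets. The sample below is hypothetical, constructed so the text bar chart shows two separate clusters, i.e. the bimodal shape the note describes:

```python
from collections import Counter

def histogram_bins(data, bin_width):
    """Group measurements into fixed-width bins; returns sorted
    (bin_start, count) pairs suitable for a text bar chart."""
    counts = Counter(int(x // bin_width) * bin_width for x in data)
    return sorted(counts.items())

# Hypothetical cure-time sample (minutes); a bimodal shape such as
# the one described above appears as two separate clusters of bars
times = [24, 25, 26, 25, 24, 35, 36, 34, 35, 36]
for start, n in histogram_bins(times, 5):
    print(f"{start:3d}-{start + 5:<3d} {'#' * n}")
```

When a supposedly normal process yields two peaks like this, it usually means two distinct influences (e.g. two machines, two material batches) are mixed in one data set.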

d. Scatter Diagram
i. Scatter diagrams are graphical tools that attempt to depict the
influence that one variable has on another. A common diagram
of this type usually displays points representing the observed
value of one variable corresponding to the value of another
variable.
ii. Applying curing time test data to create a scatterplot, it is
possible to see that there are very few defects in the range of
approximately 29.5 to 37.0 minutes. Thus, it is possible to
conclude that by establishing a standard cure time within this
range, some degree of quality improvement is likely.
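Alongside the visual scatter diagram, the strength of one variable's influence on another can be quantified with a correlation coefficient. This is a plain-Python sketch with hypothetical (cure time, defect count) pairs:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient -- a numeric companion to the
    scatter diagram's visual check of one variable's influence on
    another (+1 perfect positive, -1 perfect negative, 0 none)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical pairs of (cure time in minutes, defect count)
cure = [25, 28, 31, 34, 37]
defects = [9, 6, 2, 1, 3]
r = pearson_r(cure, defects)
```

A caution worth noting: correlation summarizes only linear influence, so a U-shaped relationship like the "few defects in the middle range" pattern in the text can score near zero; the scatter diagram itself remains the better first check.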

e. Control Chart

i. The control chart is the fundamental tool of statistical process
control, as it indicates the range of variability that is built into a
system (known as common cause variation). Thus, it helps
determine whether or not a process is operating consistently or if
a special cause has occurred to change the process mean or
variance.
ii. The bounds of the control chart are marked by upper and lower
control limits that are calculated by applying statistical formulas
to data from the process. Data points that fall outside these
bounds represent variations due to special causes, which can
typically be found and eliminated. On the other hand,
improvements in common cause variation require fundamental
changes in the process.
iii. Applying statistical formulas to the data from the cure-time tests
of base material, it was possible to construct X-bar and R charts
to assess its consistency. As a result, we can see that the process
is in a state of statistical control.
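The "statistical formulas" mentioned above reduce, for X-bar and R charts, to centre lines plus limits scaled by tabulated constants. This sketch assumes rational subgroups of size 5, for which the standard SPC factors are A2 = 0.577, D3 = 0, D4 = 2.114 (the subgroup data is hypothetical):

```python
def xbar_r_limits(subgroups, a2=0.577, d3=0.0, d4=2.114):
    """Centre lines and control limits for X-bar and R charts.
    The default factors A2, D3, D4 are the standard SPC constants
    for rational subgroups of size 5."""
    xbars = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    grand_mean = sum(xbars) / len(xbars)   # X-bar chart centre line
    r_bar = sum(ranges) / len(ranges)      # R chart centre line
    return {
        "xbar": (grand_mean - a2 * r_bar, grand_mean,
                 grand_mean + a2 * r_bar),
        "r": (d3 * r_bar, r_bar, d4 * r_bar),
    }

# Hypothetical cure-time subgroups, five measurements each
limits = xbar_r_limits([[31, 30, 32, 29, 33],
                        [30, 31, 30, 32, 31],
                        [29, 33, 31, 30, 32]])
```

Points plotted outside the returned (lower, centre, upper) bounds signal special-cause variation; points inside reflect the common-cause variation built into the process, exactly as the text describes.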

f. Run Charts
i. Run charts are used to analyze processes according to time or
order. Run charts are useful in discovering patterns that occur
over time.
ii. Run charts evolved from the development of these control charts,
but run charts focus more on time patterns while a control chart
focuses more on acceptable limits of the process.
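One common run-chart pattern test is the length of the longest run of points on one side of the median. This simplified sketch treats a point exactly on the median as breaking the run (some conventions skip such points instead):

```python
def longest_run(values):
    """Length of the longest run of consecutive points strictly above
    or strictly below the median.  An unusually long run (commonly
    eight or more points) suggests a process shift over time rather
    than random variation."""
    s = sorted(values)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    best = current = 0
    prev_side = 0
    for v in values:
        side = (v > median) - (v < median)  # +1 above, -1 below, 0 on
        if side != 0 and side == prev_side:
            current += 1
        elif side != 0:
            current = 1
        else:
            current = 0   # a point on the median breaks the run here
        prev_side = side
        best = max(best, current)
    return best
```

Because the test looks only at order in time, not at control limits, it captures exactly the distinction drawn above between run charts and control charts.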

g. Forcefield Analysis
i. A technique which helps us to achieve improvement by
considering those factors or forces that encourage change or
those which work against change. Improvement will happen only
if the encouraging factors are strengthened or the inhibiting
factors are weakened. It should be used whenever a change or
improvement is needed.
ii. A method of analyzing a situation by looking at all the forces and
factors affecting the issue.
1. On the left side, list the forces that are helping, or could
help, drive the group towards the goal.
2. On the right side, list the forces that are hindering the
situation, or could get in the way of reaching the goal.
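The two-column layout described in steps 1 and 2 can be sketched directly. The goal and forces below are hypothetical examples invented for illustration:

```python
from itertools import zip_longest

def force_field(goal, driving, restraining):
    """Two-column force field layout: driving forces (pushing toward
    the goal) on the left, restraining forces on the right."""
    width = max([len(f) for f in driving] + [len("DRIVING")]) + 2
    lines = ["GOAL: " + goal, f"{'DRIVING':<{width}}| RESTRAINING"]
    for d, r in zip_longest(driving, restraining, fillvalue=""):
        lines.append(f"{d:<{width}}| {r}")
    return lines

# Hypothetical forces for a process-improvement initiative
for line in force_field("Adopt code reviews",
                        ["management sponsorship", "defect cost data"],
                        ["schedule pressure", "fear of criticism"]):
    print(line)
```

Seeing the two lists side by side is the whole technique: improvement comes from strengthening entries on the left or weakening entries on the right.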

B. The objective of this skill is to identify where and how the Quality Assurance professional can
control IT technologies and technical practices such as:

A. Backup and Recovery


i. Restart application after problems are encountered.

B. Security
i. Protecting access to your organization’s technology assets.

C. Privacy
i. Ensuring customer’s confidential data is not compromised.

D. Client server.
i. Identifying risks of distributed processing.
ii. Distributed Processing - Refers to any of a variety of computer systems that
use more than one computer, or processor, to run an application. This includes
parallel processing, in which a single computer uses more than one CPU to
execute programs. More often, however, distributed processing refers to
local-area networks (LANs) designed so that a single program can run
simultaneously at various sites. Most distributed processing systems contain
sophisticated software that detects idle CPUs on the network and parcels out
programs to utilize them.

E. Web based systems


i. Reducing development cycle time with disciplined processes.
ii. Web-based systems integration is the art of combining multiple
systems (including Legacy systems and proprietary software
applications) into a new system that is accessible through a Web
browser.

F. E-Commerce.
i. Brochure ware, storefront, or a selling channel.
ii. Brochure ware - A website that is little more than a corporate brochure, video,
or other corporate media.
iii. Storefront - The software you use to build and manage your online store is
critical to the overall success of your e-commerce venture. Your customers
will want easy navigation of your product catalog, all the modern features of a
shopping cart system, a simple check-out process, flexible payment options
and clear confirmation that their order has been received.

G. E-Business.
i. A new business strategy built around demand and trust.
ii. eBusiness is an interaction with business partners, where the interaction is
enabled by information technology. This is an accurate definition, but doesn't
give us much insight into the excitement surrounding eBusiness and
eCommerce.
iii. It is the information technology available to "enable" business transactions
electronically

H. Enterprise Resource Planning (ERP).


i. ERP (Enterprise resource planning) is an industry term for the broad set of
activities supported by multi-module application software that helps a
manufacturer or other business manage the important parts of its business,
including product planning, parts purchasing, maintaining inventories,
interacting with suppliers, providing customer service, and tracking orders.
ii. ERP attempts to integrate all departments and functions across a company
onto a single computer system that can serve all those different departments'
particular needs.
iii. ERP's best hope for demonstrating value is as a sort of battering ram for
improving the way your company takes a customer order and processes it into
an invoice and revenue—otherwise known as the order fulfillment process.
That is why ERP is often referred to as back-office software.

C. Customer Relationship Management (CRM).


A. Understanding how to build a partnership with your most valuable customers.
B. Customer Relationship Management (CRM) is the seamless coordination between
sales, marketing, customer service, field support and other functions that touch your
customer. The right CRM strategy integrates people, process and technology to
maximize all of your relationships – with your day-to-day customers, distribution
channel partners, internal customers and suppliers.

D. Supply Chain Management (SCM).


A. The umbrella under which products are ordered, created, and delivered.
B. A supply chain is a network of facilities and distribution options that performs the
functions of procurement of materials, transformation of these materials into
intermediate and finished products, and the distribution of these finished products to
customers. Supply chains exist in both service and manufacturing organizations,
although the complexity of the chain may vary greatly from industry to industry and
firm to firm.
C. Supply Chain Management focuses on globalization and information management tools
which integrate procurement, operations, and logistics from raw materials to customer
satisfaction.

E. Knowledge Management (KM).
A. The process, once institutionalized, conveys knowledge embedded in its users to others
in the organization.
B. Knowledge management involves the identification and analysis of available and
required knowledge assets and knowledge asset related processes, and the subsequent
____________________________________________________________________________________________________________________________
CSQA Exam Notes Revised: 08/19/2002
Page: 42
planning and control of actions to develop both the assets and the processes so as to
fulfill organizational objectives.
C. "Knowledge Management caters to the critical issues of organizational adaptation,
survival, and competence in the face of increasingly discontinuous environmental change....
Essentially, it embodies organizational processes that seek synergistic combination of
data and information processing capacity of information technologies, and the creative
and innovative capacity of human beings." (Journal for Quality & Participation,
Hewlett-Packard Executive Intelligence, and Asian Strategy Leadership Institute
Review).
D. Unlike most conceptions of knowledge management proposed in information systems
research and in trade press, this conception is better related to the new model of
business strategy discussed earlier. Its primary focus is on How can knowledge
management enable business strategy for the new world of business? and What
strategic outcomes should knowledge management try to achieve? rather than What
goes into the nuts and bolts of the machinery that supports knowledge management? It
relates more closely to the dynamic view of business strategy as driver of the corporate
information strategy. Furthermore, unlike most prevailing definitions, this interpretation
explicitly addresses the strategic distinction between knowledge and information
explained earlier. (Information Strategy: The Executive's Journal)

F. Application Service Providers (ASP).


A. A contractual service offering hosting, managing, and access to an application (that is
commercially available) from a centrally managed facility.
B. The terms "ASP" and "Application Service Provider" are applied specifically to
companies that provide services via the Internet. In most cases, the term ASP has come
to denote companies that supply software applications and/or software-related services
over the Internet.
i. Here are the most common features of an ASP:
A. The ASP owns and operates a software application.
B. The ASP owns, operates and maintains the servers that run the
application. The ASP also employs the people needed to maintain the
application.
C. The ASP makes the application available to customers everywhere
via the Internet, either in a browser or through some sort of "thin
client."
D. The ASP bills for the application either on a per-use basis or on a
monthly/annual fee basis. In many cases, however, the ASP can
provide the service for free or will even pay the customer.
C. Simply stated, an ASP is a service provider whose specialization is the implementation
and ongoing operations management of one or more networked applications on behalf
of its customer. One key attribute beginning to rapidly evolve is the emphasis on Web-
based e-business application management as an important differentiator from the more
traditional outsourced client-server application management services.

G. Data Warehousing (DW).


A. A repository of historical data used to make decisions.
B. Data Warehousing - A collection of data designed to support management decision
making. Data warehouses contain a wide variety of data that present a coherent picture
of business conditions at a single point in time. Development of a data warehouse
includes development of systems to extract data from operating systems plus
installation of a warehouse database system that provides managers flexible access to
the data. The term data warehousing generally refers to combining many different
databases across an entire enterprise.
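The extract-and-combine idea above can be sketched in a few lines; the source tables and figures below are invented for illustration:

```python
# Toy extract-transform-load (ETL) step: pull rows from two "operational"
# source systems and combine them into one warehouse-style table keyed by
# day, giving managers a single coherent picture of business conditions.
orders_east = [("2002-08-01", 120.0), ("2002-08-02", 80.0)]   # source system 1
orders_west = [("2002-08-01", 200.0)]                         # source system 2

warehouse = {}
for day, amount in orders_east + orders_west:          # extract and merge
    warehouse[day] = warehouse.get(day, 0.0) + amount  # transform: aggregate by day

# warehouse now holds one combined daily-sales view across both sources.
```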

H. Outsourcing.

A. Developing a process to solicit service providers (Request For Proposal - RFP), the
process for selecting a service provider, and processes to manage and control
expectations and status.
B. Outsourcing - The act of hiring an outside source, usually a consultant or application
service provider, to transfer components or large segments of an organization's internal
IT structure, staff, processes and applications for access via a virtual private network or
an Internet-based browser.

6. Quality Control Practices


A. Quality control is a component of internal control. Quality control comprises all methods employed
to detect the presence of defects. Quality control should occur both during the build of a product or
service and after completion. The producer of the product or service, an independent group or
person, and/or the customer, can perform quality control. This category will test the candidate’s
understanding of quality control principles and methods.

D. System of Internal Control


1. Internal System Controls
A. A basic understanding of typical manual and automated controls within an information system
designed to ensure data integrity, process integrity, financial integrity, security, and systems
performance.
i. Management Controls
A. Knowledge of the methods and procedures used by management to provide
direction to their staff and to ensure, through governance, accounting, and
reporting, the proper operation of the information system function.

ii. Application Controls


A. Knowledge of how software applications are controlled.

iii. Quality Control


A. Knowledge of the subset of management controls focused on assuring a completed
project meets the user’s true needs.

E. Verification and Validation


1. Verification Methods
A. In Process Reviews
A. Walkthroughs
i. The technique, ranging from informal peer reviews to structured reviews for
the purpose of early error detection for removal, is typically used for design
products and untested code. Knowledge should cover principles, rationale,
rules, and psychology of the technique.
ii. The primary purpose of a walkthrough is to identify defects in the product as
early in the Systems Life Cycle as possible. The earlier a defect is identified,
the less costly it is to correct, and the easier it is to take corrective action.
iii. Additional benefits of conducting walkthroughs (actually significant benefits
in our situation) are:
A. An opportunity to monitor adherence to standards.
B. Ensure readable, structured and easy to maintain products.
C. A method for disseminating new concepts and conventions.
D. A method for exposing people to new areas of the system.
E. A method for improving group communication.

B. Inspections
i. Planned and formal technique used to verify compliance of specific
development products against their documented standards and requirements.

Knowledge should cover purpose, structure (including rules), and roles of
participants.
ii. Formal code inspections are one of the most powerful techniques available for
improving code quality. Code inspections -- peers reviewing code for bugs --
complement testing because they tend to find different mistakes than testing
does.
iii. Code inspections are even more useful when inspectors hunt for specific errors
rather than casually browse for bugs.
iv. The goal is to identify and remove bugs before testing the code.
v. Software Inspections are a disciplined engineering practice for detecting and
correcting defects in software artifacts, and preventing their leakage into field
operations. Software Inspections were introduced in the 1970s at IBM, which
pioneered their early adoption and later evolution. Software Inspections
provide value in improving software reliability, availability, and
maintainability.
vi. The Return on Investment for Software Inspections is defined as net savings
divided by detection cost. Savings result from early detection and correction
avoiding the increased cost that comes with the detection and correction of
defects later in the life cycle. An undetected major defect that escapes
detection and leaks to the next phase may cost two to ten times to detect and
correct. A minor defect may cost two to four times to detect and correct. The
net savings then are up to nine times for major defects and up to three times
for minor defects. The detection cost is the cost of preparation effort and the
cost of conduct effort.
vii. The cost of performing Software Inspections includes the individual
preparation effort of each participant before the session and the conduct effort
of participants in the inspections session. Typically, 4-5 people participate and
expend 1-2 hours of preparation and 1-2 hours of conduct each. This cost of 10
to 20 hours of total effort per session results in the early detection of 5-10
defects in 250-500 lines of new development code or 1000-1500 lines of
legacy code.
viii. Software Inspections are a rigorous form of peer reviews, a Key Process Area
(KPA) of the CMM. Although peer reviews are part of achieving CMM level 3,
and many organizations limit their software process improvement agenda to
the KPAs for the maturity level they are seeking to achieve, the population of
Software Inspections adopters ranges from level 1 to 5.
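The ROI arithmetic in items vi and vii can be made concrete with a short calculation; the defect count, per-defect costs, and hourly rate below are illustrative assumptions, not fixed industry figures:

```python
def inspection_roi(defects_found, later_cost_per_defect, now_cost_per_defect,
                   detection_hours, hourly_rate):
    """ROI for an inspection session: net savings divided by detection cost."""
    # Savings: each defect caught in inspection avoids the higher cost of
    # detecting and correcting it in a later life-cycle phase.
    savings = defects_found * (later_cost_per_defect - now_cost_per_defect)
    detection_cost = detection_hours * hourly_rate  # preparation + conduct effort
    return (savings - detection_cost) / detection_cost

# Example: 8 defects found; $1000 to fix later vs. $100 now; 5 participants
# spending 1.5h preparation + 1.5h conduct each = 15 hours at $60/hour.
roi = inspection_roi(8, 1000, 100, 15, 60)
```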

C. Requirements Tracing
i. Methods to ensure that requirements are implemented correctly during each
software development life cycle phase.
ii. The development and use of Requirements Tracing techniques originated in
the early 1970s to improve the completeness, consistency, and traceability of
the requirements of a system. They provide an answer to the following
questions:
A. What mission need is addressed by a requirement?
B. Where is a requirement implemented?
C. Is this requirement necessary?
D. How do I interpret this requirement?
E. What design decisions affect the implementation of a requirement?
F. Are all requirements allocated?
G. Why is the design implemented this way and what were the other
alternatives?
H. Is this design element necessary?
I. Is the implementation compliant with the requirements?
J. What acceptance test will be used to verify a requirement?
K. Are we done?
L. What is the impact of changing a requirement?
iii. Requirements traceability is defined as the ability to describe and follow the
life of a requirement, in both a forward and backward direction (i.e., from its
origins, through its development and specification, to its subsequent
deployment and use, and through periods of ongoing refinement and iteration
in any of these phases).
A. Cross referencing - This involves embedding phrases like "see
section x" throughout the project documentation (e.g., tagging,
numbering, or indexing of requirements, and specialized tables or
matrices that track the cross references).
B. Specialized templates and integration or transformation documents -
These are used to store links between documents created in different
phases of development.
C. Restructuring - The documentation is restructured in terms of an
underlying network or graph to keep track of requirements changes
(e.g., assumption-based truth maintenance networks, chaining
mechanisms, constraint networks, and propagation).
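The cross-referencing technique can be sketched as a small forward-traceability matrix; the requirement, design, and test IDs below are invented for illustration:

```python
# Forward traceability: each requirement is linked to the design elements
# that implement it and the tests that will verify it.
trace = {
    "REQ-1": {"design": ["DES-3"], "tests": ["TC-7", "TC-8"]},
    "REQ-2": {"design": [],        "tests": []},            # not yet allocated
    "REQ-3": {"design": ["DES-1"], "tests": ["TC-2"]},
}

def unallocated(matrix):
    """Answers 'Are all requirements allocated?' by listing requirements
    with no design element traced to them."""
    return sorted(req for req, links in matrix.items() if not links["design"])

def unverified(matrix):
    """Answers 'What acceptance test will be used to verify a requirement?'
    by flagging requirements with no test traced to them."""
    return sorted(req for req, links in matrix.items() if not links["tests"])
```

Backward tracing (from a design element to the requirements it serves) answers questions like "Is this design element necessary?" and is simply the inverted mapping.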

B. Phase-End Reviews
A. Review of products and the processes used to develop or maintain systems occurring at,
or near, the completion of each phase of development, e.g., design, programming.
Decisions to proceed with development, based on cost, schedule, risk, progress, etc., are
usually a part of these reviews. A formal written report of the findings and
recommendations is normally provided.
B. These phase-end reviews are often called phase exits, stage gates, or kill points. Each
project phase normally includes a set of defined work products designed to establish the
desired level of management control. The majority of these items are related to the
primary phase deliverable, and the phases typically take their names from these items:
requirements, design, build, test, start-up, turnover, and others as appropriate.
C. At the end of a project, these reviews are commonly called “post-mortem reviews.”

2. Validation Methods
A. Test Concepts
A. Testing techniques
i. Knowledge of the various techniques used in testing, such as human
(walkthroughs/inspections), white box (logic driven), black box (data driven),
Incremental (top-down and bottom-up), and regression. Should cover topics
such as purpose, and methods for designing and conducting.
a. WALKTHROUGH - A presentation of developed material to an
audience with a broad cross-section of knowledge about material
being presented. There is no required preparation on the part of
the audience and limited participation. A walkthrough gives
assurance that no major oversight lies concealed in the material.
b. WHITE BOX - Also known as glass box, structural, clear box
and open box testing. A software testing technique whereby
explicit knowledge of the internal workings of the item being
tested are used to select the test data. Unlike black box testing,
white box testing uses specific knowledge of programming code
to examine outputs. The test is accurate only if the tester knows
what the program is supposed to do. He or she can then see if the
program diverges from its intended goal. White box testing does
not account for errors caused by omission, and all visible code
must also be readable.
c. BLACK BOX - Black Box testing (Functional testing) attempts
to find discrepancies between the program and the user’s
description of what the program should do. It subjects the
program or system to inputs, and its outputs are verified for
conformance to specified behavior. Software users are concerned
with functionality and features of the system. Testing is done
from the user's point-of-view.
a. Also known as functional testing. A software testing
technique whereby the internal workings of the item
being tested are not known by the tester. For example,
in a black box test on a software design the tester only
knows the inputs and what the expected outcomes
should be and not how the program arrives at those
outputs. The tester does not ever examine the
programming code and does not need any further
knowledge of the program other than its specifications.
b. The advantages of this type of testing include:
i. The test is unbiased because the designer and
the tester are independent of each other.
ii. The tester does not need knowledge of any
specific programming languages.
iii. The test is done from the point of view of the
user, not the designer.
iv. Test cases can be designed as soon as the
specifications are complete.
c. The disadvantages of this type of testing include:
i. The test can be redundant if the software
designer has already run a test case.
ii. The test cases are difficult to design.
iii. Testing every possible input stream is
unrealistic because it would take an inordinate
amount of time; therefore, many program paths
will go untested.
d. REGRESSION ACCEPTANCE TEST - It may be necessary to
execute a planned acceptance, integration, string, system, or unit
test more than once, either because the initial execution did not
proceed successfully to its conclusion or because a flaw was
discovered in the system or subsystem being tested. The first
execution of a planned test, whether or not successful, is termed
an initial test. Subsequent executions, if any, are termed
regression tests.
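The white box/black box distinction above can be illustrated with a deliberately trivial function; everything below is invented for illustration:

```python
def classify(n):
    """Function under test: label an integer."""
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"

# White box: the tester reads the code and chooses one input per branch so
# that every path through the if/elif/else is exercised.
white_box_cases = [(-5, "negative"), (0, "zero"), (3, "positive")]

# Black box: the tester sees only the specification ("negatives are labeled
# 'negative', zero is 'zero', everything else 'positive'") and derives cases
# from it -- boundaries included -- without ever reading the code.
black_box_cases = [(-1, "negative"), (0, "zero"), (1, "positive"),
                   (10**6, "positive")]

failures = [(n, want) for n, want in white_box_cases + black_box_cases
            if classify(n) != want]   # an empty list means all cases pass
```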

B. Testing methods
i. Methods of types such as unit/program, performance (subsets include volume
and stress, security, and controls) and recovery.

A. Unit Testing - An essential aspect of unit testing is to test one feature at a time.
A. A test of an application software unit.
B. Unit tests tell a developer that the code is doing things right;
functional tests tell a developer that the code is doing the
right things.
C. Unit tests are written from a programmer's perspective. They
ensure that a particular method of a class successfully
performs a set of specific tasks. Each test confirms that a
method produces the expected output when given a known
input.
D. Unit Testing may be defined as the verification and
validation of an individual module or 'unit' of software. It is
the most "micro" scale of testing for testing particular
functions or code modules. Unit testing may require
developing test driver modules or test harnesses. In addition,
unit testing often requires detailed knowledge of the internal
program design.
E. “Routine” Unit Testing includes identifying all fields and
testing for input, output, upper and lower boundaries, as well
as calculations when appropriate. All standard GUI elements
should be identified and validated. These include scroll bars,
push buttons, links, etc.
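As a sketch of "one feature at a time" and boundary checking, here is a small unittest suite for a hypothetical grading function; the 60-point pass mark and all names are invented for illustration:

```python
import unittest

def percent_to_grade(score):
    """Unit under test (hypothetical rule): 0-100 score, 60 or above passes."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

class TestPercentToGrade(unittest.TestCase):
    # Each test method checks exactly one feature of the unit,
    # including the lower, upper, and decision boundaries.
    def test_lower_boundary(self):
        self.assertEqual(percent_to_grade(0), "fail")

    def test_pass_boundary(self):
        self.assertEqual(percent_to_grade(60), "pass")

    def test_upper_boundary(self):
        self.assertEqual(percent_to_grade(100), "pass")

    def test_out_of_range_input_rejected(self):
        with self.assertRaises(ValueError):
            percent_to_grade(101)
```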

B. Functional Tests - Written from a user's perspective. These tests confirm that the system does what users are expecting it to.
A. Unlike unit tests, which test the behavior of a single class, functional
tests test the entire system from end to end.

C. Volume Testing - Seeks to verify the physical and logical limits to a system's capacity and ascertain whether such limits are acceptable to meet the projected capacity of the application's required processing.
A. The purpose of Volume Testing is to find weaknesses
in the system with respect to its handling of large amounts of
data, server requests, etc.

D. Stress Testing - Determines the breaking point or unacceptable performance point of a system to reveal the maximum service level it can achieve.

E. Load Testing - Determines the response time of a system with various workloads within the anticipated normal production range.
A. A load test simulates user activity and analyzes the effect of
the real-world user environment on an application. By load
testing a Web application throughout development, a
company can identify problematic parts of a Web
application before it is accessed by hundreds or thousands of
users.
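A load-test driver can be sketched in a few lines; the `fetch` stub below stands in for a real HTTP call to the application under test, and the URL is invented:

```python
import time

def fetch(url):
    """Stand-in for a real HTTP request to the application under test."""
    time.sleep(0.001)        # simulate ~1 ms of server work
    return 200               # pretend the request succeeded

def load_test(url, n_users):
    """Drive n_users simulated requests and report the average response time
    and the error count."""
    start = time.perf_counter()
    statuses = [fetch(url) for _ in range(n_users)]
    elapsed = time.perf_counter() - start
    return {"avg_response_s": elapsed / n_users,
            "errors": sum(1 for s in statuses if s != 200)}

report = load_test("http://example.test/checkout", 50)
```

A real load test would issue requests concurrently and sweep `n_users` across the anticipated production range; a stress test keeps raising the load until response times or error rates become unacceptable.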

F. Scalability Testing - Determines the behavior of a system with expanded workloads simulating future production states, such as added data and an increased number of users.

G. Security Testing - The primary reason for testing a system is to identify potential vulnerabilities and subsequently repair them.
A. Testing allows an organization to accurately assess their
system’s security posture. Also, testing, using the
techniques recommended in this report, allows an
organization to view its network the same way an attacker
would, thus providing additional insight and advantage.
B. The following are common types of security testing:
A. Network Mapping
B. Vulnerability Scanning
C. Penetration Testing
D. Security Test & Evaluation
E. Password Cracking
F. Log Review
G. Integrity Checkers
H. Virus Detection
I. War Dialing
A. There are several software packages available
(see Appendix C) that allow hackers and
network administrators to dial large blocks of
phone numbers in search of available modems.
This process is called war dialing. A computer
with four modems can dial 10,000 numbers in a
matter of days. Certain war dialers will even
attempt some limited automatic hacking when a
modem is discovered. All will provide a report
on the "discovered" numbers with modems.

C. Regression testing
i. Verification that current changes have not adversely affected previous
functionality.
ii. The selective retesting of a software system that has been modified to ensure
that any bugs have been fixed and that no other previously-working functions
have failed as a result of the reparations and that newly added features have
not created problems with previous versions of the software. Also referred to
as verification testing, regression testing is initiated after a programmer has
attempted to fix a recognized problem or has added source code to a program
that may have inadvertently introduced errors. It is a quality control measure
to ensure that the newly-modified code still complies with its specified
requirements and that unmodified code has not been affected by the
maintenance activity.
iii. Selective testing of an item, system, or component to verify that modifications
have not caused unintended effects and that the item, system, or
component complies with its specified requirements.
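A minimal sketch of the idea: the same (input, expected) pairs are re-run after every change, so any previously working behavior that breaks is caught immediately. The tax function and its expected values are invented for illustration:

```python
def sales_tax(amount):
    """Function under maintenance (hypothetical): 7% sales tax."""
    return round(amount * 0.07, 2)

# Regression suite: expected results captured from previously working behavior.
regression_suite = [
    (100.00, 7.00),
    (19.99, 1.40),
    (0.00, 0.00),
]

def run_regression(fn, suite):
    """Return the failing cases; an empty list means the latest change
    broke no previously working behavior."""
    return [(x, want, fn(x)) for x, want in suite if fn(x) != want]

failures = run_regression(sales_tax, regression_suite)
```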

D. System Test
i. A test of an entire application software system conducted to ensure that the
system meets all applicable user and design requirements.
ii. The functionality, delivered by the development team, is as specified by the
business in the Business Design Specification Document and the
Requirements Documentation.
iii. The software is of high quality; the software will replace/support the intended
business functions and achieves the standards required by the company for the
development of new systems.
iv. The software delivered interfaces correctly with existing systems.
v. System testing specifically goes after behaviors and bugs that are properties of
the entire system as distinct from properties attributable to components
(unless, of course, the component in question is the entire system). Examples
of system testing issues: resource loss bugs, throughput bugs, performance,
security, recovery, transaction synchronization bugs (often misnamed "timing
bugs").

E. Independent
i. The approach of using personnel not involved in the development of the
product or system in its testing.

F. Integration Test
i. Test which verifies that interfaces and interdependencies of products, modules,
subsystems, and systems have been properly designed and implemented.
ii. Testing that is focused on an entire end-to-end business process.
iii. The simplest definition of Integration Testing that I could find states that "an
integration test verifies that all the parts of an application 'integrate' together,
or work as expected together". This is important because after all the units are
tested individually we need to ensure that they are tested progressively.
iv. Many individuals use the terms System Testing and Integration Testing
interchangeably and for simple applications that do not have many
components the criteria and test scripts required to perform testing are similar.
But as an application increases in complexity, and size and users demand new
functionality and features the need to perform Integration Test becomes more
obvious. Often there is a deadline that drives businesses to develop new
applications, and in an effort to preempt the market the time for Development
and of course testing is generally shortened as the project matures. One of the
ways that the QA team contributes to the project is to perform Integration
Tests on the various units as they are developed. The QA Team does not have
to wait for the entire system to be completed before Testing is implemented
but can take the various units after they have been developed and ensure that
they function correctly together. Upon completion of all units a complete
"System Test" is performed to ensure that data 'flows' from the beginning to
the end of the Application.
v. An Integration Test will thus allow "flaws" in the application between
different Objects, Components, and Modules etc to be uncovered while the
Application is still being developed and the developers are still conceivably
working in the same portion of the application. If the problem were to be
discovered in a system test at the end of the Development cycle it would
probably require more resources to correct than during the cycle. This is
especially important in today's market where the drive is to be the first to
market a product.
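The units-working-together idea can be sketched with two tiny units and a test of their interface; all names and values are invented for illustration:

```python
def price_order(items):
    """Unit 1 (already unit-tested on its own): total an order."""
    return sum(qty * price for qty, price in items)

def make_invoice(customer, total):
    """Unit 2 (already unit-tested on its own): build an invoice record."""
    return {"customer": customer, "total": total, "status": "open"}

def process_order(customer, items):
    """The integration point: unit 1's output feeds unit 2's input."""
    return make_invoice(customer, price_order(items))

# An integration test exercises the interface between the units -- does the
# data flow correctly from one to the other? -- rather than their internals.
invoice = process_order("ACME", [(2, 10.0), (1, 5.0)])
```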

G. Software Qualification Test (SQT)


i. This test phase verifies compliance with the system design objectives and tests
each module/program/system against the functional specifications using the
system test environment. The SQT should include a performance test, a
volume test, stress testing, operability tests, security and control tests, disaster
recovery tests, and, if applicable, a data conversion test.

H. System Acceptance
i. Testing of the system to demonstrate system compliance with user
requirements.
ii. ACCEPTANCE TESTING - A test of an application software system that is
performed for the purpose of enabling the system sponsor to decide whether or
not to accept the system. For a given release of an application software system,
an acceptance test may or may not be conducted, at the sponsor’s option. In
cases where an acceptance test is conducted, it is not conducted in lieu of a
system test but in addition to a system test.
iii. Formal testing conducted to determine whether a system satisfies its
acceptance criteria and to enable the customer to determine whether to accept
the system. (SW-CMM (IEEE-STD-610))
iv. SOFTWARE ACCEPTANCE TEST (SAT) - The Software Acceptance Test
is used to test effectiveness of the documentation, the training plan,
environmental impact on the operating systems, and security. In this test
phase, the user is involved in validating the acceptability of the system against
acceptance criteria using the operational test environment. Establishing the test
in the operational environment requires coordination between the System
Developer and the Information Processing Centers and is used to validate any
additional impacts to the operating environment. The completion of the SAT
should result in the formal signing of a document accepting the software and
establishes a new baseline.

B. Test Program Development


A. Planning (Test Plan)

i. Selection of techniques and methods to be used to validate the product against
its approved requirements; includes planning for regression testing.
ii. A QA Team typically creates a test plan and uses it to guide the QA team's
efforts. A test plan provides an overview of the project, lists items to be tested
and serves as a communication device between all members of the project
team. The plan also identifies sufficient and proper tests to assure that
previously tested related functions will execute properly.
iii. Many large projects require a Master Test Plan which establishes the test
management process for the overall project, as well as level-specific test plans
which establish protocol for each required level of testing. In addition to the
master test plan, these projects may include test plans for:
A. Unit Testing
B. System Testing
C. Performance/Load Testing
D. User Acceptance Testing

B. Acceptance Criteria
i. The criteria that a system or component must satisfy in order to be accepted by
a user, customer, or other authorized entity. (SW-CMM (IEEE-STD-610))
ii. “Acceptance Criteria" means the written technical and operational
performance and functional criteria and documentation standards set out in the
project or test plan.

C. Cases
i. Development of test objective (cases), including techniques, approaches for
verification, and validation of cases.
ii. A specific set of test data and associated inputs, execution conditions, and
expected results that determine whether the software being tested meets
functional requirements.
iii. What is the difference between Test Cases and Test Scripts?
A. Test cases describe what you want to test. Test scripts describe how
to perform the test. A test case typically describes detailed test
conditions that are designed to produce an expected result. Test
scripts also contain expected results, but usually in more general
terms. A distinction must also be made between manual and
automated test scripts. Automated test scripts are sometimes referred
to as test procedures. These automated test scripts or procedures
closely resemble source code. In fact, they are software testing
software. In test automation, a test script may be placed in a loop and
read many different test cases from test data files. You can also carry
this concept to manual test scripts by keeping the test script free of
specific test data.
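Keeping a test script free of specific test data can be sketched like this; the `square` function and file contents are invented, with `io.StringIO` standing in for a test data file on disk:

```python
import csv
import io

# The test data file: one test case per row.
test_data_file = io.StringIO("input,expected\n2,4\n-3,9\n0,0\n")

def square(n):
    """Hypothetical function under test."""
    return n * n

# The test script: a loop that knows *how* to run a case but contains no
# case-specific data of its own -- new cases are added to the file, not here.
failures = []
for case in csv.DictReader(test_data_file):
    got = square(int(case["input"]))
    if got != int(case["expected"]):
        failures.append(case)
```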

D. Procedures
i. Development, execution, and evaluation of procedures used for testing.
ii. Test Procedure - Defines the procedures to be followed when applying a test
suite to a product for the purposes of conformance testing.

E. Data
i. Development of test data. Tools related to generation of test data. Analysis
techniques used to evaluate results of testing.
ii. Rule-based software test data generation is proposed as an alternative to either
path/predicate analysis or random data generation.
iii. The chaining approach for automated software test data generation which
builds on the current theory of execution-oriented test data generation. In the
chaining approach, test data are derived based on the actual execution of the
program under test. For many programs, the execution of the selected
statement may require prior execution of some other statements. The existing
methods of test data generation may not efficiently generate test data for these
types of programs because they only use control flow information of a
program during the search process. The chaining approach uses data
dependence analysis to guide the search process, i.e., data dependence analysis
automatically identifies statements that affect the execution of the selected
statement. The chaining approach uses these statements to form a sequence of
statements that is to be executed prior to the execution of the selected
statement. The experiments have shown that the chaining approach may
significantly improve the chances of finding test data as compared to the
existing methods of automated test data generation.
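For contrast with the chaining approach, random test data generation is simple to sketch; the date-validation function and ranges below are invented for illustration:

```python
import random

def is_valid_date(month, day):
    """Program under test (hypothetical): crude month/day validation."""
    days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return 1 <= month <= 12 and 1 <= day <= days_in_month[month - 1]

def random_test_data(n, seed=42):
    """Draw inputs from just inside and just outside the valid input domain.
    A fixed seed keeps the generated suite reproducible."""
    rng = random.Random(seed)
    return [(rng.randint(0, 13), rng.randint(0, 32)) for _ in range(n)]

cases = random_test_data(100)
valid = sum(1 for m, d in cases if is_valid_date(m, d))
```

Random generation uses no knowledge of the program under test; the chaining approach described above instead analyzes data dependences so that generated inputs actually reach a selected statement.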

F. Specifications
i. Creation of test specifications. Knowledge should cover purpose, preparation,
and usage.
ii. The test case specifications should be developed from the test plan and are the
second phase of the test development life cycle. The test specification should
explain "how" to implement the test cases described in the test plan.
iii. A software specification document is crucial for a successful project. It
describes the features the new product should have. A good software
specification document can:
A. Reduce the time needed to complete the project by determining the
usability of the system and providing the customers with a realistic
expectation of what the system will do — before it is built.
B. Improve customer satisfaction since their expectations are met or
exceeded.
C. Reduce unplanned features and inform developers where future
features will be needed, so the design can allow for them.
D. Determine what features are most important, and what subsets of
features comprise a useful solution. By dividing the full feature set
into useful subsets, and confirming these subsets, you can better plan
a staged delivery that will test your assumptions and validate your
design.

G. Scripts
i. Documentation of the steps to be performed in testing. Focus should be on the
purpose and preparation.
ii. Test scripts describe how to perform the test.
iii. The decision to use test cases versus test scripts depends on:
A. The level of predictability of how a user will interact with a web
interface.
B. The importance of sequence in the user’s correct performance of a
task.
C. The degree of freedom a user is intended to have in interacting with a
web interface.
D. The importance of documenting a test of a specified sequence.
E. The intensity of the test.
iv. The main benefit of a test script is that it predefines a procedure to follow in
performing a test. This can also be its greatest curse. Sometimes you want the
randomness of user actions. At the same time, you want to know in advance
the conditions to be tested and how they should behave. This is the classic
tradeoff between test scripts and test cases.

H. Analysis Techniques
i. Tools and methods to assess test results.

ii. The Test Results Analysis Report is an analysis of the results of running tests.
The results analysis provides management and the development team with a
readout of the product quality. The following sections should be included in the
results analysis:
A. Management Summary
B. Test Results Analysis
C. Test Logs/Traces
iii. Identify any remaining (open) deficiencies, limitations, or constraints that
were detected by the testing performed. Problem/change reports may be used
to provide deficiency information. For each remaining (open) deficiency,
limitation, or constraint, describe:
A. Its impact on system performance, including identification of
requirements not met.
B. The impact on system design to correct it.
C. A recommended solution/approach for correcting it.
iv. Provide an assessment of the manner in which the test environment may be
different from the operational environment and the effect of this difference on
the test results.
v. Provide any recommended improvements in the design, operation, or testing of
the system tested. Describe the impact on the system for each
recommendation. If no recommendations are provided, state: None.
vi. Data flow analysis provides interesting information about the structure of the
code that can be used for deducing static properties of the code and for
deriving coverage information.

C. Test Completion Criteria


A. Code coverage
i. Knowledge of purpose, methods, and test coverage tools used for monitoring
the execution of software and reporting on the degree of coverage at the
statement, branch or path level.
ii. Code coverage analysis is sometimes called test coverage analysis. The two
terms are synonymous. The academic world more often uses the term "test
coverage" while practitioners more often use "code coverage". Likewise, a
coverage analyzer is sometimes called a coverage monitor.
iii. Code coverage analysis is a structural testing technique (AKA glass box
testing and white box testing). Structural testing compares test program
behavior against the apparent intention of the source code. This contrasts with
functional testing (AKA black-box testing), which compares test program
behavior against a requirements specification. Structural testing examines how
the program works, taking into account possible pitfalls in the structure and
logic. Functional testing examines what the program accomplishes, without
regard to how it works internally.
iv. Structural testing is also called path testing since you choose test cases that
cause paths to be taken through the structure of the program. Do not confuse
path testing with the path coverage measure.
v. A large variety of coverage measures exist. Here is a description of some
fundamental measures:
A. Statement Coverage - This measure reports whether each executable
statement is encountered.
B. Decision Coverage - This measure reports whether boolean
expressions tested in control structures (such as the if-statement and
while-statement) evaluated to both true and false. The entire boolean
expression is considered one true-or-false predicate regardless of
whether it contains logical-and or logical-or operators. Additionally,
this measure includes coverage of switch-statement cases, exception
handlers, and interrupt handlers.
C. Condition Coverage - Condition coverage reports the true or false
outcome of each boolean sub-expression, separated by logical-and

and logical-or if they occur. Condition coverage measures the sub-
expressions independently of each other.
D. Multiple Condition Coverage - Multiple condition coverage reports
whether every possible combination of boolean sub-expressions
occurs. As with condition coverage, the sub-expressions are separated
by logical-and and logical-or, when present. The test cases required
for full multiple condition coverage of a condition are given by the
logical operator truth table for the condition.
E. Condition/Decision Coverage - Condition/Decision Coverage is a
hybrid measure composed by the union of condition coverage and
decision coverage.
F. Modified Condition/Decision Coverage - Also known as MC/DC and
MCDC. This measure requires enough test cases to verify every
condition can affect the result of its encompassing decision. This
measure was created at Boeing and is required for aviation software
by RTCA/DO-178B.
G. Path Coverage - This measure reports whether each of the possible
paths in each function have been followed. A path is a unique
sequence of branches from the function entry to the exit. Also
known as predicate coverage. Predicate coverage views paths as
possible combinations of logical conditions.
H. You can compare relative strengths when a stronger measure includes
a weaker measure.
A. Decision coverage includes statement coverage since
exercising every branch must lead to exercising every
statement.
B. Condition/decision coverage includes decision coverage and
condition coverage (by definition).
C. Path coverage includes decision coverage.
D. Predicate coverage includes path coverage and multiple
condition coverage, as well as most other measures.
vi. Here is a description of some variations of the fundamental measures and
some less commonly used measures:
A. Function Coverage - This measure reports whether you invoked each
function or procedure. It is useful during preliminary testing to assure
at least some coverage in all areas of the software. Broad, shallow
testing finds gross deficiencies in a test suite quickly.
BullseyeCoverage measures function coverage.
B. Call Coverage - This measure reports whether you executed each
function call. The hypothesis is that faults commonly occur in
interfaces between modules. Also known as call pair coverage.
C. Linear Code Sequence and Jump (LCSAJ) Coverage - This variation
of path coverage considers only sub-paths that can easily be
represented in the program source code, without requiring a flow
graph. An LCSAJ is a sequence of source code lines executed in
sequence. This "linear" sequence can contain decisions as long as the
control flow actually continues from one line to the next at run-time.
Sub-paths are constructed by concatenating LCSAJs. Researchers
refer to the coverage ratio of paths of length n LCSAJs as the test
effectiveness ratio TERn+2. The advantage of this measure is that
it is more thorough than decision coverage yet avoids the exponential
difficulty of path coverage. The disadvantage is that it does not avoid
infeasible paths.
D. Data Flow Coverage - This variation of path coverage considers only
the sub-paths from variable assignments to subsequent references of
the variables. The advantage of this measure is the paths reported
have direct relevance to the way the program handles data. One
disadvantage is that this measure does not include decision coverage.
Another disadvantage is complexity. Researchers have proposed
numerous variations, all of which increase the complexity of this
measure. For example, variations distinguish between the use of a
variable in a computation versus a use in a decision, and between
local and global variables. As with data flow analysis for code
optimization, pointers also present problems.
E. Object Code Branch Coverage - This measure reports whether each
machine language conditional branch instruction both took the branch
and fell through. This measure gives results that depend on the
compiler rather than on the program structure since compiler code
generation and optimization techniques can create object code that
bears little similarity to the original source code structure.
F. Loop Coverage - This measure reports whether you executed each
loop body zero times, exactly once, and more than once
(consecutively). For do-while loops, loop coverage reports whether
you executed the body exactly once, and more than once. The
valuable aspect of this measure is determining whether while-loops
and for-loops execute more than once, information not reported by
other measures.
G. Race Coverage - This measure reports whether multiple threads
execute the same code at the same time. It helps detect failure to
synchronize access to resources. It is useful for testing multi-threaded
programs such as in an operating system.
H. Relational Operator Coverage - This measure reports whether
boundary situations occur with relational operators (<, <=, >, >=).
The hypothesis is that boundary test cases find off-by-one errors and
mistaken uses of wrong relational operators such as < instead of <=.
I. Weak Mutation Coverage - This measure is similar to relational
operator coverage but much more general. It reports whether test
cases occur which would expose the use of wrong operators and also
wrong operands. It works by reporting coverage of conditions derived
by substituting (mutating) the program's expressions with alternate
operators, such as "-" substituted for "+", and with alternate variables
substituted.

B. Risk
i. Knowledge of risk assessment and risk abatement techniques used in the
testing process.
ii. The Software Risk Evaluation (SRE) is a service that helps projects establish
an initial baseline set of risks and mitigation plans, one of the key first steps
for putting risk management in place. The SEI Software Risk Evaluation
(SRE) Service is a diagnostic and decision-making tool that enables the
identification, analysis, tracking, mitigation, and communication of risks in
software-intensive programs. An SRE is used to identify and categorize
specific program risks emanating from product, process, management,
resources, and constraints. The program's own personnel participate in the
identification, analysis, and mitigation of risks facing their own development
effort.
iii. Risks to a software project must first be identified. One way of identifying
software project risks is using a questionnaire such as the SEI Taxonomy-
Based Risk Identification Questionnaire. The Taxonomy-Based Questionnaire
is structured into three main areas of software risk: Product Engineering,
Development Environment, and Program Constraints. Each of these categories
is subdivided further, narrowing the focus on particular aspects of risk. For
example, the thirteenth question on the Questionnaire (Product Engineering,
Requirements, Scale) asks: "Is the system size and complexity a concern?".

Once risks have been identified in some manner, the process must continue:
the risks must be analyzed.
iv. After analyzing software risks, a plan should be formulated to address each
risk. Planning stages should cover the following:
A. Why is the risk important?
B. What information is needed to track the status of the risk?
C. Who is responsible for the Risk Management activity?
D. What resources are needed to perform the activity?
E. A detailed plan of how the risk will be prevented and/or mitigated is
created.
F. Action planning can be used to mitigate the risk via an immediate
response. The probability of the risk occurring, and/or the potential
impact of the risk may be mitigated by dealing with the problem early
in the project.
G. Contingency planning can be used to monitor the risk and invoke a
predetermined response. A trigger should be set up, and if the trigger
is reached, the contingency plan is put in effect.
v. Risk Management is a practice with processes, methods, and tools for
managing risks in a project. It provides a disciplined environment for proactive
decision making to
A. assess continuously what could go wrong (risks)
B. determine which risks are important to deal with
C. implement strategies to deal with those risks
vi. Risk management is currently a key process area (KPA) in the Systems
Engineering CMM® and the Software Acquisition CMM. It is a Process Area
(PA) at Maturity Level 3 in the CMM Integration (CMMI) staged model.
Risk management and process improvement are complementary. Risk
management focuses on building the right product, project performance,
managing change, innovation, and uncertainty. Process improvement focuses
on building the product right, activity improvement, managing variability,
conformance, and control
vii. Software Risk factors that impact a product’s performance, cost, and schedule
can be further segmented into five risk areas. However, any given risk may
have an impact in more than one area. The five risk areas are:
A. Technical risk (performance related)
B. Supportability risk (performance related)
C. Programmatic risk (environment related)
D. Cost risk
E. Schedule risk
viii. Risk Identification - What key technical area or process is at risk?
ix. Risk Analysis - Determine the root cause of the risk. Quantify your risks by
determining the likelihood of an event and the potential consequence to the
ISS.
x. Risk Abatement - What can you do about a risk? Identify possible solutions.
Next, develop a mitigation/contingency plan or accept the risk.
xi. Risk Communication - Provide status of the risks on a regular basis.
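The likelihood and consequence weighed during risk analysis are commonly combined into a single figure, risk exposure = probability x impact, so mitigation effort can be directed at the largest exposures first. A minimal sketch; all risks and figures below are invented for illustration.

```python
# Risk exposure = probability of occurrence x cost of impact.
# The risks and figures below are illustrative, not from any real project.
risks = [
    {"name": "Key requirement misunderstood", "probability": 0.30, "impact": 50_000},
    {"name": "Test environment unavailable",  "probability": 0.10, "impact": 20_000},
    {"name": "Third-party library defect",    "probability": 0.05, "impact": 80_000},
]

for risk in risks:
    risk["exposure"] = risk["probability"] * risk["impact"]

# Deal with the highest-exposure risks first.
for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{risk["name"]}: {risk["exposure"]:.0f}')
```

Note how a low-probability risk with a large impact can still outrank a likelier but cheaper one, which is why exposure, not probability alone, drives the ranking.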

C. Error rate
i. Understanding of mean time between errors as a criterion for test
completion.
ii. Reliability is commonly expressed as MTBE (mean time between errors) or
MTBF (mean time between failures); availability is derived from MTBF
together with the time needed to restore service.
iii. The reliability of a system is measured by the probability of failure-free
operation over a stated period of time.
iv. The cumulative average time that a manufacturer estimates between failures or
occurrences in a component.
v. Mean Time Between Failures - the average operating time between successive
failures. The "down-time" during which the managed application is
unavailable due to failure is measured separately as MTTR (mean time to repair).
vi. Reliability is a measure that indicates the probability of faults or the mean time
between errors.
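The relationship between failure counts, MTBF, and availability can be sketched as follows. The standard steady-state formula availability = MTBF / (MTBF + MTTR) is assumed, where MTTR is the mean time to repair; the figures are illustrative.

```python
def mtbf(total_operating_hours, failure_count):
    """Mean time between failures: operating time divided by failures."""
    return total_operating_hours / failure_count

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: uptime as a fraction of total time."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative figures: 4 failures in 2,000 operating hours, 2 hours to repair each.
m = mtbf(2_000, 4)        # 500.0 hours
a = availability(m, 2)    # 500 / 502, roughly 0.996
print(m, round(a, 3))
```

As a completion criterion, testing might continue until the observed MTBF/MTBE exceeds an agreed threshold.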

F. Software Change Control


1. The process by which a software change is proposed, evaluated, approved or rejected, scheduled,
implemented, closed, and tracked.
A. Software change control is both a managerial as well as a technical activity and is essential for
proper software quality control. At the project level these activities should be included as part of
the project plan or in a software change control plan. Change control procedures cover the
establishment of methods for identifying, storing and changing system items that pass through
development, integration, implementation, and operations.

i. Project.
1. Change control over the requirements, plans, design, documentation, code, etc., of a
particular software project.
2. Example Tool:
a. Merant PVCS Professional gives teams the power to organize and manage
software assets, track and communicate issues, and standardize the software
build process. A complete package for software configuration management, it
combines PVCS Version Manager, PVCS Tracker, and PVCS Configuration
Builder (for automated software builds) in an integrated suite. As PVCS
Professional facilitates communication, coordinates tasks, and manages
changes, teams can speed time to market with efficient parallel development,
more code re-use, and fewer errors. PVCS Professional enables organizations
to protect development assets, automate software configuration management
tasks, and manage the workflow tasks involved in team collaboration.
i. Organizes, manages and protects software assets
ii. Tracks and communicates issues across the enterprise
iii. Automates software builds for standardized, repeatable development
success.

ii. Environment.
1. Change control over the project management environment that projects function within
(i.e., standards, procedures, and guidelines). Also, hardware and operating system
(support) software change control.
2. Change Control Environment
a. Where possible, three separate environments should be maintained for each
strategic system:
i. development
ii. testing
iii. production
b. Migration of software between environments should only be undertaken after
obtaining the appropriate sign-offs as specified in the Software Change
Control Procedures.
c. New software and changes to existing software should be prepared in the
Development Environment by appropriately authorized development or
applications support staff. Applications should be specified, designed and
coded according to systems development methodology.
d. Once assessed as satisfactory, the new or modified software should be
transferred to the Testing Environment for systems and acceptance testing by
an appropriate testing group, according to an agreed test procedure. Changes
to software are not permitted in the testing environment.

e. Following successful completion of testing and approval by the appropriate
systems custodian, the new or modified software should be transferred to the
Production Environment for implementation under the control of IT
Operations staff. A contingency plan to enable the software to be restored to
its previous version in the event that the implementation is unsuccessful
should be prepared where appropriate.

iii. Version Control.


1. Knowledge of controlling multiple releases of configuration items.
2. Version management may be the more appropriate term. With software version
control you can tell specifically which version of a program, module, Java
class/package, or even a complete project has which functionality.
3. Although not a global standard per se, software developers have a generally agreed
code of practice with regard to software versioning. In general, the version number
is identified by two or three digits, e.g. (version) 1.2.1. This example indicates that
the software is in its first major release, its second point release, and its first mini
release or patch.
a. Visual SourceSafe 6.0c is the ideal version control system for any
development team using Microsoft Visual Studio® .NET. Historically,
problems within the team development environment stem from the inability to
work comfortably in a setting sensitive to their projects and source code.
While every project requires an adequate level of software management, the
costs and overhead associated with file-based version control often outweigh
the benefits. By providing project-oriented software management, Visual
SourceSafe enables teams to develop with the confidence that their projects
and files will be protected. Versioning features, such as labels, provide
snapshots of a project for the quick retrieval of any previous version in the
software life cycle. Difference reporting provides quick access to changes
across separate versions of the same file, enabling developers to know
immediately what lines of code have changed.
b. Share and linking capabilities promote the reuse of code and components
across projects and simplify code maintenance by propagating changes across
all shared and linked files whenever a file is updated.
c. Parallel development features, such as branching, enable teams to fork the
development process into parallel projects and files, creating identical copies
that inherit all versioning documentation but may be tracked as new,
individual projects. Team members can also reconcile conflicts between
different versions of the same file by using a visual merge capability, which
provides a point-and-click interface for uniting files and avoids potential loss
of valuable changes. As revisions are made, files are added and modified, and
the software life cycle grows, all changes and documentation are secured by
Visual SourceSafe, providing an audit trail for every file and every project,
easily accessible to even the novice user.
d. Visual SourceSafe also provides many advanced features for Web site
management, including extensive deployment support. Additionally, Visual
SourceSafe can be used to create site maps and check hyperlinks, enabling a
deeper degree of software reliability.
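Returning to the major.minor.patch numbering convention described above, version identifiers can be compared programmatically. A minimal sketch; note that naive string comparison mis-orders versions once any component reaches two digits.

```python
def parse_version(version):
    """Split a dotted version string like '1.2.1' into comparable integers."""
    return tuple(int(part) for part in version.split("."))

# Tuples compare element by element, so numeric ordering is correct
# even where string ordering would not be (e.g. '1.10' vs '1.9').
assert parse_version("1.2.1") > parse_version("1.2")
assert parse_version("1.10.0") > parse_version("1.9.3")
assert "1.10" < "1.9"   # plain string comparison gets this wrong
```

This sketch ignores pre-release suffixes such as "1.2.1-beta", which real versioning schemes also order.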

G. Defect Management
1. Defect Recording, Tracking And Correction

i. Defect Reporting and Tracking.


1. Identification of the most common sources of information and the different methods,
frequency, and types of reporting.
a. Corrective Action on Defects.

i. Analyzing problem data and using problem-solving principles such as
identification of problems, establishment of applicable objectives, and
defining/documenting and applying the appropriate solution.

2. IT Auditing Coordination
A. Knowledge of IT audit function and how to effectively coordinate with work schedules; response
to implementation of IT audit recommendations; and joint projects with the IT audit section of the
organization’s internal auditing department.
B. Dynamic economic, political and social forces are creating an urgent, worldwide demand for
knowledge. In an atmosphere like this it is critical that all auditors understand the impact of
Information Systems on control and auditing. Today’s auditors must be fully integrated auditors,
understanding information systems and able to function effectively within a technical environment.
No longer the exclusive domain of the Information Systems department or even Information
Systems Auditors, high tech has created new roles and responsibilities for everyone in the audit
function. Auditors must know what the new technologies are, the risks and exposures involved and
how they affect audit plans and the audit. Today’s auditors are becoming more proactive and
coactive, rather than reactive. In their broadened role they participate with management in
strengthening the overall control framework.

F. QAI Recommended Quality practices


a. Meet Customer’s True Quality Needs
i. Uniqueness of information technology
ii. Requirements documents are defect prone
iii. Identify customer’s true needs and update the requirements document, if needed.

b. Produce products and services on-time at the lowest possible cost


i. Quality at any cost, delivered at any time, will not satisfy customers

c. Create enthusiasm and cooperation between management and staff for quality
i. Everybody’s responsibility
ii. Everyone must ‘buy in’ into the quality principles & methods

d. Reduce product inspections and testing by building processes that produce defect-free products.

e. IT policies, standards and procedures must be developed, well documented, continually updated and
followed.

f. Quality must be defined quantitatively


i. Quality is a binary state
ii. If Quality is not measured, it cannot be controlled

g. The goal of IT management and staff must be to produce defect-free products & services

h. Non-conformance must be detected as early as possible, recorded and measured


i. Economic issue
ii. Helps in improving processes

i. IT management must accept the responsibility for nonconformance


i. 80% of all defects are directly attributable to ineffective processes

j. The customer’s view of Quality is the correct view of Quality


i. Customer is always right
ii. Cannot survive without customers

7. Define, Build, Implement, and Improve Work Processes
A. The world is constantly changing; customers are more knowledgeable and demanding; and quality and
speed of delivery are now critical needs. An organization must constantly improve its ability to produce
quality products that add value to its customer base. Defining and continuously improving work
processes enables it to maintain the pace of change without negatively impacting the quality of its
products and services. This category will test the candidate's understanding of process components,
how to define a process, and how to continuously improve process capability.
a. A process is defined as any set of conditions or set of causes that work
together to produce a given result. In other words, a process is a system of causes: the people,
materials, energy, equipment, and procedures working together in a specified manner to produce an
intended result.
b. The purpose of a process is to produce results such as products or services. We
measure the results and the ways in which they are delivered to determine quality, cost, quantity, and
timeliness of the products and services. These characteristics, and others, help to define process
performance. Measurements of process performance are used to evaluate the ability of a process to
produce products or services with the characteristics we desire.
c. The four possibilities for any Process:
A. Conforming and predictable -- the ideal state
B. Nonconforming and predictable -- the threshold state
C. Conforming yet unpredictable -- the brink of chaos
D. Nonconforming and unpredictable -- the state of chaos
iv. Use statistical tools to indicate the degree to which a process is “in control.”

b. Developing/Building Processes
a. Process development group
i. The identified group within the enterprise that has the responsibility and authority to identify,
acquire or generate, and install processes across some or all of the enterprise.
ii. The Process Development Group has been developed by Richard Reynolds at Indigo Rose as a
highly effective way of enabling people to be more productive working in a team. The group
method improves working practice and outcomes and is likely to contribute to far greater job
satisfaction. The group provides a safe and supportive framework for employees at all levels to
learn to communicate openly and to work constructively in any environment where the
achievement of common goals is important.
b. Process committee
i. An implementation of the standards group in which the members are not part of a fixed group,
but come from the other standardized portions of the enterprise to perform the standards
function.
c. Process development process
i. The methods for process mapping; selection of processes to build; and the procedures to build
processes.
d. Implementing a process
i. Implementing a newly defined process is as complex and risk-laden as defining the process, and
it includes training.

c. Administering Processes
a. On-line standards
i. Maintenance of approved processes in an automated environment, e.g., using on-line terminals
rather than paper manuals.
b. Standards needs assessment
i. Determination of the needs of the enterprise for new or modified standards, or for the
elimination of obsolete or non-beneficial standards.

d. Compliance and Enforcement


a. Tailoring processes
i. Modifying existing processes to better match the needs of a project or environment.
b. Waiver

i. The method by which release from the requirements of a specific process may be obtained for a
specific situation.
c. Automated process enforcement
i. The use of precompilers and other tools to detect noncompliance. In some cases, the tools can
correct the noncompliance.
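A precompiler-style compliance checker can be sketched in a few lines. This toy example detects one reportable violation (overlong lines, against an assumed 79-character standard) and auto-corrects another (trailing whitespace); real enforcement tools check far richer rule sets.

```python
import re

MAX_LINE_LENGTH = 79  # illustrative coding-standard limit

def check_compliance(source):
    """Detect noncompliant lines; auto-correct what can safely be fixed
    (trailing whitespace) and report what cannot (overlong lines)."""
    corrected, violations = [], []
    for number, line in enumerate(source.splitlines(), start=1):
        fixed = re.sub(r"[ \t]+$", "", line)   # correctable: trailing blanks
        if len(fixed) > MAX_LINE_LENGTH:       # reportable: too long to auto-fix
            violations.append((number, "line exceeds %d characters" % MAX_LINE_LENGTH))
        corrected.append(fixed)
    return "\n".join(corrected), violations

code = "x = 1   \ny = " + "2" * 100 + "\n"
fixed_code, problems = check_compliance(code)
print(problems)   # [(2, 'line exceeds 79 characters')]
```

Hooking such a check into the build makes enforcement automatic rather than dependent on manual review.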

e. Process Improvement Methods


a. Locating potential process improvements, evaluating them, and providing management with the
information and techniques to introduce beneficial modifications to the process.
i. Establish process measures
1. Collecting measurement data on process performance in use of process improvement.
2. Measurement - provides objective information about, and visibility into, project
performance, process performance, process capability, and product and service quality.
Use of measures and other information allow organizations to learn from the past in
order to improve performance and achieve better predictability over time. The
Capability Maturity Model (CMM) certainly affirms this viewpoint and represents
measurement practices as critical components of project, process, and quality
management at all levels.

f. Externally Developed Standards


a. Sources
i. Knowledge of where to find standards developed outside the enterprise (e.g., IEEE), which may
be useful to the enterprise.
ii. The World Wide Web Consortium (W3C) develops interoperable technologies (specifications,
guidelines, software, and tools) to lead the Web to its full potential. W3C is a forum for
information, commerce, communication, and collective understanding.
iii. Software development standards include: IEEE, FAA, DOD, and ISO/IEC 15504

b. Acquisition and customization


i. Acquiring externally developed standards and adapting them for beneficial use within the
enterprise.

c. Technical standards
i. Standards developed outside the enterprise that may affect the operation, products, or
opportunities of the enterprise (e.g., statutory regulations, industry standards and specifications,
and other public domain standards).

8. Quantitative Methods
A. What gets measured gets done. A properly established measurement system is used to help
achieve missions, goals, and objectives. Measurement data is most reliable when it is generated as
a by-product of producing a product or service. The quality assurance professional must ensure
quantitative data is valid and reliable and presented to management in a timely and easy-to-use
manner. Measurement can be used to measure the status of processes, customer satisfaction,
product quality, effectiveness and efficiency of processes, and as a tool for management to use in
their decision-making processes. This category will test the candidate’s understanding of measures
and how to build an effective measurement program.

B. Probability and Statistics


1. Statistical process control (SPC).
A. Statistical methods used to monitor process performance. Statistics are used both to determine
whether or not the processes are under control (i.e., within acceptable variance from standards)
and to help identify the root causes of process problems that are causing defects.
B. SPC is a method of monitoring a process during its operation in order to control the quality of
the products while they are being produced -- rather than relying on inspection to find problems
after the fact. It involves gathering information about the product, or the process itself, on a near
real-time basis so that the operator can take action on the process. This is done in order to
____________________________________________________________________________________________________________________________
CSQA Exam Notes Revised: 08/19/2002
Page: 61
identify special causes of variation and other non-normal processing conditions, thus bringing
the process under statistical control and reducing variation.
C. Statistical Process Control, SPC for short, is a tool that businesses and industries use to achieve
quality in their products and/or services. Universally, businesses and industries use mathematics
and statistical measurements to solve problems. There is an increasing demand for managers and
workers who understand and are able to apply Statistical Process Control methods. The problem
solving cycle shown below illustrates the process of continual improvement. Notice that
improvement is a never-ending cycle. Even a superior product or service can be improved on.

[Diagram: continual improvement problem-solving cycle — Define Problem (Pareto chart) ->
Define Process (Flow Chart) -> Collect Data (Sampling) -> Analyse Data (Charts) ->
List Possible Causes (Brainstorm, Fishbone) -> Analyse/Rank (Ranking) ->
Propose Solution -> Implement Solution -> back to Define Problem]

D. Dr. W. Edwards Deming, who had worked with Walter Shewhart, taught SPC to the Japanese
after World War II. Today US businesses are in the process of implementing SPC to build
quality into products and services. According to Dr. Deming, 80 percent of all quality problems
are due to management. This is not to say that one day management decided to make inferior
goods. Top management, by its methods of operation, has built defects into the process. Top
managers make important decisions for companies and have the most influence on the future of
the business. To assist management, Dr. Deming created 14 points to serve as a guideline.

2. Random and assignable causes


A. Statistical methods used to differentiate normal variance in the operation of processes (random),
from variances that are associated with the root cause (assignable). Random causes rarely can be
eliminated; assignable causes can almost always be eliminated.
B. Variation - Differences exist from product to product, person to person, or machine to machine.
These differences among products, or the process output over time, are called variation.
a. Random variation occurs, as its name implies, due to random causes or
chance. Random variation is inherent in a system. It is hard to detect and reduce.
b. Assignable variation in the product performance occurs due to a change in
machine setup, chemicals, operator, procedure, or other specific causes. Assignable
variation is easy to detect and easier to reduce than random variation because its causes are
known. Once all assignable causes are removed from a process, then the process is in
statistical control.
C. Managers need to determine whether a production system is undergoing only random
fluctuations in its operation, or whether non-random deviations, the so-called "assignable
causes," are occurring—to be tracked down and eliminated. Assignable causes in converting
operations could include a change in raw materials, a set point change, out-of-round rolls, bad
bearings, the presence of an impurity, poor calibration of the instruments, etc. Assignable causes
also can include factors too minor to bother about.
D. Fluctuations in the process performance come from two sources. Fluctuations over time in the
inherent process cause systems (differences in material, equipment, environment, physical and
mental reactions of people, etc.) are responsible for random variation in the process performance,
which is referred to by Shewhart as common cause variation. On the other hand, the process may
be subject to large and unusual changes in the cause system from time to time which result in
non-random variation in the process performance. Such variation is referred to as assignable or
special cause variation since the variation is generally due to causes that could have been

prevented. The total variation that may be observed in process performance is expressed by the
equation:

Total variation = Common cause variation + assignable cause variation

E. Common cause variation of process performance is characterized by fluctuations that are
random and vary within predictable bounds. When the cause system is constant, the observed
distribution of the process performance variation tends to approach, as a statistical limit, a
distribution function of some sort. When process performance is limited to common cause
variation, it will be within a distribution function and is therefore predictable, i.e., in statistical
control or stable.
F. When variation in process performance includes assignable cause variation, the process is no
longer predictable. Assignable cause variation arises from events that are not part of the normal
process, and are due to sudden or persistent anomalies within one or more components of the
cause system. When assignable causes are removed, process variation will decrease with future
execution of the process and the process will become stable and predictable.
G. Shewhart's control charts are the primary vehicle used to analyze process performance variation.
The control charts employ upper and lower control limits (UCL and LCL) to delineate or filter
assignable cause variation from common cause variation. The limits are empirically derived
from measurements of the variation in the process performance over time.
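A minimal sketch of the limit calculation described above, under stated assumptions: the 3-sigma convention and the data values are illustrative, and sigma is estimated from the sample standard deviation, which is a simplification of classical X-charts (they derive sigma from the average moving range).

```python
# Sketch of Shewhart-style control limits. Simplification: sigma comes from
# the sample standard deviation; classical X-charts use the moving range.
import statistics

def control_limits(baseline, k=3.0):
    """Return (LCL, center, UCL) using k-sigma limits; k=3 is conventional."""
    center = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return center - k * sigma, center, center + k * sigma

# Phase I: derive limits from a period believed to show only common-cause
# variation. Phase II: flag new points outside the limits as candidate
# assignable causes to be investigated.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1, 10.0]
lcl, center, ucl = control_limits(baseline)
new_points = [10.0, 10.2, 14.5, 9.9]
flagged = [x for x in new_points if x < lcl or x > ucl]
print(flagged)
```

Points inside the limits are treated as common-cause variation and left alone; reacting to them ("tampering") only increases variation.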

3. Problem characteristic analysis


A. Statistical methods used to accumulate and analyze problems incurred as a result of operating
processes.
B. The normal distribution is characterized by two parameters, the mean and the standard
deviation. Calculating probabilities using the normal distribution requires estimates of both the
process mean and the standard deviation. In the industrial environment, the normal distribution
is used to predict the probability of producing defective product.
A. Process mean. In manufacturing operations, mean is the value where a process is
expected to operate or the target value. Average values are plotted to monitor the
process output using tools such as trend charts, control charts, or pre-control charts.
B. Standard deviation. Standard deviation is widely used to quantify the variability of a
process. Standard deviation is the square root of the mean of the squared
deviations from the mean. Process capability is commonly defined as six times the standard
deviation. The standard deviation is a measure of inconsistency in a process.
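The definition above can be sketched directly; the measurement values are illustrative, and the population form of the standard deviation is assumed, to match the wording "mean of the squared deviations from the mean."

```python
# Sketch: standard deviation per the definition in the notes, and the
# six-sigma process spread. Measurement values are illustrative.
import math

def std_dev(values):
    """Square root of the mean of the squared deviations from the mean
    (the population form, matching the wording above)."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

measurements = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]  # hypothetical process output
sigma = std_dev(measurements)
process_spread = 6 * sigma  # "process capability" as defined above
print(round(sigma, 4), round(process_spread, 4))
```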

C. Measures and Metrics


a. Characteristics of measures and methods
i. The definitions and concepts.
A. Type of software measurements:
a. Product size: count lines of code, function points, object classes, number of
requirements, or GUI elements
b. Estimated and actual duration (calendar time) and effort (labor hours): track
for individual tasks, project milestones, and overall product development
c. Work effort distribution: record the time spent in development activities
(project management, requirements specification, design, coding, testing) and
maintenance activities (adaptive, perfective, corrective)
d. Defects: count the number found by testing and by customers and their type,
severity, and status (open or closed)

B. Some software measures can be characterized as static, meaning that they can be
derived from examination of the software itself (usually in the form of source or object
code, or perhaps in terms of a design document). Other measures can be characterized
as dynamic, meaning that they can only be derived from observation of the execution of
the software. Computer scientists and software engineers have done a lot of research

trying to define the important measures of software engineering. One of the most
significant efforts was undertaken over the last four years at the Software Engineering
Institute (SEI), a federally funded research and development center at Carnegie Mellon
University. Researchers at the SEI, assisted by more than 60 specialists from industry,
academia, and government, identified four direct measures and several indirect
measures that software engineering organizations can use to improve their software
development processes. The properties or attributes of software that are directly
measurable are size (source lines of code (SLOC)), effort (labor-month, man-month,
staff-week, staff-hour), schedule, and quality (freedom from defects, stability for use).
Another property whose measure is widely regarded as fundamentally important is
performance, which can be defined in several ways. Clearly, a performance measure is
a dynamic software measure. There are a few other software properties that are
generally believed to be important but which we don’t yet know how to measure very
well. Among these are reliability and complexity. Finally, there are other attributes of
software that seem important but that we don’t know how to measure at all. These
include maintainability, usability, and portability.

C. A measure is a numerical value computed from a collection of data. Before examining
the details of software measures (often called metrics), let's consider which properties
of a measure, in general, are reasonable. A measure should have the following
characteristics to be of value to us:
a. The measure should be robust. The calculation of the measure is repeatable
and the result is insensitive to minor changes in environment, tool, or observer.
The measure is precise, and the process of collecting the data for the measure
is objective.
b. The measure should suggest a norm, scale, and bounds. There is a scale upon
which we can make a comparison of two measures of the same type, and so
conclude which of the two measures is more desirable. For example, there is a
realistic lower bound, such as zero for number of errors.
c. The measure should be meaningful. The measure relates to the product, and
there should be a rationale for collecting data for the measure.
d. Often, one measure alone is insufficient to measure the features of the design
paradigm or to accomplish the objectives of the software project. This suggests
that a collection or suite of measures is needed to provide the range and
diversity necessary to achieve the software project's objectives. A suite of
measures adds an additional consideration.
i. A suite of measures should be consistent. If a smaller value is better
for one type of measure in the suite, then smaller is better for all other
types of measures in the suite.
e. Software metrics are measurements made on a software artifact.
D. Organizations with successful measurement programs report the following benefits:
• Insight into product development
• Capability to quantify tradeoff decisions
• Better planning, control, and monitoring of projects
• Better understanding of both the software development process and the
development environment
• Identification of areas of potential process improvement as well as an
objective measure of the improvement efforts
• Improved communication
E. However, many of the potential benefits that an organization can derive from a sound
measurement program are often not achieved, due to a half-hearted commitment by
managers to the measurement program. The commitment cannot be just a policy
statement; it must be total commitment. The policy must be followed with the
allocation of resources to the measurement program. This includes allocating staff as
well as tools.

F. Measure - n. A standard or unit of measurement; the extent, dimensions, capacity, etc.
of anything, especially as determined by a standard; an act or process of measuring; a
result of measurement. v. To ascertain the quantity, mass, extent, or degree of
something in terms of a standard unit or fixed amount, usually by means of an
instrument or process; to compute the size of something from dimensional
measurements; to estimate the extent, strength, worth, or character of something; to
take measurements.
G. Measurement - The act or process of measuring something. Also a result, such as a
figure expressing the extent or value that is obtained by measuring.
H. Techniques or methods that apply software measures to software engineering objects to
achieve predefined goals. A measure is a mapping from a set of software engineering
objects to a set of mathematical objects. Measurement goals vary with the software
engineering object being measured, the purpose of measurement, who is interested in
these measurements, which properties are being measured, and the environment in
which measurement is being performed. Examples of measures include software size,
Halstead's software science measures, and McCabe's cyclomatic complexity.
Associated models include sizing models, cost models, and software reliability models.
I. Data Definition Frameworks (DDF) are primarily used to define measurements, as well
as to communicate more effectively what a set of measurements represents. Secondary
DDF uses include assistance for: identifying issues that can be used to focus data
analysis; designing databases for storing measurement data; and developing data
collection forms. A DDF can be used to define a set of measurements. For example, a
single DDF can be used to identify a line of code measurement, i.e., identify what is to
be counted. A DDF can also be used to help communicate what has been counted. A DDF
does this by allowing a user to identify specifically what was included and excluded in a
measurement. For example, suppose I have a count of 317,300 lines of code. The DDF
helps me communicate what that number represents by identifying what types of code
were counted and included in that number and what types of code were specifically not
counted, i.e., excluded.

b. Complexity measurements
i. Quantitative values accumulated by a predetermined method that measures the complexity of a
software product, such as code and documentation.
ii. Software complexity is one branch of software metrics that is focused on direct measurement of
software attributes, as opposed to indirect software measures such as project milestone status
and reported system failures. There are hundreds of software complexity measures, ranging from
the simple, such as source lines of code, to the esoteric, such as the number of variable
definition/usage associations.
iii. An important criterion for metrics selection is uniformity of application, also known as "open
reengineering." The reason "open systems" are so popular for commercial software applications
is that the user is guaranteed a certain level of interoperability: the applications work together in
a common framework, and applications can be ported across hardware platforms with minimal
impact. The open reengineering concept is similar in that the abstract models used to represent
software systems should be as independent as possible of implementation characteristics such as
source code formatting and programming language. The objective is to be able to set complexity
standards and interpret the resultant numbers uniformly across projects and languages. A
particular complexity value should mean the same thing whether it was calculated from source
code written in Ada, C, FORTRAN, or some other language. The most basic complexity
measure, the number of lines of code, does not meet the open reengineering criterion, since it is
extremely sensitive to programming language, coding style, and textual formatting of the source
code. The cyclomatic complexity measure, which measures the amount of decision logic in a
source code function, does meet the open reengineering criterion. It is completely independent of
text formatting and is nearly independent of programming language since the same fundamental
decision structures are available and uniformly used in all procedural programming languages.
iv. Ideally, complexity measures should have both descriptive and prescriptive components.
Descriptive measures identify software that is error-prone, hard to understand, hard to modify,
hard to test, and so on. Prescriptive measures identify operational steps to help control software,
for example splitting complex modules into several simpler ones, or indicating the amount of
testing that should be performed on given modules.
v. There is a strong connection between complexity and testing, and the structured testing
methodology makes this connection explicit.
A. First, complexity is a common source of error in software. This is true in both an
abstract and a concrete sense. In the abstract sense, complexity beyond a certain point
defeats the human mind's ability to perform accurate symbolic manipulations, and
errors result. The same psychological factors that limit people's ability to do mental
manipulations of more than the infamous "7 +/- 2" objects simultaneously apply to
software. Structured programming techniques can push this barrier further away, but
not eliminate it entirely. In the concrete sense, numerous studies and general industry
experience have shown that the cyclomatic complexity measure correlates with errors
in software modules. Other factors being equal, the more complex a module is, the
more likely it is to contain errors. Also, beyond a certain threshold of complexity, the
likelihood that a module contains errors increases sharply. Given this information,
many organizations limit the cyclomatic complexity of their software modules in an
attempt to increase overall reliability.
B. Second, complexity can be used directly to allocate testing effort by leveraging the
connection between complexity and error to concentrate testing effort on the most
error-prone software. In the structured testing methodology, this allocation is precise:
the number of test paths required for each software module is exactly the cyclomatic
complexity. Other common white box testing criteria have the inherent anomaly that
they can be satisfied with a small number of tests for arbitrarily complex (by any
reasonable sense of "complexity") software.

vi. Cyclomatic Complexity Metric (McCabe & Associates, Inc.)


A. Cyclomatic Complexity is a measure of the complexity of a module's
decision structure. It is the number of linearly independent paths and
therefore, the minimum number of paths that should be tested.
B. Cyclomatic complexity measures the amount of decision logic in a single software
module. It is used for two related purposes in the structured testing methodology. First,
it gives the number of recommended tests for software. Second, it is used during all
phases of the software lifecycle, beginning with design, to keep software reliable,
testable, and manageable. Cyclomatic complexity is based entirely on the structure of
software's control flow graph.
C. Cyclomatic complexity measures branches in the control flow of a program. In the
simplest possible code, there are 0 branches and cyclomatic complexity equals 1. For
every branch, a value of 1 is added to the complexity total.
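The "1 + number of branches" rule above can be sketched for Python source with the standard library ast module. Counting only if/while/for and boolean operators is a simplification (full cyclomatic analysis also covers constructs such as exception handlers), and the sample function is illustrative.

```python
# Sketch: cyclomatic complexity as 1 + number of decision points.
import ast

DECISION_NODES = (ast.If, ast.While, ast.For, ast.IfExp)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    # each if/while/for (and conditional expression) adds one branch
    branches = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    # each extra condition in an and/or chain adds another branch
    branches += sum(len(node.values) - 1
                    for node in ast.walk(tree) if isinstance(node, ast.BoolOp))
    return 1 + branches

SAMPLE = """
def classify(x):
    if x < 0:
        return "negative"
    if x == 0 or x == -0.0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(SAMPLE))
```

The result is also the minimum number of test paths per the structured testing methodology: straight-line code scores 1, and each decision adds one.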

vii. Halstead Software Metrics (Dr. Maurice Halstead)


A. Program Length
The total number of operator occurrences and the total number of operand occurrences.
B. Program Volume
The minimum number of bits required for coding the program.
C. Program Level and Program Difficulty
Measure the program's ability to be comprehended.
D. Intelligent Content
Shows the complexity of a given algorithm independent of the language used to express
the algorithm.
E. Programming Effort
The estimated mental effort required to develop the program.
F. Error Estimate
Calculates the number of errors in a program.
G. Programming Time
The estimated amount of time to implement an algorithm.
H. Line Count Software Metrics
I. Lines of Code
J. Lines of Comment
K. Lines of Mixed Code and Comments
L. Lines Left Blank
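The first two Halstead measures have simple closed forms: length N = N1 + N2 and volume V = N x log2(n1 + n2), where n1/n2 are the distinct operators/operands and N1/N2 their total occurrences. A sketch with illustrative counts (the counts themselves are assumptions, not taken from any real program):

```python
# Sketch of the two basic Halstead measures; counts are illustrative.
import math

def halstead_length(N1, N2):
    """Program length N: total operator occurrences + total operand occurrences."""
    return N1 + N2

def halstead_volume(n1, n2, N1, N2):
    """Program volume V = N * log2(n): the minimum number of bits required
    to encode the program, where n = distinct operators + distinct operands."""
    return halstead_length(N1, N2) * math.log2(n1 + n2)

# Hypothetical program: 10 distinct operators used 50 times,
# 15 distinct operands used 40 times.
print(halstead_length(50, 40), round(halstead_volume(10, 15, 50, 40), 1))
```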

c. Size measurements
i. Methods developed for measuring the (primarily software) size of information systems, such as
lines of code, function points, etc. Also effective in measuring software development
productivity.
ii. The most widely used size measure is a count of source lines of code (SLOC). Unfortunately,
there are as many definitions of what to count as there are people doing the counting. Some
people count executable statements but not comments; some include declarations while others
exclude them; some count physical statements and others count logical statements. Published
information on software measures that depend on this measure is therefore difficult to interpret
and compare. One SEI report says this about measurement of source code size: “Historically,
the primary problem with measures of source code size has not been in coming up with numbers
—anyone can do that. Rather, it has been in identifying and communicating the attributes that
describe exactly what those numbers represent.” The precision of a measurement of source
lines of code does not depend on the numbers used in counting (everyone agrees to use the
nonnegative integers), so it must depend on what we choose to count. A comprehensive
definition of what kinds of statements or constructs in a program to count is necessary before
precise measurement is possible.
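The definitional point above can be made concrete: a counting tool should carry its rule along with its number. This sketch assumes Python-style '#' comments and reports counts under two explicit rules, so a reader knows exactly what was included and excluded.

```python
# Sketch: SLOC counting with the counting rule made explicit.
def count_sloc(source):
    """Count non-blank physical lines, reported both with and without
    comment-only lines, so the definition travels with the number."""
    lines = source.splitlines()
    non_blank = [ln for ln in lines if ln.strip()]
    code_only = [ln for ln in non_blank if not ln.strip().startswith("#")]
    return {"physical_non_blank": len(non_blank),
            "excluding_comment_lines": len(code_only)}

sample = """# compute a total
total = 0
for x in (1, 2, 3):
    total += x

print(total)
"""
print(count_sloc(sample))
```

Two organizations running this on the same file get numbers they can actually compare, which is the point of the SEI quote above.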

iii. Function Points


A. Allan Albrecht (Reference 1), in collaboration with John Gaffney, Jr. (Reference 2),
designed FPs as a direct measure of functionality. FPs are a weighted sum of the
number of inputs, outputs, user inquiries, files, and interfaces to a system. The latest
counting rules are defined in Release 3.0 (1990) of "Function Point Counting Practices
Manual," by the International Function Points Users Group (IFPUG).
B. Function Points and the Function Point Model are measurement tools to manage
software. Function Points, with other business measures, become Software Metrics.
C. Basic function points quantify the size and complexity of an application based on that
application's inputs, outputs, inquiries, internal files, and interfaces. The resulting count
is then adjusted based on the complexity of the system defined by a set of general
system characteristics. Since function points are independent of language, operating
system, platform, or development process, they avoid the problems that arise from the use
of source lines of code (SLOC) to measure the size of an application. Function points
have been gaining in popularity and usage in recent times. At the 1993 International
Conference on Applications of Software Measurement, it was announced that function
points had become the most widely used metric in the world.
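A sketch of the unadjusted count described above. The weights shown are the commonly cited IFPUG average-complexity weights, and rating every component as "average" is a simplification: real IFPUG counting classifies each component as low, average, or high per the Counting Practices Manual, then applies the general system characteristics adjustment.

```python
# Sketch of an unadjusted function point count using the commonly cited
# IFPUG average-complexity weights (a simplification; see lead-in).
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def unadjusted_fp(counts):
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical application inventory.
app = {"external_inputs": 20, "external_outputs": 15,
       "external_inquiries": 10, "internal_files": 8,
       "external_interfaces": 4}
print(unadjusted_fp(app))
```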
D. Function Points measure Software size. Function Points measure functionality by
objectively measuring functional requirements. Function Points quantify and document
assumptions in Estimating software development. Function Points and Function Point
Analysis are objective; Function Points are consistent, and Function Points are
auditable. Function Points are independent of technology. Function Points even apply
regardless of design. But Function Points do not measure people directly. Function
Points is a macro tool, not a micro tool. Function Points are the foundation of a
Software Metrics program.
E. Software Metrics include Function Points as a normalizing factor for comparison.
Function Points in conjunction with time yield Productivity Software Metrics. Function
Points in conjunction with defects yield Quality Software Metrics. Function Points with
costs provide Unit Cost, Return on Investment, and Efficiency Software Metrics, never
before available.

F. All of the above Software Metrics can prove your organization is Doing Things Right!
But the real and biggest value of Function Points and Software Metrics is proving you
are Doing The Right Things!
G. Function Points and Usage or Volume measures create Software Metrics that
demonstrate an organization's ability to Leverage software's business impact. The
Leverage of E Commerce is obvious, but until now unmeasured. Function Points
support Customer Satisfaction measures to create Value Software Metrics. Function
Points and Skill measures provide Software Metrics for Employee Service Level
Agreements to meet current and future company skill needs. Function Points can even
measure the Corporate Vision and generate Software Metrics to report progress toward
meeting it.
H. Function Points, Function Point Analysis, the Function Point Model, Supplemental
Software Measures, and the Software Metrics they generate, are only the third measure
that transcend every part of every organization. (The other two are time and money.)
Without them your organization is only two thirds whole.

I. Function points are a measure of the size of computer applications and the projects that
build them. The size is measured from a functional, or user, point of view. It is
independent of the computer language, development methodology, technology or
capability of the project team used to develop the application. The fact that Albrecht
originally used it to predict effort is simply a consequence of the fact that size is usually
the primary driver of development effort. The function points measured size.
J. It is important to stress what function points do NOT measure. Function points are not a
perfect measure of effort to develop an application or of its business value, although the
size in function points is typically an important factor in measuring each. This is often
illustrated with an analogy to the building trades. A three thousand square foot house is
usually less expensive to build than one that is six thousand square feet. However, many
attributes like marble bathrooms and tile floors might actually make the smaller house
more expensive. Other factors, like location and number of bedrooms, might also make
the smaller house more valuable as a residence. Function Point analysis can be used
for:
a. Measure productivity -- Many executives have come to the conclusion that
regardless of their core business, they are also in the software business.
Calculating several variations on the function points produced per month
theme tells them how well they are doing in this regard.
b. Estimate development and support -- Since the beginning, function points
have been used as an estimating technique. Estimating is obviously necessary
for the cost benefit analysis that justifies application development. Even for
strategic projects that need no quantitative justification, accurate estimation is
required for proper staffing.
c. Monitor outsourcing agreements -- Companies outsourcing significant parts
of their IS requirements are concerned that the outsourcing entity deliver the
level of support and productivity gains that they promise. Outsourcers, like
CSC and IBM Global Services, frequently use function points to demonstrate
compliance in these areas.
d. Drive IS related business decisions -- Companies must analyze their
portfolios of applications and projects. The size in function points is an
attribute that needs to be tracked for each application and project. Along with
other data, this will allow decisions regarding the retaining, retiring and
redesign of applications to be made.
e. Normalize other measures -- To put them in perspective, other measures
frequently require the size in function points. For example, 100 delivered
defects on a 100 function point system is not good news. The same 100
delivered defects on a 10,000 function point system are much easier to take.
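The normalization in item e. is a simple ratio; a small sketch using the two cases from the text:

```python
# Sketch: defect density normalized by size in function points.
def defects_per_fp(delivered_defects, function_points):
    return delivered_defects / function_points

# The two cases from the text: the same 100 defects on very different sizes.
small = defects_per_fp(100, 100)     # a 100 FP system: 1 defect per FP
large = defects_per_fp(100, 10_000)  # a 10,000 FP system: 0.01 defects per FP
print(small, large)
```

The raw defect count is identical; only the normalized figure shows that the second system is two orders of magnitude cleaner.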
K. At best, counting lines of code measures software from the developers' point of view.
Since function points are based on screens, reports, and other external objects, this
measure takes the users' view.
L. Basic function points, originated by Allan Albrecht in 1979 while he was with IBM,
were designed primarily for business applications using disk files, PC screens of data,
and printed reports. The evolution of feature points included algorithms, but only
counted the number of algorithms used, treating them all equally. In embedded software
engineering applications, algorithms abound; some are simplistic, some are more
complex. Designing, coding, debugging, and correctly executing those algorithms are
critical to the applications and add to the complexity of the development effort. They, in
effect, take longer to develop. How then can we consider any sizing metric without
considering the algorithm complexity characteristic?
M. We approached the algorithm problem in a manner not unlike that of the late Dr.
Maurice Halstead. While at Purdue University, he identified the commonality of
software characteristics in his software science methods [1]. Just as he observed that all
software contains four basic characteristics, unique and total operators and unique and
total operands, we observed that all algorithms contain four basic characteristics:
elements, arithmetic operators, relational operators, and logical operators. They may be
defined as follows:
a. Elements: A,B,C,D, or any variable name.
b. Arithmetic operators: add, subtract, multiply, divide, exponents.
c. Relational operators: equal, less than, greater than.
d. Logical operators: AND, OR, NOT.
N. If we count the number of elements and operators in any given algorithm, we see the
results shown in Figure 1.

Algorithm                                                  Elements  Arithmetic  Relational  Logical  Total
                                                                     Operators   Operators   Operators
1. A + B = C                                                   3         1           1          0       5
2. A*(B-D)/C = E                                               5         3           1          0       9
3. A+B+((C/G-E)*F)=D                                           7         5           1          0      13
4. IF (D lt A/B*((C/G-E*F)) AND A/B gt 0 THEN D=0             10         6           3          1      20
5. IF (A AND B) OR ((C AND (D OR (E OR F)))
   AND (G OR H)) THEN X=1                                      9         0           1          7      17

Figure 1: Algorithm Complexity.

O. In the first example, we can see that there are three elements (A,B,C), one arithmetic
operator (+), one relational operator (=), and no logical operators for a total of five
engineering function points (EFPs). As the algorithms increase in complexity, so do the
total EFPs. We can consider that the larger the point total, the greater the effort needed
to deliver a quality product, not just from the standpoint of writing code, but for correct
element definition and usage, efficient execution timing, and consistent functional
results in accordance with required action. This provides us with a consistent and
reproducible method of counting and differentiating algorithms when counting function
points in an engineering environment. The EFPs are then added to the Engineering
Function Point Summary Sheet and included in the overall calculation of unadjusted
engineering function points.
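The counting rules above can be sketched as a small tokenizer. This is a minimal illustration of the method, not a tool from the source; a real counter would parse the algorithm text properly, and the keyword handling (IF/THEN) is an assumption.

```python
import re

# Token classes per the four algorithm characteristics defined above.
ARITHMETIC = {"+", "-", "*", "/", "^"}
RELATIONAL = {"=", "<", ">", "lt", "gt"}
LOGICAL = {"AND", "OR", "NOT"}
KEYWORDS = {"IF", "THEN"}      # assumed: control keywords are not counted

def count_efp(expr):
    """Return (elements, arithmetic, relational, logical, total EFPs)."""
    tokens = re.findall(r"[A-Za-z]+|[+\-*/^=<>]", expr)
    elem = arith = rel = logic = 0
    for tok in tokens:
        if tok in KEYWORDS:
            continue
        elif tok in LOGICAL:
            logic += 1
        elif tok in RELATIONAL:
            rel += 1
        elif tok in ARITHMETIC:
            arith += 1
        else:
            elem += 1          # any variable name counts as an element
    return elem, arith, rel, logic, elem + arith + rel + logic

print(count_efp("A + B = C"))      # (3, 1, 1, 0, 5), matching Figure 1
print(count_efp("A*(B-D)/C = E"))  # (5, 3, 1, 0, 9)
```

Parentheses are ignored by the tokenizer, mirroring Figure 1, where grouping symbols are not counted toward the EFP total.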
P. Engineering function points and their tracking system provide a sizing metric very early
in the software development cycle, when it can be most useful and least expensive, and
it can be used throughout the development cycle to check progress as it is made. It
provides management with a simple, easy-to-use, flexible tool to track development
progress against the projected plan using deliverables instead of tracking staff hours
used or the number of dollars left in the project budget. It is a more definitive sizing
metric than SLOC, which is difficult to define and is not consistent across multiple
software languages. It avoids extensive intrusion into the developer's time, allowing
more opportunity for the creative function, and requires little configuration, collection,
and posting time. The tracking mechanism reflects any requested changes and
immediately shows the impact and the effort needed to keep the current time schedule
intact. The system can also provide a productivity rate and serve as a planning tool
for new projects, yielding more accurate and achievable schedules and resource
allocation.
Q. This system provides a consistent, reproducible method to predict how large a software
project will be before it has ever been designed or coded. It can then be tracked to
ensure completion of the project on time. It is independent of software language used,
hardware platform, or development process. It is as effective with a one-person team
with a small project as it is with a 100-person team with a very large project. It is not
the "silver bullet" of metrics, but it is a positive step in the right direction for measuring
software in an early, consistent, and reproducible manner.

d. Defect measurements
i. Values associated with numbers or types of defects, usually related to system size, such as
defects/1000 function points.
ii. We will define a software defect to be any flaw or imperfection in a software work product or
software process. A software defect is a manifestation of a human (software producer) mistake;
however, not all human mistakes are defects, nor are all defects the result of human mistakes.
When found in executable code, a defect is frequently referred to as a fault or a bug. A fault is an
incorrect program step, process, or data definition in a computer program. Faults are defects that
have persisted in software until the software is executable.
iii. One of the fundamental tenets of the statistical approach to software test is that it is possible to
create fault surrogates. While we cannot know the numbers and locations of faults, we can, over
time, build models based on observed relationships between faults and some other measurable
software attributes. Software faults and other measures of software quality can be known only at
the point the software has finally been retired from service. Only then can it be said that all the
relevant faults have been isolated and removed from the software system.
iv. The identification and removal of software defects constitutes the basis of the software testing
process, a fact that inevitably places increased emphasis on defect related software
measurements. Defect Distribution, Defect Density and Defect Type metrics allow the
quantification of the quality of software modules, while Defect Age, Defect Detection Rates and
Defect Response Time metrics allow for pinpointing software inspection and testing process
shortcomings. Code coverage and testing effort measurements complement the defect metrics
and provide additional software product as well as process quality indicators.
v. Errors can enter software applications from a variety of sources, including requirements
themselves, designs, source code, and “bad fixes” or secondary defects introduced during defect
repairs. The overall average for software defects in the United States for 2001 hovers around 5
defects per function point from initial requirements through one year of production. The U.S.
average for removing defects prior to delivery is about 85%, so the volume of delivered defects
averages about 0.75 defects per function point. Best-in-class organizations create only about
half as many defects, and can remove more than 96% of them before delivery to clients.
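The figures above are consistent with each other, as this quick check of the arithmetic shows (defects injected per function point, times the fraction not removed before delivery, gives delivered defects per function point):

```python
# U.S. averages cited above: ~5 defects per function point injected,
# ~85% removed before delivery.
injected_per_fp = 5.0
removal_efficiency = 0.85

delivered_per_fp = injected_per_fp * (1 - removal_efficiency)
print(round(delivered_per_fp, 2))   # 0.75 defects per function point

# Best-in-class: roughly half the defects created, more than 96% removed.
best_delivered = (injected_per_fp / 2) * (1 - 0.96)
print(round(best_delivered, 2))     # 0.1 defects per function point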
vi. The best organizations in terms of overall quality use synergistic combinations of formal
inspections, formal testing, and very complete defect measurements. It is important to note that
excellence in software quality has a very positive return on investment. When otherwise similar
projects are compared, those removing more than 95% of defects before release have shorter
schedules and lower costs than those removing less than 85%.
vii. Somewhat surprisingly, most forms of testing are less than 50% efficient in finding defects, in
that at least half of latent defects remain after the testing is finished. The most efficient defect
removal activities yet measured are formal design and code inspections. These activities average
around 65% in removal efficiency, and have topped 85%. An optimal suite of formal
inspections and test stages can top 99% in cumulative defect removal efficiency. Achieving
100% efficiency in defect removal has been observed only twice out of more than 10,000
projects examined.
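The cumulative figure above follows because each stage removes its fraction of the defects still remaining, so efficiencies combine multiplicatively through the residue rather than additively. A minimal sketch (the stage values besides the 65% inspection average are assumptions):

```python
def cumulative_efficiency(stage_efficiencies):
    """Fraction of original defects removed after all stages run in sequence."""
    remaining = 1.0
    for eff in stage_efficiencies:
        remaining *= (1 - eff)      # defects surviving this stage
    return 1 - remaining

# A design inspection (~65%) followed by an assumed 50%-efficient test stage:
print(round(cumulative_efficiency([0.65, 0.50]), 3))   # 0.825

# Several stages are needed to approach the 99% cumulative figure cited above:
print(round(cumulative_efficiency([0.65, 0.65, 0.5, 0.5, 0.5]), 3))
```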

e. Automated Complexity Tools (example):


i. Project Analyzer is a complete code review and quality control tool for Visual Basic. With
Project Analyzer's problem detection feature, you can remove unnecessary code, get
recommendations for better coding style, and check for error-prone places in your project.
A. Optimization. Detect dead code and decrease your .exe by up to 100s of kB. Find
inefficient code such as unnecessary Variants. Style. Fix that spaghetti. Enforce
programming standards. Functionality. Are you sure all the forms resize? How about
error handling?
B. Metrics. Estimate the quality of your code with metrics such as logical lines of code,
cyclomatic complexity, depth of conditional nesting, comment to code ratio, length of
names.
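One of the metrics named above, cyclomatic complexity, can be crudely estimated as decision points plus one. This keyword-counting sketch is an assumption for illustration, not how Project Analyzer works; real tools compute the metric from the parsed control-flow graph.

```python
import re

# Decision-point keywords (approximate; language-dependent).
DECISION = re.compile(r"\b(if|elif|for|while|and|or|case|catch|except)\b")

def cyclomatic_estimate(source):
    """Rough cyclomatic complexity: decision points + 1."""
    return len(DECISION.findall(source)) + 1

sample = """
if x > 0:
    for i in range(x):
        if i % 2 == 0 and i > 2:
            total += i
"""
print(cyclomatic_estimate(sample))   # 5: (if, for, if, and) + 1
```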

D. Customer Quality Evaluation Measurement Methods


a. Customer satisfaction
i. Determination of the level of service perceived by the customer including the ability to meet
requirements and overall expectation.
ii. Some of the most advanced thinking in the business world recognizes that customer
relationships are best treated as assets, and that methodical analysis of these relationships can
provide a road map for improving them. The American Customer Satisfaction Index (ACSI) was
developed to provide business with this analytical tool. The ACSI, often referred to as "the voice
of the nation’s consumer," is published quarterly in the Wall Street Journal and provides a
benchmark for success in the private sector.
A. Established in 1994, the American Customer Satisfaction Index (ACSI) is a uniform
and independent measure of household consumption experience. A powerful economic
indicator, the ACSI tracks trends in customer satisfaction and provides valuable
benchmarking insights of the consumer economy for companies, industry trade
associations, and government agencies. The ACSI is produced through a partnership of
the University of Michigan Business School, the American Society for Quality (ASQ),
and the international consulting firm, CFI Group.

iii. For the first time, the ISO 9000 quality management standard requires that registered companies
measure customer satisfaction. Many customer surveys produce misleading results due to poor
questionnaire design, inappropriate data collection methods, and invalid statistical analysis.
Customer Satisfaction Measurement for ISO 9000 explains in a clear and simple manner how to
conduct a professional customer satisfaction survey that will produce a reliable result--as well as
be consistent with the requirements of ISO 9001:2000.

b. Service-level agreements
i. The establishment of a contract with the customer to maintain an agreed upon service level for
the customer’s application(s).
ii. SERVICE LEVEL AGREEMENTS - Documents service objectives, the responsibilities of the
service provider and the customer, and the criteria and metrics for measuring performance.
iii. A service-level agreement (SLA) is an informal contract between a carrier and a customer that
defines the terms of the carrier's responsibility to the customer and the type and extent of
remuneration if those responsibilities are not met. (International Engineering Consortium)
iv. A contract between a network service provider and a customer that specifies the services the
network service provider will furnish. Services that SLAs often specify include the percentage
of time services will be available; the number of users that can be served simultaneously; help-desk
response time; and the statistics to be provided. ISPs often provide SLAs for their customers.
v. While information systems are constantly evolving, the need for systems management remains
constant. Information systems are becoming increasingly complex, and the Service-Level Agreement
(SLA) serves as a valuable tool in meeting this challenge by documenting the success of the
system in meeting needs and expectations. SLAs allow organizations to measure stated
objectives by comparing actual performance to the performance levels or standards specified by
the SLA. A SLA is a set of broadly defined, repeatable functions and processes––the output of
which is delivered to users in accordance with pre-agreed performance levels. The service can
originate from another part of the user’s enterprise or from a third party. A SLA defines the
acceptable levels of information systems (IS) performance, typically to include response time,
availability or downtime, and callback/repair-dispatch response time. The SLA is the key to
setting the users’ service expectation. Customers can set requirements for IS and network
services and weigh them against the service cost. Chargeback systems compare costs to expected
benefits and provide benefits (i.e., payment of "failure credits," monetary or otherwise) to clients
if the service provider fails to achieve the agreed upon service levels. Customers and service
providers are encouraged to discuss a number of different scenarios that may lead to
compensation by the service provider if expected outcomes are not met. Customers should
understand what the service provider considers an acceptable level of performance. For example,
the customer’s definition of downtime may be quite different from the definition used by the
service provider. Customers should ensure they have an accurate understanding of the terms of
the agreement.
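The failure-credit mechanism described above can be sketched as follows. The availability target, period length, and credit schedule here are assumptions for illustration; real SLAs negotiate these figures.

```python
AGREED_AVAILABILITY = 0.995       # assumed target: 99.5% uptime per month
CREDIT_PER_TENTH_POINT = 500.0    # assumed credit per 0.1% shortfall

def sla_review(minutes_down, minutes_in_period=30 * 24 * 60):
    """Return (measured availability, failure credit owed to the customer)."""
    availability = 1.0 - minutes_down / minutes_in_period
    if availability >= AGREED_AVAILABILITY:
        return availability, 0.0                  # service level met
    shortfall_tenths = (AGREED_AVAILABILITY - availability) / 0.001
    return availability, shortfall_tenths * CREDIT_PER_TENTH_POINT

print(sla_review(60))    # within target: no credit owed
print(sla_review(432))   # 99.0% availability: credit owed
```

Note how the customer's and provider's definitions of "downtime" (the `minutes_down` input) must agree before a calculation like this means anything, which is exactly the point made above.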
vi. The SLA is an insurance policy of sorts. It ensures that the organization understands and works
in sync with business goals, while ensuring that end users have a factual understanding of
network realities. An SLA contains a set of definitions that identifies what the service
deliverables are, and when and where they are delivered. The SLA should not describe how the
service is delivered. SLAs generally include minimum standards related to the following:
A. Performance/service measures (i.e., network availability, reliability, serviceability,
network response time, and user satisfaction)
B. Constraints (i.e., workload, conformance requirements, rules and regulations, and
dependencies)
C. Price or charge to the customer for use of the service
vii. SLAs should be short yet precisely define the services and the level of services to be provided.
At a minimum, the following points should be addressed in the SLA:
A. Scope: includes the purpose, objectives, background information, major players,
procedures, and reference documents
B. Supported Environment: defines the hardware, software and network
C. Technical Support: provides an overview of the support services provided during
business hours and after hours. Defines help desk support, network infrastructure
support and server maintenance, data back-up and recovery, and support for other
operational problems
D. Response-time Goals: includes hardware and software orders, network connections,
archive-data-special file recovery and server support
E. Support Staffing: (within IS operations) should include security administration and
customer services. Customer service departments should be staffed with resource
managers and technology coordinators.
F. Service-level Partnerships: outline the relationships established with providers

c. Evaluation methods
i. An understanding and application of the basic techniques needed to provide consistent reliable
results, permitting objective evaluation. (A common method is a questionnaire.)
ii. A basic and effective baseline customer satisfaction survey program should focus on measuring
customer perceptions of how well the company delivers on the critical success factors and
dimensions of the business as defined by the customer; for example, service promptness,
courtesy of staff, responsiveness, understanding of the customer's problem, etc. The findings on
company performance should be analyzed both across all customers and by key segments of the
customer population.
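The overall-versus-segment analysis described above can be sketched like this. The dimensions, segment names, and 1-5 scale are assumptions for illustration:

```python
# Hypothetical survey responses, each scoring dimensions on a 1-5 scale.
responses = [
    {"segment": "enterprise", "promptness": 4, "courtesy": 5, "understanding": 3},
    {"segment": "enterprise", "promptness": 3, "courtesy": 4, "understanding": 4},
    {"segment": "small_business", "promptness": 5, "courtesy": 4, "understanding": 5},
]

def mean_score(dimension, segment=None):
    """Average a dimension across all customers, or within one key segment."""
    rows = [r for r in responses if segment is None or r["segment"] == segment]
    return sum(r[dimension] for r in rows) / len(rows)

print(mean_score("promptness"))                  # all customers: 4.0
print(mean_score("promptness", "enterprise"))    # key segment: 3.5
```

Comparing the two figures shows why segment-level analysis matters: the overall average can mask a weaker result in an important customer group.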

E. Risk Analysis
a. Using quantitative data to manage
i. Methods for using quantitative data as a management tool.
b. In any software development project, we can group risks into four categories.
i. Financial risks: How might the project overrun the budget?
ii. Schedule risks: How might the project exceed the allotted time?
iii. Feature risks: How might we build the wrong product?
iv. Quality risks: How might the product lack customer-satisfying behaviors or possess customer-
dissatisfying behaviors?
c. Testing allows us to assess the system against the various risks to system quality, which allows the
project team to manage and balance quality risks against the other three areas.
d. It’s important for test professionals to remember that many kinds of quality risks exist. The most obvious
is functionality: Does the software provide all the intended capabilities?
e. Other classes of quality risks:
i. Use cases: working features fail when used in realistic sequences.
ii. Robustness: common errors are handled improperly.
iii. Performance: the system functions properly, but too slowly.
iv. Localization: problems with supported languages, time zones, currencies, etc.
v. Data quality: a database becomes corrupted or accepts improper data.
vi. Usability: the software’s interface is cumbersome or inexplicable.
vii. Volume/capacity: at peak or sustained loads, the system fails.
viii. Reliability: too often—especially at peak loads—the system crashes, hangs, kills sessions, and
so forth.
f. The priority of a risk to system quality arises from the extent to which that risk can and might affect the
customers’ and users’ experiences of quality. In other words, the more likely a problem or the more
serious the impact of a problem, the more testing that problem area deserves. You can prioritize in a
number of ways. One approach I like is to use a descending scale from one (most risky) to five (least
risky) along three dimensions.
i. Severity: How dangerous is a failure of the system in this area?
ii. Priority: How much does a failure of the system in this area compromise the value of the product
to customers and users?
iii. Likelihood: What are the odds that a user will encounter a failure in this area, either due to usage
profiles or the technical risk of the problem?
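The three-dimension scoring above can be combined by multiplying the ratings, so the lowest products identify the areas deserving the most testing. The sample risks and their ratings are assumptions for illustration:

```python
# Each quality risk rated 1 (most risky) to 5 (least risky) on
# (severity, priority, likelihood), per the three dimensions above.
risks = {
    "database corruption": (1, 1, 3),
    "slow report generation": (3, 2, 2),
    "rare localization glitch": (4, 4, 5),
}

def risk_score(ratings):
    severity, priority, likelihood = ratings
    return severity * priority * likelihood

# Lowest score first = test first.
for name in sorted(risks, key=lambda n: risk_score(risks[n])):
    print(f"{risk_score(risks[name]):3d}  {name}")
```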

9. Extra Information
A. CRITICAL PATH - A series of dependent tasks for a project that must be completed as planned to keep
the entire project on schedule.
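A toy calculation for the definition above, using an assumed task graph: the critical path is the dependent chain with the longest total duration, so any slip along it delays the whole project.

```python
from functools import lru_cache

# Hypothetical project: task -> (duration in days, prerequisite tasks).
tasks = {
    "design": (5, []),
    "code":   (10, ["design"]),
    "test":   (4, ["code"]),
    "docs":   (3, ["design"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Earliest finish day: own duration plus the slowest prerequisite chain."""
    duration, prereqs = tasks[task]
    return duration + max((earliest_finish(p) for p in prereqs), default=0)

print(max(earliest_finish(t) for t in tasks))  # 19 days: design -> code -> test
```

Here "docs" has slack (it finishes on day 8), while design, code, and test form the critical path: delaying any of them delays the 19-day finish.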

B. DEVELOPMENT COSTS - Development costs include personnel costs; computer usage, training,
supply, and equipment costs; and the cost of any new computer equipment and software. In addition,
costs associated with the installation and start-up of the new system must be calculated.

C. END USER - The individual or group who will use the system for its intended operational use when it is
deployed in its environment.

D. PROTOTYPE - An active model for end-users to see, touch, feel, and experience. It is the working
equivalent of a paper design specification, with one exception - errors can be detected earlier. However,
you cannot substitute any prototype for a paper specification. Prototyping is a complement to other
methodologies.

E. TECHNICAL REQUIREMENTS - Any requirements related to software, development, or maintenance
work (e.g., response time). Those requirements that describe what the software must do and its
operational constraints. Examples of technical requirements include functional, performance, interface,
and quality requirements.

F. TEST and EVALUATION PLAN - A system life cycle documentation standard that identifies high-level
requirements and defines the objectives and overall structure of the test and evaluation for a system. It
provides the framework within which detailed test and evaluation plans are generated. It details the test
strategy, schedule, and resource requirements for test and evaluation. It relates program schedule, test
management strategy and structure, and required resources to critical operational issues, key performance
parameters and operational performance parameters (threshold and objective criteria), and critical
technical parameters, derived from the Operational Requirements Document, evaluation criteria, and
major decisions.

G. TEST ARCHITECTURE - The high-level design of a planned application software test. A test
architecture includes: (1) a structural blueprint, i.e., a hypothetical user environment intentionally
constructed to be sufficiently diverse and complex to support execution of all relevant test cases, (2) a
definition of the test time dimension (the time span covered by the test and the division of that time span
into discrete periods, and (3) a definition of the overall processing sequence for the test.

H. TEST CASE - An assertion concerning the functioning of an application software entity, the truth of
which must be demonstrated through testing in order to conclude that the entity meets established
user/design requirements.
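Per the definition above, a test case is an assertion about the software whose truth is demonstrated by executing it. A minimal sketch, where the function under test is a hypothetical example:

```python
def apply_discount(price, percent):
    """Hypothetical function under test: reduce price by a percentage, to the cent."""
    return round(price * (1 - percent / 100), 2)

# Test case: "applying a 10% discount to 200.00 yields 180.00" -- an
# assertion whose truth is demonstrated by running it.
assert apply_discount(200.00, 10) == 180.00
print("test case passed")
```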

I. TEST DATA - Files, records, and data elements created by users, analysts, and developers to test
requirements, design specifications, and software code. Samples of live data may be used for test data if
they are analyzed, and supplemented as necessary, to determine completeness in terms of all conditions
which can occur. There is no standard format for test data.

J. TEST PLAN - A tool for directing the software testing which contains the orderly schedule of events and
list of materials necessary to effect a comprehensive test of a complete application. Those parts of the
document directed toward the user staff personnel should be presented in noncomputer-oriented language,
and those parts of the document directed toward other personnel should be presented in suitable
terminology.
a. A formal or informal plan for carrying out a particular test that: (1) defines tasks to be
performed, (2) specifies sequential dependencies among the tasks, (3) defines resources required
to accomplish each task, (4) schedules task starts and completions, and (5) links, via an initial
traceability matrix, test tasks to pertinent user/design requirement.

K. TEST REQUIREMENTS - A description of the test which must be executed to verify a system/software
requirement. This is part of the traceability matrix. Test requirements should generally exist at levels
corresponding to the requirements.

L. TEST SCRIPT - A system life cycle documentation standard that is the design specification for a test run.
It defines the test cases to be executed, required set up procedures, required execution procedures, and
required evaluation procedures.

M. VERIFICATION - The process of evaluating software to determine whether the products of a given
development phase satisfy the conditions imposed at the start of that phase (IEEE-STD-610).

N. COMPLETE TESTING - erroneously used to mean 100% branch coverage. The notion is specific to a
test selection criterion: i.e., testing is "complete" when the tests specified by the criterion have been
passed. Absolutely complete testing is impossible.*
* - Quoted from the Glossary/Index of Software Testing Techniques - 2nd ed. by Boris Beizer, (ISBN 0-442-20672-0)

O. Software Work Product - any artifact created as part of the software process, including computer
programs, plans, procedures, and associated documentation and data [CMU/SEI 91].

P. Software process - a set of activities, methods, practices, and transformations that people use to develop
and maintain software work products [CMU/SEI 91].

Postings from CQAfolks (Yahoo)

 Testing Standards:
Testing standards are mainly those set by ISO, IEEE, NIST, SEI-CMM, or DoD. There are also standards set by
the British for their own companies to follow. The V-Model is only an approach to testing, i.e., it demonstrates which
type of testing is to be executed in parallel with the phases of the SDLC. The V-Model is not a standard.

 Can anyone tell me the procedure to arrive at the metrics baselines report?
I believe it varies from organisation to organisation when they update the MBR; otherwise, to make
the new MBR we should have at least 4-8 data points, meaning projects of the same nature or technology. For
revision we can take the existing and closed projects in a defined duration; say, in my organisation we do it every
3 months. For existing projects we take only those phases which are complete. - Ankur Handa

 CMM & CMMI certain facts:


CMM or CMMI? How long will it take? Can a transition from ISO to CMMI happen?
1. Given a profile with an ISO 9001:2000 certificate and process focus in the organization, a CMM Level 4 initiative
can get you to a tangible milestone at an early date (say 9 to 12 months).
2. A CMMI Level 4 initiative could run for much longer (say 18 months and above).
3. Costs for a CMMI assessment would be around 45,000 USD, while those for CMM would be around 28,000 USD,
depending upon the service provider.
4. It is not correct that CMM is being phased out by the end of 2003. CMM is definitely on until the end of 2005, if not later
than that. We have confirmed this with SEI and the same is indicated on SEI's web site as well. This is
misinformation prevailing in the market.
5. A CMM Level 4 initiative would produce tangible results for you; you can reach a milestone in a
comparatively short duration, and it can act as a strong springboard to launch your further initiatives.
6. Currently, organizations that have been assessed for CMM Level 5 are
finding that they need about 15 months to achieve a proper CMMI Level 5 implementation.

 How to prepare a Project Closure Report: (courtesy of Advait, “advaitslele@indiatimes.com”)
A project closure report can be prepared either as a formal project closure report or in the form of a
checklist. So, when you are in the installation phase and all the installation activities are over at the client's site,
you can fill in the report and get it countersigned by the client. In the report you can have columns to check whether
all the requirements are met and whether the software installed is running without hiccups. After a careful examination
of the system at the client's site after installation (you can have the client sit beside you while this check is being
performed), you can fill up the report / checklist.

Closure of Project report needs to address following topics

1) General Information: Project description, technology used, duration, team size, etc. Metrics like:
productivity achieved vs. planned, quality of product (include acceptance defects), customer complaints
received and resolution

2) Process Details: Methodology used, deviations in the process, tailoring of the templates, checklists, guidelines, if
any.

3) Tools used: Tools used in the coding phase, testing tools, project management tools, CM tools, etc.

4) Risk Management: Risk assessment details, contingency plan, etc.

5) Metrics for all the phases


a) Effort Estimation
b) Schedule Phase wise
c) Test Defects
d) Review defects
e) Pareto analysis
f) Cause and effect analysis
g) Defect leakage across phases
h) Defect density
i) Any other metrics as required by the customer

6) SWOT analysis of the project

7) Overall conclusion of the project.

Software Engineering Paradigms

Also known as Software Process Models, these are various strategies for the successful solution of software engineering
problems. They mainly address what the software development process is all about and how it can best be
controlled. Through the years, various models emphasizing the product or process view to one degree or another have been
proposed and discussed in the literature. The following is a quick list of these models:

Linear Sequential Model


Also known as the waterfall model as well as the SDLC (Software Development Life Cycle), this model involves well-
defined steps or phases in the software development process. These steps are:
1. Analysis
2. Design
3. Coding
4. Testing

The Prototyping Model


This model skips the rigorous steps of requirements analysis and specification, since the customer is
unsure about what is required and the developer would like to make sure the implementation is not going to be a
big problem. After a cursory requirements gathering and a quick design, the developer builds a prototype. A
prototype is a scaled-down version of the full system and serves for further clarification of the requirements. It's a
cyclic process, as shown below:
1. Listen to customer
2. Build/revise mock-up
3. Have customer test-drive the new prototype, then start all over

The RAD Model


This Rapid Application Development model is essentially an SDLC with an extremely short life cycle. It is a high-
speed adaptation of the SDLC using a component-based construction approach. It relies heavily on software reuse, which
is a big issue in and of itself. This approach involves the following phases:
1. Business Modeling
2. Data Modeling
3. Process Modeling
4. Application generation
5. Testing and turnover

The Evolutionary Models


These models are based on the evolutionary nature of the software development process. Business and product
requirements change, and the details and extensions of the core software need to adapt to these changes.
Therefore, limited versions are introduced to meet the competitive requirements. In an iterative manner,
increasingly more complex versions of the software are developed.

The Incremental Model


The first increment is normally referred to as the core product; thereafter, using the prototyping philosophy, more
sophisticated versions are developed with new features.

The Spiral Model


Originally proposed by Boehm, this is an evolutionary model which combines the iterative nature of prototyping
with the systematic aspects of the life cycle model. Software is developed in a series of incremental releases. The early
iterations may be paper models or simple prototypes. In later increments, increasingly complete versions are engineered.
Since it emphasizes the evolutionary nature of the software development process, where customer and developer
both understand and react better at each step, this model is highly realistic. But, as with the earlier models, this model
is not a panacea, and requires careful assessment and analysis of risks at each step. This model is fairly new and has
not yet been assessed for efficacy.

The Component Assembly Model


This is essentially a spiral model with a technical framework based on object technologies. It is an attempt to
produce the iterative versions of the product from prepackaged software components corresponding to object
classes.

The Concurrent Development Model


Also known as concurrent engineering, this model is based on keeping track of many phases of the project
simultaneously. The activities associated with various phases are grouped together and defined as a state. State
transition diagrams are used for keeping track of and controlling the various phases. When applied to client-server
models, this model defines activities in two dimensions: the system dimension and the component dimension. In
essence, this model is based on using appropriate models --as above-- for different components of a large system at
the same time. Rather than defining a simple sequence of activities, this model defines a network of clusters of
activities.

The following list defines terms that could be relevant for evaluation purposes:
Assessment: An action of applying specific documented assessment criteria to a specific software module, package or
product for the purpose of determining acceptance or release of the software module, package or product. (ISO 9126: 1991,
3.1)

Customer: Ultimate consumer, user, client, beneficiary or second party. (ISO 9004: 1987, 3.4)

Defect: The nonfulfilment of intended usage requirements. (ISO 8402: 1986, 3.21)

Features: Features are identified properties of a software product which can be related to the quality characteristics. (ISO
9126: 1991, 3.2)

Firmware: Hardware that contains a computer program and data that cannot be changed in its user environment. The
computer program and data contained in firmware are classified as software; the circuitry containing the computer program
and data is classified as hardware. (ISO 9126: 1991, 3.3)

Inspection: Activities such as measuring, examining, testing, gauging one or more characteristics of a product or service
and comparing these with specified requirements to determine conformity. (ISO 8402: 1986, 3.14)
Level of performance: The degree to which the needs are satisfied, represented by a specific set of values for the quality
characteristics. (ISO 9126: 1991, 3.4)

Liability (product/service): A generic term used to describe the onus on a producer or others to make restitution for loss
related to personal injury, property damage or other harm caused by a product or service. (ISO 8402: 1986, 3.19)

Measurement: The action of applying a software quality metric to a specific software product. (ISO 9126: 1991, 3.5)

Nonconformity: The nonfulfilment of specified requirements. (ISO 8402: 1986, 3.20)


NOTE -- The basic difference between `nonconformity' and `defect' is that specified requirements may differ from the
requirements for the intended use. (ISO 8402: 1986, 3.20)

Quality: The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied
needs. (ISO 8402: 1986, 3.1)

Quality assurance: All those planned and systematic actions necessary to provide adequate confidence that a product or
service will satisfy given requirements for quality. (ISO 8402: 1986, 3.6)

Quality control: The operational techniques and activities that are used to fulfill requirements for quality. (ISO 8402: 1986,
3.7)

Quality surveillance: The continuing monitoring and verification of the status of procedures, methods, conditions,
processes, products and services, and analysis of records in relation to stated references to ensure that specified
requirements for quality are being met. (ISO 8402: 1986, 3.11)

Rating: The action of mapping the measured value to the appropriate rating level. Used to determine the rating level
associated with the software for a specific quality characteristic. (ISO 9126: 1991, 3.7)

Rating level: A range of values on a scale to allow software to be classified (rated) in accordance with the stated or implied
needs. Appropriate rating levels may be associated with the different views of quality, i.e. users, managers or developers.
These levels are called rating levels. (ISO 9126: 1991, 3.8)

Recoverability: Attributes of software that bear on the capability to re-establish its level of performance and recover the
data directly affected in case of a failure and on the time and effort needed for it. (ISO 9126: 1991, A.2.2.3)

Reliability: The ability of an item to perform a required function under stated conditions for a stated period of time. The
term `reliability' is also used as a reliability characteristic denoting a probability of success or a success ratio. (ISO 8402:
1986, 3.18)

Replaceability: Attributes of software that bear on the opportunity and effort of using it in the place of specified other
software in the environment of that software. (ISO 9126: 1991, A.2.6.4)

Resource behaviour: Attributes of software that bear on the amount of resources used and the duration of such use in
performing its function. (ISO 9126: 1991, A.2.4.2)

Security: Attributes of software that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to
programs and data. (ISO 9126: 1991, A.2.1.5)

Software: Intellectual creation comprising the programs, procedures, rules and any associated documentation pertaining to
the operation of a data processing system. (ISO 9000-3: 1991, 3.1)

Software product: Complete set of computer programs, procedures and associated documentation and data designated for
delivery to a user. (ISO 9000-3: 1991, 3.2)

Software item: Any identifiable part of a software product at an intermediate step or at the final step of development. (ISO
9000-3: 1991, 3.3)

Software quality: The totality of features and characteristics of a software product that bear on its ability to satisfy stated or
implied needs. (ISO 9126: 1991, 3.11)

Software quality assessment criteria: The set of defined and documented rules and conditions which are used to decide
whether the total quality of a specific software product is acceptable or not. The quality is represented by the set of rated
levels associated with the software product. (ISO 9000-3: 1991, 3.12)
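One way to read this definition is as a documented decision rule over the set of rated levels: the product is acceptable only if every characteristic meets an agreed minimum level. The level ordering, characteristic names, and minimum below are assumed for illustration.

```python
# Hypothetical assessment criterion: every quality characteristic
# must reach at least a minimum rating level. Names are illustrative.
LEVEL_ORDER = ["unacceptable", "acceptable", "good", "excellent"]

def is_acceptable(rated_levels, minimum="acceptable"):
    """Decide acceptance from the set of rated levels for a product."""
    threshold = LEVEL_ORDER.index(minimum)
    return all(LEVEL_ORDER.index(level) >= threshold
               for level in rated_levels.values())

product = {"reliability": "good", "usability": "acceptable", "efficiency": "excellent"}
print(is_acceptable(product))  # -> True
```

A real criterion might weight characteristics differently or require higher minimums for safety-relevant attributes; the point is only that the rule is defined and documented before the assessment is made.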

Software quality characteristics: A set of attributes of a software product by which its quality is described and evaluated. A
software quality characteristic may be refined into multiple levels of sub-characteristics. (ISO 9126: 1991, 3.13)

Software quality metric: A quantitative scale and method which can be used to determine the value a feature takes for a
specific software product. (ISO 9126: 1991, 3.14)
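A commonly used example of such a quantitative scale is defect density, i.e. defects per thousand lines of code. The metric choice and units here are an illustrative assumption; ISO 9126 does not prescribe a particular metric.

```python
# Illustrative software quality metric: defect density in defects per KLOC.
def defect_density(defect_count, lines_of_code):
    """Apply the metric to a specific software product (a 'measurement')."""
    return defect_count / (lines_of_code / 1000.0)

print(defect_density(12, 24_000))  # -> 0.5 defects per KLOC
```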

Specification: The document that prescribes the requirements with which the product or service has to conform. (ISO 8402:
1986, 3.22)

Stability: Attributes of software that bear on the risk of unexpected effect of modifications. (ISO 9126: 1991, A.2.5.3)

Suitability: Attribute of software that bears on the presence and appropriateness of a set of functions for specified tasks.
(ISO 9126: 1991, A.2.1.1)

Testability: Attributes of software that bear on the effort needed for validating the modified software. (ISO 9126: 1991,
A.2.5.4)

Time behaviour: Attributes of software that bear on response and processing times and on throughput rates in performing
its function. (ISO 9126: 1991, A.2.4.1)

Understandability: Attributes of software that bear on the users' effort for recognizing the logical concept and its
applicability. (ISO 9126: 1991, A.2.3.1)

Usability: A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated
or implied set of users. (ISO 9126: 1991, 4.3)

Validation (for software): The process of evaluating software to ensure compliance with specified requirements. (ISO
9000-3: 1991, 3.7)

Verification (for software): The process of evaluating the products of a given phase to ensure correctness and consistency
with respect to the products and standards provided as input to that phase. (ISO 9000-3: 1991, 3.6)

