
SOFTWARE ENGINEERING

UNIT-IV
School of CSA
Dr.Hemanth.K.S
Software Quality Management Process
& Advanced Topics

UNIT-IV

Software Quality Management Process:


Quality Concepts, Software Quality Assurance, Testing Strategies, Security
Engineering, Software Configuration Management Process.

Advanced Topics: Software Process Improvement, The SPI Process, Trends and Return
of Investment, Technology Directions.

3
SOFTWARE QUALITY MANAGEMENT PROCESS

❖ Quality Concepts
❖ Software Quality Assurance
❖ Testing Strategies
❖ Security Engineering
❖ Software Configuration Management Process

4
SOFTWARE QUALITY

• Why quality? To reduce the amount of rework.

• Steps to achieve high quality:


➢ proven s/w engineering process and practice
➢ solid project management
➢ comprehensive quality control
➢ presence of a quality assurance infrastructure.

• Bad software plagues nearly every organization that uses computers, causing lost
work hours during computer downtime, lost or corrupted data, missed sales
opportunities, high IT support and maintenance costs, and low customer
satisfaction
5
QUALITY
• The American Heritage Dictionary defines quality as
“a characteristic or attribute of something.”
• For software, two kinds of quality may be encountered:
Quality of design encompasses requirements, specifications, and the
design of the system.
Quality of conformance is an issue focused primarily on
implementation.
• User satisfaction = compliant product + good quality + delivery within
budget and schedule

6
QUALITY—A PRAGMATIC VIEW

• The transcendental view argues that quality is something that you immediately
recognize, but cannot explicitly define.
• The user view sees quality in terms of an end-user’s specific goals. If a product
meets those goals, it exhibits quality.
• The manufacturer’s view defines quality in terms of the original specification of
the product. If the product conforms to the spec, it exhibits quality.
• The product view suggests that quality can be tied to inherent characteristics
(e.g., functions and features) of a product.
• Finally, the value-based view measures quality based on how much a customer is
willing to pay for a product. In reality, quality encompasses all of these views and
more.
7
SOFTWARE QUALITY

Software quality can be defined as:


An effective software process applied in a manner that creates a useful product that
provides measurable value for those who produce it and those who use it.

8
EFFECTIVE SOFTWARE PROCESS

• An effective software process establishes the infrastructure that supports any


effort at building a high-quality software product.
• The management aspects of process create the checks and balances that
help avoid project chaos—a key contributor to poor quality.
• Software engineering practices allow the developer to analyze the problem
and design a solid solution—both critical to building high quality software.
• Finally, umbrella activities such as change management and technical
reviews have as much to do with quality as any other part of software
engineering practice.

9
USEFUL PRODUCT

• A useful product delivers the content, functions, and features that the end-
user desires
• But just as important, it delivers these assets in a reliable, error-free way.
• A useful product always satisfies those requirements that have been
explicitly stated by stakeholders.
• In addition, it satisfies a set of implicit requirements (e.g., ease of use) that
are expected of all high-quality software.

10
ADDING VALUE
• By adding value for both the producer and user of a software product, high quality software
provides benefits for the software organization and the end-user community.

• The software organization gains added value because high quality software requires less
maintenance effort, fewer bug fixes, and reduced customer support.

• The user community gains added value because the application provides a useful
capability in a way that expedites some business process.

• The end result is:


(1) greater software product revenue,

(2) better profitability when an application supports a business process, and/or

(3) improved availability of information that is crucial for the business.

11
QUALITY DIMENSIONS

David Garvin [Gar87]:


• Performance Quality. Does the software deliver all content, functions, and features that are
specified as part of the requirements model in a way that provides value to the end-user?

• Feature quality. Does the software provide features that surprise and delight first-time end-
users?

• Reliability. Does the software deliver all features and capability without failure? Is it available
when it is needed? Does it deliver functionality that is error free?

• Conformance. Does the software conform to local and external software standards that are
relevant to the application? Does it conform to de facto design and coding conventions?

12
QUALITY DIMENSIONS

• Durability. Can the software be maintained (changed) or corrected (debugged) without the
inadvertent generation of unintended side effects? Will changes cause the error rate or reliability
to degrade with time?

• Serviceability. Can the software be maintained (changed) or corrected (debugged) in an


acceptably short time period? Can support staff acquire all the information they need to make
changes or correct defects?

• Aesthetics. Most of us would agree that an aesthetic entity has a certain elegance, a unique flow,
and an obvious “presence” that are hard to quantify but evident, nonetheless.

• Perception. In some situations, you have a set of prejudices that will influence your perception of
quality.

13
OTHER VIEWS

• McCall’s quality factors that affect software quality (McCall, Richards, and Walters):

14
MCCALL’S QUALITY FACTOR
• Correctness: The extent to which a program satisfies its specification and fulfills the customer’s
mission objectives.
• Reliability: extent to which a program can be expected to perform its intended function with
required precision.
• Efficiency: the amount of computing resources and code required by a program to perform its
function.
• Integrity: Extent to which access to software or data by unauthorized persons can be controlled.
• Usability: Effort required to learn, operate, prepare input for and interpret output of a program.
• Maintainability: Effort required to locate and fix an error in a program.
• Flexibility: Effort required to modify an operational program.
• Testability: effort required to test a program to ensure that it performs its intended function.
• Portability: Effort required to transfer the program from one hardware and / or software system
environment to another.
• Reusability: part of a program can be reused in other applications
• Interoperability: effort required to couple one system to another.

15
ISO 9126 QUALITY FACTORS

16
TARGETED FACTORS

17
TARGETED FACTORS

18
THE SOFTWARE QUALITY DILEMMA

• If you produce a software system that has terrible quality, you lose because no one will
want to buy it.
• If on the other hand you spend infinite time, extremely large effort, and huge sums of
money to build the absolutely perfect piece of software, then it's going to take so long to
complete, and it will be so expensive to produce that you'll be out of business anyway.
• Either you missed the market window, or you simply exhausted all your resources.
• So, people in industry try to get to that magical middle ground where the product is good
enough not to be rejected right away, such as during evaluation, but also not the object of
so much perfectionism and so much work that it would take too long or cost too much to
complete. [Ven03]

19
“GOOD ENOUGH” SOFTWARE

• Good enough software delivers high quality functions and features that end-users desire, but at the
same time it delivers other more obscure or specialized functions and features that contain known
bugs.

• Arguments against “good enough.”


• It is true that “good enough” may work in some application domains and for a few major software
companies. After all, if a company has a large marketing budget and can convince enough people to buy
version 1.0, it has succeeded in locking them in.

• If you work for a small company, be wary of this philosophy. If you deliver a “good enough” (buggy) product,
you risk permanent damage to your company’s reputation.

• You may never get a chance to deliver version 2.0 because bad buzz may cause your sales to plummet and
your company to fold.

• If you work in certain application domains (e.g., real-time embedded software, or application software that is
integrated with hardware), “good enough” can be negligent and open your company to expensive litigation.

20
COST OF QUALITY
• Quality costs time and money, but a lack of quality also has a cost.
• Prevention costs are incurred to keep defects from occurring in the first place (e.g., quality planning and training).
• Appraisal costs include the cost of conducting technical reviews, the cost of data collection and metrics evaluation, and the cost of testing and debugging.
• Internal failure costs are incurred when a defect is detected before the product is shipped (e.g., rework and repair).
• External failure costs are associated with defects found after the product has been shipped (e.g., complaint resolution, product return, and support).

21
COST
• The relative costs to find and repair an error or defect increase dramatically as we go from prevention to detection to internal failure to external failure costs.

22
RISK

• “People bet their jobs, their comforts, their safety, their entertainment, their
decisions, and their very lives on computer software. It better be right.”
• The implication is that the low-quality software increases risks for both developer
and user.
Example:
• Throughout the month of November 2000 at a hospital in Panama, 28 patients
received massive overdoses of gamma rays during treatment for a variety of
cancers. In the months that followed, five of these patients died from radiation
poisoning and 15 others developed serious complications. What caused this
tragedy? A software package, developed by a U.S. company, was modified by
hospital technicians to compute modified doses of radiation for each patient.

23
NEGLIGENCE AND LIABILITY
• The story is all too common. A governmental or corporate entity hires a major software developer
or consulting company to analyze requirements and then design and construct a software-based
“system” to support some major activity.
• The system might support a major corporate function (e.g., pension management) or some
governmental function (e.g., healthcare administration or homeland security).
• Work begins with the best of intentions on both sides, but by the time the system is delivered,
things have gone bad.
• The system is late, fails to deliver desired features and functions, is error-prone, and does not
meet with customer approval.
• Litigation ensues.
• In most cases, the customer claims that the developer has been negligent and is therefore not entitled
to payment.
• The developer, in turn, claims that the customer has repeatedly changed requirements.
• But in every case, the quality of the delivered system comes into question.
24
QUALITY AND SECURITY

• “Software security relates entirely and completely to quality. You must think
about security, reliability, availability, dependability—at the beginning, in the
design, architecture, test, and coding phases, all through the software life cycle
[process].

• Even people aware of the software security problem have focused on late life-
cycle stuff. The earlier you find the software problem, the better. And there are
two kinds of software problems. One is bugs, which are implementation
problems. The other is software flaws—architectural problems in the design.
People pay too much attention to bugs and not enough on flaws.”

25
IMPACT OF MANAGEMENT ACTIONS

• Risk – oriented decisions


• Estimation decisions
• Scheduling decisions

26
ACHIEVING SOFTWARE QUALITY

• Critical success factors:


• Software Engineering Methods
• Project Management Techniques
• Quality Control
• Quality Assurance

27
SOFTWARE QUALITY ASSURANCE(SQA)

Topics covered
1. Elements of SQA
2. SQA Tasks, Goals and Metrics
3. Statistical Software Quality Assurance
4. Software Reliability

28
ELEMENTS OF SQA
• Standards
• Reviews and Audits
• Testing
• Error/defect collection and analysis
• Change management
• Education
• Vendor management
• Security management
• Safety
• Risk management
29
ROLE OF THE SQA GROUP-I

• Prepares an SQA plan for a project.

• The plan identifies


• evaluations to be performed
• audits and reviews to be performed
• standards that are applicable to the project
• procedures for error reporting and tracking
• documents to be produced by the SQA group
• amount of feedback provided to the software project team

• Participates in the development of the project’s software process description.

• The SQA group reviews the process description for compliance with organizational policy,
internal software standards, externally imposed standards (e.g., ISO-9001), and other parts of
the software project plan.
30
ROLE OF THE SQA GROUP-II
Reviews software engineering activities to verify compliance with the defined software process.

• Identifies, documents, and tracks deviations from the process and verifies that corrections have been
made.

Audits designated software work products to verify compliance with those defined as part of the software
process.

• reviews selected work products; identifies, documents, and tracks deviations; verifies that corrections
have been made

• periodically reports the results of its work to the project manager.

Ensures that deviations in software work and work products are documented and handled according to a
documented procedure.

Records any noncompliance and reports to senior management.

• Noncompliance items are tracked until they are resolved.


31
SQA GOALS

• Requirements quality. The correctness, completeness, and consistency of the


requirements model will have a strong influence on the quality of all work products that
follow.
• Design quality. Every element of the design model should be assessed by the software
team to ensure that it exhibits high quality and that the design itself conforms to
requirements.
• Code quality. Source code and related work products (e.g., other descriptive information)
must conform to local coding standards and exhibit characteristics that will facilitate
maintainability.
• Quality control effectiveness. A software team should apply limited resources in a way
that has the highest likelihood of achieving a high quality result.

32
Statistical SQA
Product and process measurement:
• Collect information on all defects.
• Find the causes of the defects.
• Move to provide fixes for the process.
The result: an understanding of how to improve quality.

33
STATISTICAL SQA
• Information about software errors and defects is collected and categorized.
• An attempt is made to trace each error and defect to its underlying cause (e.g., non-
conformance to specifications, design error, violation of standards, poor communication
with the customer).
• Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all
possible causes), isolate the 20 percent (the vital few).
• Once the vital few causes have been identified, move to correct the problems that have
caused the errors and defects.
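The Pareto step above can be sketched in code. This is a minimal sketch: the defect log, the cause names, and the 80 percent threshold are invented for illustration, not data from any real project.

```python
from collections import Counter

def vital_few(defects, threshold=0.8):
    """Return the smallest set of causes that accounts for at least
    `threshold` (default 80%) of all logged defects."""
    counts = Counter(d["cause"] for d in defects)
    total = sum(counts.values())
    cumulative, selected = 0, []
    for cause, n in counts.most_common():
        selected.append(cause)
        cumulative += n
        if cumulative / total >= threshold:
            break
    return selected

# Hypothetical defect log: each record names its underlying cause.
log = ([{"cause": "incomplete spec"}] * 12
       + [{"cause": "violation of standards"}] * 5
       + [{"cause": "design error"}] * 2
       + [{"cause": "poor communication"}])
print(vital_few(log))   # the 'vital few' causes behind most defects
```

Once the vital few are isolated this way, corrective effort is concentrated on them rather than spread across every cause.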

34
SIX-SIGMA FOR SOFTWARE ENGINEERING

• The term “six sigma” is derived from six standard deviations—3.4 instances (defects) per million
occurrences—implying an extremely high quality standard.

• The Six Sigma methodology defines three core steps:
• Define customer requirements, deliverables, and project goals via well-defined methods of customer
communication.
• Measure the existing process and its output to determine current quality performance (collect defect
metrics).
• Analyze defect metrics and determine the vital few causes.
• If an existing software process is being improved, two additional steps are added:
• Improve the process by eliminating the root causes of defects.
• Control the process to ensure that future work does not reintroduce the causes of defects.

35
SOFTWARE RELIABILITY

• A simple measure of reliability is mean-time-between-failure (MTBF),


where
MTBF = MTTF + MTTR
• The acronyms MTTF and MTTR are mean-time-to-failure and mean-time-
to-repair, respectively.
• Software availability is the probability that a program is operating
according to requirements at a given point in time and is defined as
Availability = [MTTF/(MTTF + MTTR)] x 100%
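The two formulas above can be turned directly into a small calculation; the MTTF and MTTR figures in the example are invented for illustration.

```python
def mtbf(mttf, mttr):
    """Mean time between failures: MTBF = MTTF + MTTR."""
    return mttf + mttr

def availability(mttf, mttr):
    """Availability = [MTTF / (MTTF + MTTR)] x 100%."""
    return mttf / (mttf + mttr) * 100

# Hypothetical component: runs 990 hours between failures on
# average and takes 10 hours to repair.
print(mtbf(990, 10))                     # 1000 hours
print(round(availability(990, 10), 1))   # 99.0 percent
```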

36
SOFTWARE SAFETY

• Software safety is a software quality assurance activity that focuses on the


identification and assessment of potential hazards that may affect software
negatively and cause an entire system to fail.
• If hazards can be identified early in the software process, software design features
can be specified that will either eliminate or control potential hazards.

37
ISO 9001:2000 STANDARD

• ISO 9001:2000 is the quality assurance standard that applies to software engineering.
• The standard contains 20 requirements that must be present for an effective quality assurance
system.
• The requirements delineated by ISO 9001:2000 address topics such as
• management responsibility, quality system, contract review, design control, document and data control,
product identification and traceability, process control, inspection and testing, corrective and preventive
action, control of quality records, internal quality audits, training, servicing, and statistical techniques.

38
SOFTWARE TESTING STRATEGIES

1. Testing is the process of exercising a program with the specific intent


of finding errors prior to delivery to the end user.

39
WHAT TESTING SHOWS

Testing shows:
• errors
• requirements conformance
• performance
• an indication of quality

40
A STRATEGIC APPROACH TO S/W
TESTING

1. To perform effective testing, you should conduct effective technical


reviews. By doing this, many errors will be eliminated before testing
commences.
2. Testing begins at the component level and works "outward" toward the
integration of the entire computer-based system.
3. Different testing techniques are appropriate for different software
engineering approaches and at different points in time.
4. Testing is conducted by the developer of the software and (for large
projects) an independent test group.
5. Testing and debugging are different activities, but debugging must be
accommodated in any testing strategy.
41
VERIFICATION & VALIDATION

1. Verification refers to the set of tasks that ensure that software correctly
implements a specific function.
2. Validation refers to a different set of tasks that ensure that the software that
has been built is traceable to customer requirements. Boehm [Boe81] states
this another way:
• Verification: "Are we building the product right?"
• Validation: "Are we building the right product?"

42
ORGANIZING FOR SOFTWARE TESTING

• The developer understands the system and is driven by “delivery.”
• The independent tester must learn about the system and is driven by quality.

43
TESTING STRATEGY- THE BIG
PICTURE

Development proceeds along the spiral: system engineering, analysis modeling, design modeling, and code generation. Testing then proceeds outward: unit test, integration test, validation test, and system test.

44
TESTING STRATEGY- THE BIG
PICTURE

Software Testing Steps:

45
STRATEGIC ISSUES

1. Specify product requirements in a quantifiable manner long before testing commences.


2. State testing objectives explicitly.
3. Understand the users of the software and develop a profile for each user category.
4. Develop a testing plan that emphasizes “rapid cycle testing.”
5. Build “robust” software that is designed to test itself
6. Use effective technical reviews as a filter prior to testing
7. Conduct technical reviews to assess the test strategy and test cases themselves.
8. Develop a continuous improvement approach for the testing process.

46
TESTING STRATEGY

1. We begin by ‘testing-in-the-small’ and move toward ‘testing-in-the-large’


2. For conventional software
• The module (component) is our initial focus
• Integration of modules follows
3. For OO software
• our focus when “testing in the small” changes from an individual module
(the conventional view) to an OO class that encompasses attributes and
operations and implies communication and collaboration

47
TESTING STRATEGIES FOR CONVENTIONAL SOFTWARE

1. Strategy falls between the two extremes:


2. Takes an incremental view of testing, beginning with the testing of
individual program units, moving to tests designed to facilitate the
integration of the units and culminating with tests that exercise the
constructed system.

48
UNIT TESTING

1. Unit testing focuses verification effort on the smallest unit of software design: the
component or module.
2. It focuses on the internal processing logic and data structures within the
boundaries of the module.

49
UNIT TESTING

For the module to be tested, test cases exercise:
• the interface
• local data structures
• boundary conditions
• independent paths
• error-handling paths
50
UNIT TEST ENVIRONMENT

• A driver is a “main program” that accepts test-case data, passes the data to the module to be tested, and prints the results.
• Stubs replace modules that are subordinate to (called by) the module to be tested.
• Test cases exercise the module’s interface, local data structures, boundary conditions, independent paths, and error-handling paths.
51
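The driver/stub arrangement can be sketched as follows. The module under test (`order_total`), its subordinate tax module, and the test-case data are all hypothetical, invented to illustrate the roles of driver and stub.

```python
# Hypothetical module under test: computes an order total, but it
# depends on a subordinate tax-lookup module that is not yet built.
def order_total(prices, tax_lookup):
    subtotal = sum(prices)
    return subtotal + subtotal * tax_lookup()

# Stub: stands in for the subordinate tax module, returning a
# canned answer so the module above can be tested in isolation.
def tax_stub():
    return 0.10

# Driver: a "main program" that feeds test cases to the module
# and reports results.
def driver():
    cases = [([10.0, 20.0], 33.0), ([0.0], 0.0)]
    for prices, expected in cases:
        result = order_total(prices, tax_stub)
        ok = abs(result - expected) < 1e-9
        print(prices, "->", result, "OK" if ok else "FAIL")

driver()
```

The driver and stub are overhead: they are written for testing and discarded (or replaced by real modules) once integration begins.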
II. INTEGRATION TESTING

• Top-down integration
• Bottom –up integration
• Regression testing
• Smoke testing

52
II. INTEGRATION TESTING

Options:
• the “big bang” approach
• an incremental construction strategy

53
TOP DOWN INTEGRATION

• The top module is tested with stubs.
• Stubs are replaced one at a time, “depth first.”
• As new modules are integrated, some subset of tests is re-run.

54
TOP DOWN INTEGRATION

The integration is performed in five steps:
1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to it.
2. Depending on the integration approach selected (depth first or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
55
BOTTOM-UP INTEGRATION

• Drivers are replaced one at a time, “depth first.”
• Worker modules are grouped into builds (clusters) and integrated.
56
SANDWICH TESTING

• Top modules are tested with stubs.
• Worker modules are grouped into builds (clusters) and integrated.
57
BOTTOM UP INTEGRATION

58
BOTTOM UP INTEGRATION

The integration is performed in the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test-case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.

59
REGRESSION TESTING

1. Regression testing is the re-execution of some subset of tests that have


already been conducted to ensure that changes have not propagated
unintended side effects
2. Whenever software is corrected, some aspect of the software configuration
(the program, its documentation, or the data that support it) is changed.
3. Regression testing helps to ensure that changes (due to testing or for other
reasons) do not introduce unintended behavior or additional errors.

4. Regression testing may be conducted manually, by re-executing a subset of


all test cases or using automated capture/playback tools.
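Selecting the subset of tests to re-execute can be sketched as a simple coverage lookup; the test names, module names, and coverage map below are invented for illustration.

```python
# Hypothetical coverage map: which modules each test exercises.
coverage = {
    "test_login":   {"auth", "session"},
    "test_report":  {"report", "db"},
    "test_billing": {"billing", "db"},
}

def regression_subset(changed_modules):
    """Re-execute only the tests whose covered modules intersect
    the modules touched by the change."""
    return sorted(test for test, modules in coverage.items()
                  if modules & changed_modules)

print(regression_subset({"db"}))   # tests affected by a change to 'db'
```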

60
REGRESSION TESTING

The regression test suite contains three different classes of test cases:
• a representative sample of tests that will exercise all software functions,
• additional tests that focus on software functions that are likely to be affected by the change, and
• tests that focus on the software components that have been changed.

61
SMOKE TESTING

1. A common approach for creating “daily builds” for product software


2. Smoke testing steps:
1. Software components that have been translated into code are integrated into a “build.”

A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more
product functions.

2. A series of tests is designed to expose errors that will keep the build from properly performing its function.
The intent should be to uncover “show stopper” errors that have the highest likelihood of throwing the software project behind schedule.

3. The build is integrated with other builds and the entire product (in its current form) is smoke tested daily.
The integration approach may be top down or bottom up.
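The smoke-testing steps above can be sketched as a small harness that runs “show stopper” checks against the daily build; every check and its outcome here is hypothetical.

```python
# Hypothetical "show stopper" checks for today's build.
def check_app_starts():     return True
def check_saves_record():   return True
def check_prints_report():  return False   # a show stopper found today

SMOKE_TESTS = [check_app_starts, check_saves_record, check_prints_report]

def smoke_test(build_id):
    """Run every smoke check; report and fail the build if any fails."""
    failures = [t.__name__ for t in SMOKE_TESTS if not t()]
    status = "PASS" if not failures else "FAIL"
    print(f"build {build_id}: {status} {failures}")
    return not failures

smoke_test("daily-2024-05-01")
```

Run daily, a harness like this surfaces integration-breaking errors early, before they can throw the project behind schedule.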

62
OBJECT-ORIENTED TESTING

1. begins by evaluating the correctness and consistency of the analysis and


design models
2. testing strategy changes
• the concept of the ‘unit’ broadens due to encapsulation
• integration focuses on classes and their execution across a ‘thread’ or in
the context of a usage scenario
• validation uses conventional black box methods
3. test case design draws on conventional methods, but also encompasses
special features

63
BROADENING THE VIEW OF “TESTING”

1. It can be argued that the review of OO analysis and design models is especially
useful because the same semantic constructs (e.g., classes, attributes, operations,
messages) appear at the analysis, design, and code level.
2. Therefore, a problem in the definition of class attributes that is uncovered during
analysis will circumvent side effects that might occur if the problem were not
discovered until design or code (or even the next iteration of analysis).

64
TESTING THE CRC MODEL
1. Revisit the CRC model and the object-relationship model.

2. Inspect the description of each CRC index card to determine if a delegated responsibility
is part of the collaborator’s definition.

3. Invert the connection to ensure that each collaborator that is asked for service is receiving
requests from a reasonable source.

4. Using the inverted connections examined in step 3, determine whether other classes might
be required or whether responsibilities are properly grouped among the classes.

5. Determine whether widely requested responsibilities might be combined into a single


responsibility.

6. Steps 1 to 5 are applied iteratively to each class and through each evolution of the analysis
model.
65
OO TESTING STRATEGY

1. class testing is the equivalent of unit testing


• operations within the class are tested
• the state behavior of the class is examined
2. integration applied three different strategies
• thread-based testing—integrates the set of classes required to respond
to one input or event
• use-based testing—integrates the set of classes required to respond to
one use case
• cluster testing—integrates the set of classes required to demonstrate
one collaboration

66
WEBAPP TESTING - I

1. The content model for the WebApp is reviewed to uncover errors.


2. The interface model is reviewed to ensure that all use cases can be
accommodated.
3. The design model for the WebApp is reviewed to uncover navigation
errors.
4. The user interface is tested to uncover errors in presentation and/or
navigation mechanics.
5. Each functional component is unit tested.

67
WEBAPP TESTING - II

1. Navigation throughout the architecture is tested.


2. The WebApp is implemented in a variety of different environmental
configurations and is tested for compatibility with each configuration.
3. Security tests are conducted in an attempt to exploit vulnerabilities in the
WebApp or within its environment.
4. Performance tests are conducted.
5. The WebApp is tested by a controlled and monitored population of end-
users. The results of their interaction with the system are evaluated for
content and navigation errors, usability concerns, compatibility concerns,
and WebApp reliability and performance.

68
HIGH ORDER TESTING
1. Validation testing
1. Focus is on software requirements

2. System testing
1. Focus is on system integration

3. Alpha/Beta testing
1. Focus is on customer usage

4. Recovery testing
1. forces the software to fail in a variety of ways and verifies that recovery is properly performed

5. Security testing
1. verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration

6. Stress testing
1. executes a system in a manner that demands resources in abnormal quantity, frequency, or volume

7. Performance Testing
1. test the run-time performance of software within the context of an integrated system

69
THE ART OF DEBUGGING
THE DEBUGGING PROCESS

70
DEBUGGING PROCESS

71
DEBUGGING STRATEGIES

1. The objective of debugging is to find and correct the cause of a software error or defect.
2. In general, three debugging strategies have been proposed:
• Brute force / testing
• Backtracking
• Cause Elimination
3. All the three techniques need to address the following :
• Debugging tactics
• Automated debugging
• The people factor

72
SYMPTOMS & CAUSES

• The symptom and the cause may be geographically separated.
• The symptom may disappear when another problem is fixed.
• The cause may be due to a combination of non-errors.
• The cause may be due to a system or compiler error.
• The cause may be due to assumptions that everyone believes.
• The symptom may be intermittent.

73
CONSEQUENCES OF BUGS

• Consequences of bugs range along a damage spectrum from mild, annoying, and disturbing
through serious and extreme to catastrophic and infectious.
• Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs,
design bugs, documentation bugs, standards violations, etc.
74
CORRECTING THE ERROR

1. Is the cause of the bug reproduced in another part of the program? In many
situations, a program defect is caused by an erroneous pattern of logic that
may be reproduced elsewhere.
2. What "next bug" might be introduced by the fix I'm about to make? Before the
correction is made, the source code (or, better, the design) should be
evaluated to assess coupling of logic and data structures.
3. What could we have done to prevent this bug in the first place? This question is
the first step toward establishing a statistical software quality assurance
approach. If you correct the process as well as the product, the bug will be
removed from the current program and may be eliminated from all future
programs.
75
FINAL THOUGHTS

1. Think -- before you act to correct


2. Use tools to gain additional insight
3. If you’re at an impasse, get help from someone else
4. Once you correct the bug, use regression testing to uncover any side
effects

76
SECURITY ENGINEERING

Topics covered
1. Security engineering and security management
Security engineering is concerned with applications; security
management is concerned with infrastructure.
2. Security risk assessment
Designing a system based on the assessment of security risks.
3. Design for security
How system architectures have to be designed for security.

77
SECURITY ENGINEERING

1. Tools, techniques and methods to support the development and


maintenance of systems that can resist malicious attacks that are
intended to damage a computer-based system or its data.
2. A sub-field of the broader field of computer security.
3. Assumes background knowledge of dependability and security concepts
(Chapter 10) and security requirements specification (Chapter 12)

78
APPLICATION/INFRASTRUCTURE
SECURITY

1. Application security is a software engineering problem where the system


is designed to resist attacks.
2. Infrastructure security is a systems management problem where the
infrastructure is configured to resist attacks.
3. The focus of this chapter is application security.

79
SYSTEM LAYERS WHERE SECURITY MAY BE
COMPROMISED

80
SYSTEM SECURITY MANAGEMENT

1. User and permission management


Adding and removing users from the system and setting up
appropriate permissions for users
2. Software deployment and maintenance
Installing application software and middleware and configuring these
systems so that vulnerabilities are avoided.
3. Attack monitoring, detection and recovery
Monitoring the system for unauthorized access, designing strategies for
resisting attacks, and developing backup and recovery strategies.

81
SECURITY RISK MANAGEMENT

1. Risk management is concerned with assessing the possible losses that


might ensue from attacks on the system and balancing these losses
against the costs of security procedures that may reduce these losses.
2. Risk management should be driven by an organisational security policy.
3. Risk management involves
1. Preliminary risk assessment
2. Life cycle risk assessment
3. Operational risk assessment

82
PRELIMINARY RISK ASSESSMENT

83
MISUSE CASES

1. Misuse cases are instances of threats to a system


2. Interception threats
Attacker gains access to an asset
3. Interruption threats
Attacker makes part of a system unavailable
4. Modification threats
A system asset is tampered with
5. Fabrication threats
False information is added to a system

84
ASSET ANALYSIS

Asset: The information system
Value: High. Required to support all clinical consultations. Potentially safety-critical.
Exposure: High. Financial loss as clinics may have to be canceled. Costs of restoring the system. Possible patient harm if treatment cannot be prescribed.

Asset: The patient database
Value: High. Required to support all clinical consultations. Potentially safety-critical.
Exposure: High. Financial loss as clinics may have to be canceled. Costs of restoring the system. Possible patient harm if treatment cannot be prescribed.

Asset: An individual patient record
Value: Normally low, although it may be high for specific high-profile patients.
Exposure: Low direct losses but possible loss of reputation.

85
THREAT AND CONTROL ANALYSIS

Threat: Unauthorized user gains access as system manager and makes the system unavailable.
Probability: Low
Control: Only allow system management from specific locations that are physically secure.
Feasibility: Low cost of implementation, but care must be taken with key distribution and to ensure that keys are available in the event of an emergency.

Threat: Unauthorized user gains access as system user and accesses confidential information.
Probability: High
Control 1: Require all users to authenticate themselves using a biometric mechanism.
Feasibility: Technically feasible but a high-cost solution. Possible user resistance.
Control 2: Log all changes to patient information to track system usage.
Feasibility: Simple and transparent to implement and also supports recovery.

86
SECURITY REQUIREMENTS

1. Patient information must be downloaded at the start of a clinic session to
a secure area on the system client that is used by clinical staff.
2. Patient information must not be maintained on system clients after a
clinic session has finished.
3. A log on a separate computer from the database server must be
maintained of all changes made to the system database.

87
LIFE CYCLE RISK ASSESSMENT

1. Risk assessment while the system is being developed and after it has
been deployed
2. More information is available - system platform, middleware and the
system architecture and data organisation.
3. Vulnerabilities that arise from design choices may therefore be identified.

88
LIFE-CYCLE RISK ANALYSIS

89
DESIGN DECISIONS FROM USE OF COTS

1. System users authenticated using a name/password combination.


2. The system architecture is client-server with clients accessing the system
through a standard web browser.
3. Information is presented as an editable web form.

90
VULNERABILITIES ASSOCIATED WITH
TECHNOLOGY CHOICES

91
SECURITY REQUIREMENTS

1. A password checker shall be made available and shall be run daily. Weak
passwords shall be reported to system administrators.
2. Access to the system shall only be allowed by approved client computers.
3. All client computers shall have a single, approved web browser installed
by system administrators.
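Requirement 1 above (a daily password checker that reports weak passwords to administrators) could be sketched as follows. This is a minimal illustration only: the strength rules and the account data are invented assumptions, not part of the requirement.

```python
import re

# Hypothetical strength rules for illustration: minimum length,
# mixed case, and at least one digit.
def is_weak(password: str) -> bool:
    """Return True if the password fails any of the assumed strength rules."""
    if len(password) < 8:
        return True
    if not re.search(r"[a-z]", password) or not re.search(r"[A-Z]", password):
        return True
    if not re.search(r"[0-9]", password):
        return True
    return False

def weak_passwords(accounts: dict) -> list:
    """Return user names whose passwords should be reported to administrators."""
    return [user for user, pw in accounts.items() if is_weak(pw)]
```

A scheduled job would run `weak_passwords` once per day over the account store and mail the resulting list to the system administrators.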

92
OPERATIONAL RISK ASSESSMENT

1. Continuation of life cycle risk assessment but with additional information
about the environment where the system is used.
2. Environment characteristics can lead to new system risks
Frequent interruptions mean that logged-in computers may be left
unattended.

93
DESIGN FOR SECURITY

1. Architectural design
how do architectural design decisions affect the security of a system?
2. Good practice
what is accepted good practice when designing secure systems?
3. Design for deployment
what support should be designed into a system to avoid the
introduction of vulnerabilities when a system is deployed for use?

94
ARCHITECTURAL DESIGN

1. Two fundamental issues have to be considered when designing an
architecture for security.
Protection
How should the system be organised so that critical assets can be protected against external attack?
Distribution
How should system assets be distributed so that the effects of a successful attack are minimized?
2. These are potentially conflicting
If assets are distributed, then they are more expensive to protect. If assets are
protected, then usability and performance requirements may be compromised.

95
PROTECTION

1. Platform-level protection
Top-level controls on the platform on which a system runs.
2. Application-level protection
Specific protection mechanisms built into the application itself e.g.
additional password protection.
3. Record-level protection
Protection that is invoked when access to specific information is
requested
4. These lead to a layered protection architecture
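The layered protection architecture above can be sketched as three successive checks, one per layer. All of the names below (users, roles, record access lists) are hypothetical illustrations, not taken from the source.

```python
# Sketch of the three protection layers as successive checks.
PLATFORM_USERS = {"clinician1", "admin1"}          # platform-level login
APP_ROLES = {"clinician1": "clinical-staff"}       # application-level role
RECORD_ACL = {"patient-042": {"clinical-staff"}}   # record-level access list

def can_read(user: str, record: str) -> bool:
    # Platform-level protection: must have a platform account.
    if user not in PLATFORM_USERS:
        return False
    # Application-level protection: must hold an application role.
    role = APP_ROLES.get(user)
    if role is None:
        return False
    # Record-level protection: role must appear on the record's access list.
    return role in RECORD_ACL.get(record, set())
```

Each layer can fail independently, so an attacker who defeats the platform login still faces the application and record checks.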
96
A LAYERED PROTECTION
ARCHITECTURE

97
DISTRIBUTION

1. Distributing assets means that attacks on one system do not necessarily
lead to complete loss of system service
2. Each platform has separate protection features and may be different from
other platforms so that they do not share a common vulnerability
3. Distribution is particularly important if the risk of denial of service attacks
is high

98
KEY POINTS

1. Security engineering is concerned with how to develop systems that can
resist malicious attacks
2. Security threats can be threats to confidentiality, integrity or availability of
a system or its data
3. Security risk management is concerned with assessing possible losses
from attacks and deriving security requirements to minimise losses
4. Design for security involves architectural design, following good design
practice and minimising the introduction of system vulnerabilities

99
SOFTWARE CONFIGURATION MANAGEMENT

1. Software Configuration Management (SCM) is a process to systematically
manage, organize, and control the changes in the documents, codes, and
other entities during the Software Development Life Cycle.
2. The primary goal is to increase productivity with minimal mistakes.
3. SCM is part of the cross-disciplinary field of configuration management and
it can accurately determine who made which revision.

100
NEED OF S/W CONFIGURATION MANAGEMENT
1. There are multiple people working on software which is continually
updated
2. It may be the case that multiple versions, branches, and authors are involved
in a software configuration project, and the team is geographically distributed
and works concurrently
3. Changes in user requirements, policy, budget, and schedule need to be
accommodated.
4. Software should be able to run on various machines and operating systems
5. Helps to develop coordination among stakeholders
6. The SCM process is also beneficial to control the costs involved in making
changes to a system.

101
102
TASKS IN SCM PROCESS

1. Configuration Identification
2. Baselines
3. Change Control
4. Configuration Status Accounting
5. Configuration Audits and Reviews

103
CONFIGURATION IDENTIFICATION:

1. Configuration identification is a method of determining the scope of the
software system.
2. This step matters because you cannot manage or control an item that has
not been identified.
3. It is a description that contains the CSCI type (Computer Software
Configuration Item), a project identifier and version information.

104
ACTIVITIES DURING THIS PROCESS:

1. Identification of configuration items like source code modules, test cases,
and requirements specifications.
2. Identification of each CSCI in the SCM repository, by using an
object-oriented approach
3. The process starts with basic objects which are grouped into aggregate
objects. Details of what, why, when and by whom changes are made
4. Every object has its own features that identify it uniquely among all
other objects
5. List of resources required such as documents, files, tools, etc.
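As a minimal sketch, an identifier of the kind described above (CSCI type, project identifier, version) might be modelled as an immutable record. The field names and the example values are assumptions for illustration only.

```python
from dataclasses import dataclass

# Hypothetical CSCI record carrying the item type, project identifier
# and version information mentioned in the text.
@dataclass(frozen=True)
class CSCI:
    name: str        # e.g. a source module or test case
    item_type: str   # "source", "test", "requirement", ...
    project: str     # project identifier
    version: str

    def identifier(self) -> str:
        """A unique, human-readable identifier for this configuration item."""
        return f"{self.project}/{self.item_type}/{self.name}@{self.version}"
```

Because the dataclass is frozen, an identified item cannot be silently mutated; a new version is a new record.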

105
BASELINE:

1. A baseline is a formally accepted version of a software configuration item.


2. It is designated and fixed at a specific time while conducting the SCM
process.
3. It can only be changed through formal change control procedures.

106
ACTIVITIES DURING THIS PROCESS:

1. Facilitate construction of various versions of an application


2. Defining and determining mechanisms for managing various versions of
these work products
3. The functional baseline corresponds to the reviewed system requirements
4. Widely used baselines include functional, developmental, and product
baselines
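A baseline's key property — it is fixed at a specific time and changed only through formal change control — can be sketched as an immutable snapshot of item versions. The item names and version numbers below are invented for illustration.

```python
# A baseline as an immutable snapshot of (item, version) pairs;
# a frozenset cannot be edited in place, mirroring the rule that a
# baseline changes only via formal change control.
def make_baseline(items: dict) -> frozenset:
    """Freeze the current item->version mapping into a baseline."""
    return frozenset(items.items())

# e.g. a functional baseline taken after the system requirements review
functional_baseline = make_baseline({"requirements.doc": "1.0", "login.c": "2.3"})
```

Later work products would be compared against this frozen snapshot rather than against whatever is currently in the working tree.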

107
CHANGE CONTROL:

1. Change control is a procedural method which ensures quality and
consistency when changes are made to a configuration object.
2. In this step, the change request is submitted to the software configuration
manager.

108
ACTIVITIES DURING THIS PROCESS:

1. Control ad-hoc changes to build a stable software development environment.
Changes are committed to the repository
2. The request will be checked based on its technical merit, possible side
effects and overall impact on other configuration objects.
3. It manages changes and makes configuration items available during the
software lifecycle
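The evaluate-then-commit flow above might look like the following minimal sketch. The request fields and the approval rule are hypothetical simplifications of a real change-control procedure.

```python
# Minimal change-control sketch: a request is evaluated on technical
# merit and side effects before the repository is updated.
def process_change_request(request: dict, repository: dict) -> str:
    # The software configuration manager reviews merit and side effects.
    approved = request["technical_merit"] and not request["harmful_side_effects"]
    if not approved:
        return "rejected"
    # Only approved changes are committed to the repository.
    repository[request["item"]] = request["new_version"]
    return "committed"
```

A rejected request leaves the repository untouched, which is exactly the stability the slide is describing.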

109
CONFIGURATION STATUS ACCOUNTING:

1. Configuration status accounting tracks each release during the SCM
process.
2. This stage involves tracking what each version has and the changes that
led to this version.

110
ACTIVITIES DURING THIS PROCESS:

1. Keeps a record of all the changes made to the previous baseline to reach a
new baseline
2. Identify all items to define the software configuration
3. Monitor status of change requests
4. Complete listing of all changes since the last baseline
5. Allows tracking of progress to next baseline
6. Allows previous releases/versions to be extracted for testing
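The "complete listing of all changes since the last baseline" (activities 1 and 4 above) can be sketched by diffing two version snapshots. The item names and versions are illustrative only.

```python
# Status-accounting sketch: list every change made since the last
# baseline by comparing two item->version snapshots.
def changes_since(old: dict, new: dict) -> list:
    report = []
    for item, version in new.items():
        if item not in old:
            report.append(f"added {item} {version}")
        elif old[item] != version:
            report.append(f"changed {item} {old[item]} -> {version}")
    for item in old:
        if item not in new:
            report.append(f"removed {item}")
    return sorted(report)
```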

111
CONFIGURATION AUDITS AND REVIEWS:

1. Software configuration audits verify that the software product satisfies
the baseline needs.
2. It ensures that what is built is what is delivered.

112
ACTIVITIES DURING THIS PROCESS:

1. Configuration auditing is conducted by auditors by checking that defined
processes are being followed and ensuring that the SCM goals are
satisfied.
2. To verify compliance with configuration control standards by auditing and
reporting the changes made
3. SCM audits also ensure that traceability is maintained during the process.
4. Ensures that changes made to a baseline comply with the configuration
status reports
5. Validation of completeness and consistency

113
PARTICIPANT OF SCM PROCESS:

114
1. Configuration Manager
• The configuration manager is the head who is responsible for identifying configuration
items.
• The CM ensures the team follows the SCM process
• He/She needs to approve or reject change requests
2. Developer
• The developer needs to change the code as per standard development activities or
change requests, and is responsible for maintaining the configuration of the code.
• The developer should check in the changes and resolve conflicts
3. Auditor
• The auditor is responsible for SCM audits and reviews.
• Needs to ensure the consistency and completeness of a release.

115
4. Project Manager:
• Ensure that the product is developed within a certain time frame
• Monitors the progress of development and recognizes issues in the SCM
process
• Generate reports about the status of the software system
• Make sure that processes and policies are followed for creating,
changing, and testing
5. User
• The end user should understand the key SCM terms to ensure he has the
latest version of the software

116
CONCLUSION:
• Configuration Management best practices help organizations to
systematically manage, organize, and control the changes in the
documents, codes, and other entities during the Software Development
Life Cycle.
• The primary goal of the SCM process is to increase productivity with
minimal mistakes
• The main reason behind configuration management process is that there
are multiple people working on software which is continually updating.
SCM helps establish concurrency, synchronization, and version control.
• A baseline is a formally accepted version of a software configuration item
• Change control is a procedural method which ensures quality and
consistency when changes are made in the configuration object.
117
• Configuration status accounting tracks each release during the SCM
process
• Software configuration audits verify that the software product satisfies
the baseline needs
• Project manager, Configuration manager, Developer, Auditor, and user are
participants in SCM process
• The SCM process planning begins at the early phases of a project.
• Git, Team Foundation Server and Ansible are a few popular SCM tools.

118
ADVANCED TOPICS

119
ADVANCED TOPICS

❖ Software Process Improvement


❖ The SPI Process
❖ Trends and Return of Investment
❖ Technology Directions

120
SOFTWARE PROCESS
IMPROVEMENT

1. SPI implies that
• elements of an effective software process can be defined in an effective manner,
• an existing organizational approach to software development can be assessed against those
elements, and
• a meaningful strategy for improvement can be defined.
2. The SPI strategy transforms the existing approach to software development into
something that is more focused, more repeatable, and more reliable.

121
APPROACH TO SPI

• a set of characteristics that must be present if an effective software process
is to be achieved
• a method for assessing whether those characteristics are present
• a mechanism for summarizing the results of any assessment, and
• a strategy for assisting a software organization in implementing those
process characteristics that have been found to be weak or missing.
• An SPI framework assesses the “maturity” of an organization’s software
process and provides a qualitative indication of a maturity level.

122
ELEMENTS OF A SPI FRAMEWORK

123
CONSTITUENCIES

• Quality certifiers: this group builds on the relationship:
• Quality(Process) --> Quality(Product)
• Most likely to adopt CMMI, SPICE, or Bootstrap as the process framework.
• Formalists: understand process workflow; use a process modeling language (PML)
to create a model of the existing process, either to modify or extend it to make it
more effective.
• Tool advocates: insist on tool support for process improvement.

124
CONSTITUENCIES

1. Practitioners: use a pragmatic approach, with emphasis on mainstream projects,
quality & product management; apply project-level planning and metrics.
2. Reformers: aim for organizational change that leads to a better s/w process,
with emphasis on human issues & measures of human capabilities.
3. Ideologists: focus on the suitability of a particular process model for a
specific application domain / organisational structure; support for
reuse/reengineering

125
CONSTITUENCIES

These constituencies establish mechanisms to
1. Support technology transition
2. Determine the degree to which an organization is ready to absorb process
changes that are proposed.
3. Measure the degree to which changes have been adopted.

126
MATURITY MODELS

1. A maturity model is applied within the context of an SPI framework.
2. The intent of the maturity model is to provide an overall indication of the
“process maturity” exhibited by a software organization:
• an indication of the quality of the software process, the degree to which
practitioners understand and apply the process,
• the general state of software engineering practice.

127
MATURITY MODELS

1. SEI’s CMMI suggests five levels of maturity ranging from initial to
optimised.

128
IS SPI FOR EVERYONE?

1. Can a small company initiate SPI activities and do it successfully?
Answer: a qualified “yes”
2. It should come as no surprise that small organizations are more informal,
apply fewer standard practices, and tend to be self-organizing.
SPI will be approved and implemented only after its proponents
demonstrate financial leverage.

129
THE SPI PROCESS

1. The SEI has developed IDEAL – an organisational improvement model that
serves as a road map for initiating, planning and implementing improvement
actions.
2. IDEAL is representative of many process models for SPI, defining five
distinct activities – initiating, diagnosing, establishing, acting and learning.

130
ASSESSMENT AND GAP ANALYSIS
1. Assessment aims to uncover both strengths and weaknesses
2. It examines a wide range of actions and tasks that will lead to a high quality process.
3. The issues to focus on during process assessment are

Consistency. Are important activities, actions and tasks applied consistently across all software projects and
by all software teams?

Sophistication. Are management and technical actions performed with a level of sophistication that implies a
thorough understanding of best practice?

Acceptance. Is the software process and software engineering practice widely accepted by management and
technical staff?

Commitment. Has management committed the resources required to achieve consistency, sophistication and
acceptance?

4. Gap analysis—The difference between local application and best practice represents a “gap” that
offers opportunities for improvement.

131
ASSESSMENT AND GAP ANALYSIS

Gap analysis

1. The difference between local application and best practice represents a “gap” that offers
opportunities for improvement.

2. The degree to which consistency, sophistication, acceptance and commitment are achieved indicates
the amount of cultural change that will be required to achieve meaningful improvement.

132
THE SPI PROCESS—II

1. Education and Training


2. Three types of education and training should be conducted:
• Generic concepts and methods. Directed toward both managers and practitioners, this category stresses both process and
practice. The intent is to provide professionals with the intellectual tools they need to apply the software process effectively
and to make rational decisions about improvements to the process.

• Specific technology and tools. Directed primarily toward practitioners, this category stresses technologies and tools that have
been adopted for local use. For example, if UML has been chosen for analysis and design modeling, a training curriculum for
software engineering using UML would be established.

• Business communication and quality-related topics. Directed toward all stakeholders, this category focuses on “soft” topics
that help enable better communication among stakeholders and foster a greater quality focus.

133
THE SPI PROCESS—III

1. Selection and Justification


• choose the process model (Chapters 2 and 3) that best fits your organization, its stakeholders, and
the software that you build
• decide on the set of framework activities that will be applied, the major work products that will
be produced and the quality assurance checkpoints that will enable your team to assess progress
• develop a work breakdown for each framework activity (e.g., modeling), defining the task set that
would be applied for a typical project
• Once a choice is made, time and money must be expended to install it within an organization and
these resource expenditures should be justified.

134
THE SPI PROCESS—IV

1. Installation/Migration
1. These are actually software process redesign (SPR) activities. Scacchi [Sca00]
states that “SPR is concerned with identification, application, and refinement of
new ways to dramatically improve and transform software processes.”
2. three different process models are considered:
the existing (“as-is”) process,
a transitional (“here-to-there”) process, and
the target (“to be”) process.

135
THE SPI PROCESS—V

1. Evaluation
• assesses the degree to which changes have been instantiated and adopted,
• the degree to which such changes result in better software quality or other tangible process
benefits, and
• the overall status of the process and the organizational culture as SPI activities proceed
2. From a qualitative point of view, past management and practitioner attitudes about the
software process can be compared to attitudes polled after installation of process changes.

136
RISK MANAGEMENT FOR SPI
1. manage risk at three key points in the SPI process [Sta97b]:
• prior to the initiation of the SPI roadmap,
• during the execution of SPI activities (assessment, education, selection, installation), and
• during the evaluation activity that follows the instantiation of some process characteristic.
2. In general, the following categories [Sta97b] can be identified for SPI risk factors:
• budget and cost
• content and deliverables culture
• maintenance of SPI deliverables
• mission and goals
• organizational management and organizational stability
• process stakeholders
• schedule for SPI development
• SPI development environment and process
• SPI project management and SPI staff

137
CRITICAL SUCCESS FACTORS

1. The top five CSFs are [Ste99]:


1. Management commitment and support
2. Staff involvement
3. Process integration and understanding
4. A customized SPI strategy
5. Solid management of the SPI project

138
THE CMMI

1. a comprehensive process meta-model that is predicated on a set of system
and software engineering capabilities that should be present as
organizations reach different levels of process capability and maturity
2. a process meta-model in two different ways: (1) as a “continuous” model
and (2) as a “staged” model
3. defines each process area in terms of “specific goals” and the “specific
practices” required to achieve these goals. Specific goals establish the
characteristics that must exist if the activities implied by a process area are
to be effective. Specific practices refine a goal into a set of process-related
activities.

139
THE PEOPLE CMM

1. “a roadmap for implementing workforce practices that continuously
improve the capability of an organization’s workforce.” [Cur02]
2. defines a set of five organizational maturity levels that provide an
indication of the relative sophistication of workforce practices and
processes

140
P-CMM PROCESS AREAS

141
OTHER SPI FRAMEWORKS

1. SPICE—an international initiative to support the International Standard
ISO/IEC 15504 for (Software) Process Assessment [ISO08]
2. Bootstrap—a SPI framework for small and medium sized organizations
that conforms to SPICE [Boo06],
3. PSP and TSP—individual and team specific SPI frameworks ([Hum97],
[Hum00]) that focus on process in-the-small, a more rigorous approach to
software development coupled with measurement
4. TickIT—an auditing method [Tic05] that assesses an organization’s
compliance with ISO Standard 9001:2000

142
SPI RETURN ON INVESTMENT

1. “How do I know that we’ll achieve a reasonable return for the money we’re spending?”
1. ROI = [Σ(benefits) – Σ(costs)] / Σ(costs) × 100%
2. where
benefits include the cost savings associated with higher product quality (fewer defects), less rework, reduced
effort associated with changes, and the income that accrues from shorter time-to-market.

costs include both direct SPI costs (e.g., training, measurement) and indirect costs associated with greater
emphasis on quality control and change management activities and more rigorous application of software
engineering methods (e.g., the creation of a design model).
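The formula translates directly into code. The benefit and cost figures in the example are invented for illustration, not drawn from any real SPI program.

```python
# Direct translation of the ROI formula above.
def spi_roi(benefits: list, costs: list) -> float:
    """ROI = [sum(benefits) - sum(costs)] / sum(costs) * 100%"""
    return (sum(benefits) - sum(costs)) / sum(costs) * 100.0

# e.g. $120,000 saved on rework plus $30,000 from earlier time-to-market,
# against $50,000 of training and measurement costs:
# spi_roi([120_000, 30_000], [50_000]) -> 200.0 (a 200% return)
```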

143
SPI TRENDS

1. future SPI frameworks must become significantly more agile


2. Rather than an organizational focus (that can take years to complete
successfully), contemporary SPI efforts should focus on the project level
3. To achieve meaningful results (even at the project level) in a short time
frame, complex framework models may give way to simpler models.
4. Rather than dozens of key practices and hundreds of supplementary
practices, an agile SPI framework should emphasize only a few pivotal
practices

144
TECHNOLOGY DIRECTIONS

1. Process Trends
2. The Grand Challenge
3. Collaborative Development
4. Requirements Engineering
5. Model-Driven Software Development
6. Postmodern Design
7. Test-Driven Development

145
TECHNOLOGY DIRECTIONS

1. As the software engineering community develops new model-driven
approaches to the representation of system requirements and design, the
following characteristics must be addressed.

146
