
Software Architecture

Methodology
Key Practices of a Software Architect

1
Training Plan
 Architecture Context; Inception;
Quality Requirements Identification
 Practice of Designing an Architecture
 Practice of Documenting an Architecture
 Analysis and Assessment of Architectures

 Each day is structured as:


 Presentation of theory with short exercises woven into the narration
 A game in groups of 3-5 people: applying the learned practices to a sample business case (over these 3 days we will take selected samples from inception down to validating the designed architecture)

2
Part 1: Context for Software
Architecture Discipline

3
Software Architecture - Definition
The software architecture of a system is the set of
structures needed to reason about the system, which
comprise software elements, relations among them,
and properties of both.
Len Bass, Paul Clements and Rick Kazman
Software Architecture in Practice, 3rd Edition

Architecture is the fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution.
ANSI/IEEE Std 1471-2000

4
Architecture Is a Set of Structures
 Structure - a set of elements held together by a
relation.
 Software systems are composed of many structures; no single structure can claim to be the architecture.
 Three categories of architectural structures, which
will play an important role in the design,
documentation, and analysis of architectures:
 The implementation units (static)
 The runtime (dynamic) structure
 The mapping of software to the environment

5
The Implementation Units
 Implementation units - modules.
 Modules are assigned specific computational
responsibilities, and are the basis of work
assignments for programming teams.
 Kinds of module structures:
 The module decomposition structure
 Object-oriented analysis and design class diagrams
 Layer structures
 Module structures are static structures; they focus on the way the system functionality is divided.

6
The Runtime Structure
 Focus on the way the elements interact with
each other at runtime
 Dynamic structures
 Elements carry out the system functions
 May also be called component-and-
connector (C&C) structures

7
Software Mapping to the Environment
 Describes the mapping from software structures to the system's organizational, developmental, installation, and execution environments.
 Examples:
 Modules are assigned to teams to develop
 Modules are assigned to places in a file structure for
implementation, integration, and testing.
 Components are deployed onto hardware to execute.
 These mappings are called allocation structures

8
Architectural Structures
 A structure is architectural if it supports
reasoning about the system and its
properties.
 Reasoning should be about an attribute of
the system, important to some stakeholder:
 Functionality achieved by the system
 Availability of the system
 Modifiability of the system
 Responsiveness of the system to user requests

9
Architectural Structures and Views
 A view is a representation of a coherent set of
architectural elements, as written by and read
by the system stakeholders. It consists of a
representation of a set of elements and the
relations among them.
 A structure is the set of elements itself, as they
exist in software or hardware.
 In short, a view is a representation of a
structure.
 Architects design structures. They document
views of those structures.
10
Module Structures
 Embody decisions: how to structure the
system as a set of code or data units to be
constructed/procured.

 Elements - modules of some kind (classes, layers, merely divisions of functionality).
 A static way of considering the system.
 Modules are assigned areas of functional
responsibility.

11
Questions for the Module Structures
 What is the functional responsibility of a
module?
 What other software elements is a module
allowed to use?
 What other software does it actually use
and depend on?
 What other modules are related through
generalization or specialization?
 Reason about modifiability

12
Component & Connector Structures
 Embody decisions: how the system is to be
structured as a set of elements that have
runtime behavior (components) and
interactions (connectors).

 Elements - runtime components (services, peers, clients, servers, filters) and connectors (process synchronization operators, pipes).

13
Questions for the C&C Structures
 What are the major executing components?
 How do they interact at runtime?
 What are the major shared data stores?
 Which parts of the system are replicated?
 How does data progress through the system?
 What parts can run in parallel?
 Can the system structure change as it
executes? If so, how?
 Reason about performance, security,
availability.
14
Allocation Structures
 Embody decisions: how the system will
relate to non-software structures (CPUs, file
systems, networks, development teams).

 Relationship between the software elements and elements in external environments where the software is created and executed.

15
Questions for Allocation Structures
 What processor does each software element execute on?

 In what directories or files is each element stored during development, testing/system building?

 What is the assignment of each software element to development teams?

16
Useful Module Structures
 Decomposition structure

 Uses structure

 Layer structure

 Class structure

 Data model
17
Useful C&C Structures

 Service structure

 Concurrency structure

18
Useful Allocation Structures

 Deployment structure

 Implementation structure

 Work-assignment structure

19
Environment and Goals
Environment (or system context):
 circumstances of developmental, operational, political, and other influences
 determines the boundaries of the system scope and constraints

Mission of a system:
 is a use or operation for which a system is intended by one or more stakeholders to meet some set of objectives (goals)

Stakeholder is:
 a person, group, or organization with an interest in or concerns about the realization of a system


System in Environment
(Context diagram: an order-processing system inside the organization boundary, exchanging orders, confirmed orders, available goods, invoices, payments, and status information with suppliers, senders, clients, credit agencies, production, operations, and government agencies (tax, import/export regulations).)

21
Why Is Architecture Important?
 The architecture builds a bridge between the problem space (business goals) and the solution space (the resulting system)
 It is a manifestation of the earliest design decisions about the system, which dramatically constrain the decisions possible at later stages and the resulting qualities of the system
Axiom 1: Software architecture of a system is a fundamental
artifact that guides development

 A wrong architecture usually spells some form of disaster
 A good architecture opens the way to the system's success
22
Why Is Architecture Important?
1. Architecture will inhibit or enable a system's driving quality attributes.
2. The decisions made in the architecture allow us to reason about and manage change as the system evolves.
3. The analysis of an architecture enables early prediction of a system's qualities.
4. A documented architecture enhances
communication among stakeholders.
5. Architecture is a carrier of the earliest,
fundamental, hardest-to-change design decisions.

23
Why Is Architecture Important?
6. Architecture defines a set of constraints on
subsequent implementation.
7. Architecture dictates the structure of an
organization, or vice versa.
8. Architecture can provide the basis for
evolutionary prototyping.
9. Architecture is the key artifact that allows the
architect and project manager to reason about cost
and schedule.

24
Why Is Architecture Important?
10. Architecture can be created as a transferable,
reusable model, the heart of a product line.
11. Architecture-based development focuses
attention on the assembly of components, rather
than simply on their creation.
12. By restricting design alternatives, architecture
channels the creativity of developers, reducing
design and system complexity.
13. Architecture can be the foundation for training a
new team member.

25
Good Architecture
 A good architecture is one that allows a system to meet its functional, quality attribute, and life-cycle requirements and thus fulfill the business goals and satisfy its stakeholders (both just after going live and in the foreseeable future)

 How to achieve a good architecture?
 The architecture must be feasible within the given environment (from technological, economic, and other perspectives)
 A change in structure improving one quality (responding to a demand from one of the forces) often affects the other qualities

26
Role of the Architect
 That is the role of the architect: find the right balance among all the important forces:
 Business (goals, requirements)
 Technology
 People (stakeholders)
in a feasible, good architecture
 And thereafter:
 Communicate the designed balance to stakeholders
 Maintain the system's conceptual integrity during development
 Facilitate the transition from the model to the production system – through the software development life-cycle

27
Architecture Cycle

28
Essential Architect’s Skills
(Diagram: the architect's way of thinking draws on: a wide technology background and some specific hands-on experience; computer science education; industrial experience and knowledge of the SDLC; understanding of the application domain and analytical skills; initiative and communication skills.)

29
Methodology: Design and Analysis

Design is making the decisions that lead to the creation of the architecture
 Which design decisions will lead to a software architecture that successfully addresses the desired system qualities?

Analysis ensures that the resulting architecture is a good one
 How do you know if a given software architecture is deficient or at risk relative to its target system qualities?
 What trade-offs are made, and is there an alternative with fewer risks, improved qualities, or greater value for the business?

30
Architecture-Centric Activities
(Cycle of activities: Inception; Architectural Requirements Identification; Design; Architecture Description; Trade-off and Risks Analysis; Business Value Analysis.)

To be effective, these activities must:
 involve stakeholders
 directly link to business and mission goals
 be focused on significant, driving factors

31
Axioms of the Architecture-Centric Approach

1. The software architecture of a system is the fundamental artifact that guides development.
2. Systems are built to satisfy business goals.
3. Architecture design is based on a set of architecturally significant
requirements, derived from business goals.
4. Quality attribute requirements exert the strongest influence on
architectural design.
5. Architecture design can be made tractable by considering a small
number of primitives, called tactics.
6. Architecture design can and should be guided by analysis.
7. Architectures are developed by people within an
organizational/business context; so economic and organizational
concerns shape and constrain architecture.

32
Part 1: Inception and
Identification of Quality
Attribute Requirements

33
The real life

34
The Design and Analysis Process
Inception Activities
 Identify key stakeholders.
 Identify business objectives of the stakeholders.
 Prioritize business objectives.

Design Activities
 Identify, describe, and prioritize architecturally significant requirements (ASRs).
 Design and document the architecture.
 Validate the design decisions.

Review Activities
 Identify, describe, and prioritize ASRs.
 Identify the architecture description.
 Analyze the architecture description against the ASRs.

Post-Review Activities
 Summarize findings and review them with the architecture owners.
 Plan architecture improvements.
 Refine review methods.

35
Inception Activities
Axiom 2: Systems are built to satisfy business
goals

Thus at the inception stage we need to:
 Identify key stakeholders and their concerns
 Identify business objectives from the environment and the stakeholders' concerns
 Prioritize business objectives

36
Practice

 Start filling B-1

37
Stakeholders

(Diagram: the System surrounded by its stakeholders: Acquirers, Assessors, Communicators, Developers, Testers, Maintainers, System Administrators, Suppliers, Support Staff, Owners of External Systems, Users.)
Characteristics of a Good Stakeholder
 Informed: Do your stakeholders have the information, the
experience, and the understanding needed to make the right
decisions?

 Committed: Are your stakeholders willing and able to make themselves available to participate in the process, and are they prepared to make some possibly difficult decisions?

 Authorized: Can you be sure that decisions made now by your stakeholders will not be reversed later (at potentially high cost)?

 Representative: If a stakeholder is a group rather than a person, have suitable representatives been selected from the group? Do those representatives meet the above criteria for individual stakeholders?

39
Business Goals - 1
Stakeholders often find expressing and/or prioritizing their
business goals to be difficult.
A taxonomy of business goals can aid in stakeholder
elicitation and facilitation:

40
Business Goals - 2
Each of these categories is broken down into sub-categories and sub-sub-categories:

 Reduce IT total cost of ownership (TCO):
 reduce cost of development
 ...
 reduce cost of deployment and operations
 ease of installation
 ease of repair
 reduce cost of maintenance
 ..
 reduce cost of retirement/moving to a new system
 ..

41
Business Goals - 3

Once business goals are identified, they must be prioritized:

 The prioritization exercise may reveal inconsistent expectations for the system among stakeholders

 This gives us guidance for making and justifying trade-off decisions during the design phase

42
Practice

 Fill-in B-2 and B-3

43
Practice in Teams

 Elaborate a sample business case of a web-conferencing system
 Imagine the details of the case and formulate: context, mission, stakeholders, and prioritized business objectives in the QAW template
 Model key actors and use cases in a use case diagram or just with bullet points

44
Part 2: Identification of
Architecturally Significant
Requirements

45
The Design and Analysis Process
Inception Activities
 Identify key stakeholders.
 Identify business objectives of the stakeholders.
 Prioritize business objectives.

Design Activities
 Identify, describe, and prioritize architecturally significant requirements (ASRs).
 Design and document the architecture.
 Validate the design decisions.

Review Activities
 Identify, describe, and prioritize ASRs.
 Identify the architecture description.
 Analyze the architecture description against the ASRs.

Post-Review Activities
 Summarize findings and review them with the architecture owners.
 Plan architecture improvements.
 Refine review methods.

46
Architecturally Significant Requirements - 1

 Even correctly articulated business goals are too vague for design and analysis

 Designing to satisfy all of the requirements at once is too difficult

 Some requirements have more influence than others on the architecture

47
Architecturally Significant Requirements - 2

Axiom 3: Architecture design is based on a set of architecturally significant requirements (ASRs), derived from business goals

ASRs, or architectural drivers, are a combination of:
 major functional requirements,
 quality attribute requirements (sometimes called NFRs),
 life-cycle requirements,
 and constraints
that "shape" the architecture

48
Architecturally Significant Requirements - 3

Axiom 4: Quality attribute requirements exert the strongest influence on architectural design

49
Practice

 Fill in C-1 list

50
Quality Attributes
Operational: Performance, Availability, Consistency, ...
User experience: Usability, Internationalization, Accessibility, ...
Risks Prevention: Security, Safety, Regulations compliance, ...
Growth & Evolution: Scalability, Modifiability, Testability, Interoperability, ...

51
Quality Attribute Requirements-1
 Since quality attributes exert the strongest influence on the architecture, they should be formalized as requirements

 However, in many projects they are not:

 Stakeholders often omit quality attribute requirements from consideration altogether, and after the system is built claim that these were clear by default (e.g. "of course you should have made the system able to support all our users; we relied on you being a good vendor who makes good software!")

 Stakeholders who do think about them are often fuzzy (e.g. "I want a system which performs well, is very reliable, and is sufficiently secure")

52
Quality Attribute Requirements-2
 Sometimes they are formulated in a non-verifiable form, lacking the details needed to validate them and later verify the system's behavior against the requirements.

 Often, a particular design tactic or implementation approach is declared as a requirement (e.g. "must be clustered at the hardware level", "must be implemented as an HTML5 web application to support web-based deployment and reuse on different platforms"):
 thus limiting the possible design options
 in most cases this hides a real root-cause need, which could be revealed

 The vocabulary describing quality attributes varies widely

53
Practice

 Complete C-1 list

54
Quality Attribute Scenarios
 There are well-known structures to describe functional
requirements (e.g. Use cases)

 We need similar structures to describe quality attributes

 A solution to the problem of describing expected quality attributes is to use quality attribute scenarios to better characterize them.
 A quality attribute scenario is a short description of how a system is required to respond to some stimulus.

55
Parts of a Quality Attribute Scenario - 1
A quality attribute scenario consists of six parts:
1. Source – an entity that generates a stimulus
2. Stimulus – a condition that affects the system
3. Artifact – the part of the system that was stimulated by
the stimulus
4. Environment – the condition under which the stimulus
occurred
5. Response – the activity that results because of the
stimulus
6. Response measure – the measure by which the system
response will be evaluated
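The six parts can be captured as a simple structured record, which makes scenarios easy to collect and compare. A minimal sketch in Python (the class and field names below are illustrative, not part of any standard):

from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    source: str            # entity that generates the stimulus
    stimulus: str          # condition that affects the system
    artifact: str          # part of the system that is stimulated
    environment: str       # condition under which the stimulus occurs
    response: str          # activity that results because of the stimulus
    response_measure: str  # how the response will be evaluated

# Hypothetical example (a performance use-case scenario)
remote_report = QualityAttributeScenario(
    source="Remote user",
    stimulus="Requests a database report via the Web",
    artifact="Reporting subsystem",
    environment="Peak period",
    response="The report is generated and returned",
    response_measure="Within 5 seconds",
)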

56
Parts of a Quality Attribute Scenario -2

57
Guidance on Scenario Brainstorming

Well-formed scenarios have at least a stimulus, an environment, and a response specified.

We focus on eliciting three types of scenarios:
 use cases – anticipated uses of the system
 growth – anticipated changes to the system
 exploratory – unanticipated stresses to the system (uses and/or changes)

58
Stimuli, Environment, Responses
Use case scenario
A remote user requests a database report via the Web
during a peak period and receives it within 5 seconds.
Growth scenario
Add a new data server to reduce latency in Scenario 1
to 2.5 seconds within 1 person-week.
Exploratory scenario
Half of the servers go down during normal operation
without affecting overall system availability.

These scenarios are “falsifiable hypotheses”.

59
Practice

 Review the following sample scenarios, analyze them, identify "bad" examples, and provide suggestions on how to fix them

60
Samples: Use Case Scenarios
 The user wants to examine budgetary and actual data under
different fiscal years without re-entering project data. (usability)
 A data exception occurs and the system notifies a defined list of
recipients by e-mail and displays the offending conditions in red on
dashboard screens. (reliability)
 The user changes the graph layout from horizontal to vertical and
the graph is redrawn in one second. (performance)
 The remote user requests a database report via the Web during
peak period and receives it within five seconds. (performance)
 The caching system will be switched to another processor when its
processor fails, and will do so within one second. (reliability)
 The user can withdraw a limit of $300 from an account that has
sufficient funds in less than 10 seconds (performance)

61
Samples: Growth
 Change the heads-up display to track several targets simultaneously
without affecting latency.

 Add a new message type to the system repertoire in less than a person-week of work.

 Add a collaborative planning capability where two planners at different sites collaborate on a plan in less than a person-year of work.

 Migrate to a new operating system, or a new release of the existing operating system, in less than a person-year of work.

 Add a new data server to reduce latency in use case Scenario 4 to 2.5 seconds within one person-week.

 Double the size of existing database tables while maintaining 1 second average retrieval time.

62
Samples: Exploratory
 Add a new 3-D map feature, and a virtual reality interface for viewing the
maps in less than five person-months of effort.

 Change the underlying database management system from MS SQL Server to Oracle.

 Re-use the 25-year-old software on a new generation of the aircraft.

 The time budget for displaying changed track data is reduced by a factor of 10.

 Improve the system availability from 98% to 99.999%.

 Half of the servers go down during normal operation without affecting overall system availability.

 Tenfold increase in the number of bids processed hourly while keeping worst-case response time below 10 seconds.

63
Practice

 Fill in scenarios into C-2

64
Quality Attribute Workshop (QAW)
The QAW is a facilitated method that engages
system stakeholders early in the life cycle to
discover the driving quality attribute requirements
of a software-intensive system.

Key points about the QAW are that it is
 system-centric
 stakeholder focused
 scenario based
 used before the software architecture has been created

65
QAW Steps
1. QAW Presentation and Introductions
2. Business/Programmatic Presentation
3. Architectural Plan Presentation
4. Identification of Architectural Drivers
5. Scenario Brainstorming
6. Scenario Consolidation
7. Scenario Prioritization
8. Scenario Refinement

66
Step 1: QAW Presentation and
Introductions
QAW Presentation
 QAW facilitators describe the motivation for the QAW and explain
each step of the method.

Introductions
 QAW facilitators introduce themselves to the stakeholders.
 Stakeholders introduce themselves and briefly describe their
background and relationship to the system.

67
Step 2: Business/Programmatic
Presentation
A representative from the system stakeholder community
presents the system business and/or programmatic
drivers.
 business/programmatic context for the system
 high-level functional requirements
 high-level constraints
 high-level quality attribute requirements
 plan for development

Facilitators capture information that may shed light on the quality attribute drivers.

68
Step 3: Architectural Plan Presentation
The system architect presents the architecture
development plans including
 key business/programmatic requirements
 key technical requirements and constraints that will
drive architectural decisions, such as
 mandated operating systems, hardware, middleware,
and so forth
 other systems with which the system must interact
 existing context diagrams, high-level system
diagrams, and descriptions

69
Step 4: Identification of Architectural Drivers
The QAW facilitators identify the architectural drivers that
are key to realizing quality attribute goals. QAW
facilitators:
 present a distilled list of the architectural drivers they heard
during the Business/Programmatic Presentation and the
Architecture Plan Presentation
 ask for clarifications, additions, and/or deletions from the
stakeholders to reach a consensus on the distilled list of
architectural drivers

The final list of architectural drivers helps focus the stakeholders during scenario brainstorming.

70
Step 5: Scenario Brainstorming
 Stakeholders generate scenarios using a
facilitated brainstorming process

 Each stakeholder generates a scenario in round-robin fashion or may pass

 Stakeholders may have an opportunity to contribute one or more scenarios (depending on the number of stakeholders in the QAW and the allocated time for the workshop)
71
Step 6: Scenario Consolidation

The QAW facilitators ask stakeholders to identify scenarios that are similar in content.

 Similar scenarios are merged to prevent a "dilution" of votes when voting is done.

 QAW facilitators attempt to reach a consensus with the stakeholders before merging scenarios.

72
Example Scenario Consolidation

Scenarios that are similar in content are grouped together, e.g.
 In the event of a processor fault, the system can be rebooted/reinitialized
 A processor failure or crash doesn't adversely affect any other components
 Software continues to operate even if the host fails

73
Step 7: Scenario Prioritization

Each stakeholder is then allocated votes (30% of the number of scenarios generated)

 Voting occurs in two rounds; each stakeholder will "spend" half of the votes in each round
 Stakeholders can spend any number of votes on any scenario they like
 Votes are counted and the scenarios are prioritized

74
Step 8: Scenario Refinement
 The top scenarios are further refined.
 The number of scenarios refined depends on the time available
 Typically top five scenarios are refined

 For each scenario, the QAW facilitators further elaborate and document the following:
 business/mission goals affected by the scenario
 description of relevant quality attributes
 list of questions with respect to the scenarios that stakeholders
would like to pose
 any issues that may arise during the scenario refinement

75
Template
Refinement Description for Scenario <scenario number>
Scenario(s): <scenario description, e.g. "When a garage door opener senses an object in the door's path, it stops the door in less than one millisecond.">
Business Goals: <business goals affected by the scenario, e.g. "Safest system; feature-rich product.">
Relevant Quality Attributes: <quality attributes associated with the scenario, e.g. "safety, performance">
Scenario Components
Stimulus: <e.g. "An object is in the path of a garage door">
Stimulus source: <e.g. "Object external to the system, such as a bicycle">
Environment: <e.g. "The garage door is in the process of opening">
Artifact (if known): <e.g. "system motion sensor, motion-control software component">
Response: <activity that must result from the stimulus, e.g. "The garage door stops moving">
Response Measure: <measure by which the system's response may be evaluated, e.g. "in less than 1 millisecond">
Questions: <any unresolved questions about the scenario; may result in follow-up refinement, e.g. "How large must an object be before it is detected by the system sensor?">
Issues: <issues raised regarding the scenario, to be handled separately, e.g. "May need to train installers to prevent malfunctions and avoid potential legal issues.">

76
QAW Steps

77
QAW Conceptual Flow and Outputs

 Increased stakeholder communication
 Clarified quality attribute requirements
 Informed basis for architectural design decisions
78
Practice in teams

 Fill in the QAW template for the sample web-conferencing system with at least:
 5 ASRs
 5 quality attribute scenarios
 1 elaborated in detail

79
Part 3: Practice of Designing
Architectures

80
Inspiring Quotation

I often describe the life of a software architect as a long and rapid succession of suboptimal design decisions taken partly in the dark.
P. Kruchten, 2001

81
The Design and Analysis Process
Inception Activities
 Identify key stakeholders.
 Identify business objectives of the stakeholders.
 Prioritize business objectives.

Design Activities
 Identify, describe, and prioritize architecturally significant requirements (ASRs).
 Design and document the architecture.
 Validate the design decisions.

Review Activities
 Identify, describe, and prioritize ASRs.
 Identify the architecture description.
 Analyze the architecture description against the ASRs.

Post-Review Activities
 Summarize findings and review them with the architecture owners.
 Plan architecture improvements.
 Refine review methods.

82
Software Architecture Design

 The design for a system consists of a sequence of decisions

 Design is the process of making decisions

What kinds of decisions could we make?

83
Allocation of Functionality

What are the major categories of system use?

How is functionality divided and assigned to software elements?

84
Data and Object Model

What are the information entities?

What different access levels do the data entities allow?

What are the criteria for data retention?

85
Coordination Model
 What are the communication mechanisms between
the system and external systems?

 What are the inter- and intra-element communication mechanisms?

 What properties does the communication protocol have?
 one-way, two-way, broadcast, etc.
 synchronous or asynchronous
 stateful or stateless

86
Management of Resources
What scheduling strategies will be employed?

What resources need to be managed?

What are the resource limits?

87
Binding Time Decisions

How and where are execution dependencies among elements resolved?

How are different variants of a system to be managed?
 Compile time (e.g., compiler switches)
 Build time (e.g., replace modules, pick from library)
 Load time (e.g., dynamic link libraries [DLLs])
 Initialization time (e.g., resource files)
 Run time (e.g., load balancing)
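As a rough illustration of deferring binding to initialization time, the sketch below reads the name of a concrete class from a resource file and loads it when the system starts; the settings.json file and the payment_gateway key are hypothetical:

import importlib
import json

def load_payment_gateway(config_path="settings.json"):
    # Initialization-time binding: the concrete implementation is named in a
    # resource file instead of being fixed at compile or build time.
    with open(config_path) as f:
        config = json.load(f)
    module_name, class_name = config["payment_gateway"].rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)()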

88
How to start?
 We have many areas to make dozens of
decisions
 Which approach should we take to solve a
problem?

 Decisions made early constrain the ones made


later
 So… make those decisions early that have the farthest-reaching impact on the desired system qualities!

89
Design decisions and ASRs
 Early software architecture design decisions must be
made in the context of architectural drivers (ASRs)

 For each decision, consider whether the decision impacts any of the architectural drivers – either supports them or hinders them

 For example, consider the early design decisions associated with the coordination model and the communication mechanisms between the system and external entities …

90
What Are the Communication Mechanisms?
How would quality attributes enter into this decision?

 Availability Security
 Does the mechanism have to  Is the communication with the
support failure of the external entity? external entity subject to a
 Does the mechanism have to threat?
guarantee delivery?  Testability
 Modifiability  How will the communication be
Will the external entity change? tested?
Will the information being  Can the communication be
communicated change? played back for testing?
 Performance  Usability
 Is the communication with the  If the external entity is a user,
external entity sensitive to system are any of the usability scenarios
latency or throughput? relevant?

91
Tactics – 1
The design for a system consists of a collection of
design decisions:
 Some decisions are intended to ensure the achievement of the
system functionality.
 Other decisions are intended to help control the quality attribute
responses.

Axiom 5: Architecture design can be made tractable by considering a small number of primitives, called tactics.

92
Tactics – 2
A tactic is a design decision that is influential in the control
of a quality attribute response.

Each tactic is a design option for the architect.
 For example, to promote availability, we might choose the Redundancy tactic

One tactic can refine another tactic.
 For example, redundancy could be refined into data and/or computational redundancy tactics

93
Availability
Availability refers to the property of software of being there and ready to carry out its task when you need it
 Availability builds upon the following concepts:
 system reliability
 recovery

94
Thinking about Availability
 System availability builds upon the concept of system
reliability by adding the notion of recovery:

 MTBF – mean time between failures
 MTTR – mean time to repair
Availability Downtime/Year
99.0 % 3 days, 15.6 hr
99.9 % 8 hr, 0 min, 46 sec
99.99 % 52 min, 34 sec
99.999 % 5 min, 15 sec
99.9999 % 32 sec
 These requirements exclude planned downtime
 High-availability typically refers to 5 nines or more
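A back-of-the-envelope sketch of how MTBF and MTTR combine into steady-state availability (the numbers are illustrative only):

def availability(mtbf_hours, mttr_hours):
    # Steady-state availability = MTBF / (MTBF + MTTR)
    return mtbf_hours / (mtbf_hours + mttr_hours)

a = availability(mtbf_hours=1000.0, mttr_hours=1.0)  # ~0.999, i.e. "three nines"
downtime_hours_per_year = (1 - a) * 365 * 24          # ~8.75 hours/year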
95
Availability Tactics – Fault Detection
 Ping/Echo: when one component issues a ping and expects to receive an echo within a predefined time from another component

 Heartbeat: when one component issues a message periodically while another listens for it

 Exception: using exception mechanisms to raise faults when an error occurs

 Voting: when processes take equivalent input and compute output values that are sent to a voter
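A minimal in-process sketch of the Heartbeat tactic; a real system would send the heartbeat over the network and trigger recovery when is_alive() turns false:

import threading
import time

class HeartbeatMonitor:
    """Marks the monitored component as failed if no heartbeat arrives in time."""
    def __init__(self, timeout_seconds=3.0):
        self.timeout = timeout_seconds
        self.last_beat = time.monotonic()
        self.lock = threading.Lock()

    def beat(self):
        # Called by the monitored component on every periodic heartbeat message.
        with self.lock:
            self.last_beat = time.monotonic()

    def is_alive(self):
        with self.lock:
            return (time.monotonic() - self.last_beat) < self.timeout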

96
Availability – Fault Recovery Preparation and Repair

 Active Redundancy: when redundant components are used to respond to events in parallel

 Passive Redundancy: when a primary component responds to events and informs standby components of the state updates they must make

 Spare: when a standby computing platform is configured to replace failed components

97
Availability– Fault Recovery and Reintroduction

 Shadow Operation: running a previously failed component in "shadow mode" before it is returned to service

 State Resynchronization: using another state to resynchronize failed components

 Checkpoint/Rollback: recording a consistent state that is created periodically or in response to specific events
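A minimal sketch of the Checkpoint/Rollback tactic, assuming the component state can be deep-copied; persistence and distribution concerns are left out:

import copy

class Checkpointable:
    def __init__(self, state):
        self.state = state
        self._checkpoint = None

    def checkpoint(self):
        # Record a consistent snapshot of the current state (periodically
        # or in response to a specific event).
        self._checkpoint = copy.deepcopy(self.state)

    def rollback(self):
        # Restore the last recorded consistent state after a failure.
        if self._checkpoint is not None:
            self.state = copy.deepcopy(self._checkpoint)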

98
Availability Tactics – Fault Prevention
 Removal from service: removing a system component from
operation so it can undergo some procedure that will help it avoid
failure in the future (e.g., rebooting a component prevents failures
caused by memory leaks)

 Transactions: the bundling of several sequential steps such that the entire bundle can be undone at once:
 prevents data from being affected if one step in a process fails
 prevents simultaneous access to data by concurrent threads

 Process Monitor: monitoring processes are used to monitor critical components, remove them from service, and re-instantiate new processes in their place.

99
Summary of Availability Tactics

100
Practice

 Fill in D-1

101
Thinking About Performance - 1

 The goal of performance tactics is to generate a response to an event arriving at the system within some time constraint

 Two basic contributors to the response time are resource consumption and blocked time:
After an event arrives, either the system is processing that event or the processing is blocked for some reason

102
Thinking About Performance - 2

 Resource consumption:
 Central processing unit (CPU)
 Memory
 Data stores
 Network communication bandwidth

 Blocked time:
 Contention for the resource
 Resource is unavailable
 Computation depends on the result of other computations that are not yet available

103
Performance Scenario

104
Queuing Model for Performance

105
Managing Performance
 Architectural means for controlling the parameters of a performance
model:
 Arrival rate – restrict access
 Queuing discipline – first-come first-served (FCFS), priority queues, etc.
 Service time:
 Increase efficiency of algorithms
 Cut down on overhead (reduce inter-process communication, use thread pools, connection pools, etc.)
 Use a faster processor
 Scheduling algorithm – round robin, service last interrupt first, ...
 Topology – add/delete processors
 Network bandwidth – faster networks
 Routing algorithm – load balancing
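As a back-of-the-envelope illustration of how these parameters interact, the sketch below assumes a single-server M/M/1 queue (a simplification; real systems rarely fit it exactly):

def mm1_response_time(arrival_rate, service_time):
    # Utilization must stay below 1, or the queue grows without bound.
    utilization = arrival_rate * service_time
    assert utilization < 1.0, "system is overloaded"
    # Average response time (waiting + service) for an M/M/1 queue.
    return service_time / (1.0 - utilization)

# 80 requests/s at 10 ms service time -> 0.8 utilization, ~50 ms average response
print(mm1_response_time(arrival_rate=80.0, service_time=0.010))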

106
Performance Tactics

Performance tactic categories and their goals:

 Resource demand – Reduce or manage the demand for resources.

 Resource management – Manage resources even though the demand for resources is not controllable.

 Resource arbitration – Control contention for resources through scheduling.

107
Performance Tactics – Control Resource Demand
 Increase Computational Efficiency
 improve algorithm efficiency in critical areas

 Reduce Computational Overhead
 eliminate intermediaries, co-locate resources, periodic cleanups
 Bound queue sizes
 control the maximum number of queued arrivals
 Bound execution times
 limit the execution time used to respond to an event
 Manage event rate
 reduce the frequency of monitoring environmental variables

108
Performance Tactics –Resource Management
 Introduce concurrency
 reduce blocked time by performing the requests in parallel on different threads

 Maintain multiple copies (data or computations)
 servers in the client-server pattern are computation replicas
 caching is a tactic to replicate data

 Increase available resources
 add additional (faster) processors, additional memory, or faster networks. Cost is always a consideration in this case.
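A minimal sketch of the "introduce concurrency" tactic: independent, I/O-bound requests are issued in parallel instead of sequentially (fetch_one is a hypothetical function):

from concurrent.futures import ThreadPoolExecutor

def fetch_all(item_ids, fetch_one):
    # Blocked time on I/O is overlapped by running the requests on a thread pool.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(fetch_one, item_ids))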

109
Performance Tactics –Resource Arbitration
 First in / first out

 Fixed priority scheduling


 semantic importance

 deadline monotonic

 rate monotonic

 Dynamic priority scheduling


 round robin

 earliest deadline first

 least slack first
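A minimal sketch of an earliest-deadline-first arbiter built on a priority queue; the task and deadline representations are illustrative:

import heapq
import itertools

class EdfScheduler:
    """Always dispatches the pending task whose deadline is nearest."""
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # tie-breaker for equal deadlines

    def submit(self, deadline, task):
        heapq.heappush(self._queue, (deadline, next(self._counter), task))

    def next_task(self):
        _deadline, _, task = heapq.heappop(self._queue)
        return task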

110
Performance Tactics

111
Thinking about Security
Security refers to the system's ability to protect data and information from unauthorized access while still providing access to people and systems that are authorized

 Security characteristics:
 Integrity: data or services are not subject to unauthorized
manipulation
 Confidentiality: data or services are protected from
unauthorized access
 Authentication: verifies the identities of the parties
 Authorization: grants a user the privileges to perform a task
 Nonrepudiation: guarantees that the sender of a message
cannot later deny its action

112
Security Tactics – Detect Attacks
 Detect intrusion
 compare network traffic to known patterns of malicious
behavior (historic or defined by rules)
 Detect denial of service
 compare the pattern of incoming network traffic to historic profiles of known denial-of-service attacks
 Verify message integrity
 verify checksums or hash values
 Detect message delay
 detect potential man-in-the-middle attacks by checking
the time needed to deliver a message
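A minimal sketch of the "verify message integrity" tactic using an HMAC; key distribution and message framing are out of scope here:

import hashlib
import hmac

def sign(message: bytes, key: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, key: bytes, received_mac: bytes) -> bool:
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(sign(message, key), received_mac)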

113
Security Tactics – Resist Attacks - 1
 Authenticate users
 ensure identities by using passwords, digital certificates,
and/or biometrics
 Authorize users
 access control to data and/or services for authenticated
users (through user classes, groups, or roles)
 Limit access
 limit access to memory, network connections, and access points: memory protection, blocking a host, closing a port, rejecting a protocol
 DMZ – expose only certain services to external access

114
Security Tactics – Resist Attacks - 2
 Limit exposure
 have the least possible number of access points for
resources, data, services
 reduce the number of connectors that may provide
unanticipated exposure

 Separate entities / compartmentalization
 sensitive data is frequently separated from non-sensitive data
 physical separation on different servers from different
networks

115
Security Tactics – Resist Attacks - 3
 Maintain data confidentiality
 encrypt data and communication links

 Maintain integrity
 use of encoded redundancy information such as
checksums and hash results

116
Security Tactics – React to Attacks
 Revoke access
 If a desktop has been compromised by a virus, access to
certain resources may be limited until the virus is
removed from the system.
 Lock computer
 Limit access from a particular computer if there are
repeated failed attempts to access an account from that
computer.
 Inform users
 Ongoing attacks may require action by operators, other
personnel, or cooperating systems.

117
Security Tactics – Recovering from Attacks
 Identification of attackers and affected data
 Maintain an audit trail of each transaction applied to
data and access to services.
 Sign messages with digital signature for non-
repudiation

 Restoring state
 Maintain redundant copies of system data with ability
to restore to a check-point

118
Thinking about Scalability
Scalability refers to ability to maintain the required
quality of service as the system load increases
 Scalability types:
 vertical
 add more processing power
 horizontal
 add more resources (additional nodes)

119
Scalability Tactics
 Load balancers in network switches
 are implemented in firmware

 Load balancers in a cluster
 are implemented with software

 Load balancers based on DNS server configuration
 map to the same DNS host name

120
Scalability Algorithms
 Round-robin algorithm

 Response-time or first-available algorithm

 Least-loaded algorithm
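A minimal sketch of the round-robin and least-loaded selection algorithms; the server objects and load metric are placeholders:

import itertools

class RoundRobinBalancer:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

class LeastLoadedBalancer:
    def __init__(self, servers):
        # "Load" could be active connections, queue length, CPU, etc.
        self.load = {server: 0 for server in servers}

    def pick(self):
        server = min(self.load, key=self.load.get)
        self.load[server] += 1  # the caller should decrement when the request completes
        return server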

121
Thinking about Modifiability
Modifiability is about change

122
Modifiability Tactics – 1
 Localize modifications
 Maintain semantic coherence: ensuring that all of the
responsibilities in a module work together without
excessive reliance on other modules
 Anticipate expected changes: design and build-in a system
with a set of envisioned changes in mind
 Generalize the module: creating modules that are more
general allowing them to compute a broader range of
functions based on input

123
Modifiability Tactics – 2
 Reduce Module Size
 Split module
 refine the module into several smaller modules reduces
the average cost of future changes

 Increase Cohesion
 Increase semantic coherence
 ensuring that all of the responsibilities in a module work
together without excessive reliance on other modules

124
Modifiability Tactics – 3
 Reduce Coupling
 Encapsulate
 introduces an explicit interface to a module. Interfaces reduce the probability that a change to one module propagates to other modules.

 Use intermediaries
 publish-subscribe intermediary, shared data repository,
bridges, proxies, etc.
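A minimal sketch of the "use intermediaries" tactic: a tiny in-process publish-subscribe bus, so publishers and subscribers never depend on each other directly (names are illustrative):

from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Publishers know only the topic name, never the concrete subscribers.
        for handler in self._subscribers[topic]:
            handler(payload)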

125
Modifiability Tactics – 4
 Reduce Coupling
 Restrict dependencies
 restrict the module visibility and allow access through
authorization
 seen in layered architectures and wrappers
 Abstract common services
 when two modules provide "not-quite-the-same but similar" services, it may be cost-effective to implement the services just once in a more general (abstract) form

126
Modifiability Tactics – 5
 Prevent accidental ripple effect
 Hide information: ensuring that all of the module responsibilities are related, and that the module works without excessive reliance on other modules
 Maintain existing interfaces: by adding adapters, stubs
 Restrict communication paths: restricting the modules with
which a given module shares data
 Use intermediaries: utilize repositories, bridges, proxies,
etc.
 Isolate common services: providing common services
through specialized modules

127
Modifiability Tactics – 6
 Defer binding time
 Run-time registration: supporting plug-and-play
operation
 Configuration files: setting parameters and configuring
elements at start-up time
 Polymorphism: allowing late binding of method calls
 Component replacement: allowing load time binding
 Adhere to defined protocols allowing run-time binding
of independent processes

128
Example: Web E-Commerce
 System context: Internet
 Technical environment: e-commerce reference
Architecture
 Initial pattern: canonical e-commerce three-tier
architecture

129
Web E-Commerce Architectural Drivers

 Modifiability – First Design Round
 Security – Second Design Round
 High performance, Scalability, High availability – Third Design Round

130
First Design Round: Problem to Solve

 Modifiability:
 E-commerce Web sites change frequently, in many
cases daily, so their content must be very simple to
change.

131
First Design Round: Patterns and Tactics
The e-commerce pattern provides modifiability by virtue of
separation of responsibilities into distinct tiers.
However, when later analyzing the architecture, it is helpful
to understand the underlying tactics.

 Tactics  How Achieved


 Abstract common  Separation of browser
Services functionality,
database, and
 Semantic coherence business logic into
 Use an intermediary distinct tiers.
 Maintain existing
interfaces

132
First Design Round: Design Decisions

 The e-commerce pattern does not exempt the architect from having to make other early design decisions, such as
 state management (which elements are stateful and which are stateless).

 This affects whether clients are "thick" or "thin", the choice to use cookies, etc.

133
First Design Round: Design Concept

134
Web E-Commerce Architectural Drivers

 Modifiability – First Design Round
 Security – Second Design Round
 High performance, Scalability, High availability – Third Design Round

135
Second Design Round: Problem to Solve

 Security
 Users must be assured that any sensitive information
they send across the Web is secure from snooping.
 Operators of Web sites must be assured that their
system is secure from attack (stealing or modifying
data, rendering data unusable by flooding it with
requests, crashing it, etc.).

136
Second Design Round: Tactics

 Tactics  How Achieved


 Limit access  Router/Firewall
 Maintain integrity  Encryption across
 Limit exposure public networks by
 Maintain data using one-way SSL
confidentiality (HTTPS) protocol
 Authentication by
login/password in
application server

137
Second Design Round: Design Concept

138
Web E-Commerce Architectural Drivers

 Modifiability – First Design Round
 Security – Second Design Round
 High performance, Scalability, High availability – Third Design Round

139
Third Design Round: Problem to Solve

 High performance. A popular Web site will typically have tens of millions of "hits" per day, and users expect low latency from it. Customers will not tolerate the site simply refusing their requests.
 Scalability. As Web sites grow in popularity, their processing capacity must be able to grow similarly, to both expand the amount of data they can manage and maintain acceptable levels of customer service.
 High availability. E-commerce sites are expected to be available "24/7." They never close, so they must have minimal downtime, perhaps a few minutes a year.

140
Third Design Round: Patterns and Tactics
To achieve high performance and availability in the e-commerce architecture we need to make some further architectural changes.

Tactics:
 Introduce concurrency
 Maintain multiple copies
 Increase available resources
 Scheduling policy

How achieved:
 Replicated servers (clusters)
 Load balancing

141
Third Design Round: Design Concept

142
Thinking about Testability
Testability refers to the ease with which software
can be made to demonstrate its faults through
testing

143
Testability Tactics - 1
Control and observe system state
 Specialized testing interfaces
 allow control or capture of variable values for a component, either through a test harness or through normal execution
 Record/playback
 record and playback information that crosses interface
boundaries to re-create the fault
 Build in monitors (e.g. telemetry)
 record information and/or notify if values are out of
ranges

144
Testability Tactics - 2
Control and observe system state (continued)
 Abstract data sources
 abstract interfaces can substitute implementations for
various testing purposes (mocking, injection)
 Sandbox
 isolate an instance of the system from the real world
without having to undo the experiment consequences
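A minimal sketch of the "abstract data sources" tactic: production code depends on a small interface, so a test can inject an in-memory fake instead of the real database (all names are illustrative):

from abc import ABC, abstractmethod

class OrderStore(ABC):
    @abstractmethod
    def find(self, order_id): ...

class InMemoryOrderStore(OrderStore):
    """Test double substituting for the real database-backed store."""
    def __init__(self, orders):
        self._orders = orders

    def find(self, order_id):
        return self._orders.get(order_id)

def order_total(store: OrderStore, order_id):
    order = store.find(order_id)
    return sum(line["price"] * line["qty"] for line in order["lines"])

# In a test:
store = InMemoryOrderStore({1: {"lines": [{"price": 10.0, "qty": 2}]}})
assert order_total(store, 1) == 20.0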

145
Testability Tactics - 3
Limit complexity
 Limit structural complexity
 resolve cyclic dependencies between components
 isolate and encapsulate dependencies to externals
 reduce dependencies between components
 Limit nondeterminism
 find all the sources of nondeterminism and remove them as much as possible:
 unconstrained parallelism

146
Thinking about Usability
Usability refers to how easy it is for the user to
accomplish a desired task and the kind of user
support the system provides

147
Usability Tactics – Design Time
 UI design standardization

 Separate user interface

 Prototyping and interface evaluation

148
Usability Tactics -Support User Initiative
 Cancel
 the system must be listening for cancel commands
 Undo
 maintain a sufficient amount of information about system
state so that an earlier state may be restored
 Pause/resume
 when user has initiated a long-running operation,
provide the ability to pause and resume the operation
 Aggregate
 when the user is performing repetitive operations, aggregate the objects and apply the operation to the group
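A minimal sketch of the Undo tactic: each operation records how to restore the previous state, and the history is kept on a stack (names are illustrative):

class UndoHistory:
    def __init__(self):
        self._undo_stack = []

    def execute(self, do, undo):
        # 'do' applies the operation; 'undo' restores the state it replaced.
        do()
        self._undo_stack.append(undo)

    def undo_last(self):
        if self._undo_stack:
            self._undo_stack.pop()()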

149
Usability Tactics -Support System Initiative
 Maintain a model of the user
 maintain the user’s knowledge of the system, behavior,
response
 wizards, remember previous choice, suggestions
 Maintain a model of the system
 give appropriate feedback to the user
 progress bars to complete some activities
 Maintain a model of the task
 determine the context so that the system has some idea of
what the user is attempting to accomplish and can provide
assistance
 auto capitalize first word in a sentence

150
Practice

 Fill in D-2

151
Practice in teams

 Design the architecture from the ASRs, applying tactics for availability, performance, modifiability, and security

 You can use http://draw.io as diagramming software

152
Tactics and Patterns

 A tactic is a design decision that is influential in the control of a single quality attribute response

 A pattern is a prepackaged solution to a recurring problem that resolves multiple forces.

 Patterns package tactics. If a pattern is a molecule, a tactic is an atom.

153
Pattern Structure

And:
 Guiding principles
 Related patterns
 Guidelines on how to apply the pattern within a certain technology domain

154
Where to find patterns?

155
Design Principles
 Making design decisions becomes easier when the architect follows design principles, which are an aggregation of the positive and negative experience of design decisions embodied in multiple successful or failed architectures and shared in a community.
 Guiding design principles prescribe honoring one quality attribute over another in trade-off decisions, or preferring certain tactics when solving certain types of problems, and explain why
 Examples:
 Do not sacrifice loose coupling and cohesion of functional components (maintainability) for performance
 Design messages to be atomic and services to be stateless

156
Architectural Styles
 Architectural style is a stable and consistent set of
design principles and patterns, which could be applied to
multiple architectures to solve common problems and
achieve common goals within an industry
 Similar good architectures turn into a style when:
 Abstracted to be reusable within problem domain
 Communicated and rationalized well in a community
 Implementation of main patterns becomes supported by open-
source and/or COTS technology
 Examples of styles are:
 2-tier client-server architecture based on distributed RDBMS
 Multi-tier web/enterprise application architecture
 Service-oriented enterprise architecture

157
Summary of Attribute-Driven Design
 Make important decisions early. Software architecture
focuses on design decisions that help control a quality
attribute response
 Choose the most influential (few) ASRs on which to
focus. These are the “architectural drivers”
 Choose a pattern, if you can find one, and then adjust
the pattern based on tactics

158
Software Architecture Design Workflow
Identify problem to solve
• Determine importance and difficulty
• Characterize quality attribute(s)
• Analyze existing architectural approaches

Identify solution options
• Consider patterns/tactics that might solve the problem and improve the quality attribute(s)

Make design decisions
• Assess the impact of each option
• Select the best one and capture the decision made
• Rework the architectural model / description

Manage design decisions
• Manage the backlog of problems, open questions, and postponed decisions
• Ensure consistency
159
Practice

 Fill in D-3, D-4


 Quality Attributes Identification

160
Part 4: Practice of
Documenting Architectures

161
Role of Architectural Description
 Why is architecture important?
 It is a fundamental artifact that guides development
 It is a manifestation of the earliest design decisions about the system

But in addition to that:
 It provides a vehicle for communication among stakeholders
 It is a transferable, reusable abstraction of a system

That's why an architectural description of a system is necessary

162
Architectural Description

(Concept model: a System has an Architecture; the Architecture can be documented by 0..n Architectural Descriptions; an Architectural Description documents the architecture for, and addresses the needs of, 1..n Stakeholders.)
How to describe the complexity?
 There should be a common basis for communication

 One kind of diagram or blueprint is not enough to describe all important system structures and their qualities

 Different stakeholders have different sets of concerns
 which sometimes may be grouped around aspects of the system organization
Views
 So, let’s describe architecture within a limited set of views

 View:
 is a representation of structural aspects of an architecture
 illustrates how the architecture addresses concerns held by one or
more of its stakeholders
 Many-to-many

 What views to use for my SAD, what and how should I


describe within a view?
Viewpoints
A viewpoint defines:
 stakeholders and their concerns
 expected content of the view
 design and documenting guidelines
 common problems and pitfalls

Viewpoint sets:
 General-purpose:
 RUP/Kruchten "4+1"
 Rozanski & Woods


Kruchten’s Viewpoint Set

IEEE, The “4+1” View Model of Software Architecture, P. Kruchten, Nov 1995

167
Woods&Rozanski Viewpoint Set
(Diagram: the Functional, Information, and Concurrency viewpoints describe the abstract system model; the Development viewpoint has a construction focus; the Deployment and Operational viewpoints have a production focus.)
Functional Viewpoint
The functional structure of the system
Content:
the system runtime functional elements and their responsibilities, interfaces, and
primary interactions

Concerns:
functional capabilities, external interfaces, internal structure, and design philosophy

Models:
functional structure model

Pitfalls:
poorly defined interfaces, poorly understood responsibilities,
diagrams without element definitions,
difficulty in reconciling the needs of multiple stakeholders,
inappropriate level of detail, “God elements,” too many dependencies
Functional View – Example Model
Information Viewpoint
The information structure, ownership and processing in the system

Content:
how the system stores, manipulates, manages, and distributes information
Concerns:
information structure and content; information flow; data ownership;
timeliness, latency, and age;
transaction management and recovery; data quality; data volumes;
archives and data retention

Models:
static data structure models, information flow models, information lifecycle models,
data ownership models, data quality analysis, metadata models

Pitfalls:
data incompatibilities, poor data quality, unavoidable multiple updaters,
poor information latency, interface complexity
Concurrency Viewpoint
The packaging of the system into processes and threads
Content:
the concurrency structure of the system, mapping functional elements to
concurrency units to clearly identify the parts of the system that can
execute concurrently, how this is coordinated and controlled

Concerns:
task structure, mapping of functional elements to tasks,
inter-process communication,
state management, synchronization and integrity,
startup and shutdown, task failure, and re-entrancy

Models:
system-level concurrency models and state models

Pitfalls:
modeling of the wrong concurrency, excessive complexity,
resource contention, deadlock, and race conditions
Development Viewpoint
The architectural constraints on the development process

Content:
how the architecture supports and constrains the software development process

Concerns:
module organization, common processing,
standardization of design, standardization of testing,
codeline organization

Models:
module structure models, common design models, and codeline models

Pitfalls:
too much detail, lack of precision, problems with the specified environment
Deployment Viewpoint
The runtime environment and the distribution of software across it
Content:
the environment into which the system will be deployed, including the dependencies
the system has on its runtime environment

Concerns:
types of hardware required, specification and quantity of hardware required,
third-party software requirements, technology compatibility,
network requirements, network capacity required, physical constraints

Models:
runtime platform models, network models, and technology dependency models

Pitfalls:
unclear or inaccurate dependencies, unproven technology,
lack of specialist technical knowledge,
late consideration of the deployment environment
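A minimal sketch of a runtime platform / technology dependency model, using invented node and component names, recording the kind of information this view is concerned with:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A deployment node with its hardware specification and hosted components."""
    name: str
    hardware: str                                            # type and specification of hardware
    third_party: list[str] = field(default_factory=list)     # third-party software required
    hosts: list[str] = field(default_factory=list)           # deployed software components

# Hypothetical deployment of an order-management system
deployment = [
    Node("app-server-1", "4 vCPU / 16 GB RAM",
         third_party=["OpenJDK 17", "Tomcat 10"],
         hosts=["OrderCapture", "PricingService"]),
    Node("db-server-1", "8 vCPU / 64 GB RAM / 1 TB SSD",
         third_party=["PostgreSQL 15"],
         hosts=["OrderRepository"]),
]

# Simple roll-up of hardware quantity and technology dependencies
print("nodes required:", len(deployment))
print("technology dependencies:", sorted({t for n in deployment for t in n.third_party}))
```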
Operational Viewpoint
How the system is installed, migrated to, run and supported
Content:
describes how the system will be operated, administered, and supported when it is running
in its production environment

Concerns:
installation and upgrade, configuration management,
functional migration, data migration,
operational monitoring and control, performance monitoring,
backup and restore, support

Models:
installation models, migration models, configuration management models,
administration models, support models

Pitfalls:
lack of engagement with the operational staff,
lack of migration planning, insufficient migration window,
missing management tools, lack of integration into the production environment,
inadequate backup models
The Design and Analysis Process
Inception Activities
 Identify key stakeholders.
 Identify business objectives of the stakeholders.
 Prioritize business objectives.
Design Activities
 Identify, describe, and prioritize architecturally significant requirements (ASRs).
 Design and document the architecture.
 Validate the design decisions.
Review Activities
 Identify, describe, and prioritize ASRs.
 Identify architecture description.
 Analyze architecture description against ASRs.
Post-Review Activities
 Summarize findings and review them with architecture owners.
 Plan architecture improvements.
 Refine review methods.

186
Validate Design Decisions
Axiom 6: Architecture design can and should be guided by analysis

 Design and analysis are two sides of the same coin
 To validate a design, it must be analyzed

187
Practice

 Software Architecture Description

188
Part 5: Software Architecture
Analysis and Assessment

189
The Design and Analysis Process
Inception Activities
 Identify key stakeholders.
 Identify business objectives of the stakeholders.
 Prioritize business objectives.
Design Activities
 Identify, describe, and prioritize architecturally significant requirements (ASRs).
 Design and document the architecture.
 Validate the design decisions.
Review Activities
 Identify, describe, and prioritize ASRs.
 Identify architecture description.
 Analyze architecture description against ASRs.
Post-Review Activities
 Summarize findings and review them with architecture owners.
 Plan architecture improvements.
 Refine review methods.

191
Why Evaluate an Architecture?

 Because so much depends on it
 An unsuitable architecture will precipitate disaster
 Architecture determines the structure of the project
 Because we can
 Repeatable, structured methods offer a low-cost risk mitigation capability that can be employed early in the development life cycle
 Architecture evaluation should be a standard part of every architecture-based development methodology

192
Evaluation Techniques
 Questioning techniques use questionnaires, checklists, and scenarios to investigate the way an architecture addresses its quality requirements
 Measuring techniques apply some measurement tool to a software artifact
 Review workshop techniques (inspection, assessment)
 Validating techniques compare reverse-engineered requirements (designs) with the original requirements (designs)

193
The ATAM
 Today our focus is a combined approach – Architecture Trade-off Analysis Method
 The purpose of the ATAM is to assess the consequences of architectural decisions in light of quality attribute requirements and business goals
 The ATAM brings together three groups in an evaluation:
 a trained evaluation team
 the architecture “decision makers” (architect, senior designers, project managers, customers)
 representatives of the system stakeholders

194
Purpose of the ATAM
 The ATAM is a method that helps stakeholders ask the right questions to discover potentially problematic architectural decisions
 Discovered risks can then be made the focus of mitigation activities such as further design, further analysis, and prototyping
 Surfaced tradeoffs can be explicitly identified and documented
 The purpose is NOT to provide precise analyses or to find the best solutions ... the purpose IS to discover any risks created by architectural decisions
195
ATAM Phases

196
ATAM Phase 0

 Phase 0 precedes the technical part of the evaluation:
 The customer and a subset of the evaluation team exchange their understanding about the method and the system whose architecture is to be evaluated
 An agreement to perform the evaluation is worked out
 A core evaluation team is fielded

197
ATAM Phase 1

Phase 1 involves a small group of predominantly technically oriented stakeholders

Phase 1 is
 architecture-centric
 focused on eliciting detailed architectural information and analyzing it
 top-down analysis

198
ATAM Phase 1 Steps

199
Step 1: Present the ATAM
 The evaluation team presents an overview of the
ATAM including:
 ATAM steps in brief
 Techniques
 utility tree generation
 architecture elicitation and analysis
 scenario brainstorming/mapping
 Outputs
 architectural approaches
 utility tree and scenarios
 risks, non-risks, sensitivity points, and tradeoffs

200
Step 2: Present Business Drivers

 The customer representative describes the system business drivers including its:
 business context
 high-level functional requirements
 high-level quality attribute requirements
 architectural drivers: quality attributes that “shape” the architecture
 critical requirements: quality attributes most central to the system success

201
Step 3: Present Architecture
 The architect presents an overview of the
architecture including
 technical constraints such as an operating
system, hardware, or middleware prescribed for use
 other systems with which the system must interact
 architectural approaches used to address quality
attribute requirements

 The evaluation team begins probing for and capturing risks

202
Step 4: Identify Architectural Approaches

 Identify predominant architectural approaches such as:
 client-server
 3-tier
 publish-subscribe
 redundant hardware
 …
 The evaluators begin to identify places in the architecture that are key to realizing quality attribute goals

203
Step 5: Generate Quality Attribute Utility Tree

 Identify, prioritize, and refine the most important quality attribute goals by building a utility tree
 A utility tree is a top-down vehicle for characterizing the
“driving” attribute-specific requirements
 The most important quality goals are the high-level nodes
(typically performance, modifiability, security, and
availability)
 Scenarios are the leaves of the utility tree
 Prioritization (H, M, L) is done along two dimensions:
 importance of each node to the success of the system
 degree of perceived risk in achieving this node
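As a rough illustration (the quality attributes and scenarios below are invented), a utility tree can be recorded as a nested structure with the two prioritization dimensions attached to each leaf scenario:

```python
# Each leaf carries (scenario text, importance, perceived risk of achieving it), rated H/M/L.
utility_tree = {
    "Performance": {
        "Transaction latency": [
            ("Process a customer order in under 2 seconds at peak load", "H", "M"),
        ],
    },
    "Modifiability": {
        "New payment types": [
            ("Add a new payment provider with less than 10 person-days of effort", "H", "H"),
        ],
    },
    "Availability": {
        "Hardware failure": [
            ("Recover from the loss of an application server within 30 seconds", "M", "H"),
        ],
    },
}

# (H, H) leaves are the natural starting point for the analysis in Step 6
high_priority = [s for qa in utility_tree.values() for leaves in qa.values()
                 for (s, imp, risk) in leaves if (imp, risk) == ("H", "H")]
print(high_priority)
```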

204
Example Quality Attribute Utility Tree

205
How Scenarios Are Used – Recall
We use six-part scenarios as described earlier:
1. source – an entity that generates a stimulus
2. stimulus – a condition that affects the system
3. artifact – the part of the system that was stimulated by the stimulus
4. environment – the condition under which the stimulus occurred
5. response – the activity that results because of the stimulus
6. response measure – the measure by which the system response
will be evaluated

Scenarios should cover a range of:
 anticipated uses of the system (use case scenarios)
 anticipated changes to the system (growth scenarios)
 unanticipated stresses on the system (exploratory scenarios)
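The six parts map directly onto a small record; the concrete growth scenario below is invented purely to show the shape:

```python
from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    source: str            # entity that generates the stimulus
    stimulus: str          # condition that affects the system
    artifact: str          # part of the system that is stimulated
    environment: str       # condition under which the stimulus occurs
    response: str          # activity that results from the stimulus
    response_measure: str  # how the response will be evaluated

growth_scenario = QualityAttributeScenario(
    source="marketing department",
    stimulus="requests support for a new sales channel",
    artifact="order capture subsystem",
    environment="normal development cycle",
    response="channel added without changes to the pricing or fulfilment elements",
    response_measure="delivered within 20 person-days",
)
print(growth_scenario)
```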

206
Step 6: Analyze Architectural Approaches

 The evaluation team probes architectural approaches from the point of view of specific quality attributes to identify risks

The team:
 identifies the architectural approaches
 asks quality-attribute-specific questions for the highest
priority scenarios
 identifies and records risks, non-risks, sensitivity points,
and tradeoffs

207
Scenario Analysis Outputs
 As each scenario is analyzed against the architecture,
the evaluation team identifies risks, non-risks, sensitivity
points, and tradeoffs.
 A risk is a potentially problematic architectural decision.
 Non-risks are good architectural decisions that are
frequently implicit in the architecture.
 A sensitivity point is a place in the architecture that
significantly affects whether a particular quality attribute
response is achieved.
 A tradeoff is a property that affects more than one
attribute and is a sensitivity point for more than one
attribute.

208
Risks and Non-Risks
Example risk:
 Rules for writing business logic modules in the
second tier of your three-tier architecture are not
articulated clearly. This could result in the replication
of functionality, thereby compromising the modifiability
of the third tier.

Example non-risk:
 Assuming message-arrival rates of no more than once
per second and a processing time of less than 30 ms,
the architecture should meet the 1-second soft
deadline requirement.
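A back-of-the-envelope check of the non-risk above, using the figures from the scenario (the simple no-queueing reasoning is our own, for illustration only):

```python
arrival_interval_s = 1.0    # messages arrive no more than once per second
processing_time_s = 0.030   # each message is processed in under 30 ms
deadline_s = 1.0            # soft deadline to be met

utilization = processing_time_s / arrival_interval_s   # 0.03, far below saturation
# Since each message finishes long before the next one can arrive, no backlog forms,
# so the worst-case response time is roughly one processing time.
worst_case_response_s = processing_time_s
print(f"utilization: {utilization:.0%}, worst case ~ {worst_case_response_s*1000:.0f} ms, "
      f"deadline: {deadline_s*1000:.0f} ms -> non-risk holds under these assumptions")
```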

209
Sensitivity Points and Trade-offs

Example sensitivity point:
 The response time to system events is sensitive to the number of processes running on the main processor.

Example tradeoff:
 Increasing the level of encryption will significantly
increase security but decrease performance.

210
Scenario Analysis Template - Part 1

211
Scenario Analysis Template - Part 2

212
ATAM Phase 2

 Phase 2 involves a larger group of stakeholders.

 Phase 2 is:
 stakeholder-centric
 focused on eliciting diverse stakeholders’ points of
view and on verifying the results of Phase 1
 bottom-up analysis

213
ATAM Phase 2 Steps

214
Step 7: Brainstorm and Prioritize Scenarios

 Scenarios are brainstormed in a round-robin manner.
 As in the QAW, each stakeholder is given 30%
of the number of scenarios as votes.
 Stakeholders can “spend” any number of votes
on any scenario they like.
 Votes are counted and the scenarios are
prioritized.
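A small sketch of the voting arithmetic (the stakeholder names, ballots, and round-up rule are our assumptions, for illustration):

```python
import math
from collections import Counter

scenarios = [f"S{i}" for i in range(1, 11)]        # 10 brainstormed scenarios
stakeholders = ["architect", "developer", "operator", "customer"]

# 30% of the scenario count as votes per stakeholder; rounding up is our assumption
votes_each = math.ceil(len(scenarios) * 3 / 10)
print(f"{votes_each} votes per stakeholder")

# Hypothetical ballots: votes may be spread out or stacked on a single scenario
ballots = {
    "architect": ["S1", "S1", "S4"],
    "developer": ["S4", "S7", "S7"],
    "operator":  ["S2", "S4", "S7"],
    "customer":  ["S1", "S4", "S4"],
}

tally = Counter(v for b in ballots.values() for v in b)
print(tally.most_common())   # prioritized list of scenarios, highest vote count first
```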

215
Step 8: Analyze Architectural Approaches

 The evaluation team probes architectural approaches from the point of view of specific quality attributes to identify risks

The team
 identifies the architectural approaches
 asks quality-attribute-specific questions for the highest
priority scenarios
 identifies and records risks, non-risks, sensitivity points,
and tradeoffs

217
ATAM Phase 3

 Phase 3 primarily involves producing a final report for the customer
 Typically a written report and a presentation are created
 Follow-on activities may also be scheduled

218
Conceptual Flow of the ATAM

219
Benefits of the ATAM
 The benefits of performing ATAM evaluations
include
 clarified quality attribute requirements
 increased communication among stakeholders
 identification of risks early in the life cycle
 documented basis for architectural decisions
 improved architecture documentation

 The end result is improved architectures

221
ATAM Summary

The ATAM is
 a method for evaluating an architecture with respect
to multiple quality attributes
 an effective strategy for discovering the
consequences of architectural decisions
 a method for identifying trends, not for performing
precise analyses

222
The Design and Analysis Process
Inception Activities
 Identify key stakeholders.
 Identify business objectives of the stakeholders.
 Prioritize business objectives.
Design Activities
 Identify, describe, and prioritize architecturally significant requirements (ASRs).
 Design and document the architecture.
 Validate the design decisions.
Review Activities
 Identify, describe, and prioritize ASRs.
 Identify architecture description.
 Analyze architecture description against ASRs.
Post-Review Activities
 Summarize findings and review them with architecture owners.
 Plan architecture improvements.
 Refine review methods.

223
Training Summary

226
Summary
 Design and analysis of architectures are mirror activities, and documenting complements them and makes them effective
 These activities should reflect the axioms of the
architecture-centric approach
 To do them well you need:
 active stakeholder involvement
 clear characterizations and prioritizations of business goals and
architectural drivers
 an understanding of tactics and patterns
 methods that keep you focused

227
Where to read more?
L. Bass, P. Clements, R. Kazman, Software Architecture in Practice, 2nd ed., Addison-Wesley, 2003; 3rd ed., Addison-Wesley, 2012.
http://www.sei.cmu.edu/architecture/
 Architecture practices definition
 Architecture design process and examples
 Viewpoints
 Lists of tactics for:
 Availability
 Modifiability
 Performance
 Security
 Testability
 Usability

228
Where to read more?
N. Rozanski, E. Woods, Software Systems Architecture: Working with Stakeholders Using Viewpoints and Perspectives, Addison-Wesley (1st ed. 2005, 2nd ed. 2011).
http://www.viewpoints-and-perspectives.info/
 Foundations of design and documenting
 The authors’ own viewpoint set
 Lists for perspectives:
 Availability/Resilience
 Evolution (Modifiability)
 Performance/Scalability
 Security
 Internationalization
 Location
 Usability
 Accessibility
 Overview of analysis practices

229
Thank you

231
