
ISO 26262 TRAINING

Day 3 – Hardware Development – Software Development


CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 2
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 3
PARTS CONSIDERED

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 4
RESPONSIBILITIES AND TARGETS
Responsibilities
 The hardware development phase is typically the responsibility of the
hardware suppliers (including chip and IP suppliers), who have the
knowledge required to implement safety mechanisms at the hardware level
 Chips and IP can be developed either as a so-called Safety Element out of
Context (SEooC) or, in line with given customer requirements, as a Safety
Element in Context (SEiC)

Targets
 In the hardware development phase, an electronic circuit is designed in
accordance with the safety integrity required by the safety requirements derived
from the system development phase. The achieved safety integrity is evaluated
by calculating probabilistic hardware metrics.

 Functional safety of hardware is therefore mainly based on the evaluation of
probabilistic metrics
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 5
GENERAL WORKFLOW DURING THE
HARDWARE DEVELOPMENT PHASE

 In-context development: input from system development acc. to ISO 26262-4 (Technical Safety Concept)
 Out-of-context development: input from an application assumption document (assumptions for technical safety requirements and component design)

Both inputs feed the Initiation of the Hardware Development, which is followed by:
Hardware Safety Requirements Specification → Hardware Design → Design Verification → Hardware Tests
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED
INITIATION OF HW-DEVELOPMENT

 Definition of scope of development


 Development type
– Safety Element out of Context
– Safety Element in Context
 In case of SEooC
– Use “Application Assumptions” as input
 In case of SEiC
– Use Component Design (TSC) from customer as input
 Identification of development category
– New, Modification, Reuse
– In case of modification or reuse there shall be a delta
analysis, which identifies possible impacts on Functional
Safety
– Only the safety relevant changes have to be considered
 Tailoring of activities
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 7
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 8
ALLOCATION OF HW REQUIREMENTS

Technical Safety Concept → System Element

 Allocation to hardware and allocation to software; harmonization comes via the Hardware-Software Interface (HSI)
 Refinement of the requirements into the HW specification and the SW specification

The interface between hardware and software has to be clarified
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 9
PART 5, CLAUSE 6
DERIVATION OF
HW-SAFETY REQUIREMENTS

Input: system safety requirements from the customer, or system assumptions (Component Design, TSC)

Hardware development:
 Refinement into HW safety requirements at circuit level → HW circuit design
 Further refinement into HW safety requirements at block level (e.g. IC / IP) → block design

[Figure: example power-stage circuit (supply, input control, high-side/low-side drivers, diagnostics IC) and its decomposition into numbered blocks/parts]
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 10
HARDWARE SAFETY REQUIREMENTS
EXAMPLE INTRODUCTION

 The refinement of the already allocated component safety requirements
from the training example is introduced

 Input requirements are taken from the TSC

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 11
HARDWARE SAFETY REQUIREMENTS
EXAMPLE INTRODUCTION

Hardware Safety Requirements and Design


(System Component: “Power Unit and Motor Monitoring”)

1. Introduction

This specification refines the allocated component safety requirements
down to the hardware level. Requirements should be refined as far as is
necessary to derive a hardware design (hardware circuit).
The following tables only contain the hardware safety requirements. In
reality, of course, many additional non-safety requirements on the hardware
have to be described as well.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 12
HARDWARE SAFETY REQUIREMENTS
EXAMPLE INTRODUCTION

Hardware Safety Requirements and Design


(System Component: “Power Unit and Motor Monitoring”)
2. Hardware Safety Requirements

If necessary, additional HW safety requirements are to be defined after the
design has been verified by means of the probabilistic calculations (HW metrics).

Reference from TSC | ID | Description | ASIL
COMPR1 | HWSR_01 | The motor current shall be measured continuously by using a shunt resistor {R1} | B
COMPR2 | HWSR_02 | The measured motor current shall be continuously transmitted to the motor ECU as a standardized signal (0.1 – 10 V) | B
COMPR3 | HWSR_03 | Power shall be switched off by a power relay {K1} | B
COMPR3 | HWSR_04 | The power relay shall be able to cut the energy supply to the motor within 0.5 s {K1} | B
COMPR4 | HWSR_05 | Switch-off shall be defined by 0 V | B
COMPR4 | HWSR_06 | Switch-on shall be defined by 5 V | B
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 13
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 14
PART 5, CLAUSE 7
HARDWARE DESIGN

 Hardware Design is derived from:


 Component Design (TSC)
 HW safety requirements
 Hardware Design consists of:
 Circuit diagram / circuit design of the particular system element
 Assembly diagram (layout)
 Bill Of Material (BOM)
 Hardware Design is developed in collaboration with the
Software Design

 Hardware safety requirements are a refinement


of the component safety requirements
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 15
HW DESIGN PROPERTIES
ACC. TO ISO 26262-5

Properties | ASIL A | B | C | D
1 Hierarchical design | + | + | + | +
2 Precisely defined interfaces of safety-related hardware components | ++ | ++ | ++ | ++
3 Avoidance of unnecessary complexity of interfaces | + | + | + | +
4 Avoidance of unnecessary complexity of hardware components | + | + | + | +
5 Maintainability (service) | + | + | ++ | ++
6 Testability (a) | + | + | ++ | ++
(a) Testability includes testability during development and operation.

Source: ISO 26262-5, Table 2

 There are no detailed hardware design rules from ISO 26262-5


ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 16
HARDWARE DESIGN
EXAMPLE INTRODUCTION

 Introduction of possible
hardware design for the
example component

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 17
HARDWARE DESIGN
EXAMPLE INTRODUCTION

Hardware Safety Requirements and Design


(System Component: “Power Unit and Motor Monitoring”)
3. Hardware Design

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 18
HARDWARE DESIGN
EXAMPLE INTRODUCTION

[Block diagram of the example item: the Pedal Sensor (QM) provides the driver request to the Motor-ECU, which controls the E-Motor via the Power Unit with Motor Monitoring; the system is supplied by a 400 V HV battery. A hard-wired speed signal comes from the ABS item, feedback is provided via CAN bus, and the display belongs to another item (QM).
The Power Unit measures the motor current (dependent on torque) and sends it to the Motor-ECU for torque comparison; the Motor-ECU commands the motor control and can trigger a hard-wired safety shutdown (torque off).]
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 19
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 20
DEFINITIONS

 Fault
 abnormal condition that can cause an element or an item to fail

 Error
 discrepancy between a computed, observed or measured
value or condition and the true, specified, or theoretically
correct value or condition

 Failure
 termination of the ability of an element or an item to perform a
function as required

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 21
DIFFERENCE:
FAULT – ERROR - FAILURE

failure  termination of the ability


SW Component Level of an element or item, to perform a
function as require
Fault Error Failure

Programming Unwanted
fault causes loop endless loop
(non-)termination (leads to Item Level
condition Watchdog
reset) Fault Error Failure

Engine control Ignition Vehicle bucks


fault  abnormal condition that can unit stops interrupted
cause an element or an item to fail operation intermittently
intermittently

ISO 26262-10, Figure 5

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 22
MORE DEFINITIONS

 safety related element


 element which has the potential to contribute to the violation
or achievement of a safety goal
 safety mechanism
 Function implemented by E/E elements, or by other
technologies, to detect faults or control failures in order to
achieve or maintain a safe state

Note: Safety mechanisms must always control failures, but may or
may not need to detect failures

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 23
SAFE FAULT

 A fault that does not significantly increase the probability of violating a
safety goal.

 Independent multiple faults of an order higher than 2 (e.g.
three points of failure, four points of failure, etc.)
are classified as “safe faults”.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 24
SINGLE POINT FAULT

 A fault in a HW element that is not covered by any safety mechanism
and therefore leads directly to a safety goal violation.

 A “single point fault” is only allowed for ASIL C and D if its probability
is low and there are additional safeguards through dedicated
measures, for example:
• design features such as hardware part over design (e.g. electrical or thermal stress rating) or
physical separation (e.g. spacing of contacts on a printed circuit board);
• a special sample test of incoming material to reduce the risk of occurrence of this failure mode;
• a burn-in test;
• a dedicated control set as part of the control plan; and
• assignment of safety-related special characteristics.
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 25
EXAMPLE: SINGLE POINT FAULT

Example: Microcontroller with external window watchdog

The microcontroller shall activate the Output if Input 1 has the state low and Input 2 the state high.

[Figure: µC with inputs IN_1–IN_3; the µC output (A1) and the external window watchdog output (A2) are combined by an AND gate whose output (A3) drives the output driver. Safety-related and non-safety-related HW elements are distinguished.]

 Failure mode “stuck-at high” at A3 is a “single point fault”
 Failure mode “short-circuit” at the output driver is a “single point fault”

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 26
RESIDUAL FAULT

 The portion of a fault in a hardware element that is not covered by the
diagnosis (diagnostic coverage > 0 % but < 100 %) and that leads
directly to a safety goal violation.
 A diagnostic coverage of less than 90 % is only allowed for ASIL C and D
if there are additional safeguards against the residual fault provided by
dedicated measures (e.g. a burn-in test).

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 27
EXAMPLE: RESIDUAL FAULT

Example: Microcontroller with external window watchdog

The microcontroller shall activate the Output if Input 1 has the state low and Input 2 the state high.

[Figure: same µC/watchdog circuit as before.]

 The µC output (A1) stuck-at high is a “residual fault”: there is diagnostic coverage of the µC by the watchdog, but it cannot diagnose the stuck-at failure mode of the output. This portion of the fault is residual.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 28
DUAL POINT FAULT

 A fault that leads to the violation of a safety goal only in
combination with an additional independent fault.

 Example: the combination of a fault in a hardware element with the
loss or malfunction of the safety mechanism protecting that hardware
element, such that the safety goal would be violated, is classified
as a dual point fault.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 29
EXAMPLE: DUAL POINT FAULT
Example: Microcontroller with external window watchdog
The microcontroller shall activate the Output if Input 1 has the state low and Input 2 the state high.

[Figure: same µC/watchdog circuit as before.]

 Stuck-at faults at A1 and A2: a dual point fault arises when both the µC output (A1) and the external window watchdog output (A2) fail high.
 A fault of a µC-internal safety mechanism leads to a dual point fault in combination with the initial fault it was protecting against (e.g. memory write error + MPU).

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 30
MULTIPLE POINT FAULT

 A dual point fault is a multiple point fault of order two
 Multiple point faults of an order higher than two and with
sufficient independence can normally be
considered safe (= safe fault).
 If multiple point faults can be detected and
controlled by the electronics (e.g. by a power-up test), they
are classified as detected multiple point faults.
 If multiple point faults can be detected and
controlled by the driver (e.g. failure of the
low-beam headlight), they are classified as
perceived multiple point faults.
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 31
EXAMPLE: MULTIPLE POINT FAULT

Example: Microcontroller with external window watchdog

The microcontroller shall activate the Output if Input 1 has the state low and Input 2 the state high.

[Figure: same µC/watchdog circuit as before.]

1. The µC has stopped execution (first failure)
2. The external window watchdog fails (second, independent failure)
Protection against failure 2 is a power-up test and notifying the driver.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 32
LATENT FAULT

 The portion of a multiple point fault that can neither be detected nor
perceived and therefore remains “hidden” in the system element is
called a latent fault.
 A latent fault has the potential to contribute to a safety goal violation;
timing is usually not critical, because the violation only occurs in
combination with another independent fault. For ASIL C and D, a
safety analysis of latent faults is required.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 33
SUMMARY OF
HW FAULT CATEGORIES
λtotal = λS + λMPF,D + λMPF,P + λRF + λMPF,L + λSPF

Safe fault (λS)
 Fault that leads to a safe condition or has no impact on the respective safety goal

Detected multiple point fault (λMPF,D)
 Detected multiple point fault (detected by system/hardware elements)
 No safety goal violation

Perceived multiple point fault (λMPF,P)
 Multiple point fault detected by the driver
 No safety goal violation

Residual fault (λRF)
 Portion of a fault not detected by the safety mechanism
 Leads to a safety goal violation

Latent multiple point fault (λMPF,L)
 Undetected multiple point fault, or portion of a multiple point fault undetected by the safety mechanism
 Can contribute to a safety goal violation in combination with a further independent fault

Single point fault (λSPF)
 Undetected single point fault (no safety mechanism)
 Leads to a safety goal violation
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 34
PART 5, CLAUSE 9
RANDOM HARDWARE FAILURE

 A failure that can occur unpredictably during the lifetime of a
hardware part and that follows a probability distribution is
considered a random hardware failure (λ), commonly
expressed in failures/hour

Practical note:
FIT = Failures In Time
1 FIT = 10^-9 failures/hour

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 35
QUANTITATIVE DESIGN
VERIFICATION IN PRACTICE

Step 1: Verification of the “hardware architecture metrics” (SPFM, LFM)

Step 2a: Verification of the safety-goal-violating failure rate due to random hardware failures (PMHF), referring to the item
Step 2b: Verification of failure rate classes per hardware part

Note: Steps 2a and 2b are alternatives

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 36
RANDOM HARDWARE FAULT
CATEGORIES

ISO 26262-5, Figure B.1

 Random hardware faults are categorized


according to their impact on each safety goal
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 37
RANDOM HARDWARE FAULT
CATEGORIES

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 38
RANDOM HARDWARE FAULT
CATEGORIES IN AN FMEDA

[FMEDA columns:]
 Component name
 Is this a safety-related component to be considered in the calculations?
 Base failure rate [FIT]
 Failure mode
 Failure mode distribution
 Does this failure mode have the potential to violate a safety goal in the absence of safety mechanisms?
 Safety mechanism(s) that prevent the failure mode from violating the safety goal
 Diagnostic coverage with respect to single-point/residual faults
 Single-point / residual failure rate [FIT]
 Does this failure mode have the potential to violate a SG in combination with a 2nd independent failure?
 Safety mechanisms that prevent the failure from being latent
 Diagnostic coverage with respect to latent faults
 Multiple-point failure rate, detected/perceived [FIT]
 Latent multiple-point failure rate [FIT]
 Safe failure rate [FIT]
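A minimal Python sketch (not from ISO 26262) of how one such FMEDA row can be evaluated; the function name and the convention of splitting the covered portion of a directly violating failure mode by the latent-fault DC follow the simplified model used in the exercises later in this chapter.

def classify_failure_mode(base_fit, distribution, violates_sg_directly,
                          dc_residual, violates_with_second_fault, dc_latent):
    """Split one FMEDA failure mode into SPF / RF / detected MPF / latent MPF / safe portions (FIT)."""
    lam = base_fit * distribution                      # failure rate of this failure mode
    dc_latent = dc_latent or 0.0
    out = {"SPF": 0.0, "RF": 0.0, "MPF_det": 0.0, "MPF_lat": 0.0, "safe": 0.0}
    if violates_sg_directly:
        if dc_residual is None:                        # no safety mechanism at all: single point fault
            out["SPF"] = lam
        else:
            out["RF"] = lam * (1.0 - dc_residual)      # uncovered portion: residual fault
            covered = lam * dc_residual                # covered portion becomes a multiple point fault
            out["MPF_lat"] = covered * (1.0 - dc_latent)
            out["MPF_det"] = covered * dc_latent
    elif violates_with_second_fault:                   # dual/multiple point fault potential
        out["MPF_lat"] = lam * (1.0 - dc_latent)
        out["MPF_det"] = lam * dc_latent
    else:
        out["safe"] = lam                              # no impact on this safety goal
    return out

# e.g. K1 relay, failure mode "short (welded contacts)": 30 FIT x 80 %, no latent-fault diagnosis
print(classify_failure_mode(30.0, 0.80, False, None, True, 0.0))   # -> about 24 FIT latent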

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 39
STEP 1
VERIFICATION OF HARDWARE
ARCHITECTURE METRICS

 Verification of the robustness of a HW architecture against random hardware failures
 Calculation of the “Single Point Fault Metric (SPFM)” and the “Latent Fault Metric (LFM)”
 ASIL-dependent target values have to be reached

ASIL | Calculation of SPFM? | Calculation of LFM?
A | no verification required | no verification required
B | recommended | recommended
C | highly recommended | recommended
D | highly recommended | highly recommended

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 40
STEP 1
VERIFICATION OF HARDWARE
ARCHITECTURE METRICS

 Calculation of the Single Point Fault Metric (SPFM)

 The SPFM relates the single point and residual faults that can violate the
safety goal to all faults of the safety-related hardware elements of the item:

SPFM = 1 − (Σ λSPF + Σ λRF) / Σ λtotal

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 41
STEP 1
VERIFICATION OF HARDWARE
ARCHITECTURE METRICS

 Calculation of the Latent Fault Metric (LFM)

 The LFM relates the latent multiple point faults to all faults of the
safety-related hardware elements of the item that are not single point
or residual faults:

LFM = 1 − Σ λMPF,latent / Σ (λtotal − λSPF − λRF)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 42
STEP 1
VERIFICATION OF HARDWARE
ARCHITECTURE METRICS

A Hardware Element has a failure rate of 10 FIT which is all safety related
• Example 1: DC with respect to SPF = 0 %, DC with respect to LF = 90 %:
– λSPF = ?, λRF = ?, λMPF,L = ?, λMPF,D = ? → SPFM = ?

• Example 2: DC with respect to SPF = 70 %, DC with respect to LF = 50 %:
– λSPF = ?, λRF = ?, λMPF,L = ?, λMPF,D = ? → SPFM = ?, LFM = ?

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 43
STEP 1
VERIFICATION OF HARDWARE
ARCHITECTURE METRICS

A Hardware Element has a failure rate of 10 FIT which is all safety related
• Example 1: DC with respect to SPF = 0 %, DC with respect to LF = 90 %:
– λSPF = 10 FIT, λRF = 0 FIT, λMPF,L = 0 FIT, λMPF,D = 0 FIT → SPFM = 0 %

• Example 2: DC with respect to SPF = 70 %, DC with respect to LF = 50 %:
– λSPF = 0 FIT, λRF = 3 FIT, λMPF,L = 3.5 FIT, λMPF,D = 3.5 FIT → SPFM = 70 %, LFM = 50 %
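The same arithmetic as a short Python sketch, assuming (as in Example 2) that the portion detected with respect to single point faults becomes a multiple point fault that is then split by the latent-fault DC:

lam_total = 10.0                 # FIT, all safety related
dc_spf, dc_lf = 0.70, 0.50       # Example 2

lam_rf   = lam_total * (1 - dc_spf)          # 3.0 FIT residual
lam_mpf  = lam_total * dc_spf                # 7.0 FIT multiple point candidates
lam_mpfl = lam_mpf * (1 - dc_lf)             # 3.5 FIT latent
lam_mpfd = lam_mpf * dc_lf                   # 3.5 FIT detected
spfm = 1 - lam_rf / lam_total                # 0.70 -> 70 %
lfm  = 1 - lam_mpfl / (lam_total - lam_rf)   # 0.50 -> 50 %
print(lam_rf, lam_mpfl, lam_mpfd, spfm, lfm)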

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 44
STEP 1
VERIFICATION OF HARDWARE
ARCHITECTURE METRICS

 3 ways to establish the target values
 Use of a predecessor design (values must not be worse)
 Expert judgement (determination by assessors, for example)
 Use of the target values from Tables 4 and 5 (typically used)

Metric | ASIL B | ASIL C | ASIL D
Single Point Fault Metric (SPFM) | ≥ 90 % | ≥ 97 % | ≥ 99 %
Latent Fault Metric (LFM) | ≥ 60 % | ≥ 80 % | ≥ 90 %

ISO 26262-5, Tables 4 + 5

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 45
DAY 3
EXERCISE

 Refer to the TSC from Day 2


 Are the defined target values
for SPFM and LFM correct?

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 46
STEP 2A
VERIFICATION OF SAFETY GOAL
VIOLATING FAILURE RATE

 In a second step, the “absolute” failure rate associated with the violation of a safety goal of an item has to be evaluated
 Determination of the failure rate of the item that is caused by random hardware faults and violates a safety goal
 Determination of the PMHF value (Probabilistic Metric for random Hardware Failures)

ASIL | When to do?
A | no verification required
B | verification recommended
C | verification highly recommended
D | verification highly recommended

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 47
STEP 2A
VERIFICATION OF SAFETY GOAL
VIOLATING FAILURE RATE
Note: This formula will be deleted in Edition 2 and substituted by more detailed considerations.
 Calculation of the PMHF acc. to ISO 26262-10

Safety-goal-violating failure rate = single point faults (Σ λSPF) + residual faults (Σ λRF) + latent multiple point faults (Σ λMPF,latent)

Further simplified expression:

M_PMHF = Σ λSPF + Σ λRF + 0.5 · Σ λDPF · Σ λsm,DPF,latent · T_Lifetime

• M_PMHF is the value for the probabilistic metric for random hardware failures (PMHF)
• Simplified formula to calculate the PMHF (refer to ISO 26262-10)
• Partitioning of latent faults into faults of the mission block (m) and of the safety mechanism (sm)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 48
STEP 2A
VERIFICATION OF SAFETY GOAL
VIOLATING FAILURE RATE

 Simplified calculation (this estimation is not within ISO 26262)

Safety-goal-violating failure rate = single point faults (Σ λSPF) + residual faults (Σ λRF) + β-weighted latent multiple point faults:

PMHF = Σ λSPF + Σ λRF + β · Σ λMPF,latent

β: considered “common cause” fraction (acc. to IEC 61508), typically set at 10 %

Attention: The above formula is a rough and usually conservative estimation, which is not given in ISO 26262.
Instead, we recommend using mathematical models (e.g. a quantitative FTA).
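A short Python sketch of both estimations — the ISO 26262-10 expression as reconstructed on the previous slide and the β-coefficient estimate — using the totals from the worked exercise later in this chapter; function and variable names are illustrative only.

# Simplified ISO 26262-10 style expression (as reconstructed on the previous slide)
def pmhf_iso10(sum_spf_fit, sum_rf_fit, sum_dpf_fit, sum_sm_dpf_latent_fit, t_lifetime_h):
    to_per_h = 1e-9                                  # 1 FIT = 1e-9 / h
    return (sum_spf_fit + sum_rf_fit) * to_per_h + \
           0.5 * (sum_dpf_fit * to_per_h) * (sum_sm_dpf_latent_fit * to_per_h) * t_lifetime_h

# Rough beta-coefficient estimate (not in ISO 26262), beta typically 10 %
def pmhf_beta(sum_spf_fit, sum_rf_fit, sum_mpf_latent_fit, beta=0.10):
    return (sum_spf_fit + sum_rf_fit + beta * sum_mpf_latent_fit) * 1e-9   # in 1/h

# Totals from the Day 3 exercise: lambda_SPF = 0 FIT, lambda_RF = 0.13 FIT, lambda_MPF,latent = 44.79 FIT
print(pmhf_beta(0.0, 0.13, 44.79))        # ~4.6e-9 / h = ~4.6 FIT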
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 49
STEP 2A
VERIFICATION OF SAFETY GOAL
VIOLATING FAILURE RATE

 3 ways to establish the target values
 Use of a predecessor design for orientation (values must not be worse)
 Expert judgement (determination by assessors, for example)
 Use of the target values from Table 6 (typically used)

ASIL | Random hardware failure target value (RHFT) | FIT
D | < 10^-8 h^-1 | < 10 FIT
C | < 10^-7 h^-1 | < 100 FIT
B | < 10^-7 h^-1 | < 100 FIT

 ISO 26262 establishes target values for the calculation of the PMHF


ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 50
STEP 2A
VERIFICATION OF SAFETY GOAL
VIOLATING FAILURE RATE

 Target values for the verification of the PMHF
 For each safety goal it must be demonstrated that the cumulative safety-goal-violating failure rate of all HW elements of the item meets the values stated in the table (compare the PFH value acc. to IEC 61508)

ASIL | Random hardware failure target value (RHFT) | FIT
D | < 10^-8 h^-1 | < 10 FIT
C | < 10^-7 h^-1 | < 100 FIT
B | < 10^-7 h^-1 | < 100 FIT

ISO 26262-5, Table 6

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 51
DAY 3
EXERCISE 8

 Go back to the TSC from Day 2


 Why is there a budget given for the PMHF of the example component?

Ref. | ID | Requirement | ASIL
SYSR20 | COMPR20 | PMHF < 0.1 · 10^-7/h with respect to SG1 | B

Note: It is assumed that a budget of 10 % of the total PMHF target value is given to this component

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 52
STEP 1.6
DETERMINE THE ARCHITECTURE
METRIC

• You conducted an FMEDA and found the following:


• A transistor component called “T71” has a base failure rate of 5.0 FIT
• A short circuit of this transistor violates a safety goal.
• An open circuit failure of this transistor does not violate the safety
goal
• The modal distribution of failures is 50% open, 50% short
• A safety mechanism called “SM1”, for example output monitoring with
a controlled shut-down reaction, is estimated to provide 90%
diagnostic coverage of the dangerous short-circuit failure.
• The same safety mechanism can detect 80% of latent failures of the
transistor.

• Task: Populate the FMEDA Matrix for this element and calculate SPFM,
LFM, PMHF, ASIL:

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 53
STEP 1.6
DETERMINE THE ARCHITECTURE
METRIC

SPFM:
LFM:
PMHF:

What ASIL can this system achieve?
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 54
STEP 1.6
DETERMINE THE ARCHITECTURE
METRIC

• Calculate the SPFM: 95 %
• Calculate the LFM: 90.5 %
• Calculate the PMHF: 0.25 FIT

What ASIL can this system achieve? ASIL B
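A Python sketch reproducing these numbers; the key assumption, consistent with the slide, is that the open-circuit half is a safe fault, while the portion of the short-circuit failures detected by SM1 remains a multiple point fault candidate covered with 80 % latent-fault DC:

lam = 5.0                        # T71 base failure rate in FIT
short, open_ = 0.5 * lam, 0.5 * lam

# short circuit violates the safety goal; SM1 covers 90 % of it
rf      = short * (1 - 0.90)             # 0.25 FIT residual
covered = short * 0.90                   # 2.25 FIT multiple point candidates
mpf_lat = covered * (1 - 0.80)           # 0.45 FIT latent (80 % latent DC)
safe    = open_                          # open circuit assumed safe

spfm = 1 - rf / lam                      # 0.95  -> 95 %
lfm  = 1 - mpf_lat / (lam - rf)          # 0.905 -> 90.5 %
pmhf = rf                                # single point + residual only -> 0.25 FIT
print(spfm, lfm, pmhf)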
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 55
STEP 1
VERIFICATION OF HARDWARE
ARCHITECTURE METRICS

 6 activities to determine the architecture metrics (quantitative design verification)

1.1 Listing of all HW elements (from the BOM)
1.2 Identification of whether each HW element is safety-relevant or not
1.3 Identification of the base failure rate, the failure modes and their statistical distribution per HW element
1.4 Classification of the fault categories related to the considered safety goal (safe fault, residual fault, single point fault, etc.)
1.5 Determination of the diagnostic coverage for residual faults and multiple point faults
1.6 Summing up all fault rates and calculating the HW architecture metrics and the PMHF according to the formulas

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 56
STEP 1.1
DETERMINE THE ARCHITECTURE
METRIC

1.1  Listing of all HW elements (from the BOM) – quantitative design verification

Hardware element name | Description
R1 | Shunt resistor
R2 | Resistor
T1 | Transistor
K1 | Relay
D1 | Diode
IC1 | Amplifier
M | Motor
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 57
STEP 1.2
DETERMINE THE ARCHITECTURE
METRIC

 Identification of whether a HW element is safety-relevant or not
 By definition, a HW element is safety-relevant if it contributes
to the violation or achievement of a safety goal.
 In other words: the component is part of a signal path that is
relevant to the safety goal.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 58
DAY 3
EXERCISE

SG1: Unintended acceleration has to be avoided

 In this quantitative analysis we will be focusing on the “Power Unit with Motor Monitoring”.

 The primary torque calculation and command is performed by the “Motor-ECU”, which receives its own independent torque feedback from the “E-Motor”.

 The “Power Unit with Motor Monitoring” independently measures the current with a current shunt and provides the current value back to the “Motor-ECU”, which uses it to estimate the torque and perform a plausibility check.
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 59
DAY 3
EXERCISE

SG1: Unintended acceleration has to be avoided

 Power Unit with Motor Monitoring: Please establish whether or not a hardware component is safety-relevant regarding Safety Goal 1. Give reasons for your choice.

Hardware element name | Description | Safety-related?
R1 | Shunt resistor |
R2 | Resistor |
T1 | Transistor |
K1 | Relay |
D1 | Diode |
IC1 | Amplifier |
M | Motor |

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 60
DAY 3
EXERCISE

SG1: Unintended acceleration has to be avoided

 Please establish whether or not a hardware component is safety-relevant regarding Safety Goal 1. Give reasons for your choice.

R1: Used to measure the current to the motor. The “Motor-ECU” uses the current value for a current-based torque estimate, which is compared with the measured torque in a safety plausibility check.
R2: A failed R2 could cause T1 to switch unintentionally, which may lead to a SG violation.
T1: A failed T1 gate could cause unintentional switching.
K1: Part of the safe-state actuator path.
D1: Protects against motor transients; a failed D1 could lead to T1 failures.
IC1: Could translate the current wrongly (see R1).
M: Some short-circuit failures could cause continuous operation.

Hardware element name | Description | Safety-related?
R1 | Shunt resistor | Y
R2 | Resistor | Y
T1 | Transistor | Y
K1 | Relay | Y
D1 | Diode | Y
IC1 | Amplifier | Y
M | Motor | Y
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 61
STEP 1.3
DETERMINE THE ARCHITECTURE
METRIC

 Identification of the base failure rate, the failure modes and their statistical distribution per HW element

 Failure rates from recognised industry sources and/or manufacturers’ information:
– IEC 62380, SN29500, MIL HDBK 217 F notice 2, RAC HDBK 217 Plus, NPRD95, EN 50129 Annex C, EN 62061 Annex D, RAC FMD97 and MIL HDBK 338

 Example from SN29500:
– λref: failure rate under reference conditions
– πU: voltage dependency
– πI: current dependency
– πT: temperature dependency

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 62
STEP 1.3
DETERMINE THE ARCHITECTURE
METRIC

• IEC 62380: Universal model for reliability prediction of electronics, components, PCBs and equipment. Includes mission profiles for industries such as automotive. Base failure rates are provided.

• SN29500: Developed by Siemens AG and states failure rates under reference conditions. The user must find base failure rates from another reference. It does not consider mission profiles.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 63
DAY 3
EXERCISE

 The base failure rates in the example are taken from SN29500.
 What impact does the assumed operating temperature make?

[Failure rate table; the correction factors for voltage π(U), current π(I), quality π(Q/D), load π(L) and environment π(E) are all 1 in this example:]

Hardware element | Description | Technology | Basis | FIT at ref. temp. | Ref. temp. [°C] | Operating temp. [°C] | π(T) | Fault rate in use [FIT]
R1 | Shunt resistor | Wire wound | SN29500 | 0.20 | 55 | 50 | 0.89 | 0.18
R2 | Resistor | Metal film | SN29500 | 0.20 | 55 | 70 | 1.4 | 0.28
T1 | Transistor | TO3 MOSFET | SN29500 | 60.00 | 100 | 80 | 0.43 | 25.80
K1 | Relay | Power | SN29500 | 30.00 | 70 | 70 | 1 | 30.00
D1 | Diode | Standard | SN29500 | 1.00 | 40 | 70 | 3.7 | 3.70
IC1 | Amplifier | AD8200 dual diff. op-amp | SN29500 | 3.00 | 55 | 70 | 1.8 | 5.40
M | Motor | DC | Supplier | 50.00 | – | – | 1 | 50.00
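A small sketch of how the “fault rate in use” column is obtained, assuming (as the table suggests) that only the temperature factor differs from 1 in this example:

def fit_in_use(fit_ref, pi_t, other_factors=1.0):
    # failure rate in use = reference FIT value x product of the correction factors
    return fit_ref * pi_t * other_factors

print(fit_in_use(0.20, 0.89))   # R1: ~0.18 FIT
print(fit_in_use(60.0, 0.43))   # T1: ~25.8 FIT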
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 64
STEP 1.3
DETERMINE THE ARCHITECTURE
METRIC

1.1  Identification of base failure rate, failure modes and their


statistical distribution per HW element
Quantitative design verification

1.2

 Failure rates can be determined through testing and analysis


1.3 methods and expert judgement

1.4  Failure rates can be determined by statistical analysis based


on field returns
1.5

1.6

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 65
STEP 1.3
DETERMINE THE ARCHITECTURE
METRIC

• Confidence interval (CI): An interval estimate of a population parameter, consisting of a range of values that act as good estimates of the unknown population parameter.
“I am pretty sure the true value of the number I am approximating is within this range.”

• Confidence level (confidence coefficient): How frequently the observed interval contains the parameter is determined by the confidence level or confidence coefficient.
“How sure? I am n % sure.”

[Figure: distribution of the population parameter vs. distribution of the sample parameter.]

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 66
STEP 1.3
DETERMINE THE ARCHITECTURE
METRIC

• IEC 62380 has a confidence level of 95%


– Means that the FIT rate observed from the IEC
62380 standard is 95% likely to lie within the
confidence interval.
• SN 29500 has a confidence level of 80%
– Means that the FIT rate observed from the SN
29500 standard is 80% likely to lie within the
confidence interval.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 67
STEP 1.3
DETERMINE THE ARCHITECTURE
METRIC

Not necessarily, because we don’t know the confidence intervals

Example 1: Confidence Level of 95% - Large Confidence Interval


0 FIT 100 FIT

I’m 95% certain the failure rate is between 0 and 100 FIT

Example 2: Confidence Level of 80% - Small Confidence Interval


0 FIT 1.5 FIT

I’m 80% certain the failure rate is between 0 and 1.5 FIT
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 68
STEP 1.3
DETERMINE THE ARCHITECTURE
METRIC

Statistical confidence interval for a constant failure rate λ:

– Two-sided confidence limit (used when failures are detected during the test interval):

χ²(α/2; 2r) / (2nT) < λ < χ²(1 − α/2; 2r + 2) / (2nT)

n: no. of components tested, T: test time per component, r: no. of failures, confidence level = 1 − α

– One-sided (upper) confidence limit:

λ < χ²(CL; 2r) / (2nT)   or   λ < χ²(CL; 2r + 2) / (2nT)   (the latter is used when no failure is detected during the test interval)

Ref: Chi-Square Probabilities are Poisson Probabilities in Disguise


ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 69
STEP 1.3
DETERMINE THE ARCHITECTURE
METRIC
r: no. of failures, CL: confidence level

λ < χ²(CL; 2r + 2) / (2nT)

Example: 1 failure (r = 1), 95 % confidence level (α = 0.05):

λ < 9.488 / (2nT)   (9.488 is the χ² value at 95 % with 2r + 2 = 4 degrees of freedom)

Ref: http://flylib.com/books/en/3.287.1.235/1/
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 70
STEP 1.3
DETERMINE THE ARCHITECTURE
METRIC

Statistical confidence interval for zero failures:
– A two-sided confidence limit is not possible for the zero-failures case (i.e. no component failed during the test time)
– One-sided (upper) confidence limit:

λ < χ²(CL; 2) / (2nT) = −ln(1 − CL) / (nT)

n: no. of components tested, T: test time per component, CL: confidence level

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 71
STEP 1.3
DETERMINE THE ARCHITECTURE
METRIC
• For 60 % confidence: λ < 0.92 / (nT)
• For 90 % confidence: λ < 2.3 / (nT)
• For 95 % confidence: λ < 3.0 / (nT)
• For 99 % confidence: λ < 4.6 / (nT)

The width of the interval increases with the confidence level: the 99 % bound is 5 times larger than the 60 % bound.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 72
STEP 1.3
DETERMINE THE ARCHITECTURE
METRIC

The one-sided confidence limit approach assumes the upper bound, i.e. that another hour of testing would result in a failure.

λ < 0.92 / 48 065 534 164 h ≈ 0.02 FIT   (60 % confidence)
λ < 2.3 / 48 065 534 164 h ≈ 0.05 FIT   (90 % confidence)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 73
STEP 1.3
DETERMINE THE ARCHITECTURE
METRIC

Equivalent hours based on accelerated lifetime testing concepts:

λ < 0.92 / 199 265 692 h ≈ 4.62 FIT   (60 % confidence)

Ref: Reliability Report by IXYS
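A short Python sketch of the one-sided upper bound; scipy is used only for the general chi-square quantile, and for r = 0 the bound reduces to −ln(1 − CL)/(nT) as shown above. The device-hours value is the one from the example:

import math
from scipy.stats import chi2

def lambda_upper(total_device_hours, failures=0, confidence=0.60):
    # one-sided upper confidence limit for a constant failure rate (per hour)
    return chi2.ppf(confidence, 2 * failures + 2) / (2.0 * total_device_hours)

hours = 48_065_534_164
print(lambda_upper(hours, 0, 0.60) * 1e9)   # ~0.019 FIT (0.92 / hours)
print(lambda_upper(hours, 0, 0.90) * 1e9)   # ~0.048 FIT (2.3 / hours)
print(-math.log(1 - 0.60) / hours * 1e9)    # same zero-failure bound without scipy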
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 74
STEP 1.3
DETERMINE THE ARCHITECTURE
METRIC

 Identification of the base failure rate, the failure modes and their statistical distribution per HW element
 E.g. for a resistor the failure modes “short”, “open” and “drift” must be analyzed.
 The failure mode “functional” refers to the specified function of the hardware component (e.g. filter, amplifier, etc.).
 Possible resources:
– IEC 62061:2005, Annex D (IEC 62061: “Safety of machinery – Functional safety of safety-related electrical, electronic and programmable electronic control systems”); IEC 62380; SN29500
– A. Birolini: e.g. “Reliability Engineering – Theory and Practice”

Example from Birolini: see the fault distribution table on the following slide.


ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 75
DAY 3
EXERCISE

 Example: See the completed


failure modes in the
template.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 76
DAY 3
EXERCISE

Fault distribution by component type (according to Birolini)

Type of HW part | Failure mode (distribution)
R-M (Metal film) | Open (40 %), Short (0 %), Drift 0.5 (30 %), Drift 2.0 (30 %) – sum 100 %
R-W (Wire wound) | Open (40 %), Short (0 %), Drift 0.5 (30 %), Drift 2.0 (30 %) – sum 100 %
T-M (Transistor MOSFET) | Open circuit gate (5 %), open circuit drain (5 %), open circuit source (5 %), short circuit G-D (20 %), short circuit G-S (20 %), short circuit D-S (20 %), drift / wrong amplification (25 %) – sum 100 %
K-P (Power relay) | Open (20 %), short / welded contacts (80 %) – sum 100 %
V-S (Diode, standard) | Open (20 %), short (70 %), change of limiting characteristics high (5 %), change of limiting characteristics high (5 %) – sum 100 %
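The same distributions as a small Python dictionary, in the form in which they feed the FMEDA calculation of the following exercise (key names are illustrative; the two identical diode modes are simply numbered):

# Failure mode distributions per HW part type (acc. to Birolini, as on this slide)
FAILURE_MODE_DISTRIBUTION = {
    "R-M": {"open": 0.40, "short": 0.00, "drift 0.5": 0.30, "drift 2.0": 0.30},
    "R-W": {"open": 0.40, "short": 0.00, "drift 0.5": 0.30, "drift 2.0": 0.30},
    "T-M": {"open gate": 0.05, "open drain": 0.05, "open source": 0.05,
            "short G-D": 0.20, "short G-S": 0.20, "short D-S": 0.20,
            "drift (wrong amplification)": 0.25},
    "K-P": {"open": 0.20, "short (welded contacts)": 0.80},
    "V-S": {"open": 0.20, "short": 0.70,
            "change of limiting characteristics (1)": 0.05,
            "change of limiting characteristics (2)": 0.05},
}
# each distribution must sum to 100 %
assert all(abs(sum(d.values()) - 1.0) < 1e-9 for d in FAILURE_MODE_DISTRIBUTION.values())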

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 77
DAY 3
EXERCISE

Fault distribution by component type (according to Birolini)

Type of HW part Failure Mode Distribution Sum


Open 40%
R-M Short 0%
100%
(Meral Film) Drift 0,5 30%
Drift 2,0 30%
Open 40%
R-W Short 0%
100%
(Wire-Wound) Drift 0,5 30%
Drift 2,0 30%
open circuit Gate 5%
open circuit Drain 5%
open circuit Source 5%
T-M
short circuit G-D 20% 100%
(Transistor-MOSFET)
short circuit G-S 20%
short circuit D-S 20%
Drift (wrong amplifying) 25%
K-P Open 20%
100%
(Power-Relais) Short (welded contacts) 80%
Open 20%
V-S Short 70%
100%
(Diode Standard) Change of limiting characteristics high 5%
Change of limiting characteristics high 5%

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 78
STEP 1.4
DETERMINE THE ARCHITECTURE
METRIC

 Classification of the random HW fault categories
 Determine the effects of the fault per safety goal in the following sequence:
– Is there a direct safety goal violation due to a single point failure?
– Is there a safety mechanism in place?
– Is there a safety goal violation in combination with another independent failure?
– Is there a safety mechanism in place, or can the driver perceive the effects?

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 79
DAY 3
EXERCISE

SG1: Unintended acceleration


has to be avoided  Classify the “Fault categories“
regarding Safety Goal 1 in the
example project.
 As a first step, check whether
the considered fault has the
potential to be a single point
fault
 In a second step, check
whether the considered fault
has the potential to become a
latent fault

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 80
DAY 3
EXERCISE

Analysis with respect to Safety Goal 1 (Unintended acceleration has to be avoided)

[FMEDA worksheet with the columns: HW part; HW part type; λ value [FIT]; safety-relevant?; failure mode; failure mode distribution; fault potential if no safety mechanism exists (SPF/RF)?; safety mechanism (SM) that prevents this fault; DC of the safety mechanism [%]; single point fault rate λSPF [FIT]; residual fault rate λRF [FIT]; fault potential in combination with another independent fault (LF)?; safety mechanism (SM) that prevents this fault from becoming latent; DC of the safety mechanism [%]; latent fault rate λLF [FIT].
Rows to be classified: R1 (R-M, 0.18 FIT) and R2 (R-M, 0.28 FIT), each with the failure modes Open (40 %), Short (0 %), Drift 0.5 (30 %) and Drift 2.0 (30 %).]

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 81
DAY 3
EXERCISE
Analysis with respect to Safety Goal 1 (Unintended acceleration has to be avoided)

[FMEDA worksheet, first step – single point / residual fault potential:]
 R1: Since R1 is part of the safety mechanism that is used to independently estimate the torque, R1 failing by itself cannot directly lead to a violation of the safety goal (no single point / residual fault potential for any of its failure modes).
 R2: If R2 fails open, T1’s gate is floating, which could cause unintentional switching and could potentially violate the safety goal (single point / residual fault potential for the “Open” failure mode).
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 82
DAY 3
EXERCISE
Analysis with respect to Safety Goal 1 (Unintended acceleration has to be avoided)

[FMEDA worksheet, second step – latent fault potential:]
 R1: If R1 failed short, or drifted to a lower resistance (indicating a smaller torque than the actual one), then in combination with a failure of the primary torque path the safety goal could be violated (latent fault potential).
 R2: If R2 failed and SM1 failed, then the safety goal could be violated (latent fault potential).
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 83
STEP 1.5
DETERMINE THE ARCHITECTURE
METRIC

 Determination of the reachable diagnostic coverage of the safety mechanisms defined in the TSC, for residual faults and detected multiple point faults
 Refer also to ISO 26262-5, Annex D
 Typical diagnostic coverage: Low ≙ DC = 60 %, Medium ≙ DC = 90 %, High ≙ DC = 99 %

Safety mechanism/measure | See overview of techniques | Typical diagnostic coverage considered achievable | Notes
Failure detection by on-line monitoring | D.2.1.1 | Low | Depends on diagnostic coverage of failure detection
Test pattern | D.2.6.1 | High | –
Input comparison/voting (1oo2, 2oo3 or better redundancy) | D.2.6.5 | High | Only if dataflow changes within the diagnostic test interval
Sensor valid range | D.2.10.1 | Low | Detects shorts to ground or power and some open circuits
Sensor correlation | D.2.10.2 | High | Detects in-range failures
Sensor rationality check | D.2.10.3 | Medium | –

Example table: ISO 26262-5, D.11


ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 84
STEP 1.5
DETERMINE THE ARCHITECTURE
METRIC

ISO 26262-5, Table D.1 (First Edition):

Component | Low (60 %) | Medium (90 %) | High (99 %)
Power supply | Under- and over-voltage | Drift; under- and over-voltage | Drift and oscillation; under- and over-voltage; power spikes
Clock | Stuck-at | d.c. fault model; incorrect frequency | d.c. fault model; incorrect frequency; period jitter
Non-volatile memory | Stuck-at for data, addresses and control interface, lines and logic | d.c. fault model for data, addresses (includes address lines within the same block and inability to write to a cell) and control interface, lines and logic | d.c. fault model for data, addresses (includes address lines within the same block and inability to write to a cell) and control interface, lines and logic
Volatile memory | Stuck-at for data, addresses and control interface, lines and logic | d.c. fault model for data, addresses (includes address lines within the same block and inability to write to a cell) and control interface, lines and logic; soft error model for bit cells | d.c. fault model for data, addresses (includes address lines within the same block and inability to write to a cell) and control interface, lines and logic; soft error model for bit cells
Digital I/O | Stuck-at (including signal lines outside of the microcontroller) | d.c. fault model (including signal lines outside of the microcontroller) | d.c. fault model (including signal lines outside of the microcontroller); drift and oscillation
Analog I/O | Stuck-at (including signal lines outside of the microcontroller) | d.c. fault model (including signal lines outside of the microcontroller); drift and oscillation | d.c. fault model (including signal lines outside of the microcontroller); drift and oscillation

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 85
DAY 3
EXERCISE 6

 Which diagnostic coverage can


be assumed for the used
diagnostic measure (safety
mechanism)?

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 86
DAY 3
EXERCISE 6

Safety mechanism | Description | Ref. to ISO 26262 | DC achievable
SM1 | Sensor correlation | ISO 26262-5, Table D.11 – Sensors | ?

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 87
DAY 3
EXERCISE 6

Sensor correlation
Aim: To detect sensor-in-range drifts, offsets or other errors using a redundant sensor.
Description: Comparison of two identical or similar sensors to detect in-range
failures such as drifts, offsets or stuck-at failures.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 88
DAY 3
EXERCISE 6

Safety mechanism | Description | Ref. to ISO 26262 | DC achievable
SM1 | Sensor correlation | ISO 26262-5, Table D.11 – Sensors | 99 %

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 89
STEP 1.6
DETERMINE THE ARCHITECTURE
METRIC

 Totalling up of the fault rates and calculating the metrics (quantitative design verification)
 Summing the failure rates per failure mode
 Filling the sums into the formulas for SPFM, LFM and PMHF
 Comparison with the required target values
 If necessary, optimization of the HW architecture

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 90
DAY 3
EXERCISE 7

 Calculate all of the failure rates for each failure mode.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 91
DAY 3
EXERCISE 7

Analysis with respect to Safety Goal 1 (Unintended acceleration has to be avoided)

[FMEDA worksheet for R1 (R-M, 0.18 FIT) and R2 (R-M, 0.28 FIT) with the failure modes Open (40 %), Short (0 %), Drift 0.5 (30 %) and Drift 2.0 (30 %): SM1 with DC = 99 % is entered for the fault potentials identified before; the single point, residual and latent fault rates are still to be calculated.]

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 92
DAY 3
EXERCISE 7

Analysis with respect to Safety Goal 1 (Unintended acceleration has to be avoided)

[FMEDA worksheet with the calculated fault rates for R1 and R2, e.g. the residual fault rate of the R2 “Open” failure mode: 0.28 FIT × 40 % × (1 − 99 %) ≈ 0.0011 FIT; the latent fault rates of the failure modes covered by SM1 (DC = 99 %) evaluate to 0.0011 FIT and 0.00053 FIT.]

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 93
DAY 3
EXERCISE 7

Analysis with respect to Safety Goal 1 (Unintended acceleration shall be avoided)

HW part | Type | λ [FIT] | Failure mode | Distr. | SPF/RF? | SM / DC | λRF [FIT] | LF? | SM / DC | λLF [FIT]
T1 | T-M | 25.8 | open circuit gate | 5 % | x | SM1 / 99 % | 0.013 | o | |
T1 | | | open circuit drain | 5 % | o | | | o | |
T1 | | | open circuit source | 5 % | o | | | o | |
T1 | | | short circuit G-D | 20 % | o | | | o | |
T1 | | | short circuit G-S | 20 % | o | | | o | |
T1 | | | short circuit D-S | 20 % | x | SM1 / 99 % | 0.052 | o | |
T1 | | | drift (wrong amplification) | 25 % | x | SM1 / 99 % | 0.065 | o | |
K1 | K-P | 30 | open | 20 % | o | | | o | |
K1 | | | short (welded contacts) | 80 % | o | | | x | – / 0 % | 24.000
D1 | V-S | 3.7 | open | 20 % | o | | | x | – / 0 % | 0.740
D1 | | | short | 70 % | o | | | o | | 0.000
D1 | | | change of limiting characteristics high | 5 % | o | | | o | | 0.000
D1 | | | change of limiting characteristics high | 5 % | o | | | o | | 0.000

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 94
DAY 3
EXERCISE 7

Analysis with respect to Safety Goal 1 (Unintended acceleration shall be avoided)

HW part | Type | λ [FIT] | Failure mode | Distr. | SPF/RF? | SM / DC | λRF [FIT] | LF? | SM / DC | λLF [FIT]
IC1 | Amp | 5.4 | safety related | 98 % | o | | | x | SM1 / 99 % | 0.053
IC1 | | | non safety related | 2 % | o | | | o | |
M | DC | 50 | break (interruption) of the coil | 40 % | o | | | o | |
M | | | short circuit of the coil | 40 % | o | | | x | – / 0 % | 20.000
M | | | high transition resistance on the commutator | 10 % | o | | | o | |
M | | | intermittent operation | 10 % | o | | | o | |
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 95
DAY 3
EXERCISE

 Based on all of the provided


total failure rates, determine
the “SPFM”, “LFM” and
“PMHF” for the example
component (system).
 Can the defined target values
be reached?
 What could be done, if the
target values are not reached?

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 96
DAY 3
EXERCISE
λtotal,SR = 115.4 FIT
λS + λMPF,D + λMPF,P + λRF (= 0.13 FIT) + λMPF,L (= 44.79 FIT) + λSPF (= 0 FIT)

SPFM = ?
LFM = ?
PMHF ≈ ?
Achievable ASIL = ?
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 97
DAY 3
EXERCISE
λtotal,SR = 115.4 FIT
λRF = 0.13 FIT, λMPF,L = 44.79 FIT, λSPF = 0 FIT

SPFM = 99.9 % [ASIL D]
LFM = 61.1 % [ASIL B]
PMHF ≈ 0.13 FIT (using the simplified λSPF + λRF formula) or 4.6 FIT (using the β = 10 % coefficient approach) [ASIL D]

Achievable ASIL = ASIL B
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 98
DAY 3
EXERCISE

 If ASIL C were required, the LFM would be the limiting factor.
 When evaluating the results, it can be seen that the K1 short produces a high latent fault rate.
 Adding a “High” diagnostic-coverage safety mechanism for this latent fault would change the LFM from 61.1 % to 81.8 %, which would be sufficient for ASIL C. See the sketch below.
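A minimal Python sketch reproducing the result above and the effect of adding a high-coverage (99 %) safety mechanism for the K1 “short (welded contacts)” latent fault of 24 FIT; the totals are taken from the exercise slides.

lam_total = 115.4        # FIT, safety-related total
lam_spf, lam_rf, lam_mpf_lat = 0.0, 0.13, 44.79

def metrics(lam_total, lam_spf, lam_rf, lam_mpf_lat, beta=0.10):
    spfm = 1 - (lam_spf + lam_rf) / lam_total
    lfm  = 1 - lam_mpf_lat / (lam_total - lam_spf - lam_rf)
    pmhf = lam_spf + lam_rf + beta * lam_mpf_lat      # FIT, beta-coefficient estimate
    return spfm, lfm, pmhf

print(metrics(lam_total, lam_spf, lam_rf, lam_mpf_lat))   # ~ (0.999, 0.611, 4.6)

# Adding DC = 99 % for the K1 short (24 FIT latent): all but 1 % moves from latent to detected
lam_mpf_lat_new = lam_mpf_lat - 24.0 + 24.0 * (1 - 0.99)
print(metrics(lam_total, lam_spf, lam_rf, lam_mpf_lat_new)[1])   # LFM ~ 0.818 -> ASIL C target met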

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 99
STEP 2B
VERIFICATION OF FAILURE CLASSES
PER HARDWARE COMPONENT

 There is an alternative method (called “method 2”) that can be used instead of performing the PMHF evaluation
 Determination of the failure rate class (FRC) per HW part
 The required failure rate classes depend on the ASIL and on the type of fault (SPF, RF, LF)

ASIL | When to do?
A | no verification necessary
B | verification recommended
C | verification highly recommended
D | verification highly recommended

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 100
STEP 2B
VERIFICATION OF FAILURE CLASSES
PER HARDWARE COMPONENT
 Process for single point faults (SPF and RF)

[Flowchart, ISO 26262-5, Figure 3: Begin → Single-point fault? → Residual fault? If neither, continue with the evaluation procedure for dual-point failures. For a single-point fault: does it meet the failure rate class with respect to single-point faults (Table 7)? If not, add a safety mechanism to mitigate the fault. For a residual fault: does it meet the failure rate class and DC with respect to residual faults (Table 8)? If not, improve the safety mechanism. → End]

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 101
STEP 2B
VERIFICATION OF FAILURE CLASSES
PER HARDWARE COMPONENT

 Process for multiple point faults (LF)

[Flowchart, ISO 26262-5, Figure 4: Begin → Potential for dependent failures (see ISO 26262-9, Clause 7)? If yes, evaluate and resolve the dependent failures (see ISO 26262-9, Clause 7). → Plausible dual-point failure? If no → End. If yes: does it meet the failure rate class and DC with respect to latent faults (Table 9)? If not, add or improve a safety mechanism. → End]
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 102
STEP 2B
VERIFICATION OF FAILURE RATE
CLASSES PER HARDWARE PART

 Definition of failure rate classes

Failure rate class (FRC) | Target value* | FIT
1 | < 10^-10 h^-1 | < 0.1
2 | < 10^-9 h^-1 | < 1
3 | < 10^-8 h^-1 | < 10
i | < 10^-(11−i) h^-1 | (e.g. class 4: < 100)

* The reference point is the ASIL D target value from ISO 26262-5, Table 6

 The values for failure rate class 1 should be smaller than the value for ASIL D divided by 100.
 The values for failure rate class 2 should be smaller than the ten-fold value for failure rate class 1.
 The values for failure rate class 3 should be smaller than the hundred-fold value for failure rate class 1.
 The values for failure rate classes > 3 result accordingly.

ASIL | PMHF | FIT
D | < 10^-8 h^-1 | < 10
C | < 10^-7 h^-1 | < 100
(B) | < 10^-7 h^-1 | < 100
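A minimal sketch of the failure rate class definition as a function, assuming FRC i corresponds to a target value below 10^−(11−i) per hour:

def failure_rate_class(lam_per_hour):
    # smallest class i such that lam < 10**-(11 - i) per hour, i.e. FRC 1 means < 1e-10/h (< 0.1 FIT)
    i = 1
    while lam_per_hour >= 10.0 ** -(11 - i):
        i += 1
    return i

print(failure_rate_class(0.18e-9))   # R1, 0.18 FIT -> FRC 2
print(failure_rate_class(25.8e-9))   # T1, 25.8 FIT -> FRC 4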

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 103
STEP 2B
VERIFICATION OF FAILURE CLASSES
PER HARDWARE COMPONENT

 Target values for Single Point Faults (SPF)


 For each “Single Point Fault” it must be shown that the target values from ISO
26262-5, Table 7, are met.

ASIL of the Safety Goal | Target for single point faults
D | Failure rate class 1 + dedicated measures
C | Failure rate class 2 + dedicated measures, or failure rate class 1
B | Failure rate class 2, or failure rate class 1

ISO 26262-5, Table 7
Dedicated measures: Measures which back up the assumed fault rates, e.g.
» Design features such as over-design
» Initial test of the assembly components used
» “Burn in” test
» Dedicated control plan
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 104
STEP 2B
VERIFICATION OF FAILURE CLASSES
PER HARDWARE COMPONENT

 Target values for Residual Faults (RF)


 For each “Residual Point Fault” it must be shown that the target values from
ISO 26262-5, Table 8, are met

Diagnostic coverage with respect to residual faults:

ASIL of the Safety Goal | ≥ 99.9 % | ≥ 99 % | ≥ 90 % | < 90 %
D | Failure rate class 4 | Failure rate class 3 | Failure rate class 2 | Failure rate class 1 + dedicated measures
C | Failure rate class 5 | Failure rate class 4 | Failure rate class 3 | Failure rate class 2 + dedicated measures
B | Failure rate class 5 | Failure rate class 4 | Failure rate class 3 | Failure rate class 2

ISO 26262-5, Table 8

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 105
STEP 2B
VERIFICATION OF FAILURE CLASSES
PER HARDWARE COMPONENT

 Target values for Latent Faults (LF)


 For each “Latent Fault” it must be shown that the target values from ISO
26262-5, Table 9, are met.

Diagnostic coverage with respect to latent faults:

ASIL of the Safety Goal | ≥ 99 % | ≥ 90 % | < 90 %
D | Failure rate class 4 | Failure rate class 3 | Failure rate class 2
C | Failure rate class 5 | Failure rate class 4 | Failure rate class 3

ISO 26262-5, Table 9

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 106
DAY 3
EXERCISE

 What Failure Rate Class can be


achieved by R1 and R2?
 What are the advantages and
disadvantages of each
method?

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 107
DAY 3
EXERCISE

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 108
DAY 3
EXERCISE
failure rate class (FRC) | FIT
1 | < 0.1
2 | < 1
3 | < 10
i | < 10⁽ⁱ⁻²⁾ (e.g. FRC 4: < 100)

Result: both R1 and R2 achieve FRC 2.

 For very low failure rate HW elements, the FRC method can be much quicker to evaluate
 However, in most cases a safety mechanism is required anyway, and therefore no time is saved
 The FRC method is only a substitute for the PMHF; the quantitative analysis would still be required for calculating the SPFM and LFM
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 109
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 110
DERIVING TEST CASES (1)

ASIL
Methods
A B C D
1a Analysis of requirements ++ ++ ++ ++
1b Analysis of internal and external interfaces + ++ ++ ++
1c Generation and analysis of equivalence classesa + + ++ ++
1d Analysis of boundary valuesb + + ++ ++
1e Knowledge or experience based error guessingc ++ ++ ++ ++
1f Analysis of functional dependencies + + ++ ++
1g Analysis of common limit conditions, sequences and sources of common cause failures + + ++ ++
1h Analysis of environmental conditions and operational use cases + ++ ++ ++
1i Standards if existingd + + + +
1j Analysis of significant variantse ++ ++ ++ ++
a In order to derive the necessary test cases efficiently, analysis of similarities can be conducted.
b EXAMPLE values approaching and crossing the boundaries between specified values, and out of range values
c “Error guessing tests” can be based on data collected through a lessons learned process, or expert judgment, or both. It can be supported by
FMEA.
d Existing standards include ISO 16750 and ISO 11452.
e The analysis of significant variants includes worst case analysis.

ISO 26262, Table 10 — Methods for deriving test cases for hardware integration testing

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 111
CHOICE OF TEST METHODS (1)

ASIL
Methods
A B C D
1 Functional testinga ++ ++ ++ ++
2 Fault injection testing + + ++ ++
3 Electrical testing ++ ++ ++ ++
ISO 26262-5, Table 11 — Hardware integration tests to verify the completeness and correctness of the safety
mechanisms implementation with respect to the hardware safety requirements

aThe HW is given input data, which adequately characterizes the expected normal operation.
The outputs are observed and their response is compared with that given by the specification.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 112
CHOICE OF TEST METHODS (2)

 1 Functional testing: Only the HW aspect of the item should be considered. Test cases should demonstrate that the HW fulfills the specified functions (“straightforward tests”)
 2 Fault injection test: The injection of failures can be performed physically (test adapter) or logically (SW flags).
In the case of physical injection, SW test functions are used together with SW basic functions (e.g. drivers). Both kinds of functions have to be verified sufficiently to fulfill the requirements for the highest ASIL. The test SW should communicate the HW failures to the tester.
Note: Logical injection-based tests can be done on HW simulations (e.g. model-based development, netlist, etc.)

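A minimal sketch of the “logical” injection via SW flags; all identifiers, the build switch and the 0xFFFF error pattern are assumptions for illustration:

#include <stdbool.h>
#include <stdint.h>

volatile bool g_inject_adc_fault = false;   /* set by the test harness */

static uint16_t read_adc_raw(void)
{
    return 512u;                            /* placeholder for the real driver */
}

uint16_t read_adc(void)
{
    uint16_t value = read_adc_raw();
#ifdef FAULT_INJECTION_BUILD                /* compiled into the test SW only */
    if (g_inject_adc_fault) {
        value = 0xFFFFu;                    /* simulate a stuck-at / out-of-range value
                                               so that the error path of the safety
                                               mechanism is exercised */
    }
#endif
    return value;
}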
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 113
CHOICE OF TEST METHODS (3)

 3 Electrical testing: Test cases concentrate on compliance with all HW safety requirements (non-functional) and with all electrical properties specified in the HW design.
 Example standards: ISO 16750 and ISO 11452
 ISO 16750, Road vehicles – Environmental
conditions and electrical testing for electrical and
electronic equipment (e.g., overvoltage, reverse
voltage, etc.)
 ISO 11452 - Component Test Methods for Electrical
Disturbances in Road Vehicles Package (e.g., EMI)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 114
MORE HW INTEGRATION TESTS

ISO 26262-5, Table 12 — Hardware integration tests to verify robustness and operation under external stresses
ASIL
Methods
A B C D
1a Environmental testing with basic functional verificationa ++ ++ ++ ++
1b Expanded functional testingb o + + ++
1c Statistical testingc o o + ++
1d Worst case testingd o o o +
1e Over limit testing e + + + +
1f Mechanical testing ++ ++ ++ ++
1g Accelerated life testf + + ++ ++
1h Mechanical Endurance testg ++ ++ ++ ++
1i EMC and ESD testh ++ ++ ++ ++
1j Chemical testingi ++ ++ ++ ++
a During environmental testing with basic functional verification the hardware is put under various environmental conditions during which the hardware requirements are
assessed. ISO 16750-4 (Road vehicles -- Environmental conditions and testing for electrical and electronic equipment -- Part 4: Climatic loads) can be applied.
b Expanded functional testing checks the functional behaviour of the item in response to input conditions that are expected to occur only rarely (for instance extreme mission
profile values), or that are outside the specification of the hardware (for instance an incorrect command). In these situations, the observed behaviour of the hardware element is
compared with the specified requirements.
c Statistical tests aim at testing the hardware element with input data selected in accordance with the expected statistical distribution of the real mission profile. The
acceptance criteria are defined so that the statistical distribution of the results confirms the required failure rate.
d Worst case testing aims at testing cases found during worst case analysis. In such a test, environmental conditions are changed to their highest permissible marginal values
defined by the specification. The related responses of the hardware are inspected and compared with the specified requirements.
e In over limit testing, the hardware elements are submitted to environmental or functional constraints increasing progressively to values more severe than specified until they
stop working or they are destroyed. The purpose of this test is to determine the margin of robustness of the elements under test with respect to the required performance.
f Accelerated life test aims at predicting the behaviour evolution of a product in its normal operational conditions by submitting it to stresses higher than those expected during
its operational lifetime. Accelerated testing is based on an analytical model of failure mode acceleration.
g The aim of these tests is to study the mean time to failure or the maximum number of cycles that the element can withstand. Test can be performed up to failure or by damage
evaluation.
h ISO 11452-2; ISO 11452-4; ISO 7637-2; ISO 10605 and ISO 7637- 3 can be applied for EMC tests and ISO 16750-2 can be applied for ESD tests.
I For chemical test, ISO 16750–5 can be applied.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 115
HARDWARE TESTS SUMMARY

 Test methods are the same for safety relevant and non-
safety relevant requirements
 Reference to other well-known industry standards
 An analysis of the existing test strategies is required to determine
whether they are sufficient
 Assure focus on safety related HW parts

 Existing hardware test strategies shall be analyzed and extended


if necessary
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 116
SUMMARY OF DAY 3
HARDWARE DEVELOPMENT

 Functional Safety of hardware is largely based on the


evaluation of probabilistic metrics
 The interface between hardware and software has to
be clarified (HSI)
 Hardware safety requirements are a refinement of
component safety requirements (TSC)
 Random hardware faults are categorized according to
their impact on each safety goal
 ISO 26262 establishes target values for the
calculation of SPFM, LFM, PMHF and FRC
 Existing hardware test strategies shall be analyzed
and extended if necessary

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 117
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 118
CONSIDERED PARTS
OF THE SOFTWARE DEVELOPMENT

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 119
RESPONSIBILITIES AND TARGETS

Responsibilities
 The software development phase is typically the responsibility of the
software suppliers, who have the knowledge for implementing
software safety mechanisms at the component level
Targets
 In the software development phase, a software is designed in
accordance with the required safety integrity of safety requirements
derived from the system development phase (TSC)

 Functional Safety in the software development is mainly based on the use of


processes, techniques and methods (best practices)
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 120
SOFTWARE PHASE MODEL

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 121
INITIATION OF PRODUCT
DEVELOPMENT AT SOFTWARE
LEVEL - ACTIVITIES

Activities during this phase:


 Planning of activities during software development
 Definition and documentation of selection criteria for
tools and programming languages
 Selection of suitable tools, techniques and methods
(see also Day 4)
 Definition of company-/project-internal tool application
guidelines

 Initiation of software development means to plan activities and select tools,


techniques and methods along Functional Safety rules of the company
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 122
INTENTION OF SW-GUIDELINES

 Supporting the developer in designing robust and fault resistant SW-


code
 Allow more flexibility in the development teams
 Improve the quality and maintainability of the source code
Example
Without braces:
    while ( new_data_available )
        Process_data();
    Service_watchdog();

With braces (MISRA-C, rule 14.8):
    while ( new_data_available )
    {
        Process_data();
        Service_watchdog();
    }

Braces reduce the risk of unintended empty or single statements when
nearby code is changed or commented out
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 123
SUITABLE PROGRAMMING LANGUAGE
ISO 26262 requires use of a “suitable” programming language, but doesn’t provide recommendations.
IEC 61508 recommends the following “suitable” programming languages:

IEC 61508, Part 7,


Annex C:
Table C.1 –
Recommendations for
specific programming
languages

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 124
SUITABLE PROGRAMMING LANGUAGE

Suitable programming languages according to IEC 61508

IEC 61508, Part 7, Annex C:


Table C.1 – Recommendations for specific programming languages

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 125
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 126
PART 6, CLAUSE 6
SPECIFICATION OF SOFTWARE
SAFETY REQUIREMENTS

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 127
SPECIFICATION OF SOFTWARE
SAFETY REQUIREMENTS - ACTIVITIES

Activities during this phase:


 Derivation of the software safety requirements from the technical
safety concept and the system design specification considering the
 Specified hardware and software configurations
 Hardware-software interface
 Hardware design specification (e.g. use of multicore architecture)
 Time-related limitations (e.g. speed of the μprocessors and interfaces)
 External interfaces (e.g. sensors)
 Every operating mode of the vehicle, the system and the hardware which
may have an impact on the software.
 Determination and documentation of the interdependencies between
software and hardware
 Verification of the work results/products regarding consistency and
completeness
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 128
SW-SPECIFICATION AND DESIGN

Source: ISO26262-10, Figure 8


ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 129
SPECIFICATION OF SOFTWARE
SAFETY REQUIREMENTS (SWSR)

 SWSR are valid for software-based functions where a


malfunction may lead to the violation of a technical safety
requirement.
 Examples: functions
 which are used to detect, report and treat faults
– of safety-relevant hardware elements (e.g.faulty sensor);
– of safety-relevant software elements (e.g. Watchdog);
 which serve to achieve a safe system condition (safe state);
 which are used to perform self-tests, during operation and in a service case (e.g.
storage tests);
 which permit modifications to be made to the software during production and in a
service case (e.g. new software release; over-the-air programming (OTA));
 which deal with time-critical or performance-relevant operations
 Software Safety Requirements are derived from safety requirements at
component (system) level
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 130
DAY 3
EXERCISE 10

 Create software safety


requirements based on
following technical safety
requirements:

COMPR100: Faults in RAM shall be


detected

COMPR101: The number of faults shall


be counted.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 131
SOLUTION TO EXERCISE 10 (DAY 3)

 Create software safety requirements based on following


technical safety requirements:

COMPR100: Faults in RAM shall be detected

COMPR101: The number of faults shall be counted

© SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 132
SOLUTION TO EXERCISE 10 (DAY 3)

 Create software safety requirements based on following


technical safety requirements:

COMPR100: Faults in RAM shall be detected

SW-SR100_01: A “March Test” of the entire RAM shall be


implemented
SW-SR100_02: The entire RAM-test shall run during power up of the
ECU
COMPR101: The number of faults shall be counted

SW-SR101_01: A counter unit shall be implemented, which


counts all detected safety related faults
SW-SR101_02: When the counter exceeds a threshold of TBD, the
motoring function shall be disabled
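A purely illustrative C sketch of SW-SR101_01/_02; the threshold value and all identifiers are assumptions, since the requirement leaves the threshold TBD:

#include <stdbool.h>
#include <stdint.h>

#define FAULT_COUNTER_THRESHOLD  3u          /* assumed value, TBD in SW-SR101_02 */

static uint8_t safety_fault_counter = 0u;
static bool    motoring_enabled     = true;

void report_safety_fault(void)
{
    if (safety_fault_counter < UINT8_MAX) {
        safety_fault_counter++;              /* SW-SR101_01: count detected safety related faults */
    }
    if (safety_fault_counter > FAULT_COUNTER_THRESHOLD) {
        motoring_enabled = false;            /* SW-SR101_02: disable the motoring function */
    }
}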
© SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 133
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 134
PART 6, CLAUSE 7
SOFTWARE ARCHITECTURAL DESIGN

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 135
WHAT IS A SOFTWARE
ARCHITECTURE?

 Definition provided by ISO 26262-1, 1.3 Architecture:


 Representation of the structure or functions or systems or elements
that allows identification of building blocks, their boundaries and
interfaces, and includes the allocation of functions to hardware and
software elements.
 Examples:
  Layered architecture: Component 1 – Interface 1-2 – Component 2 – Interface 2-3 – Component 3
  Event-driven architecture: Events  Generator  Dispatcher  Handler 1, Handler 2, …, Handler 3
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 136
SOFTWARE ARCHITECTURE DESIGN
ACTIVITIES

Activities during this phase:


 Description of the software architecture using a suitable level of
abstraction
 Identification of all software units
 Verification of the architecture

ASIL
Methods
A B C D
1a Informal notations ++ ++ + +
1b Semi-formal notations + ++ ++ ++
1c Formal notations + + + +
Table 2 — Notations for software architectural design from ISO 26262-6

 The use of notations shall avoid systematic faults


by providing a clear interpretation
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 137
EXAMPLE OF NOTATIONS

 Natural language - e.g.


The car shall be painted in red (RAL 3020).
 Informal notations – syntax
(symbols/structure of code) and semantics
(meaning of symbols) have not been defined
at all, or only partially,
e.g. use cases
 Semi-formal notations – syntax has been
defined, the associated semantics at best
only partially,
e.g. data flow diagram
 Formal notations - syntax and semantics
have been defined,
e.g. finite state machine
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 138
EXAMPLE
INFORMAL NOTATION

 Informal Notation
Drawings and diagrams without fully defined syntax and
semantics

Example:

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 139
EXAMPLES
SEMI-FORMAL NOTATION

 Semi-formal Notation
The syntax is defined, but the semantic is not completely defined

Examples:
 logic/function block diagrams: described in IEC 61131-3
 sequence diagrams: described in IEC 61131-3
 data flow diagrams: see IEC 61508-3, ref. C.2.2
 finite state machines/state transition diagrams: see IEC 61508-3, ref. B.2.3.2
 time Petri nets: see IEC 61508-3, ref. B.2.3.3
 entity-relationship-attribute Data models: see IEC 61508-3, ref. B.2.4.4
 message sequence charts: see IEC 61508-3, ref. C.2.14
 decision/truth tables: see IEC 61508-3, ref. C.6.1
 UML: see IEC 61508-3, ref. C.3.12
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 140
EXAMPLE:
SEMI-FORMAL NOTATION
DATA FLOW DIAGRAM

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 141
FORMAL NOTATION

 Formal Notation
 Both syntax and semantic are fully defined
 Based on mathematical models
 Very rarely used, because difficult to implement
 Not common in automotive

 Examples:
 CCS, CSP, HOL, LOTOS, OBJ, temporal logic, VDM
and Z ( methods not described in detail)
 Other techniques like “finite state machines” and “Petri
nets”, are often seen as formal methods, depending on
their mathematical basis

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 142
SOFTWARE ARCHITECTURE

Objectives of the architecture description:


 A software structure which is verifiable.
 Progressive refinement ( SW design, SW module design)
 Maintainable/serviceable SW structure
 Allocation of the safety-relevant functionality to subcomponents
 Estimations in the architecture document (runtime, memory (RAM, ROM, parameters), interface capacities (bus, etc.))

with
 Modular structure
 Encapsulation principle (hiding data structures to prevent manipulation)
 Low complexity

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 143
BASIC PRINCIPLES OF SW
ARCHITECTURE
Methods | ASIL A B C D | Tool / Metric (examples) | Interpretation (examples)
1a Hierarchical structure of software components | ++ ++ ++ ++ | DoxyGen | Is it understandable?
1b Restricted size of software components a | ++ ++ ++ ++ | McCabe, HIS metric, LOC metric | SW modularisation, use of metrics
1c Restricted size of interfaces a | + + + + | DoxyGen | Is it understandable?
1d High cohesion within each software component b | + ++ ++ ++ | LCOM4, cohesion metric | Measure of the “tightness” of the connections between data and subprograms within one module
1e Restricted coupling between software components a, b, c | + ++ ++ ++ | RFC metric | Measure of the “tightness” of connections between modules
1f Appropriate scheduling properties | ++ ++ ++ ++ | Operating System (OS) | Check of the maximum task run time; TTA (time-triggered) architecture feasible?
1g Restricted use of interrupts a, d | + + + ++ | | Only 1 timer, 1 receive and 1 transmission interrupt allowed; otherwise interrupts to be prioritized correctly, interrupts disabled only for a short time in critical SW areas, interrupts to be documented in detail
a In methods 1b, 1c, 1e and 1g "restricted" means to minimize in balance with other design considerations.
b Methods 1d and 1e can, for example, be achieved by separation of concerns which refers to the ability to identify, encapsulate, and
manipulate those parts of software that are relevant to a particular concept, goal, task, or purpose.
c Method 1e addresses the limitation of the external coupling of software components.
d Any interrupts used have to be priority-based.
ISO26262-6, Table 3 — Principles for software architectural design (with add-ons from SGS TÜV Saar)
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 144
ERROR DETECTION (ARCHITECTURE)

Methods | ASIL A B C D | Comments
1a Range checks of input and output data | ++ ++ ++ ++ | Online check of the valid input and output data range
1b Plausibility check a | + + + ++ | e.g. assertion checks, comparison of calculated data and expected data
1c Detection of data errors b | + + + + | e.g. check sums, CRC check, dual inverse stored variables
1d External monitoring facility c | o + + ++ | e.g. watchdog, two CPUs
1e Control flow monitoring | o + ++ ++ | e.g. watchdog
1f Diverse software design | o o + ++ | Diverse algorithms; sometimes difficult to realize and expensive  decomposition
a Plausibility checks can include using a reference model of the desired behavior, assertion checks, or comparing signals from
different sources.
b Types of methods that may be used to detect data errors include error detecting codes and multiple data storage.
c An external monitoring facility can be, for example, an ASIC or another software element performing a watchdog function.

ISO26262-6, Table 4 — Mechanisms for error detection at the software architectural level (with add ons from SGS TÜV Saar)

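To make mechanisms 1a and 1b concrete, a small sketch in C; the signal names, the valid range and the delta limit are invented for illustration:

#include <stdbool.h>
#include <stdint.h>

#define SPEED_MAX_KMH     300u    /* assumed valid range: 0..300 km/h */
#define SPEED_MAX_DELTA    10u    /* assumed plausible deviation between sources */

/* 1a Range check of an input signal */
static bool speed_in_range(uint16_t speed_kmh)
{
    return speed_kmh <= SPEED_MAX_KMH;    /* lower bound is 0 for the unsigned type */
}

/* 1b Plausibility check: compare the sensor value with an independent source
   (e.g. a reference model or a second sensor) */
static bool speed_plausible(uint16_t speed_sensor, uint16_t speed_reference)
{
    uint16_t delta = (speed_sensor > speed_reference)
                         ? (uint16_t)(speed_sensor - speed_reference)
                         : (uint16_t)(speed_reference - speed_sensor);
    return delta <= SPEED_MAX_DELTA;
}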
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 145
ERROR CONTROL (ARCHITECTURE)

Methods | ASIL A B C D | Comments
1a Static recovery mechanism a | + + + + | If a fault is detected, the system is reset to an earlier internal operating condition whose correctness is known, or recovery is performed by repetition
1b Graceful degradation b | + + ++ ++ | The architecture ensures that functions with higher priority are operated before the lower-priority ones if the resources are not sufficient to perform all system functions
1c Independent parallel redundancy c | o o + ++ | N-version programming
1d Correcting codes for data | + + + + | Although only a part of the faults can be corrected in a safety-relevant system, it is often better to reject wrong data

a Static recovery mechanisms can include the use of recovery blocks, backward recovery, forward recovery and recovery through
repetition.
b Graceful degradation at the software level refers to prioritizing functions to minimize the adverse effects of potential failures on
functional safety.
c Independent parallel redundancy can be realized as dissimilar software in each parallel path.

ISO 26262-6, Table 5 — Mechanisms for error handling at the software architectural level (with add ons from SGS TÜV Saar)

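A minimal sketch of mechanism 1b (graceful degradation); the function names, priorities and the degradation criterion are assumptions for illustration:

#include <stdbool.h>
#include <stddef.h>

typedef struct {
    void (*run)(void);
    unsigned int priority;                  /* higher value = more safety-relevant */
} sw_function_t;

static void torque_limiter(void) { /* safety-relevant function */ }
static void seat_heating(void)   { /* QM comfort function */ }

static const sw_function_t cyclic_functions[] = {
    { torque_limiter, 10u },
    { seat_heating,    1u },
};

void run_cycle(bool resources_degraded)
{
    /* When resources are insufficient, only functions above the priority
       limit are executed; lower-priority functions are shed. */
    unsigned int min_priority = resources_degraded ? 5u : 0u;
    for (size_t i = 0u; i < sizeof(cyclic_functions) / sizeof(cyclic_functions[0]); i++) {
        if (cyclic_functions[i].priority >= min_priority) {
            cyclic_functions[i].run();
        }
    }
}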
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 146
VERIFICATION OF SW
ARCHITECTURE: METHODS
Objectives :
 Compliance with SW SRS
 Compatibility with the target hardware
 Demonstration of guidelines
Methods | ASIL A B C D | Comments
1a Walk-through of the design a | ++ + o o | Discussion with reviewer
1b Inspection of the design a | + ++ ++ ++ | Review according to a defined process (e.g. use of check lists)
1c Simulation of dynamic parts of the design b | + + + ++ | Model based development
1d Prototype generation | o o + ++ | Model based development
1e Formal verification | o o + + | Model based development; needs a plant model
1f Control flow analysis c | + + ++ ++ | Analysis of the correct program flow
1g Data flow analysis c | + + ++ ++ | e.g. SW-FMEA
a In the case of model-based development these methods can be applied to the model.
b Method 1c requires the usage of executable models for the dynamic parts of the software architecture.
c Control and data flow analysis may be limited to safety-related components and their interfaces.
ISO 26262-6, Table 6 — Methods for the verification of the software architectural design (add ons by SGS TÜV Saar)
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 147
DIGRESSION: SW-FMEA (NASA-GB-
1740.13)

 SW-FMEAs (Software Failure Mode and Effects Analysis) identify key


software fault modes for data and software actions.
 It analyzes the effects of abnormalities on the system and components
in the system.
 Each component is examined, and all the ways it can fail are listed.
 SW-FMEA can potentially identify:
 Hidden and unanticipated failure modes,
 System interactions,
 Dependencies within the SW and between SW and HW
 Unstated assumptions
 Inconsistencies between the requirements and the design
 Safety Analysis at the software architectural level is required
by ISO 26262 (Edt. 1), but the method is not described in detail
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 148
SOFTWARE SAFETY ANALYSIS

To satisfy ISO 26262 requirements for software safety analysis and dependent
failure analysis for software, a series of three steps can be followed:

STEP 1: Software Error Mitigations Checklist


• Global safety mechanisms to cover common static and dynamic failure modes
• Simplifies the main protections in a checklist format
• Can be used for a generic microprocessor, and later identify specific errors

STEP 2: Application Software FMEA


• Evaluates causes, effects, and protections in safety application software
• Relates findings to specific measures and test cases

STEP 3: Dependent Failure Analysis (Freedom-From-Interference Analysis)


• Evaluates potential for safety software to be defeated/corrupted by non-safety
elements or lower ASIL elements

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 149
EXAMPLE
SOFTWARE MITIGATION CHECKLIST

ID | Potential Software Failures | Static/Dynamic | Mitigation Measures | Safety Requirement Reference
Incomplete execution:
SW_M1 | Execution interrupted | X | Program flow check | SWSR-53
SW_M2 | Execution stopped | X | Program flow check | SWSR-53
SW_M3 | Deadlock (two or more processes waiting on each other) | X X | Program flow check | SWSR-53
SW_M4 | Resource starvation - execution time available is less than what is needed | X | Program flow check | SWSR-53
SW_M5 | Compilation errors - ignored warning leading to incorrect execution | X X | Usage of Organization Coding Guidelines - no compiler errors/warnings allowed in production software | Coding Guideline 37
SW_M6 | COTS execution - errors in vendor software leading to stopped execution | X | Program flow check | SWSR-53
SW_M7 | Unsupported case condition | X | Usage of Organization Coding Guidelines: unsupported case conditions are not allowed in production software | Coding Guideline 33, Test Case 1234

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 150
EXAMPLE
SOFTWARE FMEA

No. | Software Component (Function) | Software Description | Failure Mode | Software Functions Affected | Local Effect (function level) | System Effect (sensor level; consider if mitigation is already available) | Does the system (vehicle level) impact map to any of the identified Hazards in the HARA? (Hazard ID, title and ASIL ranking)

b1 | Testunit_2 | test of memory of flash | Fails to execute - does not run | To be clarified | flash error not detected; system behaviour can be undefined | sensor delivers wrong value, if the watchdog does not detect a failure in program behaviour | Wrong output signal / malfunction of safety function / ASIL D
 | | | Incomplete execution | To be clarified | flash test is not completed, no output | sensor does not have any output |
 | | | Timing error - sum of start-up timing does not meet requirement | To be clarified | no local effect because the sensor does not take care of timing during initialization | sensor delivers the first output delayed |
 | | | Error in execution - ok, but memory has errors | To be clarified | flash error not detected; system behaviour can be undefined | sensor delivers wrong value, if the watchdog does not detect a failure in program behaviour |
 | | | Error in execution - not ok, but memory ok | To be clarified | specified behaviour: reset of sensor | sensor does not have any output |
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 151
PART 6, CLAUSE 8
SOFTWARE UNIT DESIGN AND
IMPLEMENTATION

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 152
SOFTWARE UNIT DESIGN AND
IMPLEMENTATION - ACTIVITIES

ASIL
Methods Comments
A B C D

1a Natural language ++ ++ ++ ++ Always required


1b Informal notations ++ ++ + + Syntax and semantics are not defined
1c Semi-formal notations + ++ ++ ++ Syntax is defined, semantics not completely defined
1d Formal notations + + + + Syntax and semantics are defined

ISO 26262-6, Table 7 — Notations for software unit design (add ons by SGS TÜV Saar)

 To specify the software unit design it is mandatory to use natural language


plus notations (rules for syntax and semantics)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 153
SOFTWARE UNIT
AND EMBEDDED SOFTWARE

 A software unit is
 An atomic level software component of the software architecture that
can be subjected to stand-alone testing.

 An embedded software is
 A fully-integrated software to be executed on a processing element
(e.g. Microcontroller, Field Programmable Gate Array (FPGA) or an
Application Specific Integrated Circuit (ASIC))

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 154
SOFTWARE UNIT DESIGN AND
IMPLEMENTATION - METHODS

Methods | ASIL A B C D | Tool / Interpretation (examples)
1a One entry and one exit point in subprograms and functions a | ++ ++ ++ ++ | MISRA-Rules
1b No dynamic objects or variables, or else online test during their creation a, b | + ++ ++ ++ | MISRA-Rules
1c Initialization of variables | ++ ++ ++ ++ | MISRA-Rules
1d No multiple use of variable names a | + ++ ++ ++ | MISRA-Rules
1e Avoid global variables or else justify their usage a | + + ++ ++ | MISRA-Rules
1f Limited use of pointers a | o + + ++ | MISRA-Rules; no pointer arithmetic, no pointers to functions; array indexing with pointers is allowed, HW access with pointers is allowed
1g No implicit type conversions a, b | + ++ ++ ++ | MISRA-Rules
1h No hidden data flow or control flow c | + ++ ++ ++ | MISRA-Rules; no continue statements, no backward gotos, no unreachable code, no dead code (e.g. A = A;)
1i No unconditional jumps a, b, c | ++ ++ ++ ++ | MISRA-Rules; no continue statements, no backward gotos
1j No recursions | + + ++ ++ | MISRA-Rules
All methods generally require a SW tool.
a Methods 1a, 1b, 1d, 1e, 1f, 1g and 1i may not be applicable for graphical modelling notations used in model-based development.
b Methods 1g and 1i are not applicable in assembler programming.
c Methods 1h and 1i reduce the potential for modelling data flow and control flow through jumps or global variables.

ISO 26262-6, Table 8 — Design principles for software unit design and implementation (add-ons by SGS TÜV Saar)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 155
SOFTWARE UNIT DESIGN AND
IMPLEMENTATION - VERIFICATION

Methods | ASIL A B C D | Comments
1a Walk-through a | ++ + o o
1b Inspection a | + ++ ++ ++
1c Semi-formal verification | + + ++ ++
1d Formal verification | o o + +
1e Control flow analysis b, c | + + ++ ++ | See next slides
1f Data flow analysis b, c | + + ++ ++
1g Static code analysis | + ++ ++ ++
1h Semantic code analysis d | + + + +
a In the case of model-based software development the software unit specification design and implementation can be verified at
the model level.
b Methods 1e and 1f can be applied at the source code level. These methods are applicable both to manual code development
and to model-based development.
c Methods 1e and 1f can be part of methods 1d, 1g or 1h.
d Method 1h is used for mathematical analysis of source code by use of an abstract representation of possible values for the
variables. For this it is not necessary to translate and execute the source code.
ISO 26262-6, Table 9 — Methods for the verification of software unit design and implementation (add ons by SGS TÜV Saar)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 156
SW UNIT DESIGN
VERIFICATION BY INSPECTION

Example: Checklist
Checklist for Code Reviews, (list is not complete, only an example)
 Has the design been properly translated into code? (The results of the procedural design should be
available during this review.)
 Is the document header complete:
a) the title, referring to the scope of the content,
b) the author and approver,
c) unique identification of each different revision (version) of a document,
d) the change history,
e) the status. E.g.: “draft", "released".
 Are there misspellings and typos?
 Is there compliance with coding standards for language style, comments, etc.?
 Are there incorrect or ambiguous comments?
 Are comments useful or are they simply poor coding?
 Are data types and data declarations correct?
 Are physical constants correct?
 Has maintainability been considered?
and so on ….

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 157
SW UNIT DESIGN
VERIFICATION BY THE USE OF
CONTROL FLOW ANALYSIS

 Objective
Detection of bad or incorrect program structures
 Description
The control flow analysis is a static analysis method. The
program is analyzed to get a flow graph, which can be
analyzed for:
• Unreachable code, as a result of unconditional jumps
• Knotted code  badly structured code (see example on the next page)

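A small C fragment of the kind such an analysis would flag (illustrative only, not taken from the training material):

int saturate(int value, int limit)
{
again:
    if (value > limit) {
        value = limit;
        goto again;        /* backward goto: creates a "knot" in the flow graph */
    }
    return value;
    value = 0;             /* unreachable code, reported by the analysis */
}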
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 158
SW UNIT DESIGN
VERIFICATION BY THE USE OF
CONTROL FLOW ANALYSIS - EXAMPLE

Figure: control flow graphs of well-structured code vs. badly structured (“knotted”) code.
Legend: knot = instruction, path = control flow.
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 159
SW UNIT DESIGN
VERIFICATION BY THE USE OF
DATA FLOW ANALYSIS

 Objective
Identification of bad or incorrect program structures
 Description
The data flow analysis is a static method, which is typically combined
with the information from the control flow analysis. The analysis checks
the following:
• Variables which can be read without initialization
• Variables which can be written several times without being read in between. This could be an indication of skipped code
• Variables which are written but never read. This could be an indication of redundant code

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 160
SW UNIT DESIGN
VERIFICATION BY THE USE OF
STATIC ANALYSIS

 The goal of static analysis is the detection of existing errors in a


document (e.g. source code).
 Static Analysis
 An evaluation process in which a software program is systematically
assessed without necessarily executing the program.
 All static analysis methods can in principle be performed without tool
support. However, tool support is useful.
 The evaluation may typically be computer-aided generated and usually
includes analysis of such features as program logic, data paths, interfaces
and variables
 Typically static analysis combines Control Flow and Data Flow Analysis

 Static code analysis can be seen as “state of the art” at software unit level
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 161
SW UNIT DESIGN
VERIFICATION BY THE USE OF
DATA FLOW ANALYSIS - EXAMPLE

void MinMax (int &min, int &max)
{
    int hilf;
    if (min > max)
    {
        max = hilf;   /* Alarm: variable “hilf” is not initialized. */
        max = min;    /* Warning: variable “max” is written several times but never read in between. */
        hilf = min;   /* Warning: variable “hilf” is written but never read. */
    }
}
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 162
DAY 3
EXERCISE 11
Static Code Analysis
01 int main ()
What issues do you see?
02 {
03 char c = 'x';
04 int y;
05 int i;
06 while (c != 'x');
07 {
08 c = getchar ();
09 if (c == 'e') return 0;
10 switch (c)
11 {
12 case '\n':
13 printf ("line break\n");
14 default:
15 printf ("%c",c);
16 }
17 }
18 printf ("\nreturn now");
19 return y;
20 }
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 163
SOLUTION TO EXERCISE 11 (DAY 3)
int main ()
{
    char c = 'x';
    int y;
    int i;                               /* Variable "i" is declared but not used */
    while (c != 'x');                    /* Suspected infinite loop: no value used in
                                            the loop test (c) is modified by the test
                                            or the (empty) loop body */
    {
        c = getchar ();                  /* Assignment of int to char */
        if (c == 'e') return 0;
        switch (c)
        {
            case '\n':
                printf ("line break\n"); /* Fall-through case (no preceding break) */
            default:
                printf ("%c",c);
        }
    }
    printf ("\nreturn now");
    return y;                            /* Variable "y" is not initialized */
}

© SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 164
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 165
PART 6, CLAUSE 10
SOFTWARE-INTEGRATION AND TEST

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 166
PLANNED INTEGRATION

 Software integration shall be a planned integration of all software


units considering the functional interactions/context and interfaces
 Software integration can be processed in one or more steps.
 A “big bang” strategy can be confusing
 A stepwise procedure is recommended

Figure: stepwise procedure vs. “big bang” (rapid change)

 Integration means a stepwise building up of the planned software architecture


ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 167
SOFTWARE-UNIT TESTING

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 168
SOFTWARE UNIT TESTING
ACTIVITIES AND AIMS

Objectives:
 Compliance with SW unit design specification
 Compliance with HW/SW interface specification (HSI)
 Demonstration of correct implementation of the functionality
 Demonstration that no unintended functionality has been
implemented
 Robustness
 Demonstration that sufficient resources for functionality are available

Activities during this phase:


 Selection of suitable test cases with PASS / FAIL criteria
 Execution and documentation of tests
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 169
DERIVING TEST CASES FROM
SPECIFICATIONS

 This is the standard approach for deriving test cases in the industry.
 Deriving test cases is only possible after the release of the requirements.
 Quality of test cases depends on the quality of the requirements.
 Example:
 Requirement: One procedure named “InterriorLightning” shall switch
the interior lighting on or off depending on the input value.
 The following test cases are used during testing:

Test Case | Input Value | Expected Output Value
TF1 | (LIGHT_ON) | Interior lighting on
TF2 | (LIGHT_OFF) | Interior lighting off
TF3 | () | ERROR
TF4 | (LIGHT) | ERROR
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 170
DERIVATION OF TEST CASES –
EXAMPLE OF BOUNDARY VALUE
ANALYSIS

 A 10-bit Digital-Analog-Converter has a valid input value range from 0 to 1023.
 Test cases are created with the following input values:

Test Case Description Input value Expected Output


No. (Test value) Value
1 below minimum -1 Error
2 at minimum 0 0
3 above minimum 1 1
4 below maximum 1022 1022
5 at maximum 1023 1023
6 above maximum 1024 Error

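A hedged sketch of how these six test cases could be executed as a unit test; the converter interface (error signalled as -1) is an assumption, since the slide does not define it:

#include <assert.h>

/* Hypothetical unit under test: valid input range 0..1023, -1 on error */
static int dac_convert(int input)
{
    if ((input < 0) || (input > 1023)) {
        return -1;                           /* error */
    }
    return input;                            /* placeholder for the real conversion */
}

int main(void)
{
    assert(dac_convert(-1)   == -1);         /* 1: below minimum */
    assert(dac_convert(0)    ==  0);         /* 2: at minimum    */
    assert(dac_convert(1)    ==  1);         /* 3: above minimum */
    assert(dac_convert(1022) == 1022);       /* 4: below maximum */
    assert(dac_convert(1023) == 1023);       /* 5: at maximum    */
    assert(dac_convert(1024) == -1);         /* 6: above maximum */
    return 0;
}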
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 171
SW UNIT TESTING
METHODS
Methods | ASIL A B C D | Comments
1a Requirements-based test a | ++ ++ ++ ++
1b Interface test | ++ ++ ++ ++
1c Fault injection test b | + + + ++ | See next slides
1d Resource usage test c | + + + ++
1e Back-to-back comparison test between model and code, if applicable d | + + ++ ++

a The software requirements at the unit level are the basis for this requirements-based test.
b This includes injection of arbitrary faults (e.g. by corrupting values of variables, by introducing code mutations, or by corrupting
values of CPU registers).
c Some aspects of the resource usage test can only be evaluated properly when the software unit tests are executed on the
target hardware or if the emulator for the target processor supports resource usage tests.
d This method requires a model that can simulate the functionality of the software units. Here, the model and code are stimulated
in the same way and results compared with each other.

ISO 26262-6, Table 10 — Methods for software unit testing (add ons by SGS TÜV Saar)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 172
SW UNIT TESTING
REQUIREMENTS BASED TEST
EXAMPLE

Requirement Function Test Case Teststatus


R1.1 Create a user ID input box. TC1.1 PASSED
R1.2 Create a Password input box. TC1.2 PASSED
R1.3 Create a Submit button. TC1.1 PASSED
TC1.2
TC1.3
R1.4 Create a Cancel button. TC1.1 PASSED
TC1.2
TC1.4
R2.1 Check User ID value. TC2.1 FAILED
R2.2 Check User Password value. TC2.2 EXECUTED
R3.1 Display Homepage. TC3.1 OPEN

 Requirements based testing means to have at least one test case


per safety requirement
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 173
INTERFACE TESTING

 This method demonstrates the consistency and correct


implementation of interfaces, e.g. HW-SW Interface or
interfaces between software components.
 Interface tests cover the testing of analog and digital inputs and
outputs, boundary tests and tests with equivalence classes, in order
to fully test the specified interface (including compatibility, timings
and the specified design)
 Internal interfaces can be tested by static tests of the
software / hardware compatibility as well as by dynamic
tests.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 174
SW UNIT TESTING
FAULT INJECTION TEST

 Objective: Check for robustness of each SW-unit


 This improves the test coverage
 Test of code which cannot be reached easily
(e.g. failure handling code).
 Examples:
 Manipulation of CPU registers by the use of a debugger
 Manipulation of variables by the use of a debugger
 Manipulation of HW signals by the use of a special test board
 Manipulation of source code before compilation
e.g. a = a + 1 will be a = a - 1

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 175
RESOURCE USAGE TESTING

 Objective: Evaluation, that the requirements of the HW-


architecture are fulfilled with sufficient tolerance:
 Minimum and maximum process execution times,
 Memory usage, e.g.:
RAM for stack and heap,
ROM for executable and non volatile data.
 Bandwidth for communication resources (e.g. data bus).

 Some aspects of the resource usage test can only be


evaluated properly when the software unit tests are
executed on the target hardware or if the emulator for the
target processor supports resource usage tests.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 176
SW UNIT TESTING
RESOURCE USAGE TEST
EXAMPLE

 Measured memory use after testing:


Maximum memory use (kB)
Function
ROM RAM EEPROM
function 1 20.30 50.00 0.00
function 2 60.21 38.21 0.02
function 3 42.43 77.02 1.27
… .. .. ..
Test-Sum: 821.21 450.23 12.21

 Resource usage test analysis:


Memory Size (kB)
ROM RAM EEPROM
Board size: 1024 1024 64
Test-Sum: 821.21 450.23 12.21
Free 202.79 573.77 51.79

 Stack memory usage analysis:


Memory Size (kB)
Max-Stack-Definition 256
Stack-use 125.33
Free 130.67

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 177
BACK-TO-BACK TESTING

 This method demonstrates the correct implementation of


functional and technical safety requirements.
 Comparison of a simulated behavior and the behavior of
the implementation using the same test case.
 Comparison of the results of test cases performed with a
simulation and the actual implementation.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 178
SW UNIT TESTING
TEST COVERAGE

ASIL
Methods Comments
A B C D
1a Statement coverage ++ ++ + +

1b Branch coverage + ++ ++ ++ See next slides


MC/DC (Modified Condition/Decision
1c + + + ++
Coverage)

ISO 26262-6, Table 12 — Structural coverage metrics at the software unit level (add ons by SGS TÜV Saar)

 Integration means a stepwise building up of the planned software architecture


ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 179
STRUCTURAL COVERAGE

 At the SW-unit level a distinction is made between


 Statement Coverage
(i.e. percentage of statements within the software that
have been executed)
 Branch Coverage
(i.e. percentage of branches of the control flow that have
been executed)
 MC/DC (Modified Condition/Decision Coverage) refer to
next slide

 There are no target values given for the structural test coverage.
This means 100% is the target
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 180
MC/DC (MODIFIED
CONDITION/DECISION COVERAGE)

 MC/DC:
 Every point of entry and exit in the program has been invoked at least once,
 Every decision in the program has taken all possible outcomes at least once
 Every condition in a decision in the program has taken on all possible outcomes at
least once,
 Every condition in a decision has been shown to independently affect that
decision’s outcome.
 A condition is shown to affect a decision’s outcome independently by varying
just that condition while holding fixed all other possible conditions.
 The condition/decision criterion does not guarantee the coverage of all
conditions in a module because in many test cases, some conditions of a
decision are masked by the other conditions.
 Using the modified condition/decision criterion, each condition must be shown
to be able to act on the decision outcome by itself, everything else being held
fixed. The MC/DC criterion is thus much stronger than the
condition/decision coverage.
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 181
SIMPLE C++ EXAMPLE

‘fct_example’ is part of a bigger program and this program was run with some test suite.

int fct_example (int x, int y)
{
    int z = 0;
    if ((x>0) && (y>0)){
        z = x; }
    return z;
}

Execute following test cases to reach 100% structural


coverage:
 branch coverage: ‘fct_example (0,1)’, ‘fct_example (1,1)’
 condition/decision coverage: the function was called as
‘fct_example (1,1)’ , ‘fct_example (0,1)’ , ‘fct_example (1,0)’
and ‘fct_example (0,0)’
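For MC/DC of the same decision ((x>0) && (y>0)), a minimal test set would be ‘fct_example (1,1)’, ‘fct_example (0,1)’ and ‘fct_example (1,0)’: varying only x between the first two calls and only y between the first and the third changes the decision outcome, so each condition is shown to affect the decision independently; ‘fct_example (0,0)’ is not needed for MC/DC.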
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 182
SOFTWARE-UNIT TESTING

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 183
SOFTWARE INTEGRATION AND
TEST METHODS

ASIL
Methods Comments
A B C D
a
1a Requirements-based test ++ ++ ++ ++
1b Interface test ++ ++ ++ ++
b
1c Fault injection test + + ++ ++
cd
see SW-Unit Testing
1d Resource usage test + + + ++
Back-to-back comparison test between model and
1e e + + ++ ++
code, if applicable
a
The software requirements at the architectural level are the basis for this requirements-based test.
b
This includes injection of arbitrary faults in order to test safety mechanisms (e.g. by corrupting software or hardware
components).
c
To ensure the fulfilment of requirements influenced by the hardware architectural design with sufficient tolerance, properties such
as average and maximum processor performance, minimum or maximum execution times, storage usage (e.g. RAM for stack and
heap, ROM for program and data) and the bandwidth of communication links (e.g. data buses) have to be determined.
d
Some aspects of the resource usage test can only be evaluated properly when the software integration tests are executed on
the target hardware or if the emulator for the target processor supports resource usage tests.
e
This method requires a model that can simulate the functionality of the software components. Here, the model and code are
stimulated in the same way and results compared with each other.

ISO 26262-6, Table 13 — Methods for software integration testing (add ons by SGS TÜV Saar)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 184
SOFTWARE INTEGRATION UND TEST
DERIVATION OF TEST CASES

ASIL
Methods Comments
A B C D

1a Analysis of requirements ++ ++ ++ ++
a
1b Generation and analysis of equivalence classes + ++ ++ ++
See SW-Unit Testing
b
1c Analysis of boundary values + ++ ++ ++

c
1d Error guessing + + + +

a Equivalence classes can be identified based on the division of inputs and outputs, such that a representative test value can be
selected for each class.
b This method applies to interfaces, values approaching and crossing the boundaries and out of range values.
c Error guessing tests can be based on data collected through a “lessons learned” process and expert judgment.

ISO 26262-6, Table 14 — Methods for deriving test cases for software integration testing (add ons by SGS TÜV Saar)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 185
SOFTWARE INTEGRATION AND TEST
TEST COVERAGE

ASIL
Methods Comments
A B C D
a
1a Function coverage + + ++ ++
See next slides
b
1b Call coverage + + ++ ++

a Method 1a refers to the percentage of executed software functions. This evidence can be achieved by an appropriate
software integration strategy.
b Method 1b refers to the percentage of executed software function calls.

ISO 26262-6, Table 15 — Structural coverage metrics at the software architectural level (add ons by SGS TÜV Saar)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 186
STRUCTURAL COVERAGE

 At the SW architectural level a distinction is made


between
 Function coverage (according to ISO 26262):
(i.e. the percentage of executed software functions)
 Call coverage (according to ISO 26262):
(i.e. the percentage of executed software function calls)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 187
SIMPLE C++ EXAMPLE

‘fct_example’ is part of a bigger program and this program was run with some test suite.

int fct_example (int x, int y)
{
    int z = 0;
    if ((x>0) && (y>0)){
        z = x; }
    return z;
}

Execute following test cases to reach 100% structural


coverage:
 function coverage: function ‘fct_example' was called at
least once during test run.
 statement coverage: the function was called (executed) as
‘fct_example (1,1)’
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 188
SOFTWARE-UNIT TESTING

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 189
VERIFICATION OF SOFTWARE
SAFETY REQUIREMENTS –
ACTIVITIES AND AIMS

 ISO 26262-6, Clause 11, activities during this phase:


– Demonstrate that the embedded software fulfils the software safety
requirements

ASIL
Methods Comments
A B C D
1a Hardware-in-the-loop + + ++ ++ HIL-Test
a
1b Electronic control unit network environments ++ ++ ++ ++ Simulation
1c Vehicles ++ ++ ++ ++ Tests in the vehicle
a Examples include test benches partially or fully integrating the electrical systems of a vehicle, “lab-cars” or “mule” vehicles,
and “rest of the bus” simulations.

ISO 26262-6, Table 16 — Test environments for conducting the software safety requirements verification (add-ons by SGS TÜV Saar)

 Verification tests are typically done together with software integration tests
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 190
SUMMARY OF DAY 3
SOFTWARE DEVELOPMENT (1)

 Functional Safety in the software development is mainly


based on the use of processes, techniques and
methods (best practices)
 Initiation of software development means to plan
activities and select tools, techniques and methods in line
with the Functional Safety rules of the company
 Software Safety Requirements are derived from safety
requirements at component (system) level
 The use of notations shall avoid systematic faults by
providing a clear interpretation
 Safety Analysis at the software architectural level is
required by ISO 26262 (Edt 1), but the method is not
described in detail  Improved in Second Edition

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 191
SUMMARY OF DAY 3
SOFTWARE DEVELOPMENT (2)

 Static code analysis can be seen as “state of the art”


at software unit level
 Integration means a stepwise build up of the planned
software architecture
 Requirements based testing means to have at least
one test case per safety requirement
 There are no target values given for the structural test
coverage.
 This means 100% is the target
 Verification tests are typically done together with
software integration tests on the targeted HW

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 192