
Applied Reliability, Usability, and Quality for Engineers
Global competition is forcing reliability and other professionals to work closely during the product design and manufacturing phase. Because of this collaboration, reliability, usability, and quality principles are being applied across many diverse sectors of the economy. This book offers the principles, methods, and procedures for these areas in one resource.
This book brings together the areas of reliability, usability, and quality so that those working in diverse areas are exposed to activities that can help them perform their tasks more effectively. It is the only book that covers these areas together in this manner, and it is written so that no previous knowledge is required to understand it. The sources of the material presented are included in the reference section at the end of each chapter, along with examples and solutions to test reader comprehension.
Applied Reliability, Usability, and Quality for Engineers is useful to design, manufacturing, and systems engineers, as well as manufacturing managers and reliability, usability, and quality specialists. It can also be helpful to graduate and senior undergraduate students and to instructors.
Applied Reliability, Usability, and Quality for Engineers

B.S. Dhillon



First edition published 2023
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742

and by CRC Press


4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

CRC Press is an imprint of Taylor & Francis Group, LLC

© 2023 Taylor & Francis Group, LLC

Reasonable efforts have been made to publish reliable data and information, but the author and publisher
cannot assume responsibility for the validity of all materials or the consequences of their use. The authors
and publishers have attempted to trace the copyright holders of all material reproduced in this publication and
apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright
material has not been acknowledged, please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact mpkbookspermissions@tandf.co.uk

Trademark notice: Product or corporate names may be trademarks or registered trademarks and are
used only for identification and explanation without intent to infringe.

Library of Congress Cataloging‑in‑Publication Data

Names: Dhillon, B. S. (Balbir S.), 1947- author.


Title: Applied reliability, usability, and quality for engineers / B.S. Dhillon.
Description: First edition. | Boca Raton : CRC Press, 2023. | Includes bibliographical
references. | Summary: “Global competition is forcing reliability and other professionals
to work closely during the product design and manufacturing phase. Because of this
collaboration, reliability, usability, and quality principles are being applied across many
diverse sectors of the economy. This book offers the principles, methods, and procedures
for these areas in one resource. Applied Reliability, Usability, and Quality for Engineers is
useful to design, manufacturing, and systems engineers, as well as manufacturing managers,
reliability, usability and, quality specialists. It can also be helpful to graduate, senior
undergraduate students, and instructors”– Provided by publisher.
Identifiers: LCCN 2022015635 (print) | LCCN 2022015636 (ebook) | ISBN 9781032287997
(hardback) | ISBN 9781032288024 (paperback) | ISBN 9781003298571 (ebook)
Subjects: LCSH: Reliability (Engineering)
Classification: LCC TA169 .D4335 2023 (print) | LCC TA169 (ebook) |
DDC 620/.00452–dc23/eng/20220701
LC record available at https://lccn.loc.gov/2022015635
LC ebook record available at https://lccn.loc.gov/2022015636

ISBN: 978-1-032-28799-7 (hbk)


ISBN: 978-1-032-28802-4 (pbk)
ISBN: 978-1-003-29857-1 (ebk)

DOI: 10.1201/9781003298571

Typeset in Times
by KnowledgeWorks Global Ltd.
This book is affectionately dedicated to my son,
Mark, for challenging me to write 50 books.
Contents
Preface....................................................................................................................xvii
Author Biography.....................................................................................................xxi

Chapter 1 Introduction........................................................................................... 1
1.1 Reliability, Usability, and Quality History................................. 1
1.2 Need of Reliability, Usability, and Quality
in Product Design.......................................................................1
1.3 Terms and Definitions................................................................ 2
1.4 Useful Sources for Obtaining Information on Reliability,
Usability, and Quality................................................................. 4
1.4.1 Journals and Magazines................................................ 4
1.4.2 Conference Proceedings................................................ 4
1.4.3 Books.............................................................................4
1.4.4 Standards....................................................................... 5
1.4.5 Data Sources..................................................................6
1.5 Scope of the Book...................................................................... 6
1.6 Problems..................................................................................... 7
References............................................................................................. 7

Chapter 2 Basic Mathematical Concepts............................................................... 9


2.1 Introduction................................................................................ 9
2.2 Arithmetic Mean, Mean Deviation, and Standard
Deviation....................................................................................9
2.2.1 Arithmetic Mean...........................................................9
2.2.2 Mean Deviation........................................................... 10
2.2.3 Standard Deviation...................................................... 11
2.3 Boolean Algebra Laws............................................................. 11
2.4 Probability Definition and Properties....................................... 13
2.5 Mathematical Definitions......................................................... 14
2.5.1 Cumulative Distribution Function............................... 14
2.5.2 Probability Density Function...................................... 14
2.5.3 Expected Value............................................................ 14
2.5.4 Laplace Transform...................................................... 14
2.5.5 Laplace Transform: Final-Value Theorem.................. 16
2.6 Probability Distributions.......................................................... 16
2.6.1 Binomial Distribution.................................................. 16
2.6.2 Exponential Distribution............................................. 17
2.6.3 Rayleigh Distribution.................................................. 18
2.6.4 Weibull Distribution.................................................... 18
2.6.5 Normal Distribution.................................................... 19
2.6.6 Bathtub Hazard Rate Curve Distribution.................... 19


2.7 Solving First-Order Differential Equations Using


Laplace Transforms..................................................................20
2.8 Problems................................................................................... 21
References........................................................................................... 22

Chapter 3 Reliability Basics, Human Factors Basics for Usability,


and Quality Basics...............................................................................25
3.1 Introduction..............................................................................25
3.2 Bathtub Hazard Rate Concept..................................................25
3.3 General Reliability Analysis Associated Formulas..................26
3.3.1 Failure (or Probability) Density Function...................26
3.3.2 Hazard Rate Function.................................................. 27
3.3.3 General Reliability Function.......................................28
3.3.4 Mean Time to Failure..................................................28
3.4 Reliability Networks................................................................. 30
3.4.1 Series Network............................................................ 30
3.4.2 Parallel Network.......................................................... 32
3.4.3 k-out-of-n Network......................................................34
3.4.4 Standby System........................................................... 36
3.4.5 Bridge Network........................................................... 37
3.5 Human Factors Basics for Usability......................................... 39
3.5.1 Comparison of Humans’ and Machines’
Capabilities and Limitations....................................... 39
3.5.2 Typical Human Behaviours......................................... 39
3.5.3 Human Sensory Capacities......................................... 41
3.5.3.1 Noise (Hearing)........................................... 41
3.5.3.2 Sight............................................................. 41
3.5.3.3 Touch........................................................... 42
3.6 Quality Goals and Quality Assurance System Elements......... 42
3.7 Products’ and Services’ Quality Affecting Factors
and Total Quality Management (TQM)................................... 43
3.7.1 TQM Elements and Goals for TQM Process
Success........................................................................44
3.7.2 Deming Approach to TQM......................................... 45
3.7.3 Obstacles to TQM Implementation.............................46
3.7.4 Organisations that Promote the TQM Concept
and Selected Books on TQM......................................46
3.7.4.1 Organisations...............................................46
3.7.4.2 Books...........................................................46
3.8 Problems................................................................................... 47
References........................................................................................... 47

Chapter 4 Reliability, Usability, and Quality Analysis Methods......................... 49


4.1 Introduction.............................................................................. 49
4.2 Failure Modes and Effect Analysis (FMEA)........................... 49

4.3 Fault Tree Analysis (FTA)........................................................ 51


4.3.1 Fault Tree Probability Evaluation................................ 52
4.3.2 Benefits and Drawbacks of the Fault Tree
Analysis....................................................................... 54
4.4 Markov Method........................................................................ 55
4.5 Cognitive Walkthroughs........................................................... 58
4.6 Task Analysis............................................................................ 58
4.7 Probability Tree Analysis......................................................... 59
4.8 Cause and Effect Diagram (CAED)......................................... 61
4.9 Quality Control Charts: The P-Charts..................................... 62
4.9.1 The P-Charts............................................................... 62
4.10 Problems...................................................................................64
References...........................................................................................64

Chapter 5 Medical Equipment Reliability........................................................... 67


5.1 Introduction.............................................................................. 67
5.2 Medical Equipment Reliability-Associated Facts
and Figures............................................................................... 67
5.3 Medical Devices and Medical Equipment/Devices
Classifications........................................................................... 68
5.4 Medical Equipment Reliability Improvement Methods
and Procedures......................................................................... 69
5.4.1 Failure Modes and Effect Analysis (FMEA).............. 69
5.4.2 Parts Count Method.................................................... 69
5.4.3 Fault Tree Analysis...................................................... 70
5.4.4 Markov Method........................................................... 70
5.4.5 General Approach....................................................... 70
5.5 Human Error in Medical Equipment........................................ 71
5.5.1 Important Medical Device/Equipment
Operator Errors........................................................... 71
5.5.2 Medical Devices with High Incidence
of Human Error........................................................... 72
5.6 Useful Guidelines for Reliability and Healthcare
Professionals for Improving Medical Equipment
Reliability................................................................................. 72
5.7 Medical Equipment Maintainability and Maintenance............. 73
5.7.1 Medical Equipment Maintainability........................... 73
5.7.1.1 Aspect I: Reasons for the Application
of Maintainability Principles....................... 74
5.7.1.2 Aspect II: Maintainability Design
Factors.......................................................... 74
5.7.1.3 Aspect III: Maintainability
Measures...................................................... 74
5.7.2 Medical Equipment Maintenance............................... 75
5.7.2.1 Indices.......................................................... 76
5.7.2.2 Mathematical Models.................................. 77

5.8 Sources for Obtaining Medical Equipment Reliability-


Associated Data........................................................................ 78
5.9 Problems................................................................................... 79
References........................................................................................... 79

Chapter 6 Robot Reliability................................................................................. 83


6.1 Introduction.............................................................................. 83
6.2 Terms and Definitions.............................................................. 83
6.3 Robot Failure Categories, Causes, and Corrective Measures......84
6.4 Robot Reliability-Associated Survey Results and
Robot Effectiveness Dictating Factors..................................... 85
6.5 Robot Reliability Measures...................................................... 86
6.5.1 Robot Reliability......................................................... 86
6.5.2 Robot Hazard Rate...................................................... 87
6.5.3 Mean Time to Robot-Related Problems...................... 88
6.5.4 Mean Time to Robot Failure....................................... 88
6.6 Reliability Analysis of Hydraulic and Electric Robots............90
6.6.1 Reliability Analysis of the Hydraulic Robot...............90
6.6.2 Reliability Analysis of the Electric Robot.................. 93
6.7 Models for Conducting Robot Reliability and
Maintenance Studies................................................................96
6.7.1 Model I........................................................................96
6.7.2 Model II.......................................................................97
6.7.3 Model III..................................................................... 98
6.8 Problems................................................................................. 101
References......................................................................................... 102

Chapter 7 Computer and Internet Reliability..................................................... 103


7.1 Introduction............................................................................ 103
7.2 Computer Failure-Related Causes and Issues in
Computer System Reliability.................................................. 103
7.3 Computer Failure Categories, Hardware and Software
Error Sources, and Computer Reliability-Related
Measures................................................................................. 105
7.4 Comparisons Between Computer Hardware and
Software Reliability................................................................ 106
7.5 Fault Masking......................................................................... 106
7.5.1 Triple Modular Redundancy (TMR)......................... 107
7.5.1.1 TMR System Maximum Reliability
with Perfect Voter...................................... 108
7.5.1.2 TMR System with Voter Time-
Dependent Reliability and Mean
Time to Failure.......................................... 109
7.5.2 N-Modular Redundancy (NMR)............................... 111

7.6 Software Reliability Assessment Methods............................. 111


7.6.1 Category I: Analytical Methods................................ 111
7.6.2 Category II: Software Reliability Models................. 112
7.6.2.1 Musa Model............................................... 112
7.6.2.2 Mills Model............................................... 113
7.6.3 Category III: Software Metrics................................. 114
7.6.3.1 Code and Unit Test Phase
Measure..................................................... 114
7.6.3.2 Design Phase Measure............................... 115
7.7 Internet Facts, Figures, Failure Examples, and
Reliability-Related Observations............................................ 115
7.8 Internet Outage Classifications and an Approach
for Automating Fault Detection in Internet-Related
Services.................................................................... 116
7.9 Mathematical Models for Conducting Internet
Reliability and Availability Analysis..................................... 117
7.9.1 Model I...................................................................... 117
7.9.2 Model II..................................................................... 119
7.10 Problems................................................................................. 121
References......................................................................................... 121

Chapter 8 Power System Reliability.................................................................. 125


8.1 Introduction............................................................................ 125
8.2 Power System Reliability-Associated Terms
and Definitions....................................................................... 125
8.3 Loss of Load Probability........................................................ 126
8.4 Power System Service Performance-Related
Indices.................................................................... 126
8.4.1 Index I....................................................................... 127
8.4.2 Index II...................................................................... 127
8.4.3 Index III..................................................................... 128
8.4.4 Index IV.................................................................... 128
8.4.5 Index V...................................................................... 128
8.4.6 Index VI..................................................................... 128
8.5 Availability Analysis of Transmission and
Associated Systems................................................................ 129
8.5.1 Model I...................................................................... 129
8.5.2 Model II..................................................................... 131
8.5.3 Model III................................................................... 133
8.6 Availability Analysis of a Single Generator Unit................... 136
8.6.1 Model I...................................................................... 136
8.6.2 Model II..................................................................... 138
8.6.3 Model III................................................................... 140
8.7 Problems................................................................................. 143
References......................................................................................... 143

Chapter 9 Medical Device Usability.................................................................. 145


9.1 Introduction............................................................................ 145
9.2 Medical Device Users, User Interfaces, Use
Descriptions, and Use Environments..................................... 145
9.3 Medical Devices with High Incidence of User/Human
Error and a General Approach for Developing Medical
Devices’ Effective User Interfaces......................................... 147
9.4 Useful Guidelines for Making Interfaces of Medical
Device More User-Friendly.................................................... 148
9.5 Designing Medical Devices for Old Users............................. 150
9.6 Cumulative Trauma Disorder (CTD) Implications
in Medical Device Design...................................................... 151
9.7 Useful Documents for Improving Medical
Device Usability..................................................................... 152
9.8 Problems................................................................................. 154
References......................................................................................... 154

Chapter 10 Software Usability............................................................................. 157


10.1 Introduction............................................................................ 157
10.2 Need for Considering Usability During the Software
Development Process and the Human-Computer
Interface Fundamental Principles............................ 157
10.3 Software Usability Engineering Process................................ 158
10.4 Steps to Improve Software Product Usability........................ 159
10.5 Software Usability Inspection Methods and
Considerations for Their Selection......................................... 160
10.6 Software Usability Testing Methods and Important
Factors with Respect to Such Methods.................................. 162
10.7 Useful Guidelines to Perform Software
Usability Testing..................................................................... 163
10.8 Problems................................................................................. 164
References......................................................................................... 164

Chapter 11 Web Usability.................................................................................... 167


11.1 Introduction............................................................................ 167
11.2 Web Usability-Associated Facts and Figures......................... 167
11.3 Common Web Design-Related Errors.................................... 168
11.4 Web Page Design.................................................................... 168
11.4.1 Page Size................................................................... 169
11.4.2 Font Usage................................................................. 170
11.4.3 Textual Element Usage.............................................. 170
11.4.4 Image Usage.............................................................. 171
11.4.5 Help Users................................................................. 171

11.5 Website Design....................................................................... 172


11.5.1 Site Organisation....................................................... 173
11.5.2 Shared Elements of Site Pages.................................. 173
11.5.3 Site Testing and Maintenance................................... 173
11.6 Navigation Aids.................................................................... 174
11.6.1 Link Usage................................................................ 174
11.6.2 Menus and Menu Bar Usage..................................... 175
11.6.3 Navigation Bar Usage................................................ 175
11.7 Web Usability Evaluation Tools............................................. 176
11.7.1 WebSAT................................................................... 176
11.7.2 Max........................................................................... 177
11.7.3 NetRaker................................................................... 177
11.7.4 Lift............................................................................. 178
11.8 Questions for Evaluating Website Message
Communication Effectiveness................................................ 178
11.8.1 Concept...................................................................... 178
11.8.2 Content...................................................................... 178
11.8.3 Text............................................................................ 179
11.8.4 Mechanics................................................................. 179
11.8.5 Design........................................................................ 179
11.8.6 Navigation................................................................. 180
11.9 Problems................................................................................. 180
References......................................................................................... 180

Chapter 12 Quality in Health Care...................................................................... 183


12.1 Introduction............................................................................ 183
12.2 Health Care Quality-Related Terms and
Definitions and Reasons for the Rising Cost
of Health Care........................................................................ 183
12.3 Comparisons of Traditional Quality Assurance and
Total Quality Management (TQM) in Regard to
Health Care and Quality Assurance Versus Quality
Improvement in Health Care Institutions............................... 184
12.4 Assumptions for Guiding the Development of
Quality-Related Strategies in Health Care and Health
Care-Associated Quality Goals and Strategies...................... 185
12.5 Steps for Quality Improvement in Health Care and
Physician Reactions to Total Quality..................................... 188
12.6 Quality Tools for Use in Health Care..................................... 188
12.6.1 Cost-Benefit Analysis................................................ 189
12.6.2 Brainstorming........................................................... 189
12.6.3 Check Sheets............................................................. 190
12.6.4 Multivoting................................................................ 190
12.6.5 Force Field Analysis.................................................. 190

12.7 Implementation of Six Sigma Methodology in Hospitals


and Its Potential Benefits and Implementation Barriers......... 190
12.8 Problems................................................................................. 192
References......................................................................................... 192

Chapter 13 Medical Device Quality Assurance.................................................. 195


13.1 Introduction............................................................................ 195
13.2 Regulatory Compliance of Medical Device
Quality Assurance.................................................................. 195
13.2.1 Procedure for Satisfying GMP Regulation
and ISO 9000 Requirements in Regard to
Quality Assurance..................................................... 195
13.3 Medical Device Design Quality Assurance Programme....... 196
13.3.1 Organization.............................................................. 197
13.3.2 Specifications............................................................ 197
13.3.3 Design Review........................................................... 198
13.3.4 Reliability Assessment.............................................. 198
13.3.5 Parts and Materials Quality Assurance.................... 199
13.3.6 Software Quality Assurance..................................... 199
13.3.7 Labelling................................................................... 199
13.3.8 Design Transfer.........................................................200
13.3.9 Certification...............................................................200
13.3.10 Test Instrumentation..................................................200
13.3.11 Personnel...................................................................200
13.3.12 Quality Monitoring After the Design Phase.............200
13.4 Tools for Assuring Medical Device Quality.......................... 201
13.4.1 Cause-and-Effect Diagram........................................ 201
13.4.2 Quality Function Deployment................................... 201
13.4.3 Pareto Diagram......................................................... 203
13.4.4 Flowcharts................................................................. 203
13.4.5 Scatter Diagram........................................................203
13.4.6 Control Charts...........................................................204
13.4.7 Histogram..................................................................204
13.5 Quality Indices.......................................................................204
13.5.1 Quality Inspector Accuracy Index............................204
13.5.2 Vendor Rating Programme Index.............................205
13.5.3 Quality Inspector Inaccuracy Index..........................206
13.5.4 Quality Cost Index....................................................206
13.6 Problems.................................................................................207
References.........................................................................................207

Chapter 14 Software Quality...............................................................................209


14.1 Introduction............................................................................209
14.2 Software Quality-Related Terms and Definitions..................209

14.3 Software Quality Factors and Their Categories..................... 210


14.3.1 Product Operation Factors......................................... 210
14.3.2 Product Revision Factors........................................... 211
14.3.3 Product Transition Factors........................................ 211
14.4 Useful Quality Methods for Use During the Software
Development Process.............................................................. 212
14.4.1 Run Charts................................................................ 212
14.4.2 Pareto Diagram......................................................... 212
14.5 Quality-Related Measures During the Software
Development Life Cycle......................................................... 213
14.5.1 Stage I: Requirements Analysis................................ 213
14.5.2 Stage II: Systems Design........................................... 214
14.5.3 Stage III: Systems Development............................... 214
14.5.4 Stage IV: Testing....................................................... 214
14.5.5 Stage V: Implementation and Maintenance.............. 214
14.6 Software Quality-Associated Metrics.................................... 215
14.6.1 Metric I...................................................................... 215
14.6.2 Metric II.................................................................... 216
14.6.3 Metric III................................................................... 216
14.6.4 Metric IV................................................................... 216
14.6.5 Metric V.................................................................... 216
14.6.6 Metric VI................................................................... 217
14.6.7 Metric VII................................................................. 217
14.6.8 Metric VIII................................................................ 217
14.6.9 Metric IX................................................................... 217
14.6.10 Metric X.................................................................... 218
14.7 Software Quality Assurance Manager’s
Responsibilities and a Successful Software
Quality Assurance Program’s Elements................................. 218
14.8 Software Quality-Related Cost............................................... 219
14.9 Software Quality Assurance Standards and Benefits............. 220
14.10 Problems................................................................................. 221
References......................................................................................... 221
Index....................................................................................................................... 225
Preface
Today, billions of dollars are being spent annually worldwide to develop reliable,
easily usable, and good quality systems and products. Global competition and other
factors are forcing manufacturers and others to produce highly reliable, easily usable,
and good quality systems, products, and services. Needless to say, nowadays reliabil-
ity, usability, and quality principles are being applied across many diverse sectors
of the economy, and each of these sectors has tailored reliability, usability, and quality
principles, methods, and procedures to satisfy its specific need. Some examples of
these sectors are robotics, healthcare, power generation, internet, and software.
It means that there is a definite need for reliability, usability, and quality profes-
sionals working in diverse areas to know about each other’s work activities because
this may help them, directly or indirectly, to perform their tasks effectively. At
present, to the best of the author's knowledge, there is no book that covers applied reliability,
usability, and quality within its framework. It means, at present, to gain knowledge
of each other’s specialities, these specialists must study various books, articles, or
reports on each of the areas in question. This approach is time consuming and rather
difficult because of the specialised nature of the material involved.
Thus, the main objective of this book is to meet the need for a single volume that
combines applied areas of reliability, usability, and quality. The material covered is
treated in such a manner that the reader requires no previous knowledge to
understand it. The sources of most of the material presented are given in the reference
section at the end of each chapter. This will be useful to readers if they desire to delve
more deeply into a specific area or topic. At appropriate places, the book contains
examples along with their solutions, and at the end of each chapter there are numer-
ous problems to test the reader’s comprehension in the area.
The book is composed of 14 chapters. Chapter 1 presents various introductory
aspects of applied reliability, usability, and quality including useful sources for
obtaining information on reliability, usability and quality. Chapter 2 reviews math-
ematical concepts considered useful to understand subsequent chapters. Some of the
topics covered in the chapter are arithmetic mean, mean deviation, standard devia-
tion, Boolean algebra laws, probability properties, probability distributions, and
useful mathematical definitions. Chapter 3 presents various introductory aspects of
reliability, usability, and quality.
Chapter 4 presents a number of methods considered useful to perform reliability,
usability, and quality analysis. These methods are failure modes and effect analysis,
fault tree analysis, Markov method, cognitive walkthroughs, task analysis, probabil-
ity tree analysis, cause and effect diagram (CAED), quality function deployment
(QFD), and quality control charts: the P-charts. Chapter 5 presents various important
aspects of medical equipment reliability. Some of the topics covered in the chapter
are medical equipment reliability-associated facts and figures, medical equipment
reliability improvement methods and procedures, human error in medical equipment,
useful guidelines for reliability and healthcare professionals for improving medi-
cal equipment reliability, and medical equipment maintainability and maintenance.


Chapter 6 is devoted to robot reliability. Some of the topics covered in the chapter are
robot failure categories, causes, and corrective measures; robot reliability measures,
reliability analysis of hydraulic and electric robots, and models for conducting robot
reliability and maintenance studies.
Chapter 7 presents various important aspects of computer and internet reliabil-
ity. Some of the topics covered in the chapter are computer failure-related causes
and issues in computer system reliability, comparisons between computer hardware
and software reliability, fault masking, software reliability assessment methods,
internet outage classifications and an approach for automating fault detection in
internet-related services, and mathematical models for conducting internet reli-
ability and availability analysis. Chapter 8 is devoted to power system reliability.
Some of the topics covered in the chapter are loss of load probability, power system
service performance indices, availability analysis of transmission and associated
systems, and availability analysis of a single generator unit. Chapter 9 presents
various important aspects of medical device usability. Some of the topics covered
in the chapter are medical devices with high incidence of user/human error and
general approach for developing medical devices’ effective user interfaces, useful
guidelines for making interfaces of medical device more user-friendly, designing
medical devices for old users, and cumulative trauma disorder (CTD) implications
in medical device design.
Chapter 10 is devoted to software usability. Some of the topics covered in the
chapter are need for considering usability during the software development process,
software usability engineering process, steps to improve software usability, software
usability inspection methods, software usability testing methods, and useful guide-
lines to perform software usability testing.
Chapter 11 presents various important aspects of web usability. Some of the top-
ics covered in the chapter are common web design-related errors, web page design,
website design, navigation aids, and web usability evaluation tools. Chapter 12 is
devoted to quality in health care. Some of the topics covered in the chapter are
comparisons of traditional quality assurance and total quality management (TQM)
in regard to health care and quality assurance versus quality improvement in health
care institutions, steps for quality improvement in health care and physician reac-
tions to total quality, and quality tools for use in health care. Chapter 13 presents
various important aspects of medical device quality assurance. Some of the topics
covered in the chapter are regulatory compliance of medical device quality assur-
ance, medical device design quality assurance programme, tools for assuring medi-
cal device quality and quality indices.
Finally, Chapter 14 is devoted to software quality. Some of the topics covered in
the chapter are software quality factors and their categories, useful quality methods
for use during the software development process, quality-related measures during
the software development life cycle, software quality-associated metrics, and soft-
ware quality-related cost.
This book will be useful to many individuals including reliability engineers,
design engineers, usability and quality control professionals, system engineers, engi-
neering administrators, graduate and senior undergraduate students of engineering,
researchers and instructors of reliability, usability, and quality, and engineers-at-large.

The author is deeply indebted to many individuals including family members,
colleagues, friends, and students for their inputs. The invisible contributions of my
children are also appreciated. Last, but not the least, I thank my wife, Rosy, my other
half and friend, for typing this entire book and for timely help in proofreading.

B.S. Dhillon
University of Ottawa
Author Biography
Dr. B.S. Dhillon is a professor of Engineering Management in the Department of
Mechanical Engineering at the University of Ottawa. He has served as a Chairman/
Director of the Mechanical Engineering Department/Engineering Management
Programme for over 10 years at the same institution. He is the founder of the probability
distribution named Dhillon Distribution/Law/Model by statistical researchers in
their publications around the world. He has published over 377 (i.e., 224 [70 single
authored + 154 co-authored] journal and 153 conference proceedings) articles on
reliability engineering, maintainability, safety, engineering management, etc. He is
or has been on the editorial boards of 14 international scientific journals. In addi-
tion, Dr. Dhillon has written 50 books on various aspects of health care, engineering
management, design, reliability, safety, and quality published by Wiley (1981), Van
Nostrand (1982), Butterworth (1983), Marcel Dekker (1984), Pergamon (1986), etc.
His books are being used in over 100 countries and many of them are translated into
languages such as German, Russian, Chinese, and Persian (Iranian).
He has served as General Chairman of two international conferences on reliabil-
ity and quality control held in Los Angeles and Paris in 1987. Prof. Dhillon has also
served as a consultant to various organisations and bodies and has many years of
experience in the industrial sector. At the University of Ottawa, he has been teach-
ing reliability, quality, engineering management, design, and related areas and he
has also lectured in over 50 countries, including keynote addresses at various inter-
national scientific conferences held in North America, Europe, Asia, and Africa.
In March 2004, Dr. Dhillon was a distinguished speaker at the Conf./Workshop
on Surgical Errors (sponsored by White House Health and Safety Committee and
Pentagon), held at the Capitol Hill (One Constitution Avenue, Washington, DC).
Professor Dhillon attended the University of Wales where he received a BS in
electrical and electronic engineering and an MS in mechanical engineering. He
received a PhD in industrial engineering from the University of Windsor.

1 Introduction

1.1 RELIABILITY, USABILITY, AND QUALITY HISTORY


The history of the reliability discipline goes back to the early 1930s, when probability
concepts were applied to problems concerning electric power generation [1, 2]. During
World War II, Germans applied the basic reliability concepts for improving the reliabil-
ity of their V1 and V2 rockets. During the period of 1945–1950, the U.S. Department
of Defense performed various studies concerning electronic equipment failure,
equipment maintenance, etc. As a result of these studies, in 1950, it formed an ad hoc
committee on reliability, and in 1952, the committee was transformed into a permanent body:
Advisory Group on the Reliability of Electronic Equipment (AGREE) [3]. Additional
information on the history of the reliability discipline is available in Ref. [4].
The emergence of the usability engineering field is deeply embedded in the disci-
pline of human factors. The importance of human factors/usability in the engineer-
ing systems’ design goes back to 1901; the Army Signal Corps contract document
for the development of the Wright Brothers’ airplane clearly stated that the aircraft
should be “simple to operate and maintain” [5]. Nonetheless, human factors as a
technical discipline emerged only after World War II, basically due to the military
systems’ increasing complexity, as well as the critical human role in operating them.
In 1957, the Human Factors Society of America was incorporated, and over two
decades later in 1983, the Association for Computing Machinery (ACM) Special
Interest Group on Computer and Human Interaction (SIGCHI) was formed [6, 7].
The term “usability engineering” was coined in the mid-1980s [8, 9]. Additional
information on the usability engineering history is available in Ref. [7].
Although the history of the quality field may be traced back to the ancient times,
in the modern times (i.e., 1907), the Western Electric Company was the first to use
basic quality principles in design, manufacturing, and installation. In 1916, C.N.
Frazee of Telephone Laboratories successfully applied statistical approaches to
inspection-associated problems, and in 1917, G.S. Radford coined the term “quality
control” [10]. In 1924, Walter A. Shewhart of Western Electric Company developed
the quality control chart, and in 1944, the journal “Industrial Quality Control” was
jointly published by the University of Buffalo and the Buffalo Chapter of the Society
of Quality Control Engineers. In 1946, the American Society for Quality Control
(ASQC) was formed, and this journal became its official voice.
Additional information on the quality field history is available in Refs. [11, 12].

1.2 NEED FOR RELIABILITY, USABILITY, AND QUALITY IN PRODUCT DESIGN
There have been many factors directly or indirectly responsible for the consideration
of reliability in product design including product complexity, the past system fail-
ures, awareness of cost effectiveness, insertion of reliability-associated clauses in
DOI: 10.1201/9781003298571-1

design specifications, competition, and public demand. The first two of these factors
are described below in detail.
Even if we consider the increase in product complexity in regard to parts
alone, there has been phenomenal growth in the part counts of some products. For example, a typical
Boeing 747 jumbo jet airplane was made up of around 4.5 million parts, including
fasteners. Even for relatively simpler products, there has been a quite significant
increase in complexity in regard to parts. For example, in 1935 a farm tractor was
made up of 1200 critical parts and in 1990 the number increased to around 2900.
In regard to the past system failures, various studies have revealed that design-
associated problems are generally the greatest causes for product failures. For exam-
ple, a study conducted by the U.S. Navy concerning electronic equipment failure
causes attributed 43% of failures to design, 30% to operation and maintenance,
20% to manufacturing, and 7% to miscellaneous factors [13].
Well-publicised system failures such as Space Shuttle Challenger Disaster, Chernobyl
Nuclear Reactor Explosion, and Point Pleasant Bridge Disaster may have also contrib-
uted to more serious consideration of reliability in product design [14–16].
Usability engineering is an effective approach to product design and development
and is specifically based on customer feedback and data. For example, over 30% of all
software development projects are cancelled prior to completion primarily because of
inadequate user design-related inputs, resulting in a loss of over $100 billion annually
to the United States economy. Moreover, some studies clearly indicate that around
80% of product maintenance is due to unmet or unforeseen user requirements.
All in all, it may be added that the key challenge in designing new products using
modern technologies is how best to take advantage of all potential users’ skills in
creating the most effective work environment; this may simply be referred to as the
usability engineering challenge.
Nowadays, a vast sum of money is spent annually worldwide to design and develop
good quality products. Global competition and other factors are forcing manufactur-
ers to design and produce good quality products. Needless to say, quality principles
are being applied across many diverse sectors of the economy; each of these sectors
has tailored quality principles, methods, and procedures to satisfy its product design-
related needs. Some examples of these sectors are robotics, electric power genera-
tion, software, and the Internet.
As a result, there is a definite need for quality professionals working in diverse
areas such as these to know about each other’s work activities because this may help
them to perform their tasks more effectively. In turn, this will result in better quality
of end products.

1.3 TERMS AND DEFINITIONS


There are a large number of terms and definitions used in the area of reliability,
usability, and quality. Some of these are as follows [17–21]:

• Reliability. This is the probability that an item will perform its stated mis-
sion satisfactorily for the specified time period when used under the stated
conditions.

• Quality. This is the degree to which an item, function, or process satisfies
the needs of users and customers.
• Usability. The quality of an interactive system with regard to factors such
as ease of use, ease of learning, and ease of satisfaction.
• Failure. This is the inability of an item to function within the stated
guidelines.
• Availability. This is the probability that the equipment is operating satisfac-
torily at time t when used according to the specified conditions, where the
total time considered includes active repair time, operating time, logistic
time, and administrative time.
• Redundancy. This is the existence of more than one means to accomplish
a stated function.
• Hazard rate (instantaneous failure rate). This is the rate of change of the number
of items that have failed divided by the number of items that have survived at time t.
• Usability engineering. Iterative design and evaluation for providing cus-
tomer feedback on the usefulness and usability of a product’s or system’s
design and functionality throughout the development phase.
• User interface. The physical representations and procedures for viewing
and interacting with the product or system functionality.
• Usability inspection. An analytical approach in which usability specialists
and experts evaluate the user interaction needed for carrying out pivotal
or crucial tasks with an interactive system or product for determining the
problematic aspects of user input or system response.
• User task. A desired result of activities that the product or system user
would like to accomplish.
• Usability evaluation. Any analytical or empirical activity directed at
assessing or understanding the usability of an interactive product/system.
• Quality control. This is a management function, whereby control of raw
materials’ and manufactured items’ quality is exercised to stop the produc-
tion of defective items.
• Control chart. This is the chart that contains control limits.
• Quality assurance. This is a planned and systematic sequence of all actions
appropriate for providing satisfactory confidence that the product/item con-
forms to established technical requirements.
• Quality management. This is the totality of functions involved in achieving
and determining quality.
• Mission time. This is the time during which the item is performing its speci-
fied mission.
• Useful life. This is the length of time an item operates within an acceptable
level of failure rate.
• Downtime. This is the time period during which the item is not in a condi-
tion to carry out its stated mission.
• Quality plan. This is the documented set of procedures that covers the in-
process and final inspection of the product.
• User-centred design. This is an early and continuous involvement of users
in the product design process.

• User reaction survey. This is a questionnaire completed by usability test
participants during or after interaction with a specified product/system.
• Human factors. This is a body of scientific facts concerning the human char-
acteristics (the term includes all psychosocial and biomedical considerations).
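Several of the definitions above, notably reliability and hazard rate, are commonly stated in mathematical form. The following expressions are the standard ones from the general reliability literature; they are not quoted from this chapter and are given here only as a sketch:

```latex
% Reliability: probability that the time to failure T exceeds time t,
% where F(t) is the cumulative failure distribution
R(t) = P(T > t) = 1 - F(t)

% Hazard rate: failure density f(t) = dF(t)/dt normalised by the
% fraction of items still surviving at time t
\lambda(t) = \frac{f(t)}{R(t)} = -\frac{1}{R(t)} \frac{dR(t)}{dt}
```

In words, the hazard rate is the instantaneous failure density divided by the surviving fraction, which matches the verbal definition given in the list above.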

1.4 USEFUL SOURCES FOR OBTAINING INFORMATION ON RELIABILITY, USABILITY, AND QUALITY
There are many sources for obtaining information, directly or indirectly, concerned
with systems reliability, usability, and quality. Some of the sources considered most use-
ful are presented in the following sections, classified in a number of distinct categories:

1.4.1 Journals and Magazines


• IEEE Transactions on Reliability
• Microelectronics and Reliability
• Engineering Failure Analysis
• Reliability Engineering and System Safety
• International Journal of Reliability, Quality, and Safety Engineering
• Journal of Usability Studies
• Journal on Multimodal User Interfaces
• User Modeling and User-Adapted Interaction (UMUAI)
• Interacting with Computers
• Human-Computer Interaction
• International Journal of Quality and Reliability Management
• Quality and Reliability Engineering International

1.4.2 Conference Proceedings


• Proceedings of the Annual Reliability and Maintainability Symposium
• Proceedings of the ISSAT International Conferences on Reliability and
Quality in Design
• Proceedings of the Conferences on Advances in Usability Engineering
• Proceedings of the Human Factors and Ergonomics Society Annual Meetings

1.4.3 Books
• Shooman, M.L., Probabilistic Reliability: An Engineering Approach,
McGraw-Hill Book Company, New York, 1968.
• Dhillon, B.S., Design Reliability: Fundamentals and Applications, CRC
Press, Boca Raton, Florida, 1999.
• Evans, J.W., Evans, J.Y., Productivity Integrity and Reliability in Design,
Springer-Verlag, New York, 2001.
• Dhillon, B.S., Computer System Reliability: Safety and Usability, CRC
Press, Boca Raton, Florida, 2013.

• Nielsen, J., Usability Engineering, Academic Press, Boston, 1993.
• Mayhew, D.J., The Usability Engineering Lifecycle: A Practitioner’s
Handbook for User Interface Design, Morgan Kaufmann Publishers, San
Francisco, 1999.
• Dhillon, B.S., Engineering Usability: Fundamentals, Applications, Human
Factors, and Human Error, American Scientific Publishers, Stevenson
Ranch, California, 2004.
• Rubin, J., Handbook of Usability Testing: How to Plan, Design, and Conduct
Effective Tests, John Wiley and Sons, New York, 1994.
• Beckford, J., Quality, Routledge, New York, 2002.
• Vardeman, S., Jobe, J.M., Statistical Quality Assurance Methods for
Engineers, John Wiley and Sons, New York, 1999.
• Gryna, F.M., Quality Planning and Analysis, McGraw Hill Book Company,
New York, 2001.
• Galin, D., Software Quality Assurance, Pearson Education Limited, New
York, 2004.
• Kemp, K.W., The Efficient Use of Quality Control Data, Oxford University
Press, New York, 2001.

1.4.4 Standards
• MIL-STD-721, Definitions of Terms for Reliability and Maintainability,
U.S. Department of Defense, Washington, DC.
• MIL-HDBK-217, Reliability Prediction of Electronic Equipment, U.S.
Department of Defense, Washington, DC.
• MIL-STD-1629, Procedures for Performing Failure Mode, Effects and
Criticality Analysis, U.S. Department of Defense, Washington, DC.
• MIL-STD-785, Reliability Program for Systems and Equipment, Development
and Production, U.S. Department of Defense, Washington, DC.
• MIL-HDBK-338, Electronics Reliability Design Handbook, U.S. Department
of Defense, Washington, DC.
• ISO 9241-11 (1998), Ergonomic Requirements for Office Work with
Visual Display Terminals (VDTs): Guidance on Usability, International
Organization for Standardization (ISO), Geneva, Switzerland.
• ISO 9241-13 (1998), Ergonomics Requirements for Office Work with Visual
Display Terminals (VDTs): User Guidance, International Organization for
Standardization (ISO), Geneva, Switzerland.
• ETSI ETR 095, Human Factors: Guide for Usability Evaluations of
Telecommunications Systems and Services, European Telecommunications
Standardization Institute (ETSI), Sophia Antipolis, France.
• ETSI ETR 198, User Trials User Control Procedures in ISDN Video Telephony,
European Telecommunications Standards Institute (ETSI), Sophia Antipolis,
France.
• MIL-STD-1472D, Human Engineering Design Criteria for Military Systems,
Equipment and Facilities, Department of Defense, Washington, DC.

• ANSI/ASQC A3, Quality Systems Terminology, American National Standards
Institute (ANSI), New York.
• MIL-HDBK-53, Guide for Sampling Inspection, U.S. Department of
Defense, Washington, DC.
• ANSI/ASQC B1, Guide for Quality Control, American National Standards
Institute (ANSI), New York.
• MIL-STD-52779, Software Quality Assurance Program Requirements,
U.S. Department of Defense, Washington, DC.
• ANSI/ASQC A1, Definitions, Symbols, Formulas, and Table for Quality
Charts, American National Standards Institute (ANSI), New York.
• ANSI/ASQC B2, Control Chart Method for Analyzing Data, American
National Standards Institute (ANSI), New York.

1.4.5 Data Sources
• Reliability Analysis Center, Rome Air Development Center (RADC),
Griffiss Air Force Base, Rome, NY.
• Government Industry Data Exchange Program (GIDEP), GIDEP Operations
Center, U.S. Department of Navy, Corona, CA.
• American National Standards Institute (ANSI), New York.
• National Technical Information Service (NTIS), United States Department
of Commerce, Springfield, VA.
• Defense Technical Information Center, DTIC-FDAC, Fort Belvoir, VA.

1.5 SCOPE OF THE BOOK


Nowadays, engineering systems are an important element of the global economy,
and each year, billions of dollars are spent for developing, manufacturing, operat-
ing, and maintaining various types of engineering systems. The reliability, usabil-
ity, and quality of these systems have become more important than ever because of
their increasing sophistication, non-specialist users, complexity, etc. Over the years,
a large number of journal and conference proceeding articles, technical reports, and
other publications on reliability, usability, and quality of engineering systems have
appeared in the literature. However, to the best of the author’s knowledge, there is no
book that covers the topics of reliability, usability, and quality within its framework.
This is a significant impediment to information seekers on these three topics because
they have to consult various sources.
Thus, the main objectives of this book are (i) to eliminate the need for pro-
fessionals and others concerned with engineering system reliability, usability,
and quality to consult diverse sources in obtaining the desired information, and
(ii) to provide up-to-date information on the topic. This book will be useful to
many individuals, including design engineers, system engineers, reliability spe-
cialists, usability specialists, quality specialists, human factors and ergonomics
specialists, computer-interface specialists, engineering undergraduate and gradu-
ate students, researchers and instructors in the area of reliability, usability, and
quality, and engineers at large.

1.6 PROBLEMS
1. Discuss the need for reliability, usability, and quality in product design.
2. Write an essay on the history of reliability, usability, and quality.
3. Define the following three terms:
i. Reliability
ii. Quality
iii. Usability
4. List eight of the most important journals or magazines for obtaining infor-
mation on reliability, usability, or quality.
5. List at least four books considered quite useful to obtain information on
usability.
6. Define the following four terms:
i. Downtime
ii. User-centred design
iii. Quality assurance
iv. Hazard rate
7. List at least four standards that are directly or indirectly concerned with
usability.
8. List the four most useful standards concerned with reliability.
9. List at least four standards concerned with quality.
10. List at least four data information sources.

REFERENCES
1. Lyman, W.J., Fundamental Consideration in Preparing a Master System Plan, Electrical
World, Vol. 101, 1933, pp. 778–792.
2. Smith, S.A., Service Reliability Measured by Probabilities of Outage, Electrical World,
Vol. 103, 1934, pp. 371–374.
3. Coppola, A., Reliability Engineering of Electronic Equipment: A Historical Perspective,
IEEE Transactions on Reliability, Vol. 33, 1984, pp. 29–35.
4. Dhillon, B.S., Design Reliability: Fundamentals and Applications, CRC Press, Boca
Raton, Florida, 1999.
5. AMCP 706-133, Engineering Design Handbook: Maintainability Engineering Theory
and Practice, Department of Defense, Washington, DC, 1976.
6. Shackel, B., Richardson, S., Human Factors for Informatics Usability: Background
and Overview, in Human Factors for Informatics Usability, edited by Shackel, B.,
Richardson, S., Cambridge University Press, Cambridge, UK, 1991, pp. 1–19.
7. Dhillon, B.S., Engineering Usability: Fundamentals, Applications, Human Factors, and
Human Error, American Scientific Publishers, Stevenson Ranch, California, 2004.
8. Butler, K.A., Usability Engineering Turns Ten, Interactions, 1996, pp. 59–75.
9. Rosson, M.B., Carroll, J.M., Usability Engineering: Scenario-Based Development of
Human-Computer Interaction, Academic Press, San Francisco, California, 2002.
10. Radford, G.S., Quality Control (Control of Quality), Industrial Management, Vol. 54,
1917, p. 100.
11. Golomski, W.A., Quality Control: History in the Making, Quality Progress, Vol. 9, No. 7,
July 1976, pp. 16–18.
12. Krismann, C., Quality Control: An Annotated Bibliography, The Kraus Organization
Limited, White Plains, New York, 1990.

13. Niebel, B.W., Engineering Maintenance Management, Marcel Dekker, New York,
1994.
14. Dhillon, B.S., Engineering Design: A Modern Approach, Richard D. Irwin, Chicago,
Illinois, 1996.
15. Elsayed, E.A., Reliability Engineering, Addison Wesley Longman, Reading, MA,
1996.
16. Dhillon, B.S., Advanced Design Concepts for Engineers, Technomic Publishing
Company, Lancaster, PA, 1998.
17. Omdahl, T.P., ed., Reliability, Availability, Maintainability (RAM) Dictionary, ASQC
Quality Press, Milwaukee, Wisconsin, 1988.
18. ANSI/ASQC A3-1978, Quality Systems Terminology, American Society for Quality
Control, Milwaukee, Wisconsin, 1978.
19. Naresky, J.J., Reliability Definitions, IEEE Transactions on Reliability, Vol. 19, 1970,
pp. 198–200.
20. Glossary of Terms Used in Usability Engineering, Available online at http://www.ucc.ie/hfrg/baseline/glossary.html.
21. User-Centered Design Process for Interactive Systems, ISO 13407 1999, International
Organization for Standardization (ISO), Geneva, Switzerland, 1999.
2 Basic Mathematical Concepts
2.1 INTRODUCTION
Just like in the development of other areas of science and engineering, mathematics has
also played an important role in the development of reliability, usability, and quality
fields. Although the origin of the word “mathematics” may be traced back to the ancient
Greek word “mathema”, which means “science, knowledge, or learning”, the history of
current number symbols, sometimes referred to as the “Hindu-Arabic numeral system”,
goes back to around 250 BCE, to the stone columns erected by the Scythian emperor of
India named Asoka [1]. Evidence of the use of these number symbols survives as notches
found on the stone columns.
The history of probability goes back to the gambler’s manual written by Girolamo
Cardano (1501–1576), in which he considered a number of interesting issues on prob-
ability [1, 2]. However, Blaise Pascal (1623–1662) and Pierre Fermat (1601–1665)
were the first two individuals who independently and correctly solved the problem of
dividing the winnings in a game of chance. Pierre Fermat also introduced the idea
of “differentiation”.
Laplace transforms, frequently used for finding solutions to a set of differential
equations, were developed by Pierre-Simon Laplace (1749–1827). Additional informa-
tion on the history of mathematics, including probability, is available in Refs. [1, 2].
This chapter presents various mathematical concepts considered useful to understand
subsequent chapters of this book.

2.2 ARITHMETIC MEAN, MEAN DEVIATION, AND STANDARD DEVIATION
A set of given reliability, usability, or quality data is useful only if it is analysed
effectively. More specifically, there are certain characteristics of the data that are
useful for describing the nature of a given data set, thus enabling better decisions
associated with the data. This section presents three statistical measures considered
useful in the area of applied reliability, usability, and quality.

2.2.1 Arithmetic Mean
Often, the arithmetic mean is simply referred to as mean and is defined by

m = \frac{\sum_{i=1}^{k} x_i}{k}    (2.1)

DOI: 10.1201/9781003298571-2 9
where
m is the mean value (i.e., arithmetic mean).
xi is the data value i, for i = 1, 2, …, k.
k is the number of data values.

Example 2.1

Assume that the inspection department of an engineering systems manufacturing
company inspected six identical systems and found 4, 6, 8, 10, 12, and 14 defects in
the six systems, respectively. Calculate the average number of defects per system
(i.e., the arithmetic mean).
By inserting the given data values into Equation (2.1), we get

m = \frac{4 + 6 + 8 + 10 + 12 + 14}{6} = 9
Thus, the average number of defects per system is 9. In other words, the arithmetic
mean of the data set is 9.

2.2.2 Mean Deviation


This is a quite commonly used measure of dispersion, which indicates the degree to
which data tend to spread about a mean value. Mean deviation is defined by

MD = \frac{\sum_{i=1}^{k} |DV_i - m|}{k}    (2.2)
where
MD is the mean deviation.
DVi is the data value i, for i = 1, 2, 3, …, k.
k is the number of data values.
m is the mean value of the given data set.
|DV_i - m| is the absolute value of the deviation of DV_i from m.

Example 2.2

Calculate the mean deviation of the data set provided in Example 2.1.
By using the data set from Example 2.1 and the calculated mean value (i.e., m = 9
defects per system) in Equation (2.2), we obtain

MD = \frac{|4 - 9| + |6 - 9| + |8 - 9| + |10 - 9| + |12 - 9| + |14 - 9|}{6}
   = \frac{5 + 3 + 1 + 1 + 3 + 5}{6}
   = 3
Thus, the mean deviation of the Example 2.1 data set is 3.
Basic Mathematical Concepts 11

2.2.3 Standard Deviation
Standard deviation is a quite widely used measure of dispersion of data in a given
data set about the mean and is defined by

\sigma = \left[ \frac{\sum_{i=1}^{k} (DV_i - m)^2}{k} \right]^{1/2}    (2.3)

where
σ is the standard deviation.
DVi is the data value i, for i = 1, 2, 3, …, k.
m is the mean value.
k is the number of data values.

The following three properties of the standard deviation are associated with the
widely used normal distribution:

• 68.27% of all data values fall between m − σ and m + σ.
• 95.45% of all data values fall between m − 2σ and m + 2σ.
• 99.73% of all data values fall between m − 3σ and m + 3σ.

Example 2.3

Calculate the standard deviation of the data set given in Example 2.1.
Using the Example 2.1 data set and the calculated mean value (m = 9) in
Equation (2.3), we obtain

\sigma = \left[ \frac{(4-9)^2 + (6-9)^2 + (8-9)^2 + (10-9)^2 + (12-9)^2 + (14-9)^2}{6} \right]^{1/2}
       = \left[ \frac{25 + 9 + 1 + 1 + 9 + 25}{6} \right]^{1/2}
       = 3.41

Thus, the standard deviation of the Example 2.1 data set is 3.41.
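The three measures above are easy to check numerically. The following Python sketch (standard library only) implements Equations (2.1)–(2.3) and reproduces Examples 2.1–2.3; note that Equation (2.3) yields √(70/6) ≈ 3.416 for this data set, which the text rounds to 3.41.

```python
import math

def arithmetic_mean(data):
    # Equation (2.1): sum of data values divided by their count
    return sum(data) / len(data)

def mean_deviation(data):
    # Equation (2.2): average absolute deviation about the mean
    m = arithmetic_mean(data)
    return sum(abs(x - m) for x in data) / len(data)

def standard_deviation(data):
    # Equation (2.3): (population) standard deviation about the mean
    m = arithmetic_mean(data)
    return math.sqrt(sum((x - m) ** 2 for x in data) / len(data))

defects = [4, 6, 8, 10, 12, 14]               # Example 2.1 data set
print(arithmetic_mean(defects))               # 9.0
print(mean_deviation(defects))                # 3.0
print(round(standard_deviation(defects), 2))  # 3.42
```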

2.3 BOOLEAN ALGEBRA LAWS


Boolean algebra plays a very important role in various types of applied reliability,
usability, and quality studies and is named after George Boole (1813–1864), a math-
ematician. Some of the Boolean algebra laws are presented below [3, 4].
• Idempotent law:

A + A = A    (2.4)

A \cdot A = A    (2.5)

where
A is an arbitrary set or event.
The dot (·) denotes the intersection of sets. It is to be noted that Equation (2.5)
is sometimes written without the dot (e.g., AA), but it still conveys
the same meaning.
+ denotes the union of sets.

• Commutative law:

A + B = B + A    (2.6)

A \cdot B = B \cdot A    (2.7)

where
B is an arbitrary set or event.

• Distributive law:

(A + B)(A + C) = A + BC    (2.8)

A(B + C) = AB + AC    (2.9)

where
C is an arbitrary set or event.

• Associative law:

(AB)C = A(BC)    (2.10)

(A + B) + C = A + (B + C)    (2.11)

• Absorption law:

A(A + B) = A    (2.12)

A + (AB) = A    (2.13)
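Because Python's built-in sets provide union (|) and intersection (&), the laws above can be verified exhaustively on a small universe. The sketch below checks Equations (2.4)–(2.13) for every triple of subsets of a three-element set; the universe {1, 2, 3} is an arbitrary choice for illustration.

```python
from itertools import combinations

def powerset(s):
    # all subsets of s, returned as a list of sets
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

U = {1, 2, 3}
for A in powerset(U):
    for B in powerset(U):
        for C in powerset(U):
            # union (+) maps to |, intersection (dot) maps to &
            assert A | A == A and A & A == A               # idempotent, (2.4)-(2.5)
            assert A | B == B | A and A & B == B & A       # commutative, (2.6)-(2.7)
            assert (A | B) & (A | C) == A | (B & C)        # distributive, (2.8)
            assert A & (B | C) == (A & B) | (A & C)        # distributive, (2.9)
            assert (A & B) & C == A & (B & C)              # associative, (2.10)
            assert (A | B) | C == A | (B | C)              # associative, (2.11)
            assert A & (A | B) == A and A | (A & B) == A   # absorption, (2.12)-(2.13)
print("all Boolean algebra laws hold on every subset triple")
```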
2.4 PROBABILITY DEFINITION AND PROPERTIES


Probability is defined by [5]

P(C) = \lim_{n \to \infty} \left( \frac{N}{n} \right)    (2.14)

where
P(C) is the probability of occurrence of event C.
N is the number of times event C occurs in the n repeated experiments.

Some of the probability properties are as follows [5, 6]:

• The probability of occurrence of an event, say A, is

0 \leq P(A) \leq 1    (2.15)

• The probability of the sample space S is

P(S) = 1    (2.16)

• The probability of the negation of the sample space S is

P(\bar{S}) = 0    (2.17)

where
\bar{S} is the negation of the sample space S.

• The probability of occurrence and nonoccurrence of an event, say A, is always

P(A) + P(\bar{A}) = 1    (2.18)

where
P(A) is the probability of occurrence of event A.
P(\bar{A}) is the probability of nonoccurrence of event A.

• The probability of the union of n independent events is

P(A_1 + A_2 + \cdots + A_n) = 1 - \prod_{i=1}^{n} (1 - P(A_i))    (2.19)

where
P(A_i) is the probability of occurrence of event A_i, for i = 1, 2, 3, …, n.

• The probability of the union of n mutually exclusive events is

P(A_1 + A_2 + \cdots + A_n) = \sum_{i=1}^{n} P(A_i)    (2.20)

• The probability of an intersection of n independent events is


P(A_1 A_2 A_3 \ldots A_n) = P(A_1) P(A_2) P(A_3) \ldots P(A_n)    (2.21)
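Equation (2.19) can be cross-checked by brute force: for independent events, the probability of "at least one occurs" computed from the complement product must match the probability mass summed over every joint outcome in which some event occurs. The three probabilities below are arbitrary illustrative values, not from the text.

```python
from itertools import product

p = [0.1, 0.2, 0.3]   # assumed occurrence probabilities of three independent events

# Equation (2.19): union probability via the complement product
union_indep = 1.0
for pi in p:
    union_indep *= (1.0 - pi)
union_indep = 1.0 - union_indep

# Cross-check: enumerate all 2**3 joint outcomes of the independent events
total = 0.0
for outcome in product([0, 1], repeat=len(p)):
    prob = 1.0
    for oc, pi in zip(outcome, p):
        prob *= pi if oc else (1.0 - pi)
    if any(outcome):              # at least one event occurs
        total += prob

print(round(union_indep, 6), round(total, 6))   # both 0.496
```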
2.5 MATHEMATICAL DEFINITIONS


This section presents a number of mathematical definitions considered useful for
performing various types of applied reliability, usability, and quality studies.

2.5.1 Cumulative Distribution Function


For continuous random variables, this is defined by [5]

F(t) = \int_{-\infty}^{t} f(x)\,dx    (2.22)

where
x is a continuous random variable.
t is time.
f(x) is the probability density function.
F(t) is the cumulative distribution function.

For t = ∞, Equation (2.22) becomes

F(\infty) = \int_{-\infty}^{\infty} f(x)\,dx = 1    (2.23)
It means that the total area under the probability density curve is equal to unity.

2.5.2 Probability Density Function


This is defined by [5, 7]

f(t) = \frac{dF(t)}{dt}    (2.24)

2.5.3 Expected Value
The expected value of a continuous random variable is defined by

E(t) = \int_{-\infty}^{\infty} t f(t)\,dt    (2.25)

where
E(t) is the expected value (i.e., mean value) of the continuous random variable t.

2.5.4 Laplace Transform
The Laplace transform of the function f(t) is defined by

f(s) = \int_{0}^{\infty} f(t) e^{-st}\,dt    (2.26)
Basic Mathematical Concepts 15

where
s is the Laplace transform variable.
t is time variable.
f(s) is the Laplace transform of function f(t).

Example 2.4

Obtain the Laplace transform of the following function:

f(t) = e^{-\lambda t}    (2.27)

where
λ is a constant.

By inserting Equation (2.27) into Equation (2.26), we obtain

f(s) = \int_{0}^{\infty} e^{-\lambda t} e^{-st}\,dt
     = \int_{0}^{\infty} e^{-(s+\lambda)t}\,dt
     = \frac{1}{s + \lambda}    (2.28)
Laplace transforms of some frequently occurring functions used in the area of applied
reliability, usability, and quality are presented in Table 2.1 [8, 9].

TABLE 2.1
Laplace Transforms of Some Frequently Occurring Functions
in Applied Reliability, Usability, and Quality Work

f(t)                                    f(s)
e^{-\lambda t}                          \frac{1}{s + \lambda}
k, a constant                           \frac{k}{s}
t^n, n = 0, 1, 2, 3, ...                \frac{n!}{s^{n+1}}
t f(t)                                  -\frac{df(s)}{ds}
\frac{df(t)}{dt}                        s f(s) - f(0)
\theta_1 f_1(t) + \theta_2 f_2(t)       \theta_1 f_1(s) + \theta_2 f_2(s)
t e^{-\lambda t}                        \frac{1}{(s + \lambda)^2}
t                                       \frac{1}{s^2}
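The table entries can be spot-checked by evaluating the defining integral in Equation (2.26) numerically. The sketch below uses a simple trapezoid rule over a finite interval, which is a reasonable approximation when the integrand decays quickly; the rate and transform-variable values are arbitrary illustrative choices.

```python
import math

def laplace_numeric(f, s, upper=20.0, n=20000):
    # trapezoid-rule approximation of Equation (2.26) on [0, upper];
    # the integrand is negligible beyond `upper` for the cases tried here
    h = upper / n
    total = 0.5 * (f(0.0) + f(upper) * math.exp(-s * upper))
    for i in range(1, n):
        t = i * h
        total += f(t) * math.exp(-s * t)
    return total * h

lam, s = 2.0, 1.5
approx = laplace_numeric(lambda t: math.exp(-lam * t), s)
print(approx, 1.0 / (s + lam))   # Equation (2.28): both close to 1/(s + lambda)
```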
2.5.5 Laplace Transform: Final-Value Theorem


If the following limits exist, then the final-value theorem may be stated as

\lim_{t \to \infty} f(t) = \lim_{s \to 0} [s f(s)]    (2.29)

Example 2.5

Prove by using the following equation that the left-hand side of Equation (2.29) is
equal to its right-hand side:

f(t) = \frac{\mu}{\lambda + \mu} + \frac{\lambda}{\lambda + \mu} e^{-(\lambda + \mu)t}    (2.30)
where
λ and µ are constants.

By substituting Equation (2.30) into the left-hand side of Equation (2.29), we obtain

\lim_{t \to \infty} \left[ \frac{\mu}{\lambda + \mu} + \frac{\lambda}{\lambda + \mu} e^{-(\lambda + \mu)t} \right] = \frac{\mu}{\lambda + \mu}    (2.31)

Using Table 2.1 and Equation (2.30), we obtain

f(s) = \frac{\mu}{s(\lambda + \mu)} + \frac{\lambda}{\lambda + \mu} \cdot \frac{1}{s + \lambda + \mu}    (2.32)

By substituting Equation (2.32) into the right-hand side of Equation (2.29), we obtain

\lim_{s \to 0} \left[ \frac{s\mu}{s(\lambda + \mu)} + \frac{s\lambda}{\lambda + \mu} \cdot \frac{1}{s + \lambda + \mu} \right] = \frac{\mu}{\lambda + \mu}    (2.33)

The right-hand sides of Equations (2.31) and (2.33) are the same. Thus, it proves that
the left-hand side of Equation (2.29) is equal to its right-hand side.

2.6 PROBABILITY DISTRIBUTIONS


This section presents a number of probability distributions considered useful for
performing various types of studies in the area of applied reliability, usability, and
quality [10].

2.6.1 Binomial Distribution


This discrete random variable distribution is employed in circumstances where one
is concerned with the probabilities of outcome such as the number of occurrences
(e.g., failures) in a sequence of n trials. More specifically, each trial has two possible
outcomes (e.g., success or failure), but the probability of each trial remains constant
or unchanged. It is to be noted that this distribution is also known as the Bernoulli
distribution, after its founder Jakob Bernoulli (1654–1705) [1].
The binomial probability density function, f(x), is defined by

f(x) = \binom{n}{x} p^x q^{n-x}, \quad \text{for } x = 0, 1, 2, \ldots, n    (2.34)

where
\binom{n}{x} = \frac{n!}{x!(n-x)!}
x is the number of occurrences (e.g., failures) in n trials.
p is the single-trial probability of occurrence (e.g., failure).
q = 1 − p is the single-trial probability of nonoccurrence.

The cumulative distribution function is given by

F(x) = \sum_{i=0}^{x} \binom{n}{i} p^i q^{n-i}    (2.35)

where
F(x) is the cumulative distribution function, i.e., the probability of x or fewer
occurrences (e.g., failures) in n trials.
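Equations (2.34) and (2.35) translate directly into Python using math.comb for the binomial coefficient. The trial count and per-trial probability below are illustrative values, not taken from the text.

```python
from math import comb

def binomial_pdf(x, n, p):
    # Equation (2.34): probability of exactly x occurrences in n trials
    return comb(n, x) * p**x * (1 - p)**(n - x)

def binomial_cdf(x, n, p):
    # Equation (2.35): probability of x or fewer occurrences in n trials
    return sum(binomial_pdf(i, n, p) for i in range(x + 1))

n, p = 10, 0.1   # e.g., 10 trials, assumed 10% chance of failure on each
print(round(binomial_pdf(2, n, p), 4))   # ≈ 0.1937
print(round(binomial_cdf(2, n, p), 4))   # ≈ 0.9298
print(round(binomial_cdf(n, n, p), 4))   # 1.0 (the probabilities sum to one)
```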

2.6.2 Exponential Distribution
This is a continuous random variable distribution that is widely used in the industrial
sector, particularly in conducting reliability studies [11]. The probability density func-
tion of the distribution is defined by

f(t) = \alpha e^{-\alpha t}, \quad t \geq 0, \; \alpha > 0    (2.36)

where
f(t) is the probability density function.
t is time.
α is the distribution parameter.

By inserting Equation (2.36) into Equation (2.22), we obtain the following equation
for the cumulative distribution function:

F(t) = 1 - e^{-\alpha t}    (2.37)

Using Equations (2.36) and (2.25), we get the following equation for the distribution
mean value:

E(t) = m = \frac{1}{\alpha}    (2.38)

where
m is the mean value.
2.6.3 Rayleigh Distribution
This continuous random variable distribution is named after John Rayleigh (1842–1919),
its founder [1]. The probability density function of the distribution is defined by

f(t) = \frac{2}{\alpha^2}\, t\, e^{-(t/\alpha)^2}, \quad t \geq 0, \; \alpha > 0    (2.39)

where
α is the distribution parameter.

Substituting Equation (2.39) into Equation (2.22), we obtain the following cumulative
distribution function:

F(t) = 1 - e^{-(t/\alpha)^2}    (2.40)

Using Equations (2.39) and (2.25), we obtain the following expression for the
distribution mean value:

E(t) = m = \alpha\, \Gamma\!\left(\frac{3}{2}\right)    (2.41)

where
\Gamma(\cdot) is the gamma function, which is defined by

\Gamma(n) = \int_{0}^{\infty} t^{n-1} e^{-t}\,dt, \quad \text{for } n > 0    (2.42)

2.6.4 Weibull Distribution


This continuous random variable distribution was developed by W. Weibull, a
Swedish mechanical engineering professor, in the early 1950s [12]. The distribution
can be used for representing many different physical phenomena, and its probability
density function is defined by

f(t) = \frac{c\, t^{c-1}}{\theta^c}\, e^{-(t/\theta)^c}, \quad t \geq 0, \; \theta > 0, \; c > 0    (2.43)
where
θ and c are the distribution scale and shape parameters, respectively.

By inserting Equation (2.43) into Equation (2.22), we obtain the following equation
for the cumulative distribution function:
F(t) = 1 - e^{-(t/\theta)^c}    (2.44)
It is to be noted that exponential and Rayleigh distributions are the special cases
of this distribution for c = 1 and c = 2, respectively.
Using Equations (2.43) and (2.25), we get the following equation for the distribution
mean value:

E(t) = m = \theta\, \Gamma\!\left(1 + \frac{1}{c}\right)    (2.45)
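The special-case relationship noted above can be confirmed numerically from the cumulative distribution functions: Equation (2.44) with c = 1 matches the exponential Equation (2.37) with α = 1/θ, and with c = 2 it matches the Rayleigh Equation (2.40) with α = θ. The scale value and time points below are arbitrary illustrative choices.

```python
import math

def weibull_cdf(t, theta, c):
    # Equation (2.44)
    return 1.0 - math.exp(-((t / theta) ** c))

def exponential_cdf(t, alpha):
    # Equation (2.37); corresponds to Weibull with c = 1 and alpha = 1/theta
    return 1.0 - math.exp(-alpha * t)

def rayleigh_cdf(t, alpha):
    # Equation (2.40); corresponds to Weibull with c = 2 and alpha = theta
    return 1.0 - math.exp(-((t / alpha) ** 2))

theta = 500.0
for t in (0.0, 100.0, 500.0, 2000.0):
    assert math.isclose(weibull_cdf(t, theta, 1.0), exponential_cdf(t, 1.0 / theta))
    assert math.isclose(weibull_cdf(t, theta, 2.0), rayleigh_cdf(t, theta))
print("Weibull reduces to exponential (c = 1) and Rayleigh (c = 2)")
```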

2.6.5 Normal Distribution
This continuous random variable distribution is widely used, and sometimes it is
called the Gaussian distribution, after Carl Friedrich Gauss (1777–1855), a German
mathematician. The probability density function of the distribution is defined by

f(t) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left[ -\frac{(t-\mu)^2}{2\sigma^2} \right], \quad -\infty < t < +\infty    (2.46)

where
µ and σ are the distribution parameters (i.e., mean and standard deviation,
respectively).

Using Equations (2.22) and (2.46), we get the following cumulative distribution
function:

F(t) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{t} \exp\!\left[ -\frac{(x-\mu)^2}{2\sigma^2} \right] dx    (2.47)

Inserting Equation (2.46) into Equation (2.25) yields the following equation for the
distribution mean value:

E(t) = m = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} t \exp\!\left[ -\frac{(t-\mu)^2}{2\sigma^2} \right] dt    (2.48)

2.6.6 Bathtub Hazard Rate Curve Distribution


This is another continuous random variable distribution, and it can represent
bathtub-shaped, decreasing, and increasing hazard rates. The distribution was
developed in 1981 [13]; in the published literature, authors around the world
generally refer to it as the Dhillon distribution/law/model [14–33].
The probability density function of the distribution is defined by [13]

f(t) = c\theta(\theta t)^{c-1} e^{-\left[ e^{(\theta t)^c} - (\theta t)^c - 1 \right]}, \quad t \geq 0, \; \theta > 0, \; c > 0    (2.49)

where
c and θ are the distribution shape and scale parameters, respectively.
20 Applied Reliability, Usability, and Quality for Engineers

By inserting Equation (2.49) into Equation (2.22), we get the following equation
for the cumulative distribution function:

F(t) = 1 - e^{-\left[ e^{(\theta t)^c} - 1 \right]}    (2.50)

It is to be noted that for c = 0.5, this probability distribution gives the bathtub-shaped
hazard rate curve, and for c = 1, it gives the extreme value probability distribution.
In other words, the extreme value probability distribution is the special case of this
probability distribution at c = 1.
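As a quick sanity check, Equation (2.50) behaves as a proper cumulative distribution function: F(0) = 0, F is nondecreasing, and F(t) approaches 1 as t grows. The parameter values below (θ = 0.01 and c = 0.5, the bathtub-shaped case) are assumed for illustration.

```python
import math

def dhillon_cdf(t, theta, c):
    # Equation (2.50): F(t) = 1 - exp(-(exp((theta*t)**c) - 1))
    return 1.0 - math.exp(-(math.exp((theta * t) ** c) - 1.0))

theta, c = 0.01, 0.5          # c = 0.5 gives the bathtub-shaped hazard rate
ts = [0.0, 10.0, 100.0, 1000.0, 10000.0]
values = [dhillon_cdf(t, theta, c) for t in ts]
print([round(v, 4) for v in values])

assert values[0] == 0.0                               # F(0) = 0
assert all(a <= b for a, b in zip(values, values[1:]))  # nondecreasing
```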

2.7 SOLVING FIRST-ORDER DIFFERENTIAL EQUATIONS USING LAPLACE TRANSFORMS
Often, Laplace transforms are used to find solutions to linear first-order differential
equations in reliability and usability analysis studies of engineering systems. An
example presented below demonstrates the finding of solutions to a set of differential
equations describing an engineering system.

Example 2.6

Assume that an engineering system can be in any of the three states: operating
normally, failed due to a hardware failure, or failed due to a usability error. The fol-
lowing three first-order linear differential equations describe the engineering system
under consideration:

\frac{dP_0(t)}{dt} + (\lambda + \lambda_u) P_0(t) = 0    (2.51)

\frac{dP_1(t)}{dt} - \lambda P_0(t) = 0    (2.52)

\frac{dP_2(t)}{dt} - \lambda_u P_0(t) = 0    (2.53)
where
\lambda is the engineering system constant hardware failure rate.
\lambda_u is the engineering system constant usability error rate.
P_i(t) is the probability that the engineering system is in state i at time t, for i = 0
(operating normally), i = 1 (failed due to a hardware failure), and i = 2
(failed due to a usability error).
At time t = 0, P_0(0) = 1, P_1(0) = 0, and P_2(0) = 0.

Solve differential Equations (2.51), (2.52), and (2.53) by using Laplace transforms.
Using Table 2.1, the stated initial conditions, and Equations (2.51)–(2.53), we
obtain

s P_0(s) - 1 + (\lambda + \lambda_u) P_0(s) = 0    (2.54)

s P_1(s) - \lambda P_0(s) = 0    (2.55)

s P_2(s) - \lambda_u P_0(s) = 0    (2.56)

Solving Equations (2.54)–(2.56), we get

P_0(s) = \frac{1}{s + \lambda + \lambda_u}    (2.57)

P_1(s) = \frac{\lambda}{s(s + \lambda + \lambda_u)}    (2.58)

P_2(s) = \frac{\lambda_u}{s(s + \lambda + \lambda_u)}    (2.59)

Taking the inverse Laplace transforms of Equations (2.57)–(2.59), we obtain

P_0(t) = e^{-(\lambda + \lambda_u)t}    (2.60)

P_1(t) = \frac{\lambda}{\lambda + \lambda_u} \left[ 1 - e^{-(\lambda + \lambda_u)t} \right]    (2.61)

P_2(t) = \frac{\lambda_u}{\lambda + \lambda_u} \left[ 1 - e^{-(\lambda + \lambda_u)t} \right]    (2.62)

Thus, Equations (2.60)–(2.62) are the solutions to differential Equations (2.51)–(2.53).
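The closed-form solutions can be validated against a direct numerical integration of Equations (2.51)–(2.53). The sketch below uses forward-Euler time stepping with assumed example rates; after integrating to time T, the numerical state probabilities agree with Equations (2.60)–(2.62) to within the discretization error.

```python
import math

lam, lam_u = 0.002, 0.0005     # assumed hardware-failure and usability-error rates
dt, T = 0.01, 1000.0

# Forward-Euler integration of Equations (2.51)-(2.53)
P0, P1, P2 = 1.0, 0.0, 0.0     # initial conditions P0(0)=1, P1(0)=0, P2(0)=0
for _ in range(int(T / dt)):
    dP0 = -(lam + lam_u) * P0
    dP1 = lam * P0
    dP2 = lam_u * P0
    P0 += dP0 * dt
    P1 += dP1 * dt
    P2 += dP2 * dt

# Closed-form solutions, Equations (2.60)-(2.62)
s = lam + lam_u
P0_exact = math.exp(-s * T)
P1_exact = (lam / s) * (1.0 - P0_exact)
P2_exact = (lam_u / s) * (1.0 - P0_exact)

print(round(P0, 4), round(P0_exact, 4))
print(round(P1, 4), round(P1_exact, 4))
print(round(P2, 4), round(P2_exact, 4))
```

Note that the three state probabilities always sum to one, since the right-hand sides of Equations (2.51)–(2.53) sum to zero.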

2.8 PROBLEMS
1. Assume that the quality control department of an engineering systems man-
ufacturing company inspected eight identical systems and discovered 5, 4,
8, 11, 2, 9, 10, and 3 defects in each system. Calculate the average number
of defects per system.
2. Calculate the mean deviation of the data set given in question 1.
3. Calculate the standard deviation of the data set given in question 1.
4. What is idempotent law?
5. Define probability and expected value of a continuous random variable.
6. Define the following two items:
• Cumulative distribution function
• Laplace transform
7. Write down the probability density functions of the following two
distributions:
• Rayleigh distribution
• Exponential distribution
8. Write down probability density and cumulative distribution functions for
normal distribution.
9. What are the special case distributions of the bathtub hazard rate curve and
Weibull distributions?
10. Prove Equations (2.60)–(2.62) by using Equations (2.57)–(2.59).

REFERENCES
1. Eves, H., An Introduction to the History of Mathematics, Rinehart and Winston,
New York, 1976.
2. Owen, D.B., ed., On the History of Statistics and Probability, Marcel Dekker, New
York, 1976.
3. Fault Tree Handbook, Report No. NUREG-0492, U.S. Nuclear Regulatory Commission,
Washington, D.C., 1981.
4. Lipschutz, S., Set Theory, McGraw-Hill, New York, 1964.
5. Mann, N.R., Schafer, R.E., Singpurwalla, N.D., Methods for Statistical Analysis of
Reliability and Life Data, John Wiley and Sons, New York, 1974.
6. Lipschutz, S., Probability, McGraw-Hill, New York, 1965.
7. Shooman, M.L., Probabilistic Reliability: An Engineering Approach, McGraw-Hill,
New York, 1968.
8. Spiegel, M.R., Laplace Transforms, McGraw-Hill, New York, 1965.
9. Oberhettinger, F., Badic, L., Tables of Laplace Transforms, Springer-Verlag, New York,
1973.
10. Patel, J.K., Kapadia, Owen, D.B., Handbook of Statistical Distributions, Marcel Dekker,
New York, 1976.
11. Davis, D.J., An Analysis of Some Failure Data, The Journal of the American Statistical
Association, 1952, pp. 113–150.
12. Weibull, W., A Statistical Distribution of Wide Applicability, The Journal of Applied
Mechanics, Vol. 18, 1951, pp. 293–297.
13. Dhillon, B.S., Life Distributions, IEEE Transactions on Reliability, Vol. 30, 1981,
pp. 457–460.
14. Baker, R.D., Non-parametric Estimation of the Renewal Function, Computers
Operations Research, Vol. 20, No. 2, 1993, pp. 167–178.
15. Cabana, A., Cabana, E.M., Goodness-of-fit to the Exponential Distribution, Focused
on Weibull Alternatives, Communications in Statistics-Simulation and Computation,
Vol. 34, 2005, pp. 711–723.
16. Grane, A., Fortiana, J., A Directional Test of Exponentiality Based on Maximum
Correlations, Metrika, Vol. 73, 2011, pp. 711–723.
17. Henze, N., Meintanis, S.G., Recent and Classical Tests for Exponentiality: A Partial
Review with Comparisons, Metrika, Vol. 61, 2005, pp. 29–45.
18. Jammalamadaka, S.R., Taufer, E., Testing Exponentiality by Comparing the Empirical
Distribution Function of the Normalized Spacings with that of the Original Data,
Journal of Nonparametric Statistics, Vol. 15, No. 6, 2003, pp. 719–729.
19. Hollander, M., Laird, G., Song, K.S., Non-parametric Inference for the Proportionality
Function in the Random Censorship Model, Journal of Nonparametric Statistics,
Vol. 15, No. 2, 2003, pp. 151–169.
20. Jammalamadaka, S.R., Taufer, E., Use of Mean Residual Life in Testing Departures
from Exponentiality, Journal of Nonparametric Statistics, Vol. 18, No. 3, 2006,
pp. 277–292.
21. Kunitz, H., Pamme, H., The Mixed Gamma Ageing Model in Life Data Analysis,
Statistical Papers, Vol. 34, 1993, pp. 303–318.
22. Kunitz, H., A New Class of Bathtub-shaped Hazard Rates and its Application in
Comparison of Two Test-statistics, IEEE Transactions on Reliability, Vol. 38, No. 3,
1989, pp. 351–354.
23. Meintanis, S.G., A Class of Tests for Exponentiality Based on a Continuum of Moment
Conditions, Kybernetika, Vol. 45, No. 6, 2009, pp. 946–959.
24. Morris, K., Szynal, D., Goodness-of-fit Tests Based on Characterizations Involving
Moments of Order Statistics, International Journal of Pure and Applied Mathematics,
Vol. 38, No. 1, 2007, pp. 83–121.
25. Na, M.H., Spline Hazard Rate Estimation Using Censored Data, Journal of KSIAM,
Vol. 3, No. 2, 1999, pp. 99–106.
26. Morris, K., Szynal, D., Some U-statistics in Goodness-of-fit Tests Derived from
Characterizations via Record Values, International Journal of Pure and Applied
Mathematics, Vol. 4, No. 4, 2008, pp. 339–414.
27. Nam, K.H., Park, D.H., Failure Rate for Dhillon Model, Proceedings of the Spring
Conference of the Korean Statistical Society, 1997, pp. 114–118.
28. Nimoto, N., Zitikis, R., The Atkinson Index, The Moran Statistic, and Testing
Exponentiality, Journal of the Japan Statistical Society, Vol. 38, No. 2, 2008, pp. 187–205.
29. Nam, K.H., Chang, S.J., Approximation of the Renewal Function for Hjorth Model and
Dhillon Model, Journal of the Korean Society for Quality Management, Vol. 34, No. 1,
2006, pp. 34–39.
30. Noughabi, H.A., Arghami, N.R., Testing Exponentiality Based on Characterizations
of the Exponential Distribution, Journal of Statistical Computation and Simulation,
Vol. 1, 2011, pp. 1–11.
31. Szynal, D., Goodness-of-fit Tests Derived from Characterizations of Continuous
Distributions, Stability in Probability, Banach Center Publications, Vol. 90, Institute of
Mathematics, Polish Academy of Sciences, Warszawa, Poland, 2010, pp. 203–223.
32. Szynal, D., Wolynski, W., Goodness-of-fit Tests for Exponentiality and Rayleigh
Distribution, International Journal of Pure and Applied Mathematics, Vol. 78, No. 5,
2013, pp. 751–772.
33. Nam, K.H., Park, D.H., A Study on Trend Changes for Certain Parametric Families,
Journal of the Korean Society for Quality Management, Vol. 23, No. 3, 1995, pp. 93–101.
3 Reliability Basics,
Human Factors Basics
for Usability, and
Quality Basics
3.1 INTRODUCTION
Nowadays, the reliability of engineering systems has become a challenging issue during
the design process due to the increasing dependence of our daily lives and schedules on
these systems’ proper functioning. Some examples of these systems are automobiles,
computers, aircraft, nuclear power generating reactors, and space satellites.
The emergence of usability engineering is deeply embedded in the discipline of
human factors. The main reason for the existence of the human factors discipline is
that humans keep making errors while using machines/systems; otherwise, its
existence would be difficult to justify.
The importance of quality in business and industry is increasing rapidly. Today,
our day-to-day lives and schedules are more dependent than ever before on the sat-
isfactory functioning of products and services (e.g., automobiles, computers, and
a continuous supply of electricity). Needless to say, factors such as competition,
product sophistication, and growing demand from customers for better quality have
played a very important role in increasing the importance of quality.
This chapter presents the reliability basics, human factors basics for usability, and
quality basics considered useful to understand the subsequent chapters of this book.

3.2 BATHTUB HAZARD RATE CONCEPT


The bathtub hazard rate concept is widely used for representing failure behaviour
of many engineering items. The term “bathtub” stems from the fact that the shape of
the hazard rate curve resembles a bathtub shown in Fig. 3.1. As shown in Fig. 3.1, the
curve is divided into three distinct parts: the burn-in period, the useful-life period,
and the wear-out period.
During the burn-in period, the engineering system/item hazard rate decreases with
time t. Some of the reasons for the occurrence of failures during this period are poor
manufacturing methods, inadequate debugging, poor workmanship and substandard
materials, inadequate processes, poor quality control, and human errors [1, 2].
During the useful life period, the item/system hazard rate remains constant with
respect to time t. Some of the reasons for the occurrence of failures during this
period are undetectable defects, higher random stress than expected, natural failures,
low safety factors, abuse, and human errors.

DOI: 10.1201/9781003298571-3 25
FIGURE 3.1 Bathtub hazard rate curve.

Finally, during the wear-out period, the item/system hazard rate increases with time
t. Some of the reasons for the occurrence of failures during this period are wear due to
friction, corrosion, and creep; poor maintenance, wear due to aging, short designed-in
life of the item/system under consideration; and incorrect overhaul practices.
Mathematically, the following equation can be used to represent the bathtub
hazard rate curve shown in Fig. 3.1 [3]:

\lambda(t) = \gamma\theta(\gamma t)^{\theta-1} e^{(\gamma t)^\theta}    (3.1)

where
λ(t ) is hazard rate (time-dependent failure rate).
t is time.
γ is the scale parameter.
θ is the shape parameter.

At θ = 0.5, Equation (3.1) gives the shape of the bathtub hazard rate curve
shown in Fig. 3.1.
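The decreasing, then roughly constant, then increasing behaviour of Equation (3.1) at θ = 0.5 can be seen by evaluating the hazard rate at an early, a mid-life, and a late time. The γ value and the three time points below are arbitrary illustrative choices, not from the text.

```python
import math

def hazard(t, gamma=0.001, theta=0.5):
    # Equation (3.1): lambda(t) = gamma*theta*(gamma*t)**(theta-1) * exp((gamma*t)**theta)
    return gamma * theta * (gamma * t) ** (theta - 1.0) * math.exp((gamma * t) ** theta)

early, useful, wearout = hazard(1.0), hazard(1000.0), hazard(4.0e6)
print(early, useful, wearout)

assert early > useful      # hazard decreases through the burn-in period
assert wearout > useful    # hazard increases through the wear-out period
```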

3.3 GENERAL RELIABILITY ANALYSIS ASSOCIATED FORMULAS


There are a number of general formulas for conducting various types of reliability
analysis. Four of these formulas are presented in Sections 3.3.1–3.3.4.

3.3.1 Failure (or Probability) Density Function


This is defined by [1]

f(t) = -\frac{dR(t)}{dt}    (3.2)
where
f(t) is the item/system failure (or probability) density function.
R(t) is the item/system reliability at time t.
Reliability Basics, Human Factors Basics for Usability, & Quality Basics 27

Example 3.1

Assume that the reliability of a system is defined by

R_s(t) = e^{-\lambda_s t}    (3.3)

where
Rs (t ) is the system reliability at time t.
λ s is the system constant failure rate.

Obtain an expression for the failure (probability) density function of the system by
using Equation (3.2).
By substituting Equation (3.3) into Equation (3.2), we obtain

f(t) = -\frac{d e^{-\lambda_s t}}{dt} = \lambda_s e^{-\lambda_s t}    (3.4)

Thus, Equation (3.4) is the expression for the failure (probability) density function
of the system.

3.3.2 Hazard Rate Function


This is defined by

\lambda(t) = \frac{f(t)}{R(t)}    (3.5)
where
λ(t ) is the item/system hazard rate (i.e., time-dependent failure rate).

By substituting Equation (3.2) into Equation (3.5), we obtain

\lambda(t) = -\frac{1}{R(t)} \cdot \frac{dR(t)}{dt}    (3.6)

Example 3.2

Obtain an expression for the system hazard rate by using Equations (3.3) and (3.6).
By inserting Equation (3.3) into Equation (3.6), we obtain

\lambda(t) = -\frac{1}{e^{-\lambda_s t}} \cdot \frac{d e^{-\lambda_s t}}{dt} = \lambda_s    (3.7)
Thus, the system hazard rate is given by Equation (3.7). It is to be noted that the
right-hand side of this equation is not a function of time t. In other words, it is
constant. Generally, it is referred to as the constant failure rate of an item/system
because it does not depend on time t.
3.3.3 General Reliability Function


This can be obtained by using Equation (3.6). Thus, rearranging Equation (3.6), we get

-\lambda(t)\,dt = \frac{1}{R(t)}\,dR(t)    (3.8)

By integrating both sides of Equation (3.8) over the time interval [0, t], we obtain

-\int_{0}^{t} \lambda(t)\,dt = \int_{1}^{R(t)} \frac{1}{R(t)}\,dR(t)    (3.9)

since at time t = 0, R(t) = 1.
By evaluating the right-hand side of Equation (3.9) and rearranging, we obtain

\ln R(t) = -\int_{0}^{t} \lambda(t)\,dt    (3.10)

Thus, from Equation (3.10), we obtain

R(t) = e^{-\int_{0}^{t} \lambda(t)\,dt}    (3.11)
Thus, Equation (3.11) is the general expression for the reliability function. It can be
used to obtain the reliability function of an item/system when its times to failure follow
any time-continuous probability distribution (e.g., Weibull, Rayleigh, and exponential).

Example 3.3

Assume that the hazard rate of an engineering system is expressed by Equation (3.1).
Obtain an expression for the reliability function of the engineering system by using
Equation (3.11).
By inserting Equation (3.1) into Equation (3.11), we get

R(t) = e^{-\int_{0}^{t} \gamma\theta(\gamma t)^{\theta-1} e^{(\gamma t)^\theta} dt}
     = e^{-\left[ e^{(\gamma t)^\theta} - 1 \right]}    (3.12)
Thus, Equation (3.12) is the expression for the reliability function of the engineer-
ing system.

3.3.4 Mean Time to Failure


The mean time to failure of a system/item can be obtained by using any of the
following three equations [1, 4]:

MTTF = \int_{0}^{\infty} R(t)\,dt    (3.13)
or

MTTF = \lim_{s \to 0} R(s)    (3.14)

or

MTTF = E(t) = \int_{0}^{\infty} t f(t)\,dt    (3.15)

where
MTTF is the mean time to failure.
s is the Laplace transform variable.
R(s) is the Laplace transform of the reliability function R(t).
E(t) is the expected value.

Example 3.4

Prove by using Equation (3.3) that Equations (3.13) and (3.14) yield the same result
for the system mean time to failure.
By inserting Equation (3.3) into Equation (3.13), we obtain

MTTF_s = \int_{0}^{\infty} e^{-\lambda_s t}\,dt = \frac{1}{\lambda_s}    (3.16)

where
MTTF_s is the system mean time to failure.

By taking the Laplace transform of Equation (3.3), we get

R_s(s) = \int_{0}^{\infty} e^{-st} e^{-\lambda_s t}\,dt = \frac{1}{s + \lambda_s}    (3.17)

where
R_s(s) is the Laplace transform of the system reliability function R_s(t).

By inserting Equation (3.17) into Equation (3.14), we get

MTTF_s = \lim_{s \to 0} \frac{1}{s + \lambda_s} = \frac{1}{\lambda_s}    (3.18)

Equations (3.16) and (3.18) are identical, which proves that Equations (3.13) and
(3.14) yield the same result for the system mean time to failure.
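Equation (3.13) can also be checked numerically: integrating R(t) = e^{−λ_s t} by the trapezoid rule over a long interval reproduces 1/λ_s, as derived in Equation (3.16). The failure rate below is an assumed example value.

```python
import math

lam = 0.004                       # assumed constant failure rate (failures/hour)

def reliability(t):
    # Equation (3.3): R(t) = exp(-lambda_s * t)
    return math.exp(-lam * t)

# Equation (3.13): MTTF = integral of R(t) from 0 to infinity,
# approximated here by the trapezoid rule on [0, 5000] hours
upper, n = 5000.0, 100000
h = upper / n
mttf = 0.5 * (reliability(0.0) + reliability(upper)) * h
mttf += sum(reliability(i * h) for i in range(1, n)) * h
print(round(mttf, 2), 1.0 / lam)   # both ≈ 250.0, matching Equation (3.16)
```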
3.4 RELIABILITY NETWORKS


An engineering system can form various configurations in conducting reliability
analysis. Thus, this section is concerned with the reliability evaluation of such
commonly occurring networks/configurations.

3.4.1 Series Network
This is the simplest reliability network, and its block diagram is shown in Fig. 3.2.
The diagram represents an m-unit system, and each block in the diagram denotes a
unit. If any one of the m units malfunctions or fails, the series network/system fails.
In other words, for the successful operation of the series network/system, all the m
network/system units must operate normally.
The reliability of the series network/system shown in Fig. 3.2 is expressed by

R_{ss} = P(E_1 E_2 E_3 \ldots E_m)    (3.19)

where
R_{ss} is the series system reliability.
E_i is the successful operation (i.e., success event) of unit i, for i = 1, 2, 3, …, m.
P(E_1 E_2 E_3 \ldots E_m) is the occurrence probability of events E_1, E_2, E_3, …, E_m.

For independently failing units, Equation (3.19) becomes

R_{ss} = P(E_1) P(E_2) P(E_3) \ldots P(E_m)    (3.20)

where
P( Ei ) is the probability of occurrence of event Ei, for i = 1, 2, 3, …, m.

If we let R_i = P(E_i) for i = 1, 2, 3, …, m, Equation (3.20) becomes

R_{ss} = R_1 R_2 R_3 \ldots R_m = \prod_{i=1}^{m} R_i    (3.21)

where
Ri is the unit i reliability, for i = 1, 2, 3, …, m.

FIGURE 3.2 Block diagram of a series network with m units.


Reliability Basics, Human Factors Basics for Usability, & Quality Basics 31

For constant failure rate λi of unit i, from Equation (3.11) we get

Ri(t) = e^(−∫₀^t λi dt) = e^(−λi t)    (3.22)
where
Ri (t ) is the reliability of unit i at time t.

By substituting Equation (3.22) into Equation (3.21), we get


Rss(t) = e^(−(∑(i=1 to m) λi) t)    (3.23)

where
Rss (t ) is the series system reliability at time t.

By substituting Equation (3.23) into Equation (3.13), we get the following expression
for the mean time to failure of the series system/network:
MTTFss = ∫₀^∞ e^(−(∑(i=1 to m) λi) t) dt = 1/(∑(i=1 to m) λi)    (3.24)

where
MTTFss is the series system/network mean time to failure.

By inserting Equation (3.23) into Equation (3.6), we get the following expression for
the series system/network hazard rate:
λss(t) = −[1/e^(−(∑(i=1 to m) λi) t)] · d/dt[e^(−(∑(i=1 to m) λi) t)] = ∑(i=1 to m) λi    (3.25)

where
λ ss (t ) is the series system/network failure rate (hazard rate).

It is to be noted that the right side of Equation (3.25) is independent of time t. Thus,
the left side of Equation (3.25) is simply λss, the failure rate of the series system/
network. It means that whenever we add up failure rates of independent units/items,
we automatically assume that these units/items form a series network, a worst-case
design scenario in regard to reliability.

Example 3.5

Assume that an engineering system is composed of four independent and identical
subsystems, and the constant failure rate of a subsystem is 0.0004 failures per hour.
All these subsystems must function normally for the engineering system to operate
successfully.
Calculate the engineering system reliability for a 120-hour mission period,
mean time to failure, and failure rate.
By substituting the given data values into Equation (3.23), we obtain

Rss(120) = e^(−(0.0004 + 0.0004 + 0.0004 + 0.0004)(120)) = 0.8253

By inserting the specified data values into Equation (3.24), we get

MTTFss = 1/(0.0004 + 0.0004 + 0.0004 + 0.0004) = 625 hours

Using the given data values in Equation (3.25) yields

λss = 0.0004 + 0.0004 + 0.0004 + 0.0004 = 0.0016 failures per hour

Thus, the engineering system reliability, mean time to failure, and failure rate are
0.8253, 625 hours, and 0.0016 failures per hour, respectively.
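The calculations in Example 3.5 can be sketched in a few lines of Python (a minimal illustration of Equations (3.23) and (3.24); the function names are my own):

```python
import math

def series_reliability(failure_rates, t):
    # Equation (3.23): Rss(t) = exp(-(sum of unit failure rates) * t)
    return math.exp(-sum(failure_rates) * t)

def series_mttf(failure_rates):
    # Equation (3.24): MTTFss = 1 / (sum of unit failure rates)
    return 1.0 / sum(failure_rates)

rates = [0.0004] * 4  # four identical subsystems, failures/hour
print(round(series_reliability(rates, 120), 4))  # → 0.8253
print(round(series_mttf(rates), 2))              # → 625.0 hours
print(round(sum(rates), 4))                      # → 0.0016 failures/hour
```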

3.4.2 Parallel Network


This network represents a system with m units operating simultaneously. For the
successful operation of the system, at least one of the m units must operate normally.
The block diagram of an m-unit parallel network/system is shown in Fig. 3.3, and
each block in the diagram represents a unit.
The failure probability of the parallel network/system shown in Fig. 3.3 is
expressed by

Fps = P( y1 y2 y3 … ym ) (3.26)

where
Fps is the failure probability of the parallel network/system.
yi is the failure (i.e., failure event) of unit i, for i = 1, 2, 3, …, m.
P( y1 y2 y3 … ym ) is the occurrence probability of events y1 , y2 , y3 ,…, and ym .

FIGURE 3.3 Block diagram of a parallel network/system with m units.

For independently failing parallel units, Equation (3.26) becomes

Fps = P( y1 ) P( y2 ) P( y3 )… P( ym ) (3.27)

where
P( yi ) is the occurrence probability of failure event yi , for i = 1, 2, 3, …, m.

If we let Fi = P( yi ), for i = 1, 2, 3, …, m, then Equation (3.27) becomes


Fps = ∏(i=1 to m) Fi    (3.28)

where
Fi is the failure probability of unit i, for i = 1, 2, 3, …, m.

By subtracting Equation (3.28) from unity, we get


Rps = 1 − ∏(i=1 to m) Fi    (3.29)

where
Rps is the parallel network/system reliability.

For constant failure rate λ i of unit i, subtracting Equation (3.22) from unity and then
inserting it into Equation (3.29) yields
Rps(t) = 1 − ∏(i=1 to m) (1 − e^(−λi t))    (3.30)

where
Rps (t ) is the parallel network/system reliability at time t.

For identical units, Equation (3.30) becomes

Rps(t) = 1 − (1 − e^(−λt))^m    (3.31)

where
λ is the unit constant failure rate.

By substituting Equation (3.31) into Equation (3.13), we get the following equation
for the parallel network/system mean time to failure:

MTTFps = ∫₀^∞ [1 − (1 − e^(−λt))^m] dt = (1/λ) ∑(i=1 to m) (1/i)    (3.32)

where
MTTFps is the identical units parallel network/system mean time to failure.

Example 3.6

Assume that an engineering system is composed of three independent, identical,
and active units. At least one of the units must function normally for the
engineering system to operate successfully. The failure rate of a unit is 0.0005
failures per hour.
Calculate the engineering system reliability for a 100-hour mission and mean
time to failure.
By inserting the given data values into Equation (3.31), we obtain

Rps(100) = 1 − [1 − e^(−(0.0005)(100))]^3 = 0.9998

Substituting the specified data values into Equation (3.32) yields

1  1 1
MTTFps = 1+ +
(0.0005)  2 3 
= 3666.66 hours

Thus, engineering system reliability and mean time to failure are 0.9998 and
3666.66 hours, respectively.
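Example 3.6 can be reproduced with the short Python sketch below (Equations (3.31) and (3.32); function names are illustrative). Note that full-precision arithmetic gives Rps(100) ≈ 0.99988 and MTTFps ≈ 3666.67 hours; the figures above appear truncated rather than rounded.

```python
import math

def parallel_reliability(lam, m, t):
    # Equation (3.31): identical-unit parallel system reliability
    return 1.0 - (1.0 - math.exp(-lam * t)) ** m

def parallel_mttf(lam, m):
    # Equation (3.32): (1/lam) * (1 + 1/2 + ... + 1/m)
    return sum(1.0 / i for i in range(1, m + 1)) / lam

lam, m = 0.0005, 3
print(parallel_reliability(lam, m, 100))  # ≈ 0.99988
print(parallel_mttf(lam, m))              # ≈ 3666.67 hours
```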

3.4.3 k- out- of-n Network


In this case, the network/system is composed of n active units; at least k out of the
n active units must operate normally for successful system operation. The block
diagram of a k-out-of-n unit network/system is shown in Fig. 3.4, and each block in
the diagram represents a unit. It is to be noted that parallel and series networks are
special cases of this network for k = 1 and k = n, respectively.

FIGURE 3.4 Block diagram of a k-out-of-n unit network/system.

By using the binomial distribution, for independent and identical units, we write
the following equation for reliability of k-out-of-n unit network shown in Fig. 3.4:

Rk/n = ∑(j=k to n) (n choose j) R^j (1 − R)^(n−j)    (3.33)

where

(n choose j) = n!/[(n − j)! j!]    (3.34)

Rk / n is the k-out-of-n network/system reliability.


R is the unit reliability.

For constant failure rates of the identical units, by using Equations (3.11) and (3.33),
we get
Rk/n(t) = ∑(j=k to n) (n choose j) e^(−jλt) (1 − e^(−λt))^(n−j)    (3.35)

where
Rk / n (t ) is the k-out-of-n network/system reliability at time t.
λ is the unit constant failure rate.

By inserting Equation (3.35) into Equation (3.13), we obtain



MTTFk/n = ∫₀^∞ [∑(j=k to n) (n choose j) e^(−jλt) (1 − e^(−λt))^(n−j)] dt = (1/λ) ∑(j=k to n) (1/j)    (3.36)

where
MTTFk /n is the k-out-of-n network/system mean time to failure.

Example 3.7

Assume that an engineering system has four active, identical, and independent
units in parallel. At least three units must function normally for the successful
operation of the engineering system. Calculate the engineering system mean time
to failure if the unit constant failure rate is 0.0006 failures per hour.
By inserting the given data values into Equation (3.36), we get
MTTF3/4 = (1/0.0006) ∑(j=3 to 4) (1/j)
        = (1/0.0006)(1/3 + 1/4)
        = 972.22 hours

Thus, the engineering system mean time to failure is 972.22 hours.
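A compact Python sketch of Equations (3.35) and (3.36) follows (function names are my own; `math.comb` supplies the binomial coefficient of Equation (3.34)). Setting k = 1 recovers the parallel network and k = n the series network, which makes a useful self-check.

```python
import math

def k_out_of_n_reliability(k, n, lam, t):
    # Equation (3.35): binomial sum over j = k .. n
    r = math.exp(-lam * t)  # unit reliability, per Equation (3.22)
    return sum(math.comb(n, j) * r**j * (1.0 - r) ** (n - j)
               for j in range(k, n + 1))

def k_out_of_n_mttf(k, n, lam):
    # Equation (3.36): (1/lam) * sum of 1/j for j = k .. n
    return sum(1.0 / j for j in range(k, n + 1)) / lam

print(round(k_out_of_n_mttf(3, 4, 0.0006), 2))  # → 972.22 hours
```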

3.4.4 Standby System
This is another reliability network/system in which only one unit functions and n
units are kept in their standby mode. The system is composed of (n + 1) units; as soon
as the functioning unit fails, the switching mechanism detects the failure and turns
on one of the standby units. The system fails when all its standby units fail.
The block diagram of a standby system with one functioning and n standby units
is shown in Fig. 3.5. Each block in the diagram represents a unit. By utilising Fig. 3.5,
for independent and identical units, perfect switching mechanism and standby units,
and time-dependent unit failure rate, we obtain the following equation for the standby
system reliability [1, 5]:

 t  − λ (t ) dt 
j t

  λ(t )dt  e ∫0
n

∑ ∫  


j =1   0  
Rss (t ) = (3.37)
j!

where
Rss (t ) is the standby system reliability at time t.
λ(t ) is the unit time-dependent failure rate or hazard rate.
n is the number of standby units.

For constant unit failure rate (i.e., λ(t ) = λ), Equation (3.37) becomes
Rss(t) = ∑(j=0 to n) (λt)^j e^(−λt)/j!    (3.38)
where
λ is the unit constant failure rate.

FIGURE 3.5 Block diagram of a standby system with one functioning and n standby units.

By substituting Equation (3.38) into Equation (3.13), we get

MTTFss = ∫₀^∞ [∑(j=0 to n) (λt)^j e^(−λt)/j!] dt = (n + 1)/λ    (3.39)

where
MTTFss is the standby system mean time to failure.

Example 3.8

Assume that an engineering system is composed of a standby system having four
identical and independent units: one operating and the other three on standby. The
unit constant failure rate is 0.0004 failures per hour.
Calculate the standby system mean time to failure if the switching mechanism
is perfect and the standby units remain as good as new in their standby modes.
By substituting the specified data values into Equation (3.39), we get

MTTFss = (3 + 1)/(0.0004) = 10,000 hours
Thus, the standby system mean time to failure is 10,000 hours.
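Equations (3.38) and (3.39) can be sketched as follows (Python; identical units and perfect switching, as the example assumes — function names are illustrative):

```python
import math

def standby_reliability(lam, n, t):
    # Equation (3.38): Poisson-type sum over the operating unit and n standbys
    return sum((lam * t) ** j * math.exp(-lam * t) / math.factorial(j)
               for j in range(n + 1))

def standby_mttf(lam, n):
    # Equation (3.39): MTTFss = (n + 1) / lam
    return (n + 1) / lam

print(round(standby_mttf(0.0004, 3), 2))                # → 10000.0 hours
print(round(standby_reliability(0.0004, 3, 5000), 4))   # → 0.8571
```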

3.4.5 Bridge Network


Sometimes parts/units in engineering systems may form a bridge network, as shown
in Fig. 3.6. Each block in the Fig. 3.6 diagram represents a part/unit, and all parts/
units are labelled with numerals.

FIGURE 3.6 Five units bridge network.

For five independent units/parts, the reliability of the bridge network shown in
Fig. 3.6 is expressed by [6]

Rbn = 2R1R2R3R4R5 + R1R3R5 + R2R3R4 + R2R5 + R1R4 − R1R2R3R4
      − R1R2R3R5 − R2R3R4R5 − R1R2R4R5 − R1R3R4R5    (3.40)

where
Rbn is the bridge network reliability.
R j is the unit j reliability, for j = 1, 2, 3, 4, 5.

For identical units, Equation (3.40) becomes

Rbn = 2R^5 − 5R^4 + 2R^3 + 2R^2    (3.41)

where
R is the unit reliability.

For constant failure rates of all five units, and using Equations (3.11) and (3.41), we obtain
Rbn(t) = 2e^(−5λt) − 5e^(−4λt) + 2e^(−3λt) + 2e^(−2λt)    (3.42)

where
Rbn (t ) is the bridge network reliability at time t.
λ is the unit constant failure rate.

By inserting Equation (3.42) into Equation (3.13), we obtain


MTTFbn = ∫₀^∞ (2e^(−5λt) − 5e^(−4λt) + 2e^(−3λt) + 2e^(−2λt)) dt = 49/(60λ)    (3.43)
where
MTTFbn is the bridge network mean time to failure.

Example 3.9

Assume that five identical and independent units of an engineering system form
a bridge network. Calculate the bridge network’s reliability for a 100-hour mission
and mean time to failure, if the constant failure rate of each unit is 0.0002 failures
per hour.
By inserting the given data values into Equation (3.42), we obtain

Rbn(100) = 2e^(−5(0.0002)(100)) − 5e^(−4(0.0002)(100)) + 2e^(−3(0.0002)(100))
           + 2e^(−2(0.0002)(100)) = 0.9992

Similarly, by inserting the specified data values into Equation (3.43), we get

MTTFbn = 49/[60(0.0002)] = 4083.33 hours

Thus, the bridge network’s reliability and mean time to failure are 0.9992 and
4083.33 hours, respectively.
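Example 3.9 can be verified with the brief Python sketch below (Equations (3.42) and (3.43); the function names are my own):

```python
import math

def bridge_reliability(lam, t):
    # Equation (3.42): identical-unit bridge network reliability
    e = math.exp
    return (2 * e(-5 * lam * t) - 5 * e(-4 * lam * t)
            + 2 * e(-3 * lam * t) + 2 * e(-2 * lam * t))

def bridge_mttf(lam):
    # Equation (3.43): MTTFbn = 49 / (60 * lam)
    return 49.0 / (60.0 * lam)

print(round(bridge_reliability(0.0002, 100), 4))  # → 0.9992
print(round(bridge_mttf(0.0002), 2))              # → 4083.33 hours
```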

3.5 HUMAN FACTORS BASICS FOR USABILITY


Human factors basics play a very important role in the usability of engineering
systems. This section presents three of these basics in Sections 3.5.1–3.5.3.

3.5.1 Comparison of Humans’ and Machines’ Capabilities and Limitations


During the engineering system design process, decisions sometimes have to be made
whether to allocate certain functions to machines or to humans. In such situations, a
clear understanding of machines’ and humans’ capabilities and limitations is very
important; otherwise, the correct decisions may not be made.
Table 3.1 presents 19 comparisons of humans’ and machines’ capabilities and
limitations [7, 8].

3.5.2 Typical Human Behaviours


Over the years, many researchers in the area of human factors have conducted
extensive research on predicting human behaviour. They have fully highlighted a
large number of typical human behaviours. Ten of these behaviours, along with the
proposed design-associated measures in parentheses, are as follows [7, 8]:

• Typical behaviour I: People tend to regard all manufactured items as being


safe (Design items/products in such a manner that they cannot be used
improperly. If this is not possible, then design in an appropriate mechanism
for making all potential users clearly aware of possible hazards.).
• Typical behaviour II: People’s attention is drawn to items such as bright
lights, loud noises, bright and vivid colours, and flashing lights (Ensure that

stimuli of adequate intensity are appropriately designed for when attention


needs to be attracted.).
• Typical behaviour III: People expect electrically powered switches to move
upward, to the right, etc., to activate power (Ensure that such devices are
appropriately designed according to human expectations.).
• Typical behaviour IV: People expect that faucet and valve handles rotate
counterclockwise to increase the flow of a liquid, steam, or gas (Ensure
that all devices are designed to conform to human expectations.).

TABLE 3.1
A Comparison of Humans’ and Machines’ Capabilities and Limitations

No. Human Capability/Limitation Machine Capability/Limitation


1. Quite unsuitable to conduct tasks such as data coding, Extremely useful to conduct such tasks
amplification, or transformation
2. Excellent memory Very costly to have same memory
capability as humans
3. Quite capable to interpret an input signal even in Generally performs well only under ideal
noisy, distractive, and similar conditions environments (i.e., noise-free, clean, etc.)
4. Quite capable in conducting time-contingency analyses Very poor at this aspect
and predicting events in unfamiliar environments
5. Prone to stress as the result of interpersonal or other Independent of such problems
associated problems
6. Quite capable of performing under transient overload Operation stops under overload conditions
(performance degrades gracefully) and usually fails at once
7. Prone to factors such as disorientation, coriolis Free from such factors
effects, and motion sickness
8. A high degree of tolerance to factors such as Limited in tolerance to such factors
uncertainty, vagueness, and ambiguity
9. Subject to physiological, ecological, and Subject to ecological needs only
psychological needs
10. Subject to degradation in performance because of Subject to degradation in performance
boredom and fatigue because of wear or lack of calibration
11. Extremely limited short-term memory for factual Short-term memory can be expanded to
matters any desirable and affordable level
12. Performance efficiency is affected by the anxiety Performance efficiency is unaffected by
factor anxiety
13. Highly capable of making inductive decisions in Little or no induction capability
novel conditions
14. Quite adversely affected by high g-forces Independent of g-forces
15. Very flexible in regard to task performance Relatively inflexible
16. Optimum strategy may not be followed all the time Designed strategy is executed all the time
17. Subject to social environment Free of social environment
18. Limited channel capacity Channel capacity can be expanded to
satisfy the need
19. Relatively easy maintenance Increase in complexity leads to serious
maintenance-related problems

• Typical behaviour V: Generally, humans know very little about their physi-
cal shortcomings (First, learn effectively about all human limitations and
then develop the design accordingly.).
• Typical behaviour VI: Generally, humans use their hands first for testing
or exploring (First, pay special attention to the handling aspect during the
item/product design process. Otherwise, recommend strongly and clearly
that the item/product use requires a device supplied for eliminating the
need to use the hands.).
• Typical behaviour VII: Humans have a tendency to hurry (Design items/
products in such a way that effectively takes into consideration the element
of hurry by humans.).
• Typical behaviour VIII: Humans get easily confused with unfamiliar items/
products (Avoid designing items/products that are totally unfamiliar to all
potential users.).
• Typical behaviour IX: During loss of balance, humans instinctively reach
for and grab the closest item/object (Develop the design of the item/product
in such a manner that it appropriately incorporates satisfactory emergency
supports.).
• Typical behaviour X: Humans have become very accustomed to specific
meanings of colour (Strictly observe current colour-coding standards dur-
ing the design process.).

3.5.3 Human Sensory Capacities


Humans possess many useful senses: hearing, sight, touch, smell, and taste. A good
understanding of these senses can be quite useful for reducing various types of
usability-associated problems. The first three (i.e., hearing, sight, and touch) are
described in Sections 3.5.3.1–3.5.3.3 [8, 9].

3.5.3.1 Noise (Hearing)


Noise may be expressed simply as sounds that lack coherence, and the reaction
of the human to noise extends well beyond the auditory system (i.e., to feelings
such as fatigue, irritability, well-being, or boredom). Excessive noise can result in
various types of problems, including loss of hearing if exposed for long periods,
reduction in worker efficiency, and adverse effects on tasks requiring a high degree
of muscular coordination and precision or intense concentration.
The human ear can detect sounds with frequencies ranging from 20 Hz to 20,000 Hz
and is most sensitive to frequencies in the range of 600–900 Hz. Finally, it is to be noted
that humans exposed to noise with frequencies between 4,000 Hz and 6,000 Hz for long
periods can suffer major loss of hearing [9, 10].

3.5.3.2 Sight
The sense of sight is stimulated by electromagnetic radiation of certain wavelengths,
often referred to as the electromagnetic spectrum. The parts of the spectrum, as
seen by the human eye, appear to vary in brightness. According to a number of stud-
ies conducted over the years, in daylight, the eyes of humans are most sensitive to

greenish-yellow light with a wavelength of around 5,500 Å [9]. Moreover, the eyes
see differently from different angles.
Some of the important sight-related guidelines are as follows:

• Avoid relying on colour as much as possible (where critical tasks may be


conducted by fatigued persons).
• Choose colours in such a way so that colour-weak individuals do not get
confused.
• Aim to use red filters with wavelengths greater than 6,500 Å.

3.5.3.3 Touch
This is quite closely related to humans’ ability for interpreting visual and auditory
stimuli. The sensory cues received by the skin and muscles can be utilised for sending
messages to the brain. In turn, this helps to relieve a part of the load from eyes and ears.
This human quality can be utilised quite successfully in various areas of engineering
usability. For example, in situations when the user of an item/product is expected to
rely totally on his/her sense of touch, different shapes of knobs could be considered for use.
Finally, it is to be noted that the use of touch in various technical areas is not new;
it has been utilised for many centuries by artisans for detecting surface irregularities
and roughness in their work. In fact, past experiences over the years clearly highlight
that the detection accuracy of surface irregularities improves dramatically when the
involved individual moves an intermediate piece of paper or thin cloth over the sur-
face of the object under consideration instead of just bare fingers [11].

3.6 QUALITY GOALS AND QUALITY ASSURANCE SYSTEM ELEMENTS

Generally, in organisations, attainable quality goals are developed first, and then
efforts are directed towards meeting these goals/objectives. Some organisations may
group their quality goals under the following two classifications [12]:

• Classification I: Goals for breakthrough. These goals are basically concerned


with improving the existing quality of products/services. There could be vari-
ous reasons for establishing such goals including the three presented below:
• Enhancing the company image among customers and others dissatisfied
with the present products/services.
• Retaining or attaining quality leadership.
• Losing market share because of failure to compete with similar products/
services provided by others.
• Classification II: Goals for control. These goals are concerned with main-
taining the quality of products/services to the existing level for a given
period. Some of the reasons for such goals are as follows:
• Insignificant number of customers or other complaints about the quality
of products/services.
• Improvements are uneconomical.
• Acceptable competitiveness at present quality levels.

All in all, quality-related goals should be developed by following steps such as


those presented below [13].

• Highlighting potential goals


• Quantifying potential goals
• Setting goal priorities

The main objective of a quality assurance system is to maintain the specified level of
quality. Its important elements/tasks are as follows [14]:

• Monitor supplier quality assurance.


• Evaluate, plan, and control product quality.
• Assure accuracy of quality measuring equipment.
• Consider the quality and reliability needs during the product design and
development process.
• Evaluate and control product quality in use environment.
• Manage the total quality assurance system.
• Develop personnel.
• Feedback quality-related information to management.
• Perform special quality studies.

3.7 PRODUCTS’ AND SERVICES’ QUALITY AFFECTING FACTORS AND
TOTAL QUALITY MANAGEMENT (TQM)
There are many factors that affect products’ and services’ quality. The seven important
factors that directly influence the products’ and services’ quality are as follows [15, 16]:

• Factor I: Management
• Factor II: Machine used in manufacturing
• Factor III: Money, manpower, and materials
• Factor IV: Motivation of employees
• Factor V: Modern information methods
• Factor VI: Market for product and services
• Factor VII: Mounting product requirements

The term total quality management (TQM) was coined by Nancy Warren, a behav-
ioural scientist, in 1985 [17]. It is composed of three words, each of which is described
below separately in detail.

• Total: This calls for an effective team effort of all involved parties for sat-
isfying customers. There are many factors that play a very important role
in developing a successful supplier-customer relationship. Some of these
factors are as follows:
• Customer-supplier relationships’ development on the basis of mutual
trust and respect.
• Customers clearly defining all their internal needs.

• Customers making suppliers understand their obligations or needs


effectively.
• Monitoring of suppliers’ products and processes by customers on a
regular basis.
• Quality: There are many definitions of quality. Nonetheless, quality must
be viewed from the customer perspective. This factor is further reinforced
by the result of a survey conducted by the Conference Board of Canada, in
which over 80% of the respondents clearly stated that quality is defined by
the customer and not by the supplier [18].
• Management: An effective approach to management is very important in
determining a company’s ability to attain corporate objectives and to allo-
cate resources in an effective manner. The TQM approach needs an effec-
tive involvement of employees in company decision making because their
participation and contribution are viewed as critical for all areas of the
business in providing high-quality products and services to customers.

In order to practice the TQM concept in an effective manner, it is absolutely essential
to clearly understand the fundamental differences between TQM and traditional
quality assurance management (TQAM). Table 3.2 presents comparisons in seven
areas between TQAM and TQM [19, 20].

3.7.1 TQM Elements and Goals for TQM Process Success


TQM is composed of many elements. The important ones are as follows [21]:

• Management commitment and leadership


• Supplier participation

TABLE 3.2
Comparisons between Traditional Quality Assurance Management and the
Total Quality Management

No. Area Traditional Quality Assurance Management Total Quality Management
1. Cost Improvements in quality result in Better quality decreases cost and
higher cost increases productivity
2. Customer Ambiguous understanding of A well-defined approach to comprehend
customer or consumer requirements and satisfy customer requirements
3. Quality Quality control group/inspection All people in the organisation involved
responsibility centre
4. Decision making Practiced usual top-down method Practiced an effective team approach
with team of employees
5. Objective Discover errors Prevent the occurrence of errors
6. Definition Product-driven Customer-driven
7. Quality defined Products satisfy specifications Products suitable for consumer applications

• Statistical approaches
• Customer service
• Team work
• Quality cost
• Training

For TQM process success, there are many goals that must be fulfilled properly. Some
of these goals are as follows [22]:

• Establishment of incentives and rewards for employees when process


control and customer satisfaction final results are attained.
• Meeting of control guidelines per customer needs by all concerned systems
and processes.
• Clear understanding of internal and external customer requirements by all
company employees.
• Use of a system to continuously improve processes that better satisfy cus-
tomers’ present and future needs.

3.7.2 Deming Approach to TQM


There have been many individuals who over the years directly or indirectly have
contributed to TQM. One of these contributors was W.E. Deming, a graduate in
engineering, mathematics, and physics. His fourteen step approach for improving
quality is as follows [19, 23–25]:

• Step I: Develop constancy of purpose for enhancing services/products.


More specifically, this requires the development of a mission statement
addressing issues such as quality philosophy, investors, long term corporate
objectives, employees, and growth plans.
• Step II: Lead to promote change. More specifically, this means that the
existing acceptable levels of delays, defects, or mistakes are unacceptable
and all concerned personnel/bodies are alerted to determine factors for
their (i.e., delays, defects, or mistakes) existence. Subsequently, everyone
concerned works together to rectify the highlighted problems.
• Step III: Stop depending on mass inspection and build quality into services/
products.
• Step IV: Stop awarding business or contracts on the basis of price and
develop long-term relationships on the basis of performance.
• Step V: Improve product, quality, and service continuously.
• Step VI: Institute training measures that include modern/latest techniques,
methods, and approaches.
• Step VII: Practice latest/modern supervisory methods and approaches.
• Step VIII: Eliminate the element of fear altogether.
• Step IX: Break down existing barriers between departments/units/groups
and emphasise team effort.
• Step X: Eliminate numerical goals, posters, and slogans because they create
adversarial relationships.

• Step XI: Eradicate numerical quotas and the practice of management by


objectives (MBO).
• Step XII: Eradicate all existing obstacles to employee pride in workmanship.
• Step XIII: Encourage dynamic education and self-improvement programmes.
• Step XIV: Make the transformation everyone’s task and force all concerned
to work on it in an effective manner.

3.7.3 Obstacles to TQM Implementation


Over the years, individuals involved with the implementation of TQM have experi-
enced many obstacles. Knowledge of all these obstacles is considered very important
prior to embarking on the TQM implementation process. Some of these obstacles in
the form of questions are presented in Table 3.3 [26].

3.7.4 Organisations that Promote the TQM Concept


and Selected Books on TQM

There are many organisations and books that promote the TQM concept. This section
lists some of these organisations and books separately.

3.7.4.1 Organisations
• American Society for Quality Control, 611 East Wisconsin Avenue, P.O.
Box 3005, Milwaukee, WI.
• American Productivity and Quality Center, 123 North Post Oak Lane,
Houston, Texas.
• Quality and Productivity Management Association, 300 Martingale Road,
Suite 230, Schaumburg, IL.

3.7.4.2 Books
• Oakland, J.S., Total Quality Management: Text with Cases, Butterworth-
Heinemann, Burlington, MA, 2003.
• Besterfield, D.H., et al., Total Quality Management, Prentice Hall, Upper
Saddle River, NJ, 2003.

TABLE 3.3
Some TQM Obstacles in the Form of Questions

No. Obstacle-Related Questions


1. Is there adequate time available for implementing TQM programme in an effective manner?
2. Is it possible to obtain effective support of managers and their subordinates possessing an
“independent” attitude?
3. Will upper management support the introduction of the TQM programme?
4. How to convince all involved people that TQM is different?
5. Does management clearly understand TQM purpose?
6. Who will set the TQM vision?
7. Is it possible to quantify customer needs? If so, how?
8. How to convince individuals of the need to change?

• Rampersad, H.K., Total Quality Management, Springer-Verlag, New York,


2000.
• Stein, R.E., The Next Phase of Total Quality Management, Marcel Dekker,
New York, 1994.
• Tenner, R.R., Detoro, I.J., Total Quality Management: Three Steps to
Continuous Improvement, Addison-Wesley, MA, 1992.
• Mizuno, S., Company-Wide Total Quality Control, Asian Productivity
Organization, Tokyo, 1989.
• Gevirtz, C.D., Developing New Products with TQM, McGraw-Hill, New
York, 1994.
• Spenley, P., World Class Performance Through Total Quality, Chapman
Hall, London, 1992.

3.8 PROBLEMS
1. Describe the bathtub hazard rate concept.
2. Define the following functions:
• Failure density function
• Hazard rate function
• General reliability function
3. Write down three general formulas that can be used to obtain system mean
time to failure.
4. Assume that an engineering system has five active, identical, and independent
units in parallel. At least two units must operate normally for the successful
operation of the engineering system. Calculate the engineering system mean
time to failure if the unit constant failure rate is 0.0008 failures per hour.
5. Assume that an engineering system with five independent and identical
units form a bridge network. Calculate the bridge network’s reliability for
a 200-hour mission and mean time to failure, if the constant failure rate of
each unit is 0.0004 failures per hour.
6. Compare humans’ and machines’ capabilities and limitations.
7. Describe the following two humans senses:
• Hearing
• Touch
8. What are the factors affecting products’ and services’ quality?
9. List at least seven TQM elements.
10. Describe the Deming approach to TQM.

REFERENCES
1. Dhillon, B.S., Design Reliability: Fundamentals and Applications, CRC Press, Boca
Raton, Florida, 1999.
2. Kapur, K.C., Reliability and Maintainability, in Handbook of Industrial Engineering,
edited by Salvendy, G., John Wiley and Sons, New York, 1982, pp. 8.5.1–8.5.34.
3. Dhillon, B.S., Life Distributions, IEEE Transactions on Reliability, Vol. 30, No. 5,
1981, pp. 457–460.
4. Shooman, M.L., Probabilistic Reliability: An Engineering Approach, McGraw-Hill,
New York, 1968.
48 Applied Reliability, Usability, and Quality for Engineers

5. Sandler, G.H., System Reliability Engineering, Prentice Hall, Englewood Cliffs, New Jersey, 1963.
6. Lipp, J.P., Topology of Switching Elements Versus Reliability, Transactions on IRE
Reliability and Quality Control, Vol. 7, 1957, pp. 21–34.
7. Woodson, W.E., Human Factors Design Handbook, McGraw-Hill, New York, 1981.
8. Dhillon, B.S., Engineering Usability: Fundamentals, Applications, Human Factors, and
Human Error, American Scientific Publishers, Stevenson Ranch, California, 2004.
9. Engineering Design Handbook: Maintainability Guide for Design, AMCP 706-134,
Prepared by the United States Army Material Command, 5001 Eisenhower Avenue,
Alexandria, VA, 1972.
10. Engineering Design Handbook: Maintainability Engineering Theory and Practice,
AMCP 706-133, Prepared by the United States Army Material Command, 5001
Eisenhower Avenue, Alexandria, VA, 1976.
11. Lederman, S., Heightening Tactile Impression of Surface Texture, in Active Touch,
edited by Gordon, G., Pergamon Press, New York, 1978, pp. 40–45.
12. Juran, J.M., Gryna, F.M., Bingham, R.S., Quality Control Handbook, McGraw-Hill,
New York, 1979.
13. Evans, J.R., Lindsay, W.M., The Management and Control of Quality, West Publishing
Company, New York, 1996.
14. The Quality World of Allis-Chalmers, Quality Assurance, Vol. 9, 1970, pp. 13–17.
15. Meigenbaum, A.V., Total Quality Control, McGraw-Hill, New York, 1983.
16. Dhillon, B.S., Quality Control, Reliability, and Engineering Design, Marcel Dekker,
New York, 1985.
17. Walton, M., Deming Management at Work, Putnam, New York, 1990.
18. Farquhar, C.R., Johnston, C.G., Total Quality Management: A Competitive Imperative,
Report No. 60-90-E, 1990. Available from the Conference Board of Canada, 255 Smyth
Road, Ottawa, Ontario, Canada.
19. Schmidt, W.H., Finnigan, J.P., The Race Without a Finish Line: America’s Quest for
Total Quality, Jossey-Bass Publishers, San Francisco, California, 1992.
20. Madu, C.N., Chu-hua, K., Strategic Total Quality Management (STQM), in Management
of New Technologies for Global Competitiveness, edited by Madu, C.N., Quorum
Books, Westport, CT, 1993, pp. 3–25.
21. Burati, J.L., Matthews, M.F., Kalidindi, S.N., Quality Management Organization and
Techniques, Journal of Construction Engineering and Management, Vol. 118, March
1992, pp. 112–128.
22. Dhillon, B.S., Advanced Design Concepts for Engineers, Technomic Publishing Company,
Lancaster, PA, 1998.
23. Heizer, J., Render, B., Production and Operations Management, Prentice Hall, Upper
Saddle River, NJ, 1995.
24. Mears, P., Quality Improvement Tools and Techniques, McGraw-Hill, New York, 1995.
25. Goetsch, D.L., Davis, S., Implementing Total Quality, Prentice Hall, Englewood Cliffs,
NJ, 1995.
26. Klein, R.A., Achieve Total Quality Management, Chemical Engineering Progress,
November 1991, pp. 83–86.
4 Reliability, Usability, and Quality Analysis Methods
4.1 INTRODUCTION
Just like in the case of other areas of engineering, over the years many methods
to perform reliability, usability, and quality analysis of engineering systems have
been developed [1–6]. The main objective of these methods is to improve reliability,
usability, and quality of engineering systems and products. Some of these methods can be applied across all three areas; others are confined to a specific area (i.e., reliability, usability, or quality).
Two examples of methods that can be used in all three areas are fault tree analysis (FTA) and failure modes and effect analysis (FMEA). FTA was developed in the early 1960s for analysing the safety of rocket launch control systems. Similarly, FMEA was developed in the early 1950s for analysing the reliability of engineering systems. Nowadays, both FTA and FMEA are being used across many diverse areas for analysing various types of problems.
This chapter presents a number of methods considered useful to perform engineering
systems reliability, usability, and quality analysis studies.

4.2 FAILURE MODES AND EFFECT ANALYSIS (FMEA)


This is a widely used method for analysing the reliability of engineering systems, and it may simply be described as an approach for analysing the effects of potential failure modes in the system [1]. The history of FMEA goes back to the early 1950s and the development of flight control systems, when the U.S. Navy's Bureau of Aeronautics developed a requirement called "failure analysis" to establish a mechanism for reliability control over the detail design-related efforts [7]. Eventually, the term was changed to FMEA.
Generally, the following seven steps are followed to perform FMEA [1, 8]:

• Step I: Define system boundaries and detailed requirements.


• Step II: List all system parts/components and subsystems.
• Step III: List each part’s/component’s identification, description, and failure
modes.
• Step IV: Assign a failure occurrence probability/rate to each part/component
failure mode.
• Step V: List each failure mode effect/effects on subsystems, system, and
plant.
• Step VI: Enter appropriate remarks for each failure mode.
• Step VII: Review each critical failure mode and take necessary actions.
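The seven steps above map naturally onto a tabular worksheet. As a minimal illustrative sketch (the part numbers, descriptions, and failure rates below are hypothetical, not from the text), each row records a part's identification, failure mode, occurrence rate, effects, and remarks, and the final step filters the critical modes for review:

```python
from dataclasses import dataclass, field

@dataclass
class FailureModeRow:
    """One FMEA worksheet row (Steps III-VI)."""
    part_id: str                 # part/component identification (Step III)
    description: str
    mode: str                    # failure mode (Step III)
    rate: float                  # occurrence rate, failures/hour (Step IV)
    effects: list = field(default_factory=list)   # effects (Step V)
    remarks: str = ""            # remarks (Step VI)

# Steps II-VI: build the worksheet (hypothetical entries)
worksheet = [
    FailureModeRow("P-101", "coolant pump", "fails to start", 0.0002,
                   ["loss of coolant flow", "subsystem shutdown"]),
    FailureModeRow("V-007", "relief valve", "stuck open", 0.00005,
                   ["gradual pressure loss"], "low consequence"),
]

# Step VII: flag the failure modes with the highest occurrence rates for review
critical = [row.part_id for row in worksheet if row.rate >= 0.0001]
print(critical)  # ['P-101']
```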

DOI: 10.1201/9781003298571-4

There are many factors that must be explored carefully prior to the implementation of FMEA. Four of these factors are as follows [9, 10]:

• Factor I: Each conceivable failure mode's examination by all the involved professionals.
• Factor II: Obtaining approval and support of the engineer.
• Factor III: Measuring the benefits and costs.
• Factor IV: Making all decisions based on the risk priority number.

Over the years, professionals directly or indirectly involved with reliability analysis
have established certain facts and guidelines concerning FMEA. Four of these facts
and guidelines are shown in Fig. 4.1 [8, 10].
There are many advantages of conducting FMEA. Some of the main ones are
presented below [1, 8–10]:

• A useful approach that starts from the detailed level and works upward.
• A visibility tool for management that reduces product development time
and cost.
• A useful approach for comparing designs and highlighting safety-related
concerns.
• A quite helpful tool for safeguarding against repeating the same mistakes
in the future.
• A useful approach for reducing engineering-related changes and improving
the efficiency of test planning.
• A useful tool for improving communications among design interface
personnel.
• A useful tool for understanding and improving customer satisfaction.
• A systematic tool for categorizing and classifying hardware failures.

FIGURE 4.1 FMEA-associated facts/guidelines.



4.3 FAULT TREE ANALYSIS (FTA)


This is a widely used method in the industrial sector to evaluate engineering systems' reliability during their design and development phase, particularly in the area of nuclear power generation. A fault tree may simply be described as a logical representation of the relationship of basic events that result in a specified undesirable event, called the top event. The fault tree is depicted using a tree structure with AND, OR, and other logic gates.
This method was developed in the early 1960s at the Bell Telephone Laboratories for performing analysis of the Minuteman Launch Control System [1, 2].
Some of the main objectives of carrying out FTA are presented below [1, 8]:

• To comprehend the functional relationship of system failures.


• To highlight critical areas and cost-effective improvements.
• To comprehend the degree of protection that the design concept provides
against failures’ occurrence.
• To confirm the system’s ability to satisfy its imposed safety-associated
requirements.
• To satisfy jurisdictional-associated requirements.

It is to be noted that there are many prerequisites associated with FTA. Some of the
main ones are presented below [1, 8]:

• A clear definition of what constitutes system/item failure (i.e., the undesirable event).
• Thorough understanding of the design, operation, and maintenance aspects of the system/item under consideration.
• A comprehensive review of the system's/item's operational experience.
• Clearly defined system/item physical bounds and interfaces.
• Clearly defined analysis objectives and scope.
• Clear identification of all associated assumptions.

FTA starts by highlighting the top event, which is associated with a system/item under
consideration. Fault events that can cause the top event’s occurrence are generated and
connected by logic operators such as AND and OR. The AND gate provides a true out-
put (i.e., fault) when all the inputs are true. Similarly, the OR gate provides a true output
(i.e., fault) when one or more inputs are true.
The construction of a fault tree proceeds by generating fault events in a successive
manner until the fault events need not be developed any further. These fault events
are known as primary/basic events. A fault tree relates the top event to the primary/
basic fault events. During a fault tree’s construction process, the following question
is successively asked:

• How could this fault event occur?


Four basic symbols used to construct fault trees are shown in Fig. 4.2. The meanings of the AND and OR gate symbols, shown in Fig. 4.2, have already been discussed earlier. The remaining two symbols (i.e., circle and rectangle) are described below:
• Circle: It represents a primary/basic fault event (e.g., the failure of an
elementary component/part), and the primary/basic fault-event param-
eters are failure rate, failure probability, unavailability, and repair rate.
• Rectangle: It represents a resultant event that occurs from the combi-
nation of fault events through the input of a logic gate such as OR and
AND.

Example 4.1

Assume that a windowless room contains three light bulbs and one switch.
Develop a fault tree for the undesired fault event (i.e., top fault event) “Dark room”,
if the switch can only fail to close.
In this case, there can be no light in the room (i.e., dark room) only if all the
three light bulbs burn out, if there is no incoming electricity, or if the switch fails
to close. Using all four symbols shown in Fig. 4.2, a fault tree for the example is shown in Fig. 4.3. The single capital letters in the fault tree diagram represent corresponding fault events (e.g., A: dark room, B: three bulbs burned out, and C: power failure).

4.3.1 Fault Tree Probability Evaluation


When the basic/primary fault events' probabilities are known, the occurrence probability of the top fault event can be calculated. This is obtained by first calculating the occurrence probabilities of the output fault events of all the involved intermediate and lower logic gates (e.g., the OR and AND gates). Thus, the occurrence probability of the OR gate output fault event (say X) is defined by [1, 8]

FIGURE 4.2 Basic fault tree symbols: (i) circle, (ii) rectangle, (iii) AND gate, and (iv) OR gate.

FIGURE 4.3 A fault tree for the top fault event: dark room.

P(X) = 1 − ∏_{j=1}^{m} {1 − P(x_j)}   (4.1)

where
P(X) is the occurrence probability of the OR gate output fault event X.
m is the number of OR gate input independent fault events.
P(x_j) is the probability of occurrence of the OR gate input fault event x_j, for j = 1, 2, 3, …, m.

Similarly, the occurrence probability of the AND gate output fault event (say Y) is
expressed by [1, 8]
P(Y) = ∏_{j=1}^{n} P(y_j)   (4.2)

where
P(Y) is the occurrence probability of the AND gate output fault event Y.
n is the number of AND gate input independent fault events.
P(y_j) is the probability of occurrence of the AND gate input fault event y_j, for j = 1, 2, 3, …, n.

Example 4.2

Assume that in Fig. 4.3, the occurrence probabilities of independent fault events
C, D, F, G, H, and I are 0.08, 0.07, 0.06, 0.05, 0.04, and 0.03, respectively.
Calculate the probability of occurrence of the top fault event A (Dark room) by
using Equations (4.1) and (4.2).

By substituting the given occurrence probability values of fault events I and C into Equation (4.1), we get

P(X) = 1 − {1 − P(I)}{1 − P(C)}
     = 1 − {1 − 0.03}{1 − 0.08}
     = 0.1076
where
P(X) is the occurrence probability of fault event X (i.e., no electricity).

Similarly, by substituting the given occurrence probability values of the fault events
F, G, and H into Equation (4.2), we get

P(Y) = P(F)P(G)P(H)
     = (0.06)(0.05)(0.04)
     = 0.00012
where
P(Y) is the occurrence probability of fault event Y (i.e., three bulbs burned out).

By substituting these calculated values and the given data value into Equation (4.1),
we get

P(A) = 1 − {1 − P(X)}{1 − P(Y)}{1 − P(D)}
     = 1 − {1 − 0.1076}{1 − 0.00012}{1 − 0.07}
     = 0.1701

Thus, the probability of occurrence of the top fault event A (Dark room) is 0.1701.
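Equations (4.1) and (4.2) lend themselves to a bottom-up numerical evaluation of a fault tree. The short Python sketch below (the function names are mine, not from the text) recomputes Example 4.2:

```python
import math

def or_gate(probs):
    """Eq. (4.1): P(X) = 1 - prod(1 - P(x_j)), independent input events."""
    product = 1.0
    for p in probs:
        product *= 1.0 - p
    return 1.0 - product

def and_gate(probs):
    """Eq. (4.2): P(Y) = prod(P(y_j)), independent input events."""
    return math.prod(probs)

# Fault event probabilities from Example 4.2 (Fig. 4.3)
P_C, P_D = 0.08, 0.07                 # power failure; switch fails to close
P_F, P_G, P_H, P_I = 0.06, 0.05, 0.04, 0.03

P_X = or_gate([P_I, P_C])             # no electricity
P_Y = and_gate([P_F, P_G, P_H])       # three bulbs burned out
P_A = or_gate([P_X, P_Y, P_D])        # top event: dark room
print(round(P_X, 4), round(P_A, 4))   # 0.1076 0.1702
```

The unrounded top-event probability is about 0.17017; the text's 0.1701 reflects truncation to four digits.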

4.3.2 Benefits and Drawbacks of the Fault Tree Analysis


There are many benefits and drawbacks of FTA. Some of its benefits are as follows [1, 8]:

• Is quite helpful in providing options for management personnel and others to conduct either quantitative or qualitative reliability analysis.
• Requires the involved analyst to understand the system under consideration
thoroughly prior to starting the analysis.
• Is a useful way for providing insight into the system behaviour.
• Allows the analyst to handle complex systems more easily.
• Serves as a very useful graphic aid for system management.
• Is a very useful way for highlighting failures deductively.
• Allows concentration on one specific failure at a time.

In contrast, some of the drawbacks of FTA are as follows [1, 8]:

• A costly and time-consuming method.
• Considers components and parts in either an operational state or a failed
state (i.e., partial-failure states of the components and parts are quite difficult
to handle).
• Results are quite difficult to check.

4.4 MARKOV METHOD


This is a frequently used method to model engineering systems with constant failure and repair rates, and is named after the Russian mathematician Andrei Andreyevich Markov (1856–1922). It is subject to the following three assumptions [1, 11]:

• The transitional probability from one system state to another in the finite
time interval ∆t is given by θ∆t, where θ is the transition rate (e.g., failure or
repair rate) from one system state to another.
• The probability of more than one transition occurrence taking place in the
finite time interval ∆t is negligible (e.g., (θ∆t )(θ∆t ) → 0).
• All occurrences are independent of each other.

The application of this method is demonstrated through the following example:

Example 4.3

Assume that an engineering system can be in either an operating or a failed state, and its constant failure and repair rates are λ and µ, respectively. The system state space diagram is shown in Fig. 4.4; the numerals in the circle and rectangle denote the engineering system's states. Obtain expressions for the engineering system's time-dependent and steady-state availabilities and unavailabilities, reliability, and mean time to failure by using the Markov method.
Using the Markov method, we write down the following equations for states 0
and 1, shown in Fig. 4.4, respectively:

P0 (t + ∆t ) = P0 (t )(1− λ∆t ) + P1(t )µ∆t (4.3)

P1(t + ∆t ) = P1(t )(1− µ∆t ) + P0 (t )λ∆t (4.4)

where
t is time.
P0(t + ∆t) is the probability of the engineering system being in operating state 0 at time (t + ∆t).
P1(t + ∆t ) is the probability of the engineering system being in failed state 1 at
time (t + ∆t ).
Pi (t ) is the probability that the engineering system is in state i at time t, for i = 0,1.
λ∆t is the probability of the engineering system failure in finite time interval ∆t.

FIGURE 4.4 Engineering system state space diagram.



µ∆t is the probability of the engineering system repair in finite time interval ∆t .
(1− λ∆t ) is the probability of no failure in finite time interval ∆t .
(1− µ∆t ) is the probability of there being no repair in finite time interval ∆t .

From Equation (4.3), we get

P0(t + ∆t) = P0(t) − P0(t)λ∆t + P1(t)µ∆t   (4.5)

From Equation (4.5), we write

lim_{∆t→0} [P0(t + ∆t) − P0(t)]/∆t = −P0(t)λ + P1(t)µ   (4.6)

From Equation (4.6), we obtain

dP0(t)/dt + P0(t)λ = P1(t)µ   (4.7)

Similarly, using Equation (4.4), we get

dP1(t)/dt + P1(t)µ = P0(t)λ   (4.8)

At time t = 0, P0(0) = 1 and P1(0) = 0.


By solving Equations (4.7) and (4.8), we obtain [1]

P0(t) = µ/(λ + µ) + [λ/(λ + µ)] e^{−(λ + µ)t}   (4.9)

P1(t) = λ/(λ + µ) − [λ/(λ + µ)] e^{−(λ + µ)t}   (4.10)

Thus, the time-dependent availability and unavailability of the engineering system, respectively, are

AV(t) = P0(t) = µ/(λ + µ) + [λ/(λ + µ)] e^{−(λ + µ)t}   (4.11)

UA(t) = P1(t) = λ/(λ + µ) − [λ/(λ + µ)] e^{−(λ + µ)t}   (4.12)

where
AV(t) is the time-dependent availability of the engineering system.
UA(t) is the time-dependent unavailability of the engineering system.

By letting time t go to infinity in Equations (4.11) and (4.12), we get

AV = lim_{t→∞} AV(t) = µ/(λ + µ)   (4.13)

and

UA = lim_{t→∞} UA(t) = λ/(λ + µ)   (4.14)

where
AV is the steady-state availability of the engineering system.
UA is the steady-state unavailability of the engineering system.

For µ = 0 , from Equation (4.9), we get

R(t) = P0(t) = e^{−λt}   (4.15)

where
R(t) is the engineering system reliability at time t.

By integrating Equation (4.15) over the time interval [0, ∞], we get the following
equation for the mean time to failure of the engineering system [1]:


MTTF = ∫_0^∞ e^{−λt} dt
     = 1/λ   (4.16)

where
MTTF is the mean time to failure of the engineering system.

Thus, the engineering system's time-dependent and steady-state availabilities and unavailabilities, reliability, and mean time to failure are given by Equations (4.11), (4.12), (4.13), (4.14), (4.15), and (4.16), respectively.

Example 4.4

Assume that an engineering system's constant failure and repair rates are 0.0005 failures/hour and 0.0008 repairs/hour, respectively. Calculate the engineering system's steady-state availability and its availability during a 60-hour mission.
By substituting the given data values into Equations (4.13) and (4.11), we get

AV = 0.0008/(0.0005 + 0.0008) = 0.6153

and

AV(60) = 0.0008/(0.0005 + 0.0008) + [0.0005/(0.0005 + 0.0008)] e^{−(0.0005 + 0.0008)(60)}
       = 0.9711

Thus, the engineering system's steady-state availability and its availability during a 60-hour mission are 0.6153 and 0.9711, respectively.
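Equations (4.11), (4.13), and (4.16) can be checked numerically. A minimal Python sketch for Example 4.4 follows (the function name is mine, not from the text):

```python
import math

def availability(t, lam, mu):
    """Eq. (4.11): time-dependent availability with constant rates lam, mu."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

lam, mu = 0.0005, 0.0008        # failures/hour, repairs/hour (Example 4.4)

AV_steady = mu / (lam + mu)     # Eq. (4.13); availability(t) tends here as t grows
MTTF = 1 / lam                  # Eq. (4.16), the mu = 0 case
print(round(availability(60, lam, mu), 4))  # 0.9711
print(round(AV_steady, 4), MTTF)            # 0.6154 2000.0
```

The steady-state value is 0.61538…, so it rounds to 0.6154; the text's 0.6153 is a truncation.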

4.5 COGNITIVE WALKTHROUGHS


Cognitive walkthroughs are an approach that can be utilised for evaluating prototype products/systems. The basic idea behind them is first to walk through an interface's operation with all involved personnel, and then to highlight problems within that system. Generally, the approach/method incorporates checklists for use by involved developers for highlighting possible problems with an interface. This essentially provides involved developers with a framework for checking the system from a cognitive perspective. The framework addresses issues such as those presented below [12]:

• The linking of the interface object-related components to the actions to be executed by users.
• Assumptions with regard to user knowledge.
• Actions to be executed by users at each point in an interaction.

It is to be noted that for this method to be used effectively, it is essential to have a clear understanding of the characteristics of all potential users.
Some of the benefits and drawbacks of cognitive walkthroughs are as follows [3]:

• Benefits
• Are relatively fast to administer and lead directly to diagnostic and
prescriptive information.
• Are quite useful for facilitating communication among design person-
nel, particularly when they are divided into developers and requirement
analysts.
• Are quite useful to comprehend the user’s environment.
• Drawbacks
• A very high degree of reliance on the investigator’s judgement.
• Lack of guidance in selecting the appropriate tasks for evaluation.

Additional information on the method is available in Refs. [3, 12–14].

4.6 TASK ANALYSIS


This method is largely used during the product design’s specification phase. It may
simply be expressed as the study of what a user of a product is expected to do, with
regard to actions and cognitive processes, to achieve a task objective effectively [3].
In general, it may be added that the term “task analysis” refers to a methodology that
can be performed using many specific methods. These methods are basically utilised
to evaluate the interactions between the users and the systems and products.
Task analysis helps to break down the methods for carrying out tasks with a
system/product under consideration into a series of steps. Consequently, the methods
can be utilised for predicting whether the performance of tasks in question will be
easy or difficult, as well as the degree of effort likely to be needed. The final result
of basic task analyses provides a list of the physical steps that the user must carry out effectively for completing a specific task. However, it is to be noted that complex task analyses also, directly or indirectly, consider the cognitive steps associated with a task.
The number of steps required for accomplishing a task may be considered as an
elementary measure of task complexity. The principle may simply be stated as “the
simpler the task, the fewer the steps". Individuals such as potential system/product users, experienced system/product designers, and domain experts can be quite valuable informants in conducting task analysis.
Some of the benefits and drawbacks of this method (i.e., task analysis) are as
follows [3, 4]:

• Benefits
• Is quite useful with regard to prescribing potential solutions to usability-
related problems.
• Is quite useful for highlighting the elements of the design of the system/
product that causes the inconsistencies.
• Requires the involved investigator to follow a specific procedure
because of the standardisation of task analysis notations.
• Drawbacks
• The assumption of “expert” performance with the system under
consideration.
• Problems with the measure of task complexity (i.e., simply counting the
number of steps involved in performing a task).

Additional information on this method is available in Refs. [15, 16].

4.7 PROBABILITY TREE ANALYSIS


Probability tree analysis can be an excellent method for conducting usability-related
task analysis, by diagrammatically representing human actions and other associated
events. Diagrammatic task analysis is represented by the branches of the probability
tree. The tree’s branching limbs represent the outcome of each event (i.e., success or
failure), and each branch is assigned a probability of occurrence [17].
Some of the benefits of probability tree analysis are that it acts as a visibility tool, simplifies mathematical computations, and offers flexibility for incorporating (i.e., with some modifications) factors such as interaction stress, interaction effects, and emotional stress.
The method is demonstrated by solving the example presented below.

Example 4.5

Assume that a person has to perform two independent and distinct tasks (x and y)
to operate or use an engineering system. Task x is performed before task y and each
of these tasks can be performed correctly or incorrectly. Furthermore, assume that
the probabilities of the person not performing tasks x and y correctly are 0.05 and
0.1, respectively.

FIGURE 4.5 A probability tree for performing tasks x and y.

Develop a probability tree and obtain an expression for the probability of not
successfully accomplishing the mission (i.e., not operating the engineering system
correctly). Also, calculate the probability of correctly operating/using the engi-
neering system by the person.
In this example, the person first performs task x correctly or incorrectly, and
then proceeds to perform task y. This whole scenario is depicted by a probability
tree shown in Fig. 4.5.
The symbols used in Fig. 4.5 are defined below:

• x denotes the event that task x is performed correctly.
• x̄ denotes the event that task x is performed incorrectly.
• y denotes the event that task y is performed correctly.
• ȳ denotes the event that task y is performed incorrectly.

In Fig. 4.5, the term xy denotes operating the engineering system successfully
(i.e., overall mission success). Thus, the occurrence probability of event xy is [4, 18]

P( xy ) = Px Py (4.17)

where
Px is the probability of performing task x correctly.
Py is the probability of performing task y correctly.

Similarly, in Fig. 4.5, the terms x̄y, xȳ, and x̄ȳ denote three distinct possibilities of not operating the engineering system correctly. Thus, the probability of not successfully accomplishing the overall mission is

Pns = P(x̄y + xȳ + x̄ȳ)
    = Px̄ Py + Px Pȳ + Px̄ Pȳ   (4.18)
where
Pns is the probability of not successfully accomplishing the overall mission (i.e., the probability of not operating the engineering system correctly).
Px̄ is the probability of performing task x incorrectly.
Pȳ is the probability of performing task y incorrectly.

For the given values of Px̄ = 0.05 and Pȳ = 0.1, because Py + Pȳ = 1 and Px + Px̄ = 1, and using Equation (4.17), we get

P(xy) = Px Py
      = (1 − Px̄)(1 − Pȳ)
      = (1 − 0.05)(1 − 0.1)
      = 0.855

Thus, the probability of the person correctly operating/using the engineering system is 0.855.
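The branch probabilities of Fig. 4.5 can be checked with a few lines of Python; since the four branches are exhaustive, mission success and the three failure branches must sum to one:

```python
# Given: probability of performing each task incorrectly (Example 4.5)
p_x_bar, p_y_bar = 0.05, 0.1          # P(x̄), P(ȳ)
p_x, p_y = 1 - p_x_bar, 1 - p_y_bar   # P(x), P(y)

p_success = p_x * p_y                 # Eq. (4.17): P(xy)
p_ns = (p_x_bar * p_y                 # x wrong, y right
        + p_x * p_y_bar               # x right, y wrong
        + p_x_bar * p_y_bar)          # both wrong; Eq. (4.18)

print(round(p_success, 3), round(p_ns, 3))  # 0.855 0.145
assert abs(p_success + p_ns - 1.0) < 1e-12  # branches are exhaustive
```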

4.8 CAUSE AND EFFECT DIAGRAM (CAED)


This method was developed by a Japanese professor, Kaoru Ishikawa, in the early
1950s for use in the area of quality control. The method can also be used to study
usability-related problems. It is to be noted that the cause and effect diagram (CAED) is also known as the Ishikawa diagram or fishbone diagram because of its resemblance to a fish's skeleton. The right side of the diagram (i.e., the "fish head") represents the effect, and its left side shows all the possible causes, which are connected to the central line, called the "fish spine".
The CAED's main objective is to act as a first step in problem solving by generating a comprehensive list of expected potential causes. In turn, this can result in the identification of main causes, and thus possible appropriate remedial measures. At a minimum, the CAED's application will result in a better comprehension of the problem under consideration.
A CAED can be developed by following the four steps presented below:

• Step 1: Establish a problem statement and brainstorm for highlighting all possible causes.
• Step 2: Establish categories of the main causes by stratifying them into
natural groupings and process steps.
• Step 3: Develop the diagram by connecting the causes under appropriate
process steps and fill in the problem or the effect in the diagram box (i.e.,
the fish head).
• Step 4: Refine the cause categories by asking questions such as follows:
• What caused this?
• What is the main reason for the existence of this condition?

Some of the advantages of the CAED method are as follows [4]:

• A quite effective tool for presenting an orderly arrangement of theories.


• A quite useful tool in guiding further inquiry.
• Very useful for generating new ideas.
• Quite useful for highlighting root causes.

Additional information on this method is available in Refs. [5, 6, 19].



FIGURE 4.6 Cause and effect diagram for Example 4.6.

Example 4.6

Assume that an item/product is being designed by an organisation for use in an engineering system, and an investigation clearly revealed that it may experience usability-related problems due to the following four main causes:

• Cause 1: Poor consideration of human factors during the design process.
• Cause 2: Inadequate design and development time.
• Cause 3: Poorly written operating instructions.
• Cause 4: Poor user training.

Construct a CAED.
The CAED for this example is shown in Fig. 4.6.
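Before being drawn, a CAED can be prototyped as a plain mapping from main-cause categories to sub-causes, with the effect at the fish head. A sketch for Example 4.6 follows (the category labels are my own groupings for Step 2, not from the text):

```python
# Effect (fish head) and main-cause categories (bones), per Steps 1-3
effect = "usability-related problems"
diagram = {
    "Design process": ["poor consideration of human factors"],
    "Schedule": ["inadequate design and development time"],
    "Documentation": ["poorly written operating instructions"],
    "Training": ["poor user training"],
}

# Step 4: refine each category by asking "What caused this?"
for category, causes in diagram.items():
    print(f"{category} -> {effect}: {'; '.join(causes)}")
```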

4.9 QUALITY CONTROL CHARTS: THE P-CHARTS


A control chart may simply be described as a graphical method utilised to determine whether a process is in a "state of statistical control" or out of control [20]. The history of control charts goes back to a memorandum written by Walter Shewhart on May 16, 1924, in which he presented the idea of a control chart [21]. The construction of control charts is based on statistical principles and distributions, and a chart is basically composed of three elements: the average or standard value of the characteristic under consideration, the lower control limit (LCL), and the upper control limit (UCL).
There are many types of quality control charts: the P-charts, the X̄-charts, the C-charts, the R-charts, etc. [22, 23]. The first one is described below.

4.9.1 The P-Charts
These charts are also called control charts for attributes, in which the data population is grouped under two classifications (e.g., good or bad, pass or fail); more clearly, parts/components without defects and parts/components with defects. Thus, attributes control charts make use of pass-fail information for charting, and a p-chart basically is a single chart that tracks the proportion of nonconforming items/parts in each sample taken from a representative population.

The UCL and LCL of p-charts are established by utilising the binomial distribution and thus are expressed by

UCL_p = µ_b + 3σ_b   (4.19)

LCL_p = µ_b − 3σ_b   (4.20)

where
UCL_p is the upper control limit of the p-chart.
LCL_p is the lower control limit of the p-chart.
µ_b is the mean of the binomial distribution.
σ_b is the standard deviation of the binomial distribution.

The distribution's mean, µ_b, is expressed by

µ_b = M/(mγ)   (4.21)

where
M is the total number of failures/defectives in classification.
m is the sample size.
γ is the number of samples.

Similarly, the distribution's standard deviation, σ_b, is expressed by

σ_b = [µ_b(1 − µ_b)/m]^{1/2}   (4.22)

Example 4.7

Assume that ten samples were taken from the production line of a company
manufacturing certain mechanical parts for use in a nuclear power plant. Each
sample contained 80 parts. The inspection process revealed that samples 1, 2,
3, 4, 5, 6, 7, 8, 9, and 10 contain 2, 4, 6, 8, 5, 9, 10, 3, 12, and 7 defective parts,
respectively.
Calculate the UCL and LCL of the p-chart and determine if the fractions of
defective parts of all these samples fall within the UCL and LCL of the p-charts.
By substituting the given data values into Equation (4.21), we obtain

µ_b = (2 + 4 + 6 + 8 + 5 + 9 + 10 + 3 + 12 + 7)/[(80)(10)]
    = 0.0825

By inserting the above calculated value and the other given data value into
Equation (4.22), we obtain

σ_b = [(0.0825)(1 − 0.0825)/(80)]^{1/2}
    = 0.0307

The fraction of defectives, p, in sample 1 is given by

p = 2/80 = 0.025
Similarly, the fractions of defective parts in samples 2, 3, 4, 5, 6, 7, 8, 9, and 10
are 0.05, 0.075, 0.1, 0.0625, 0.1125, 0.125, 0.0375, 0.15, and 0.0875, respectively.
By substituting the above calculated values for µ b and σ b into Equations (4.19)
and (4.20), we obtain

UCLp = 0.0825 + 3(0.0307) = 0.1746

LCLp = 0.0825 − 3(0.0307) = −0.0096 = 0

As all the above sample fractions are within the UCL and LCL, it means that there
is no abnormality in the ongoing production process.
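The p-chart computations of Example 4.7 can also be expressed as a short program. The following sketch (function and variable names are illustrative, not from any standard library) implements Equations (4.19) through (4.22) and checks each sample fraction against the limits:

```python
import math

def p_chart_limits(defects, sample_size):
    """Compute the p-chart mean, standard deviation, UCL, and LCL
    (Equations (4.19)-(4.22))."""
    n_samples = len(defects)
    mu = sum(defects) / (sample_size * n_samples)   # Equation (4.21)
    sigma = math.sqrt(mu * (1 - mu) / sample_size)  # Equation (4.22)
    ucl = mu + 3 * sigma                            # Equation (4.19)
    lcl = max(0.0, mu - 3 * sigma)                  # Equation (4.20); a negative limit is set to 0
    return mu, sigma, ucl, lcl

# Data of Example 4.7: ten samples of 80 parts each
defects = [2, 4, 6, 8, 5, 9, 10, 3, 12, 7]
mu, sigma, ucl, lcl = p_chart_limits(defects, 80)
fractions = [d / 80 for d in defects]
in_control = all(lcl <= p <= ucl for p in fractions)  # True: no abnormality indicated
```

Running the sketch reproduces the values obtained above: µb = 0.0825, σb ≈ 0.0307, and all ten sample fractions inside the control limits.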

4.10 PROBLEMS
1. Describe FMEA.
2. What are the main benefits of conducting FMEA?
3. What are the main objectives, benefits, and drawbacks of performing FTA?
4. What are the four basic symbols used for constructing fault trees? Describe
each of these symbols.
5. Assume that a windowless room contains two light bulbs and one switch.
Develop a fault tree for the undesired fault event (i.e., top fault event) “Dark
room”, if the switch can only fail to close.
6. Prove Equations (4.9) and (4.10) by using Equations (4.7) and (4.8).
7. What are the benefits and drawbacks of cognitive walkthroughs and task
analysis?
8. Compare probability tree analysis with FTA.
9. Describe CAED and its advantages.
10. Assume that six samples were taken from the production line of a com-
pany manufacturing certain mechanical parts for use in a nuclear power
plant. Each sample contained 50 parts. The inspection process revealed
that samples 1, 2, 3, 4, 5, and 6 contain 4, 6, 2, 5, 1, and 3 defective parts,
respectively.

Calculate the UCL and LCL of the p-chart and determine if the fractions of defective
parts of all these samples fall within the UCL and LCL of the p-chart.

REFERENCES
1. Dhillon, B.S., Design Reliability: Fundamentals and Applications, CRC Press, Boca
Raton, Florida, 1999.
2. Dhillon, B.S., Singh, C., Engineering Reliability: New Techniques, John Wiley and
Sons, New York, 1981.
3. Jordon, P.W., An Introduction to Usability, Taylor and Francis Ltd, London, 1998.

4. Dhillon, B.S., Engineering Usability: Fundamentals, Applications, Human Factors, and
Human Error, American Scientific Publishers, Stevenson Ranch, California, 2004.
5. Mears, P., Quality Improvement Tools and Techniques, McGraw-Hill, New York, 1995.
6. Kanji, G.K., Asher, M., 100 Methods for Total Quality Management, Sage Publications
Ltd, London, 1996.
7. General Specification for Design, Installation, and Test of Aircraft Flight Control
Systems, MIL-F-18372 (Aer), Bureau of Naval Weapons, Department of the Navy,
Washington, DC.
8. Dhillon, B.S., Transportation Systems Reliability and Safety, CRC Press, Boca Raton,
Florida, 2011.
9. McDermott, R.E., Mikulak, R.J., Beauregard, M.R., The Basics of FMEA, Quality
Resources, New York, 1996.
10. Palady, P., Failure Modes and Effects Analysis, PT Publications, West Palm Beach,
Florida, 1995.
11. Shooman, M.L., Probabilistic Reliability: An Engineering Approach, McGraw-Hill,
New York, 1968.
12. Wharton, C., Bradford, J., Jeffries, R., Franzke, M., Applying Cognitive Walkthroughs
to More Complex User Interfaces: Experiences, Issues, and Recommendations,
Proceedings of the ACM Conference on Human Factors in Computing Systems, 1992,
pp. 381–388.
13. Karat, C.M., Campbell, R., Fiegel, T., Comparison of Empirical Testing and Walkthrough
Methods in User Interface Evaluation, Proceedings of the ACM Conference on Human
Factors in Computing Systems, 1992, pp. 397–404.
14. Wharton, C., Rieman, J., Lewis, C., Polson, P., The Cognitive Walkthrough: A Practitioner’s
Guide, in Usability Inspection Methods, edited by Nielsen, J., Mack, R.L., John Wiley and
Sons, New York, 1994, pp. 80–100.
15. Kirwan, B., Ainsworth, L.K., eds., A Guide to Task Analysis, Taylor and Francis Ltd,
London, 1992.
16. Drury, C.G., Task Analysis Methods in Industry, Applied Ergonomics, Vol. 14, No. 1,
1983, pp. 19–28.
17. Swain, A.D., A Method for Performing a Human Factors Reliability Analysis, Report
No. SCR-685, Sandia Corporation, Albuquerque, NM, August 1963.
18. Dhillon, B.S., Human Reliability: with Human Factors, Pergamon Press, New York,
1986.
19. Ishikawa, K., Guide to Quality Control, Asian Productivity Organization, Tokyo, 1982.
20. Rosander, A.C., Applications of Quality Control in the Service Industries, Marcel
Dekker, New York, 1985.
21. Juran, J.M., Early SQC: A Historical Supplement, Quality Progress, Vol. 30, No. 9,
1997, pp. 73–81.
22. Ryan, T.P., Statistical Methods for Quality Improvements, John Wiley and Sons,
New York, 2000.
23. Besterfield, D.H., Quality Control, Prentice Hall, Upper Saddle River, New Jersey,
2001.
5 Medical Equipment Reliability
5.1 INTRODUCTION
The history of medical devices’ earliest use may be traced back to ancient Etruscans
and Egyptians using various types of dental devices [1]. Today, medical devices and
equipment are widely used around the globe. In fact, in 1988, the world medical equip-
ment production was estimated to be about $36 billion [1] and in 1997, the world mar-
ket for medical devices was valued at approximately $120 billion [2].
In modern times, the beginning of the medical equipment/device reliability field
may be traced back to the latter part of the 1960s, when a number of publications
on the topic appeared in journals and conference proceedings [3–7]. These publi-
cations covered topics such as “Reliability of ECG Instrumentation”, “Safety and
Reliability in Medical Electronics”, and “Some Instrument Induced Errors in the
Electrocardiogram” [3–7]. In 1980, an article presented a comprehensive list of,
directly or indirectly, medical equipment reliability-associated publications [8] and in
2000, a book entitled “Medical Device Reliability and Associated Areas” provided a
comprehensive list of publications on the topic [9].
This chapter presents various important aspects of medical equipment reliability.

5.2 MEDICAL EQUIPMENT RELIABILITY-ASSOCIATED FACTS AND FIGURES
There are many, directly or indirectly, medical equipment reliability-associated facts
and figures. Some of these are as follows:

• In 1990, a study conducted by the U.S. Food and Drug Administration
(FDA) revealed that about 44% of the quality-related problems that resulted
in voluntary recall of medical devices for the period October 1983 to
September 1989 were the result of deficiencies/errors that could have been
prevented through effective design controls [10].
• In 1969, the special committee of the United States Department of Health,
Education, and Welfare reported that over a 10 year period, about 10,000
injuries were associated with medical equipment/devices and 731 resulted
in fatalities [11, 12].
• A study reported that about 100,000 Americans die each year due to human
errors, and their financial impact on the United States economy was esti-
mated to be between $17 billion and $29 billion [13].
• Due to faulty medical instrumentation about 1,200 fatalities per year occur
in the United States [14, 15].

DOI: 10.1201/9781003298571-5

• The Emergency Care Research Institute (ECRI) tested samples of 15,000
products used in hospitals and found that about 4%–6% of these products
were sufficiently dangerous to warrant immediate correction [16].
• A study reported that over 50% of all technical medical equipment-related
problems were due to operator errors [16].
• As per Ref. [17], in 1997, there were a total of 10,420 registered medical
device manufacturers in the United States.

5.3 MEDICAL DEVICES AND MEDICAL EQUIPMENT/DEVICES CLASSIFICATIONS
Today, there are over 5,000 different types of medical devices being utilised in a modern
hospital and they range from a simple tongue depressor to a complex pacemaker [1, 9].
Thus, the criticality of their reliability varies from one device to another. Nonetheless,
past experience over the years indicates that medical device failures have been very
costly in terms of fatalities, injuries, and dollars. Needless to say, modern medical
equipment and devices have become very complex and sophisticated and are expected
to operate effectively under stringent environments.
Electronic equipment/devices being used in the health care system may be cat-
egorised under the following three classifications [6]:

• Classification I: This classification includes those devices/equipment that
are not critical to a patient’s welfare or life but serve as convenience devices/
equipment. Three examples of such devices/equipment are as follows:
• Bedside television sets
• Electric beds
• Wheel chairs
• Classification II: This classification includes those medical devices/
equipment that are used for routine or semi-emergency therapeutic or diag-
nostic purposes. Failure of such devices/equipment is not as critical as that of
those falling under Classification III, because there is time for repair. Six examples of
such devices/equipment are as follows:
• Gas analysers
• Ultrasound equipment
• Spectrophotometers
• Colorimeters
• Diathermy equipment
• Electrocardiograph and electroencephalograph recorders and monitors
• Classification III: This classification includes those medical devices/
equipment that are directly and immediately responsible for a patient’s life or
may become so under emergency conditions. When such devices/equipment
fail, there is seldom sufficient time for repair action. Thus, such devices/
equipment must always operate successfully at the moment of need.
Four examples of such devices/equipment are as follows:
• Cardiac pacemakers
• Respirators

• Electrocardiographic monitors
• Cardiac defibrillators

Finally, it is to be noted that there could be some overlap between the above three
classifications of devices/equipment, particularly between classifications II and III. An
electrocardiograph monitor or recorder is a typical example of such equipment/devices.

5.4 MEDICAL EQUIPMENT RELIABILITY IMPROVEMENT METHODS AND PROCEDURES
There are many methods and procedures used to improve medical equipment reli-
ability. Five of these methods are presented below.

5.4.1 Failure Modes and Effect Analysis (FMEA)


This method is widely used to evaluate design at the early stage from the reliability
aspect. The method is extremely useful for highlighting the need for and effects of design
change. The method requires the listing of all possible failure modes of each part/
component on paper and their effects on the listed subsystems, etc. The method is
referred to as failure modes, effects, and criticality analysis (FMECA) when criti-
calities or priorities are assigned to failure mode effects.
Some of the important characteristics of failure modes and effect analysis (FMEA)
are as follows [18]:

• It is an effective tool to highlight weak spots in system design and indicate
areas where further or detailed analysis is desirable.
• It is a bottom-up approach that starts at the detailed level.
• By examining failure effects of all parts/components, the entire system is
screened completely.

Additional information on FMEA is available in Chapter 4 and in Refs. [18–20].

5.4.2 Parts Count Method


This method is used to predict system/equipment failure during the bid proposal and
early design stages [21]. The method requires information on the following three areas:

• Area I: Equipment use environment.


• Area II: Generic part types and their quantities.
• Area III: Part quality levels.

The method calculates the equipment or system failure rate under the single-use
environment by using the following equation [21]:
   λe = Σ (i = 1 to m) αi (λgc Qgc)i   (5.1)

where
λe is the equipment/system failure rate expressed in failures/10^6 hours.
m is the number of different generic component/part classifications.
λgc is the generic component/part failure rate expressed in failures/10^6 hours.
αi is the generic component/part quantity for classification i.
Qgc is the generic component/part quality factor.

The values of Qgc and λgc are tabulated in Ref. [21], and additional information on the
method is available in Refs. [21, 22].
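As an illustration, Equation (5.1) reduces to a one-line summation in code. The part data below are hypothetical placeholders, not values taken from the tabulations in Ref. [21]:

```python
def parts_count_failure_rate(parts):
    """Equation (5.1): lambda_e = sum over i of alpha_i * (lambda_gc * Q_gc)_i,
    expressed in failures per 10^6 hours."""
    return sum(alpha * lam_gc * q_gc for alpha, lam_gc, q_gc in parts)

# Hypothetical generic part data: (quantity, generic failure rate, quality factor)
parts = [
    (10, 0.1, 1.0),  # e.g., resistors
    (4, 0.5, 2.0),   # e.g., connectors
    (2, 1.2, 1.5),   # e.g., integrated circuits
]
lambda_e = parts_count_failure_rate(parts)  # 8.6 failures/10^6 hours
```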

5.4.3 Fault Tree Analysis


Fault tree analysis (FTA) starts by highlighting an undesirable event, called the top
event, associated with a system under consideration [23]. Fault events which could
cause the occurrence of the top event are generated and connected by logic operators
such as OR and AND. The OR gate provides a TRUE (failure) output when one
or more of its inputs are TRUE (failures). In contrast, the AND gate provides a TRUE
(failure) output when all its inputs are TRUE (failures). All in all, the fault tree con-
struction proceeds by generation of fault events in a successive manner until the fault
events need not be developed any further.
Additional information on this method is available in Chapter 4 and in Refs. [23, 24].
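The OR and AND gate logic described above can be sketched numerically. Assuming independent basic events with hypothetical occurrence probabilities, the gate output probabilities are computed as follows:

```python
def or_gate(probs):
    """OR gate: the output fault occurs if at least one input fault occurs
    (independent input events)."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(probs):
    """AND gate: the output fault occurs only if all input faults occur."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

# Hypothetical basic-event probabilities
p_or = or_gate([0.02, 0.05])    # 1 - (0.98)(0.95) = 0.069
p_and = and_gate([0.02, 0.05])  # (0.02)(0.05) = 0.001
```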

5.4.4 Markov Method


This is a very general approach that can handle more cases than any other
technique or method. It can be employed in situations where the
components/parts are independent as well as for equipment/systems involving
dependent failure and repair modes.
The method proceeds by the enumeration of system states. The state probabilities
are then calculated, and the steady-state reliability-related measures can be com-
puted by using the frequency balancing method [25]. Additional information on this
method is available in Chapter 4 and in Refs. [20, 26].
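For the simplest case, a single repairable unit with constant failure rate λ and constant repair rate µ, the two-state Markov model yields the well-known steady-state availability A = µ/(λ + µ). A minimal sketch with hypothetical rates:

```python
def steady_state_availability(failure_rate, repair_rate):
    """Steady-state availability of a single repairable unit obtained from
    the two-state Markov model: A = mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

# Hypothetical rates: 0.001 failures/hour, 0.1 repairs/hour
availability = steady_state_availability(0.001, 0.1)  # approximately 0.9901
```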

5.4.5 General Approach
This is a 13-step approach and it was developed by Bio-Optronics to produce reliable
and safe medical devices [27]. The approach steps are as follows [27]:

• Step I: Perform analysis of existing medical problems.


• Step II: Develop a product concept to determine a solution to a specific
medical-related problem.
• Step III: Evaluate all possible environments under which the medical device
is operating.
• Step IV: Evaluate all possible individuals expected to operate the device/
product under consideration.
• Step V: Construct a prototype.
• Step VI: Test the prototype under laboratory environment.

• Step VII: Test the prototype under the field use environment.
• Step VIII: Make changes to the device/product design for satisfying field
requirements.
• Step IX: Conduct laboratory and field test on the modified version of the
device/product.
• Step X: Build pilot units to conduct necessary tests.
• Step XI: Ask impartial experts to test pilot units under the field use
environments.
• Step XII: Release the device/product design for production.
• Step XIII: Study the device/product field performance and support with
appropriate device/product maintenance.

5.5 HUMAN ERROR IN MEDICAL EQUIPMENT


Human errors are universal and are committed each day around the globe. Past expe-
riences over the years indicate that although most are trivial, some can be quite seri-
ous or fatal. In the health care area, one study revealed that in a typical year about
100,000 Americans die due to human errors [13]. Nonetheless, some of the medical
device/equipment-related, directly or indirectly, human error facts and figures are as
follows:

• Human error, directly or indirectly, is responsible for up to 90% of accidents
both generally and in medical devices [28, 29].
• The Center for Devices and Radiological Health (CDRH) of the FDA
reported that human errors account for about 60% of all medical device-
related deaths or injuries in the United States [30].
• A patient was seriously injured by over-infusion because the attending
nurse incorrectly read the number 7 as 1 [31].
• Over 50% of all technical-related medical equipment problems are due to
operator errors [16].
• A fatal radiation overdose accident involving the Therac radiation therapy
device was due to a human error [32].

5.5.1 Important Medical Device/Equipment Operator Errors


There are many types of operator-associated errors that occur during medical device/
equipment operation or maintenance. Some of these are as follows [33]:

• Wrong selection of devices with respect to the clinical objectives and
requirements.
• Departure from following stated instructions and procedures.
• Wrong interpretation of or failure to recognise critical device outputs.
• Incorrect decision-making and actions in critical moments.
• Over-reliance on automatic features of equipment/devices.
• Mistakes in setting device parameters.
• Inadvertent or untimely controls’ activation.
• Misassembly.

5.5.2 Medical Devices with High Incidence of Human Error


Over the years, many studies have been performed to highlight medical devices with
a high occurrence of human error. Consequently, the twenty most error-prone medical
devices were identified. These twenty devices, in order from most error prone to
least error prone, are as follows [34]:

1. Glucose meter
2. Balloon catheter
3. Orthodontic bracket aligner
4. Administration kit for peritoneal dialysis
5. Permanent pacemaker electrode
6. Implantable spinal cord stimulator
7. Intra-vascular catheter
8. Infusion pump
9. Urological catheter
10. Electrosurgical cutting and coagulation device
11. Non-powered suction apparatus
12. Mechanical/hydraulic impotence device
13. Implantable pacemaker
14. Peritoneal dialysate delivery system
15. Catheter introducer
16. Catheter guide wire
17. Trans-luminal coronary angioplasty catheter
18. External low-energy defibrillator
19. Continuous ventilator (respirator)
20. Contact lens cleaning and disinfecting solutions

5.6 USEFUL GUIDELINES FOR RELIABILITY AND HEALTHCARE
PROFESSIONALS FOR IMPROVING MEDICAL EQUIPMENT RELIABILITY
There are a large number of professionals involved in the design, manufacture,
and use of various types of medical devices/equipment. Reliability analysts and
engineers are among them. Nonetheless, some of the useful guidelines for reli-
ability and other professionals for improving medical equipment reliability are as
follows [20, 35]:

• Reliability professionals
• Keep in mind that manufacturers are fully responsible for reliability
during the device/equipment design and manufacturing phase, and dur-
ing its operational phase it is basically users’ responsibility.
• Use methods such as qualitative FTA, FMEA, parts review, and design
review for obtaining immediate results.
• Focus on critical failures as not all equipment/device failures are
equally important.

• Focus on cost effectiveness and always keep in mind that some reliabil-
ity-associated improvement decisions need very small or no additional
expenditure.
• Always aim to use simple and straightforward reliability methods as
much as possible instead of some highly sophisticated approaches used
in the aerospace industry.
• Other professionals
• Compare human body and medical device/equipment failures. Both
of them require appropriate measures from reliability professionals
and doctors for enhancing device/equipment reliability and extending
human life, respectively.
• Recognise that failures are the cause of poor medical device/equipment
reliability, and positive thinking and measures can be very useful for
improving medical device/equipment reliability.
• Keep in mind that the application of reliability principles has success-
fully improved the reliability of systems/equipment used in the aero-
space area, and their applications to medical devices/equipment can
generate similar dividends.
• Remember that the cost of failures is probably the largest single expense
in a business organisation. These failures could be associated with busi-
ness systems, equipment, humans, etc., and a reduction in such failures
can decrease the business cost quite significantly.
• For the total success with respect to equipment/device reliability,
both users and manufacturers must accept their share of related
responsibilities.

5.7 MEDICAL EQUIPMENT MAINTAINABILITY AND MAINTENANCE
Medical equipment maintainability is the probability that a failed piece of medi-
cal equipment will be restored to its acceptable operating state within a given
period of time. Similarly, medical
equipment maintenance may simply be described as all actions necessary for retain-
ing medical equipment in, or restoring to, a stated condition. Both these items (i.e.,
medical equipment maintainability and maintenance) are described below, sepa-
rately [36, 37].

5.7.1 Medical Equipment Maintainability


Past experience over the years indicates that the application of maintainability-related
principles during the design of engineering equipment has helped to produce effectively
maintainable end products. Their proper application in the design of medical equip-
ment can also be quite helpful for producing effectively maintainable end medical
items. This section presents three aspects of maintainability considered quite helpful
to produce effectively maintainable medical equipment.

5.7.1.1 Aspect I: Reasons for the Application of Maintainability Principles


There are many reasons for applying maintainability principles and some of the
main ones are as follows [38]:

• To determine the amount of downtime due to maintenance.


• To determine the number of labour hours and related resources required for
carrying out the projected maintenance.
• To reduce projected maintenance cost through design modifications.
• To lower projected maintenance time.

5.7.1.2 Aspect II: Maintainability Design Factors


There are many maintainability design factors and some of the most frequently
addressed factors are as follows [39]:

• Accessibility
• Labelling and coding
• Handles
• Connectors
• Manuals, checklists, charts, and aids
• Test points
• Mounting and fasteners
• Cases, covers, and doors
• Test equipment
• Controls
• Displays

Additional information on these factors is available in Refs. [9, 30].

5.7.1.3 Aspect III: Maintainability Measures


There are various types of maintainability measures used in performing maintainabil-
ity analysis of engineering systems/equipment. Two of these measures are presented
below [38–40].

• Measure I: Mean Time to Repair


It is defined by
   MTTR = [Σ (i = 1 to k) Tri λi] / [Σ (i = 1 to k) λi]   (5.2)

where
MTTR is the mean time to repair.
k is the number of units.
Tri is the repair time required to repair unit i; for i = 1, 2, 3, …, k.
λi is the constant failure rate of unit i; for i = 1, 2, 3, …, k.

• Measure II: Maintainability Function


This measure is used for predicting the probability that the repair will be
completed in a time t, when it starts on an equipment/item at time t = 0.
Thus, the maintainability function, M(t), is expressed as follows:
   M(t) = ∫ (from 0 to t) f(t) dt   (5.3)
where
t is time.
f(t) is the probability density function of the repair time.

Equation (5.3) is used for obtaining maintainability functions for various
probability distributions (e.g., Weibull, normal, and exponential) represent-
ing failed equipment/item repair times. Maintainability functions for such
distributions are available in Refs. [39–41].

Example 5.1

Assume that the repair times of a medical equipment/device are exponentially
distributed with a mean value (i.e., MTTR) of 3 hours. Calculate the probability
that a repair will be completed in 9 hours.
Thus, in this case, the probability density function of repair times is expressed by

   f(t) = (1/MTTR) exp(−t/MTTR)
        = (1/3) exp(−t/3)   (5.4)

By substituting Equation (5.4) and the specified data value into Equation (5.3) we
obtain
   M(9) = ∫ (from 0 to 9) (1/3) exp(−t/3) dt
        = 1 − exp(−9/3)
        = 0.9502

Thus, the probability of completing a repair within 9 hours is 0.9502.
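Both measures can be checked numerically. The sketch below reproduces Example 5.1 for the exponential maintainability function and applies Equation (5.2) to a hypothetical pair of units (the unit repair times and failure rates are invented for illustration):

```python
import math

def mttr(repair_times, failure_rates):
    """Equation (5.2): failure-rate-weighted mean time to repair."""
    weighted = sum(t * lam for t, lam in zip(repair_times, failure_rates))
    return weighted / sum(failure_rates)

def maintainability_exp(t, mean_repair_time):
    """Equation (5.3) evaluated for exponentially distributed repair times:
    M(t) = 1 - exp(-t / MTTR)."""
    return 1.0 - math.exp(-t / mean_repair_time)

# Example 5.1: MTTR = 3 hours, repair window of 9 hours
m9 = maintainability_exp(9, 3)  # 1 - exp(-3) = 0.9502

# Hypothetical two-unit system for Equation (5.2)
system_mttr = mttr([2.0, 4.0], [0.002, 0.001])  # (2(0.002) + 4(0.001)) / 0.003
```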

5.7.2 Medical Equipment Maintenance


For the purpose of maintenance and repair, medical equipment may be categorised
under the following six categories [42]:

• Category I: Life support and therapeutic equipment. Some examples of
such equipment are ventilators, lasers, and anaesthesia machines.

• Category II: Patient diagnostic equipment. Some examples of such equip-
ment are endoscopes, physiologic monitors, and spirometers.
• Category III: Laboratory apparatus. Some examples of such equipment are
lab refrigeration equipment, centrifuges, and lab analysers.
• Category IV: Imaging and radiation therapy equipment. Some examples
of such equipment are X-ray machines, ultrasound devices, and linear
accelerators.
• Category V: Patient environmental and transport equipment. Some exam-
ples of such equipment are wheelchairs, patient beds, and patient-room
furniture.
• Category VI: Miscellaneous equipment. This category contains all other
items that are not included in the previous five categories, for example
sterilisers.

5.7.2.1 Indices
Just like in the case of the general maintenance activity, there are many indices that
can be used for measuring the effectiveness of the medical equipment maintenance-
associated activity.
Three of these indices are presented below [42].

• Index I
This index measures how much time elapses from a customer request until
the failed medical equipment is repaired and put back in service. The index
is defined by

   βat = Tt/n   (5.5)
where
β at is the average turnaround time per repair.
Tt is the total turnaround time.
n is the total number of work orders or repairs.

As per one study, the turnaround time per medical equipment repair ranged
from 35.4 hours to 135 hours [9].
• Index II
This index is a cost ratio and is defined by

   βcr = Cms/Cma   (5.6)
where
βcr is the cost ratio.
Cma is the medical equipment acquisition cost.
Cms is the medical equipment service cost. It includes all parts, materi-
als, and labour costs for scheduled and unscheduled service, includ-
ing in-house, vendor, prepaid contracts, and maintenance insurance.

For various categories of medical equipment, a range of values for this
index is given in Ref. [9].
• Index III
This index measures how frequently the customer has to request for service
per medical equipment and is expressed by

   βc = Rr/k   (5.7)

where
βc is the number of repair requests completed per medical equipment.
Rr is the total number of repair requests.
k is the number of pieces of medical equipment.
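The three indices translate directly into code. The maintenance records below are hypothetical and serve only to exercise Equations (5.5) through (5.7):

```python
def average_turnaround_time(total_turnaround, n_repairs):
    """Index I, Equation (5.5): average turnaround time per repair."""
    return total_turnaround / n_repairs

def cost_ratio(service_cost, acquisition_cost):
    """Index II, Equation (5.6): service cost over acquisition cost."""
    return service_cost / acquisition_cost

def requests_per_equipment(total_requests, n_equipment):
    """Index III, Equation (5.7): completed repair requests per piece of equipment."""
    return total_requests / n_equipment

# Hypothetical maintenance records
beta_at = average_turnaround_time(480.0, 12)  # 40 hours per repair
beta_cr = cost_ratio(9000.0, 120000.0)        # 0.075
beta_c = requests_per_equipment(45, 15)       # 3.0 requests per equipment
```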

5.7.2.2 Mathematical Models


Over the years, a large number of mathematical models have been developed to
perform engineering equipment maintenance analysis. Some of these models can
equally be used to perform medical equipment maintenance analysis. One of these
models is presented below.

5.7.2.2.1 Model I
This model can be used for determining the optimum time interval between item
replacements. The model is based on the assumption that the equipment/item aver-
age annual cost is made up of average investment, operating, and maintenance costs.
Thus, the average annual total cost of a piece of equipment is expressed by

   Cat = C0f + Cmf + Cinv/tei + [(tei − 1)/2][j0 + im]   (5.8)

where
Cat is the average annual total cost of a piece of equipment.
tei is the equipment/item life expressed in years.
C0f is the equipment/item operational cost for the first year.
Cmf is the equipment/item maintenance cost for the first year.
Cinv is the investment cost.
j0 is the amount by which the operational cost increases annually.
im is the amount by which the maintenance cost increases annually.

By differentiating Equation (5.8) with respect to tei and then equating it to zero, we
obtain

   tei* = [2Cinv/(j0 + im)]^(1/2)   (5.9)
where
tei* is the optimum time between equipment/item replacements.

Example 5.2

Assume that for a medical equipment, we have the following data values:

j0 = $300

im = $200

Cinv = $600,000

Determine the optimum replacement period for the medical equipment under
consideration.
By inserting the above specified data values into Equation (5.9), we get

   tei* = [2(600,000)/(300 + 200)]^(1/2)
        = 48.99 years

Thus, the optimum replacement period for the medical equipment under consid-
eration is approximately 48.99 years.
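Equation (5.9) is easy to verify in code. Applied to the data of Example 5.2, it evaluates [2(600,000)/500]^(1/2):

```python
import math

def optimum_replacement_interval(c_inv, j0, im):
    """Equation (5.9): t* = [2 * C_inv / (j0 + im)]^(1/2)."""
    return math.sqrt(2.0 * c_inv / (j0 + im))

# Data of Example 5.2
t_star = optimum_replacement_interval(600000, 300, 200)  # sqrt(2400), about 48.99 years
```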

5.8 SOURCES FOR OBTAINING MEDICAL EQUIPMENT RELIABILITY-ASSOCIATED DATA
There are many organisations in the United States from which failure data directly
or indirectly concerned with medical equipment can be obtained. Five of these organ-
isations are as follows:

• Center for Devices and Radiological Health (CDRH), FDA, 1390 Piccard
Drive, Rockville, MD 20850, USA.
• Emergency Care Research Institute (ECRI), 5200 Butler Parkway, Plymouth
Meeting, PA 19462, USA.
• Parts Reliability Information Center (PRINCE) Reliability Office,
George C. Marshall Space Flight Center, National Aeronautics and Space
Administration (NASA), Huntsville, AL 35812, USA.
• Reliability Analysis Center (RAC), Rome Air Development Center (RADC),
Griffiss Air Force Base, Department of Defense, Rome, NY 13441, USA.
• National Technical Information Center, 5285 Port Royal Road, Springfield,
VA 22161, USA.

Some of the data banks and documents considered quite useful to obtain failure data
concerning medical equipment are as follows:

• Universal Medical Device Registration and Regulatory Management
System (UMDRMS). This system was developed by Emergency Care
Research Institute (ECRI), 5200 Butler Parkway, Plymouth Meeting, PA
19462, USA.

• Medical Device Reporting System (MDRA). This system was developed by the
Center for Devices and Radiological Health (CDRH), FDA, 1390 Piccard
Drive, Rockville, MD 20850, USA.
• Hospital Equipment Control System (HECS). This system was developed in
1985 by Emergency Care Research Institute (ECRI), 5200 Butler Parkway,
Plymouth Meeting, PA 19462, USA.
• MIL-HDBK-217. Reliability Prediction of Electronic Equipment, Department
of Defense, Washington, D.C., USA.
• NUREG/CR-1278. Handbook of Human Reliability Analysis with Emphasis
on Nuclear Power Plant Applications, U.S. Nuclear Regulatory Commission,
Washington, D.C., USA.

5.9 PROBLEMS
1. List at least four facts and figures concerned, directly or indirectly, with
medical devices/equipment reliability.
2. What are the main classifications of electronic equipment/devices used in
the health care system? Discuss at least two of these classifications in detail.
3. Discuss the steps of the approach developed by Bio-Optronics for producing
safe and reliable medical devices.
4. Describe the parts count method.
5. List at least four facts and figures concerned, directly or indirectly, with
human error in medical devices/equipment.
6. List at least ten medical devices with a high incidence of human error.
7. Define the following two terms:
• Medical equipment maintenance
• Medical equipment maintainability
8. Assume that the repair times of a medical equipment/device are exponen-
tially distributed with a mean value (i.e., MTTR) of 2 hours. Calculate the
probability that a repair will be completed in 5 hours.
9. Define at least two indices that can be used for measuring the effectiveness
of the medical equipment maintenance-associated activity.
10. List at least six good sources to obtain medical equipment/device reliability-
related data.

REFERENCES
1. Hutt, P.B., A History of Government Regulation of Adulteration and Misbranding
of Medical Devices, in The Medical Device Industry, edited by Estrin, N.F., Marcel
Dekker, Inc, New York, 1990, pp. 17–33.
2. Murray, K., Canada’s Medical Device Industry Faces Cost Pressures, Regulatory
Reform, Medical Device and Diagnostic Industry Magazine, Vol. 19, No. 8, 1997,
pp. 30–39.
3. Meyer, J.L., Some Instrument Induced Errors in the Electrocardiogram, The Journal of
the American Medical Association, Vol. 201, 1967, pp. 351–358.
4. Johnson, J.P., Reliability of ECG Instrumentation in a Hospital, Proceedings of the
Annual Symposium on Reliability, 1967, pp. 314–318.

5. Gechman, R., Tiny Flaws in Medical Design Can Kill, Hospital Topics, Vol. 46, 1968,
pp. 23–24.
6. Crump, J.E., Safety and Reliability in Medical Electronics, Proceedings of the Annual
Symposium on Reliability, 1969, pp. 320–327.
7. Taylor, E.F., The Effect of Medical Test Instrument Reliability on Patient Risks,
Proceedings of the Annual Symposium on Reliability, 1969, pp. 328–330.
8. Dhillon, B.S., Bibliography of Literature on Medical Reliability, Microelectronics and
Reliability, Vol. 20, 1980, pp. 737–742.
9. Dhillon, B.S., Medical Device Reliability and Associated Areas, CRC Press, Boca
Raton, Florida, 2000.
10. Schwartz, A.P., A Call for Real Added Value, Medical Industry Executive, February/
March 1994, pp. 5–9.
11. Medical Devices, Hearing Before the Subcommittee on Public Health and Environment,
U.S. Congress House Interstate and Foreign Commerce, Serial No. 93-61, U.S. Govern­
ment Printing Office, Washington, D.C., 1973.
12. Banta, H.D., The Regulation of Medical Devices, Preventive Medicine, Vol. 19, 1990,
pp. 693–699.
13. Kohn, L.T., Corrigan, J.M., Donaldson, M.S., Editors, To Err Is Human: Building a Safer
Health System, Institute of Medicine Report, National Academy Press, Washington,
D.C., 1999.
14. Micco, L.A., Motivation for the Biomedical Instrument Manufacturers, Proceedings of
the Annual Reliability and Maintainability Symposium, 1972, pp. 242–244.
15. Walter, C.W., Instrumentation Failure Fatalities, Electronic News, January 27, 1969.
16. Dhillon, B.S., Reliability Technology in Health Care Systems, Proceedings of the
IASTED International Symposium on Computers Advanced Technology in Medicine,
Health Care, and Bioengineering, 1990, pp. 84–87.
17. Allen, D., California Home to Almost One-Fifth of U.S. Medical Device Industry,
Medical Device and Diagnostic Industry Magazine, Vol. 19, No. 10, 1997, pp. 64–67.
18. Palady, P., Failure Modes and Effects Analysis, PT Publications, West Palm Beach,
Florida, 1995.
19. MIL-STD-1629, Procedures for Performing a Failure Mode, Effects and Criticality
Analysis, Department of Defense, Washington, D.C.
20. Dhillon, B.S., Design Reliability: Fundamentals and Applications, CRC Press, Boca
Raton, Florida, 1999.
21. MIL-HDBK-217, Reliability Prediction of Electronic Equipment, Department of
Defense, Washington, D.C.
22. RDH-376, Reliability Design Handbook, Reliability Analysis Center, Rome Air
Development Center, Griffiss Air Force Base, Rome, New York, 1976.
23. Dhillon, B.S., Singh, C., Engineering Reliability: New Techniques, John Wiley and
Sons, New York, 1981.
24. Fault Tree Handbook, Report No. NUREG-0492, U.S. Nuclear Regulatory Commission,
Washington, D.C.
25. Singh, C., Reliability Calculations on Large Systems, Proceedings of the Annual
Reliability and Maintainability Symposium, 1975, pp. 188–193.
26. Shooman, M.L., Probabilistic Reliability: An Engineering Approach, McGraw Hill
Book Company, New York, 1968.
27. Rose, H.B., A Small Instrument Manufacturer’s Experience with Medical Equipment
Reliability, Proceedings of the Annual Reliability and Maintainability Symposium,
1972, pp. 251–154.
28. Bogner, M.S., Medical Devices and Human Error, in Human Performance in Automated
Systems: Current Research and Trends, edited by Mouloua, M., Parasuraman, R.,
Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1994, pp. 64–67.
Medical Equipment Reliability 81

29. Novel, J.L., Medical Device Failures and Adverse Effects, Pediatric Emergency Care,
Vol. 17, 1991, pp. 120–123.
30. Bogner, M.S., Medical Devices: A New Frontier for Human Factors, CSERIAC
Gateway, Vol. 4, No. 1, 1993, pp. 12–14.
31. Sawyer, D., Do It by Design: Introduction to Human Factors in Medical Devices,
Center for Devices and Radiological Health (CDRH), Food and Drug Administration,
Washington, D.C, 1996.
32. Casey, S., Set Phasers on Stun: And other True Tales of Design Technology and Human
Error, Aegean, Inc, Santa Barbara, California, 1993.
33. Hyman, W.A., Human Factors in Medical Devices, in Encyclopaedia of Medical
Devices and Instrumentation edited by J.G. Webster, Vol. 3, John Wiley and Sons,
New York, 1988, pp. 1542–1553.
34. Wikland, M.E., Medical Device and Equipment Design, Interpharm Press Inc, Buffalo
Grove, Illinois, 1995.
35. Taylor, E.F., The Reliability Engineer in the Health Care System, Proceedings of the
Annual Reliability and Maintainability Symposium, 1972, pp. 245–248.
36. Norman, J.C., Goodman, L., Acquaintance with and Maintenance of Biomedical
Instrumentation, J. Assoc. Advan. Med. Inst, Vol. 1, September 1966, pp. 8–10.
37. Waits, W., Planned Maintenance, Medical Research Engineering, Vol. 7, No. 12, 1968,
pp. 15–18.
38. Grant-Ireson, W., Coombs, C.F., Moss, R.V., Eds., Handbook of Reliability Engineering
and Management, McGraw Hill Book Company, New York, 1988.
39. AMCP-113, Engineering Design Handbook: Maintainability Engineering Theory and
Practice, Department of Army, Washington, D.C, 1976.
40. Blanchard, B.S., Verma, D., Peterson, E.L., Maintainability, John Wiley and Sons,
New York, 1995.
41. Dhillon, B.S., Engineering Maintainability, Gulf Publishing Company, Houston, Texas,
1999.
42. Cohen, T., Validating Medical Equipment Repair and Maintenance Metrics: A Progress
Report, Biomedical Instrumentation and Technology, Jan./Feb., 1997, pp. 23–32.
6 Robot Reliability

6.1 INTRODUCTION
Nowadays, robots are increasingly being used to perform various types of tasks includ-
ing materials handling, arc welding, spot welding, and routing. A robot may simply
be described as a mechanism guided by automatic controls, and the word “robot” is
derived from the Czech language, in which it means “worker” [1].
In 1954, George Devol designed and applied for a patent for a programmable
device that could be considered the first industrial robot. Nonetheless, the Planet
Corporation, in 1959, manufactured the first commercial robot [2]. Nowadays, mil-
lions of industrial robots are being used throughout the world [3]. As robots use
electronic, mechanical, hydraulic, pneumatic, and electrical components, their
reliability-related problems are quite challenging because of many different sources
of failures. Although there is no clear-cut definitive point marking the beginning of the
robot reliability field, a publication by J.F. Engelberger, in 1974, could be regarded as its
starting point [4]. A comprehensive list of publications on robot reliability up to
2002 is available in Refs. [5, 6].
This chapter presents various important aspects of robot reliability.

6.2 TERMS AND DEFINITIONS


There are many robot reliability-associated terms and definitions. Some of the
important ones are as follows [1, 7–11]:

• Robot reliability: This is the probability that a robot will perform its speci-
fied mission according to stated conditions for a given time period.
• Robot mean time to failure: This is the average time that a robot will oper-
ate before failure.
• Robot repair: This is to restore robots and their associated parts or systems
to an operational condition after experiencing failure, damage, or wear.
• Robot mean time to repair: This is the average time that a robot is expected
to be out of operation after failure.
• Robot availability: This is the probability that a robot is available for ser-
vice at the moment of need.
• Fail-safe: This is the failure of a robot/robot part without endangering people
or damaging equipment or plant facilities.
• Graceful failure: The manipulator’s performance degrades at a slow pace
in response to overloads, instead of failing catastrophically.
• Error recovery: This is the capability of intelligent robotic systems to reveal
errors and, through programming, to initiate appropriate correction actions
to overcome the impending problem and complete the specified process.

DOI: 10.1201/9781003298571-6 83

• Fault in teach pendant: This is a part failure in the teach pendant of a
robot.
• Erratic robot: A robot that has moved appreciably off its specified path.
• Robot out of synchronisation: This is when the position of the robot’s arm is
not in line with the robot’s memory of where it is supposed to be.
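Several of these measures are related numerically: steady-state robot availability can be computed from the robot mean time to failure and mean time to repair. A minimal sketch using the standard steady-state formula; the numeric values are hypothetical, not from the text:

```python
def robot_availability(mttf_hours, mttr_hours):
    """Steady-state availability: the fraction of time the robot is
    available for service, MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Hypothetical robot: fails on average every 500 h, repaired in 10 h on average.
print(round(robot_availability(500.0, 10.0), 4))  # 0.9804
```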

6.3 ROBOT FAILURE CATEGORIES, CAUSES, AND CORRECTIVE MEASURES
Robot failures can be classified under the four categories as shown in Fig. 6.1 [11–13].
Category I: Software failures/errors are associated with robot software; such
failures/errors/faults can take place in the embedded software, the controlling
software, or the application software. Even though redundancy is expensive, it is
probably the best solution for protecting against the occurrence of software
failures/errors. Also, the application of approaches such as failure modes and
effects analysis (FMEA), fault tree analysis (FTA), and testing can be quite helpful to
reduce the occurrence of software failures/errors. Furthermore, there are a number
of software reliability models that can also be utilised to evaluate reliability when the
software in question is put into operational use [11–13].
Category II: Human errors occur due to the personnel who design, manufacture,
test, operate, and maintain robots. Some of the causes for the human errors’ occur-
rence are as follows:

• Improper tools
• Poor system/equipment design
• Poorly written operating and maintenance-related procedures
• High temperature in the work area
• Inadequate lighting in the work area
• Task complexities
• Poor training of operating and maintenance personnel

FIGURE 6.1 Robot failure categories.



Thus, human errors may be divided into classifications such as design errors,
assembly errors, inspection errors, operating errors, maintenance errors, and installa-
tion errors. Some of the methods that can be used for reducing human errors’ occur-
rence are error cause removal programme, man-machine systems analysis, quality
circles, and fault tree analysis. The last method (i.e., fault tree analysis) is described
in Chapter 4 and the other three methods are described in Ref. [14].
Category III: Random component failures are those failures that occur unpredict-
ably during the component’s useful life. Some of the reasons for their occurrence
are low safety factors, undetectable defects, unavoidable failures, and unexplained
causes. In order to reduce the occurrence of such failures, the methods presented in
Chapter 4 can be used.
Finally, Category IV: Systematic hardware faults are those failures that can occur
due to unrevealed mechanisms present in the robot system design. Some of the rea-
sons for the occurrence of such failures are failure to make the appropriate environ-
ment-related provisions in the initial design, peculiar wrist orientations, and unusual
joint-to-straight-line mode transition.

6.4 ROBOT RELIABILITY-ASSOCIATED SURVEY RESULTS AND ROBOT EFFECTIVENESS DICTATING FACTORS
Jones and Dawson [15] reported the results of a study, concerned with robot reli-
ability, that was based on surveys of 37 robots of four different designs used in three
different companies (A, B, and C), covering 21,932 robot production hours. These three
companies (i.e., A, B, and C) reported 47, 306, and 155 cases of robot reliability-
associated problems, respectively, of which 27, 35, and 1 cases, respectively, did
not contribute, directly or indirectly, to any downtime. More clearly, robot downtime
as a proportion of production time for these three companies (i.e., A, B, and C) was
1.8%, 13.6%, and 5.1%, respectively.
Approximate mean time to robot-associated problems (MTTRP) and mean time
to robot failure (MTTRF) in hours for companies A, B, and C are shown in Fig. 6.2.
FIGURE 6.2 Approximate mean time to robot-associated problems (MTTRP) and mean
time to robot failure (MTTRF) in hours (h) for Companies A, B, and C.

It is to be noted that, as shown in Fig. 6.2, among these three companies there is a
quite wide variation of MTTRP and MTTRF. There are many factors that dictate
robots’ effectiveness. Some of these factors are as follows [11, 16]:

• The robot mean time between failures
• The robot mean time to repair
• Rate of the availability of the needed spare parts/components
• Percentage of time the robot operates normally
• Relative performance of the robot under extreme conditions
• Availability and quality of manpower needed for keeping the robot in oper-
ating state
• Availability and quality of the robot repair equipment and facilities

6.5 ROBOT RELIABILITY MEASURES
There are various types of reliability measures associated with robots. Four of these
measures are presented below [11, 17, 18].

6.5.1 Robot Reliability
Robot reliability may simply be expressed as the probability that a robot will per-
form its specified function satisfactorily for the stated time period when used as per
designed conditions. The general formula to obtain time-dependent robot reliability
is defined by [11, 17]:

\[
R_r(t) = \exp\left[-\int_0^t \lambda_r(t)\,dt\right] \tag{6.1}
\]

where
Rr ( t ) is the robot reliability at time t.
λ r ( t ) is the robot hazard rate or time-dependent failure rate.

Equation (6.1) can be used for obtaining the reliability function of a robot for any
failure-times probability distribution (e.g., exponential, Rayleigh, or Weibull).

Example 6.1

Assume that the hazard rate of a robot is expressed by the following function:

\[
\lambda_r(t) = \frac{\theta t^{\theta-1}}{\beta^{\theta}} \tag{6.2}
\]

where
λr(t) is the hazard rate of the robot when its times to failure follow the Weibull
distribution.
t is time.
β is the scale parameter.
θ is the shape parameter.

Obtain an expression for the robot reliability and then use it to calculate robot reli-
ability when t = 100 hours, θ = 1 (i.e., exponential distribution), and β = 1,500 hours.
By substituting Equation (6.2) into Equation (6.1), we obtain

\[
R_r(t) = \exp\left[-\int_0^t \frac{\theta t^{\theta-1}}{\beta^{\theta}}\,dt\right]
= e^{-(t/\beta)^{\theta}} \tag{6.3}
\]

By inserting the specified data values into Equation (6.3), we obtain

 100 
− 
Rr (100 ) = e  1500 
= 0.9355

Thus, the robot reliability for the stated mission period of 100 hours is 0.9355.
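Equation (6.3) and the figures of Example 6.1 can be reproduced with a few lines of code; a minimal sketch:

```python
import math

def robot_reliability(t, beta, theta):
    """Equation (6.3): R(t) = exp[-(t/beta)^theta] for Weibull times to failure."""
    return math.exp(-((t / beta) ** theta))

# Example 6.1: t = 100 hours, theta = 1 (exponential case), beta = 1,500 hours.
print(round(robot_reliability(100.0, 1500.0, 1.0), 4))  # 0.9355
```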

6.5.2 Robot Hazard Rate


The robot hazard rate or time-dependent failure rate is defined by [11, 17]

\[
\lambda_r(t) = -\frac{1}{R_r(t)}\cdot\frac{dR_r(t)}{dt} \tag{6.4}
\]

where
λr(t) is the robot hazard rate.
Rr(t) is the robot reliability at time t.

It is to be noted that Equation (6.4) can be used for obtaining the hazard rate when
robot times to failure follow any time-continuous probability distribution (e.g.,
Weibull, exponential, Rayleigh, etc.).

Example 6.2

With the aid of Equations (6.3) and (6.4) prove that the robot hazard rate is given
by Equation (6.2).
By substituting Equation (6.3) into Equation (6.4), we obtain

\[
\lambda_r(t) = -\frac{1}{e^{-(t/\beta)^{\theta}}}\left[-\frac{\theta}{\beta}\left(\frac{t}{\beta}\right)^{\theta-1} e^{-(t/\beta)^{\theta}}\right]
= \frac{\theta t^{\theta-1}}{\beta^{\theta}} \tag{6.5}
\]

Both Equations (6.2) and (6.5) are identical. Thus, it proves that Equation (6.2) is an
expression for robot hazard rate.
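The identity proved in Example 6.2 can also be checked numerically: estimate dR(t)/dt with a central finite difference and apply Equation (6.4). The step size h and the sample point below are arbitrary choices:

```python
import math

def weibull_reliability(t, beta, theta):
    """Equation (6.3): R(t) = exp[-(t/beta)^theta]."""
    return math.exp(-((t / beta) ** theta))

def numerical_hazard(t, beta, theta, h=1e-4):
    """Equation (6.4): lambda(t) = -(1/R(t)) * dR(t)/dt, with dR/dt
    estimated by a central finite difference of step h."""
    r = weibull_reliability(t, beta, theta)
    drdt = (weibull_reliability(t + h, beta, theta)
            - weibull_reliability(t - h, beta, theta)) / (2.0 * h)
    return -drdt / r

# Compare against the closed form of Equation (6.5): theta * t^(theta-1) / beta^theta.
t, beta, theta = 200.0, 1500.0, 2.0
closed_form = theta * t ** (theta - 1.0) / beta ** theta
assert abs(numerical_hazard(t, beta, theta) - closed_form) < 1e-8
```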

6.5.3 Mean Time to Robot-Related Problems


This is the average productive robot time prior to the occurrence of a robot-related
problem and is expressed by
\[
\text{MTRP} = \frac{\text{RPH} - \text{DTDRP}}{\text{TNRP}} \tag{6.6}
\]
where
MTRP is the mean time to robot-related problems.
RPH is the robot production hours.
DTDRP is the downtime due to robot-related problems expressed in hours.
TNRP is the total number of robot-related problems.

Example 6.3

Assume that at an industrial installation, the annual robot production hours and
downtime due to robot-related problems are 60,000 hours and 800 hours, respec-
tively. During that period, there were ten robot-related problems. Calculate the
mean time to robot-related problems.
By substituting the given data values into Equation (6.6), we obtain

\[
\text{MTRP} = \frac{60{,}000 - 800}{10} = 5{,}920 \text{ hours}
\]

Thus, the mean time to robot-related problems is 5,920 hours.

6.5.4 Mean Time to Robot Failure


MTTRF can be obtained by using any one of the following three equations:

\[
\text{MTRF} = \int_0^{\infty} R_r(t)\,dt \tag{6.7}
\]

\[
\text{MTRF} = \frac{\text{RPH} - \text{DTDRF}}{\text{TNRF}} \tag{6.8}
\]

\[
\text{MTRF} = \lim_{s \to 0} R_r(s) \tag{6.9}
\]

where
MTRF is the mean time to robot failure.
Rr (t ) is the robot reliability at time t.
RPH is the robot production hours.
DTDRF is the downtime due to robot failures expressed in hours.
TNRF is the total number of robot failures.
s is the Laplace transform variable.
Rr (s) is the Laplace transform of the robot reliability function.

Example 6.4

Assume that annual production hours of a robot and its annual downtime due to
failures are 4,000 hours and 200 hours, respectively. During that period, the robot
failed five times. Calculate the MTTRF.
By inserting the specified data values into Equation (6.8), we obtain

\[
\text{MTRF} = \frac{4{,}000 - 200}{5} = 760 \text{ hours}
\]

Thus, the MTTRF is 760 hours.
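Equations (6.6) and (6.8) have the same structure, so one helper covers both; a sketch reproducing Examples 6.3 and 6.4:

```python
def mean_time_to_event(production_hours, downtime_hours, event_count):
    """Equations (6.6) and (6.8): (production hours - downtime hours) / events."""
    return (production_hours - downtime_hours) / event_count

# Example 6.3: mean time to robot-related problems.
print(mean_time_to_event(60_000, 800, 10))  # 5920.0 hours
# Example 6.4: mean time to robot failure.
print(mean_time_to_event(4_000, 200, 5))  # 760.0 hours
```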

Example 6.5

Assume that the constant failure rate, λ r , of a robot is 0.0004 failures per hour and
its reliability is expressed by

\[
R_r(t) = e^{-\lambda_r t} = e^{-(0.0004)t} \tag{6.10}
\]
where
Rr (t ) is the robot reliability at time t.

Calculate the MTTRF by using Equations (6.7) and (6.9). Comment on the end result.
By inserting Equation (6.10) into Equation (6.7), we get


\[
\text{MTRF} = \int_0^{\infty} e^{-(0.0004)t}\,dt = \frac{1}{0.0004} = 2500 \text{ hours}
\]

By taking the Laplace transform of Equation (6.10), we obtain

\[
R_r(s) = \frac{1}{s + 0.0004} \tag{6.11}
\]

where
Rr(s) is the Laplace transform of the reliability function.

By substituting Equation (6.11) into Equation (6.9), we get

\[
\text{MTRF} = \lim_{s \to 0} \frac{1}{s + 0.0004} = \frac{1}{0.0004} = 2500 \text{ hours}
\]

In both cases, the end result (i.e., MTRF = 2500 hours) is exactly the same. It proves
that both equations [i.e., Equations (6.7) and (6.9)] yield the same end result.
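The integral form of Equation (6.7) can also be cross-checked numerically for Example 6.5's constant failure rate; a sketch using trapezoidal integration, where the integration grid and cutoff are arbitrary choices:

```python
import math

FAILURE_RATE = 0.0004  # failures/hour, from Example 6.5

def reliability(t):
    """Equation (6.10): R(t) = exp(-lambda * t)."""
    return math.exp(-FAILURE_RATE * t)

def mtrf_by_integration(t_max=100_000.0, steps=100_000):
    """Approximate Equation (6.7), MTRF = integral of R(t) dt from 0 to
    infinity, by the trapezoidal rule on [0, t_max]."""
    dt = t_max / steps
    total = 0.5 * (reliability(0.0) + reliability(t_max))
    total += sum(reliability(i * dt) for i in range(1, steps))
    return total * dt

print(round(mtrf_by_integration()))  # 2500, matching 1/0.0004 hours
```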

6.6 RELIABILITY ANALYSIS OF HYDRAULIC AND ELECTRIC ROBOTS
As both hydraulic and electric robots are being used in the industrial sector, this
section presents reliability analysis of two typical hydraulic and electric robots by
using the block diagram approach/method [11–13]. Usually, for the purpose of design
evaluation in industry, it is assumed for both hydraulic and electric robots that all
robot parts/components form a series configuration (i.e., if any part/component fails,
the robot fails).

6.6.1 Reliability Analysis of the Hydraulic Robot


A hydraulic robot considered here is made up of five joints and, in turn, each joint
is driven and controlled by a hydraulic servo mechanism. The robot is subject to the
following assumptions/factors [11, 12]:

• Servo valve controls the motion of each hydraulic actuator. This motion
is transmitted directly or indirectly (i.e., through rods, gears, chains, etc.)
to the robot’s specific limb and, in turn, each limb is coupled to a position
transducer.
• Under high flow demand, an accumulator assists the pump to supply an
additional hydraulic fluid.
• Position transducer provides the joint angle codes and, in turn, each code’s
scanning is conducted by a multiplexer.
• Operator makes use of a teach pendant to control the arm motion in teach
mode.
• Hydraulic fluid is pumped from the reservoir.
• Conventional motor and pump assembly generates pressure.
• Unloading valve is employed to keep pressure under the maximum limit.

The hydraulic robot under consideration in regard to reliability is represented by the
block diagram shown in Fig. 6.3. This figure shows that the hydraulic robot is
composed of four subsystems: subsystem 1 (gripper subsystem), subsystem 2 (electronic
and control subsystem), subsystem 3 (drive subsystem), and subsystem 4 (hydraulic
pressure supply subsystem) in series. In turn, as shown in Fig. 6.4 gripper subsystem
(i.e., block diagram (a)) is composed of two parts (i.e., pneumatic system and control
signal) in series and hydraulic pressure supply subsystem [i.e., block diagram (b)] is
also composed of two parts (i.e., hydraulic component and piping) in series.

FIGURE 6.3 Block diagram of the hydraulic robot under consideration.



FIGURE 6.4 Block diagram representing two subsystems shown in Fig. 6.3: (a) gripper
subsystem and (b) hydraulic pressure supply subsystem.

Furthermore, as shown in Fig. 6.5, the drive subsystem (shown in Fig. 6.3) is com-
posed of five parts (i.e., joints 1, 2, 3, 4, and 5) in series.
With the aid of Fig. 6.3, we obtain the following expression for the probability
of the nonoccurrence of the hydraulic robot event (i.e., undesirable hydraulic robot
movement causing damage to the robot-associated other equipment and possible
harm to humans):

Rhr = Rgs Res Rds Rhs (6.12)

where
Rhr is the hydraulic robot reliability or the probability of the nonoccurrence of the
hydraulic robot event (i.e., undesirable robotic arm movement causing dam-
age to the robot-associated other equipment and possible harm to humans).
Rgs is the reliability of the independent gripper subsystem.
Res is the reliability of the independent electronic and control subsystem.
Rds is the reliability of the independent drive subsystem.
Rhs is the reliability of the independent hydraulic pressure supply subsystem.

For independent parts, the reliabilities Rgs , Rhs , and Rds of gripper subsystem, hydraulic
pressure supply subsystem, and drive subsystem, using Figs. 6.4(a), 6.4 (b), and 6.5,
respectively, are

Rgs = Rps Rcs (6.13)

FIGURE 6.5 Block diagram representing subsystem 3 (i.e., drive subsystem) shown in Fig. 6.3.

Rhs = Rhc Rp (6.14)

and
\[
R_{ds} = \prod_{i=1}^{5} R_i \tag{6.15}
\]

where
Rps is the reliability of the pneumatic system.
Rcs is the reliability of the control signal.
Rhc is the reliability of the hydraulic component.
Rp is the reliability of the piping.
Ri is the reliability of joint i; for i = 1, 2, 3, 4, 5.

For constant failure rates of independent subsystems shown in Fig. 6.3, in turn, of
their independent parts shown in Figs. 6.4 and 6.5; from Equation (6.12) through
Equation (6.15), we obtain:

\[
R_{hr}(t) = e^{-\lambda_{gs}t}\,e^{-\lambda_{es}t}\,e^{-\lambda_{ds}t}\,e^{-\lambda_{hs}t}
= e^{-\lambda_{ps}t}\,e^{-\lambda_{cs}t}\,e^{-\lambda_{es}t}\,e^{-\sum_{i=1}^{5}\lambda_i t}\,e^{-\lambda_{hc}t}\,e^{-\lambda_{p}t}
= \exp\left[-\left(\lambda_{ps}+\lambda_{cs}+\lambda_{es}+\sum_{i=1}^{5}\lambda_i+\lambda_{hc}+\lambda_{p}\right)t\right] \tag{6.16}
\]
where
λ gs is the constant failure rate of the gripper subsystem.
λ ps is the constant failure rate of the pneumatic system.
λ cs is the constant failure rate of the control signal.
λ es is the constant failure rate of the electronic and control subsystem.
λ i is the constant failure rate of the joint i; for i = 1, 2, 3, 4, 5.
λ ds is the constant failure rate of the drive subsystem.
λ hs is the constant failure rate of the hydraulic pressure supply subsystem.
λ hc is the constant failure rate of the hydraulic component.
λ p is the constant failure rate of the piping.

By integrating Equation (6.16) over the time interval [ 0, ∞ ], we get


\[
\text{MTTHRF} = \int_0^{\infty} \exp\left[-\left(\lambda_{ps}+\lambda_{cs}+\lambda_{es}+\sum_{i=1}^{5}\lambda_i+\lambda_{hc}+\lambda_{p}\right)t\right] dt
= \frac{1}{\lambda_{ps}+\lambda_{cs}+\lambda_{es}+\sum_{i=1}^{5}\lambda_i+\lambda_{hc}+\lambda_{p}} \tag{6.17}
\]
where
MTTHRF is the mean time to hydraulic robot failure (i.e., the mean time to the
occurrence of the hydraulic robot undesirable event).

Example 6.6

Assume that the constant failure rates of the above type of hydraulic robot
are λ ps = 0.0009 failures/hour, λ cs = 0.0008 failures/hour, λ es = 0.0007 failures/
hour, λ1 = λ 2 = λ 3 = λ 4 = λ 5 = 0.0006 failures/hour, λ hc = 0.0005 failures/hour,
and λ p = 0.0004 failures/hour. Calculate the mean time to hydraulic robot failure.
By substituting the specified data values into Equation (6.17), we get

\[
\text{MTTHRF} = \frac{1}{0.0009 + 0.0008 + 0.0007 + 5(0.0006) + 0.0005 + 0.0004} = 158.73 \text{ hours}
\]

Thus, the mean time to hydraulic robot failure is 158.73 hours.
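For constant failure rates, the series structure of Equations (6.16) and (6.17) amounts to summing the part failure rates; a sketch reproducing Example 6.6 (the comments are descriptive labels only):

```python
import math

# Constant part failure rates (failures/hour) from Example 6.6; the five
# joints each contribute 0.0006 failures/hour.
failure_rates = [
    0.0009,       # pneumatic system
    0.0008,       # control signal
    0.0007,       # electronic and control subsystem
    5 * 0.0006,   # five joints
    0.0005,       # hydraulic component
    0.0004,       # piping
]

total_rate = sum(failure_rates)                   # series system, Equation (6.16)
mtthrf = 1.0 / total_rate                         # Equation (6.17)
reliability_100h = math.exp(-total_rate * 100.0)  # R_hr(100) from Equation (6.16)

print(round(mtthrf, 2))  # 158.73 hours
```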

6.6.2 Reliability Analysis of the Electric Robot


An electric robot considered here is the one that conducts a “normal” industrial task,
while its maintenance and programming are carried out by humans. The robot is
subject to the following assumptions/factors [11, 13]:

• Interface bus allows interaction between the supervisory controller and the
joint control processors.
• Each joint is coupled with a feedback encoder (i.e., transducer).
• Transducer sends all appropriate signals to the joint controller.
• Motor shaft rotation is transmitted to the robot’s appropriate limb through
a transmission unit.
• Microprocessor control card controls each joint.
• Direct current (DC) motor actuates each joint.
• Supervising computer/controller directs all joints.

With respect to reliability, the block diagram shown in Fig. 6.6 represents the electric
robot under consideration.
Fig. 6.6 shows that the electric robot under consideration has two hypothetical
subsystems 1 and 2 in series. Subsystem 1 represents no movement due to external
factors, and subsystem 2 represents no failure within the robot causing its movement.
In turn, as shown in Fig. 6.7 (a), Fig. 6.6 subsystem 1 has two hypothetical ele-
ments X and Y in series and subsystem 2 has five parts (i.e., supervisory computer/
controller, drive transmission, joint control, end-effector, and interface) [Fig. 6.7 (b)]

FIGURE 6.6 Block diagram for estimating the nonoccurrence probability (i.e., reliability)
of the undesirable movement of the electric robot.

FIGURE 6.7 Block diagram representing two subsystems shown in Fig. 6.6: (a) subsystem 1,
(b) subsystem 2.

in series. Furthermore, element X in Fig. 6.7 (a) has two hypothetical subelements M
and N in series, as shown in Fig. 6.8.
With the aid of Fig. 6.6, we get the following equation for the probability of non-
occurrence of the undesirable electric robot movement (i.e., reliability):

Rerm = Rss1 Rss 2 (6.18)

where
Rerm is the probability of nonoccurrence (reliability) of the undesirable electric
robot movement.
Rss1 is the reliability of the independent subsystem 1.
Rss2 is the reliability of the independent subsystem 2.

For independent elements X and Y, the reliability of subsystem 1 in Fig. 6.7 (a) is
expressed by

Rss1 = RX RY (6.19)

FIGURE 6.8 Block diagram representing Fig. 6.7 (a) element X.



where
RX is the reliability of the element X.
RY is the reliability of the element Y.

For hypothetical and independent subelements, the element X’s reliability in Fig. 6.8 is

RX = RM RN (6.20)

where
RM is the reliability of subelement M (i.e., the maintenance person’s reliability in
regard to causing the robot’s movement).
RN is the reliability of subelement N (i.e., the operator’s reliability in regard to
causing the robot’s movement).

Similarly, the reliability of subsystem 2 in Fig. 6.7 (b), for independent parts, is
expressed by

Rss 2 = Rsc Rdt R jc Ree Ri (6.21)

where
Rsc is the reliability of the supervisory computer/controller.
Rdt is the reliability of the drive transmission.
R jc is the reliability of the joint control.
Ree is the reliability of the end-effector.
Ri is the reliability of the interface.

Example 6.7

Assume that the following reliability data values are given for the above type of
electric robot:

RY = 0.96, RN = 0.94, RM = 0.95, Rsc = 0.94, Rdt = 0.93, Rjc = 0.92, Ree = 0.91, Ri = 0.9

Calculate the probability of nonoccurrence (i.e., reliability) of the undesirable
electric robot movement. By substituting the specified data values into Equation (6.20)
and Equation (6.21), we get

RX = (0.95)(0.94) = 0.893

and

Rss2 = (0.94)(0.93)(0.92)(0.91)(0.9) = 0.6586

By inserting the above calculated value for RX and the given value for RY into
Equation (6.19), we get

Rss1 = (0.893)(0.96)
= 0.8572

By inserting the above calculated values into Equation (6.18), we get

Rerm = (0.8572)(0.6586)
= 0.5645

Thus, the probability of nonoccurrence (i.e., reliability) of the undesirable electric
robot movement is 0.5645.
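The substitutions in Example 6.7 are products of series reliabilities; a sketch of Equations (6.18)–(6.21) with the example's data (carrying full precision gives ≈ 0.5647; the text's 0.5645 reflects intermediate rounding):

```python
def series_reliability(*parts):
    """Reliability of independent parts in series: the product of part reliabilities."""
    result = 1.0
    for r in parts:
        result *= r
    return result

# Example 6.7 data values.
r_x = series_reliability(0.95, 0.94)                     # Equation (6.20): R_M * R_N
r_ss1 = series_reliability(r_x, 0.96)                    # Equation (6.19): R_X * R_Y
r_ss2 = series_reliability(0.94, 0.93, 0.92, 0.91, 0.9)  # Equation (6.21)
r_erm = series_reliability(r_ss1, r_ss2)                 # Equation (6.18)

print(round(r_erm, 4))  # 0.5647 at full precision; the text reports 0.5645
```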

6.7 MODELS FOR CONDUCTING ROBOT RELIABILITY AND MAINTENANCE STUDIES
There are many mathematical models that can be used, directly or indirectly, for
conducting various types of robot maintenance and reliability studies. Three of these
models are presented below.

6.7.1 Model I
This mathematical model can be utilised to calculate the optimum number of inspec-
tions per robot facility per unit time [19, 20]. This information is considered quite
useful to decision makers but inspections are often disruptive; however, such inspec-
tions usually lower the robot downtime because they reduce breakdowns. In this
model, the total downtime of the robot is minimised to obtain the optimum
number of inspections.
The robot’s total downtime per unit time is defined by [21]
\[
\text{RTDT} = kT_{di} + \frac{\theta T_{db}}{k} \tag{6.22}
\]
where
RTDT is the robot’s total downtime per unit time.
k is the number of inspections per robot facility per unit time.
Tdi is the downtime per inspection for a robot facility.
θ is the constant for a specific robot facility.
Tdb is the downtime per breakdown for a robot facility.

By differentiating Equation (6.22) with respect to k and then equating it to zero, we get
\[
k^{*} = \left[\frac{\theta T_{db}}{T_{di}}\right]^{1/2} \tag{6.23}
\]
where
k * is the optimum number of inspections per robot facility per unit time.

By substituting Equation (6.23) into Equation (6.22), we get

\[
\text{RTDT}^{*} = 2\left[\theta T_{di} T_{db}\right]^{1/2} \tag{6.24}
\]

where
RTDT * is the minimum total downtime of the robot.

Example 6.8

Assume that for a robot facility, the following data values are specified:

Tdb = 0.6 months, Tdi = 0.04 months, and θ = 3.

Calculate the optimum number of robot inspections per month and the minimum
total robot downtime.
By substituting the above given data values into Equations (6.23) and (6.24),
we get
\[
k^{*} = \left[\frac{3(0.6)}{0.04}\right]^{1/2} = 6.7 \text{ inspections per month}
\]

and

\[
\text{RTDT}^{*} = 2\left[(3)(0.04)(0.6)\right]^{1/2} = 0.53 \text{ months}
\]

Thus, the optimum number of robot inspections per month and the minimum total
downtime are 6.7 and 0.53 months, respectively.
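Equations (6.23) and (6.24) can be scripted directly; a sketch reproducing Example 6.8 (at full precision the results are 6.71 and 0.54; the text truncates to 6.7 and 0.53):

```python
import math

def optimal_inspections(theta, t_db, t_di):
    """Equation (6.23): k* = [theta * T_db / T_di]^(1/2)."""
    return math.sqrt(theta * t_db / t_di)

def minimum_total_downtime(theta, t_db, t_di):
    """Equation (6.24): RTDT* = 2 * [theta * T_di * T_db]^(1/2)."""
    return 2.0 * math.sqrt(theta * t_di * t_db)

# Example 6.8: theta = 3, T_db = 0.6 months, T_di = 0.04 months.
k_star = optimal_inspections(3.0, 0.6, 0.04)
rtdt_star = minimum_total_downtime(3.0, 0.6, 0.04)
print(round(k_star, 1), round(rtdt_star, 2))  # 6.7 0.54
```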

6.7.2 Model II
This model is concerned with determining the robot’s economic life, i.e., the time
limit beyond which it is not economical to conduct robot repairs.
Thus, the robot economic life is expressed by [18–20, 22]:
\[
\text{REL} = \left[\frac{2(C_{ri} - V_{rs})}{\text{RAIRC}}\right]^{1/2} \tag{6.25}
\]
where
REL is the robot economic life.
Cri is the robot initial cost (installed).
Vrs is the robot scrap value.
RAIRC is the robot’s annual increase in repair cost.

Example 6.9

Assume that the initial cost (installed) of a robot is $200,000 and its estimated scrap
value is $5,000. The estimated annual increase in its repair-associated cost is $400.
Estimate the time limit beyond which the robot-associated repairs will not be beneficial.

By inserting the specified data values into Equation (6.25), we get

\[
\text{REL} = \left[\frac{2(200{,}000 - 5{,}000)}{400}\right]^{1/2} = 31.22 \text{ years}
\]

Thus, the time limit beyond which the robot-associated repairs will not be benefi-
cial is 31.22 years.
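Equation (6.25) reproduces Example 6.9 in one line; a minimal sketch:

```python
import math

def robot_economic_life(initial_cost, scrap_value, annual_repair_cost_increase):
    """Equation (6.25): REL = [2 * (C_ri - V_rs) / RAIRC]^(1/2), in years."""
    return math.sqrt(2.0 * (initial_cost - scrap_value) / annual_repair_cost_increase)

# Example 6.9: installed cost $200,000, scrap value $5,000, repair cost rising $400/year.
print(round(robot_economic_life(200_000, 5_000, 400), 2))  # 31.22 years
```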

6.7.3 Model III


This mathematical model represents a robot system that can fail either due to a
human error or other failures (e.g., hardware and software) and the robot system is
repaired back to its operating state. The robot system state-space diagram is shown
in Fig. 6.9.
The numerals in the diagram rectangle and circles denote system states. The model
is subject to the following three assumptions [11, 19]:

• Human error and other failures are statistically independent and the repaired
robot system is as good as new.
• Human error and other failure rates are constant.
• Failed robot system repair rates are constant.

The following symbols are associated with the diagram shown in Fig. 6.9 and its
associated equations:

Pi(t) is the probability that the robot system is in state i at time t; for i = 0
(operating normally), i = 1 (failed due to a human error), i = 2 (failed due to
failures other than human errors).
λ h is the robot system constant human error rate.
λ is the robot system constant nonhuman error rate.
α h is the robot system constant repair rate from failed state 1.
α is the robot system constant repair rate from failed state 2.

FIGURE 6.9 Robot system state-space diagram.



With the aid of the Markov method presented in Chapter 4, we write down the following equations for Fig. 6.9 [11, 19]:

dP0(t)/dt + (λ + λh)P0(t) = αhP1(t) + αP2(t) (6.26)

dP1(t)/dt + αhP1(t) = λhP0(t) (6.27)

dP2(t)/dt + αP2(t) = λP0(t) (6.28)

At time t=0, P0 (0) = 1, P1 (0) = 0, and P2 (0) = 0.


Solving Equations (6.26)–(6.28) using Laplace transforms, we get

P0(t) = ααh/(k1k2) + [(k1 + α)(k1 + αh)/(k1(k1 − k2))]e^(k1t) − [(k2 + α)(k2 + αh)/(k2(k1 − k2))]e^(k2t) (6.29)

where

k1, k2 = {−b ± [b^2 − 4(ααh + λhα + λαh)]^1/2}/2 (6.30)

b = λ + λh + α + α h (6.31)

k1k2 = αα h + λ h α + λα h (6.32)

(k1 + k2) = −(λ + λh + α + αh) (6.33)

P1(t) = αλh/(k1k2) + [(λhk1 + λhα)/(k1(k1 − k2))]e^(k1t) − [(α + k2)λh/(k2(k1 − k2))]e^(k2t) (6.34)

P2(t) = λαh/(k1k2) + [(λk1 + λαh)/(k1(k1 − k2))]e^(k1t) − [(αh + k2)λ/(k2(k1 − k2))]e^(k2t) (6.35)

The robot system availability, RSAV(t), is given by

RSAV (t ) = P0 (t ) (6.36)

As time t becomes very large in Equations (6.34)–(6.36), we get the following steady-
state probability equations:

RSAV = ααh/(k1k2) (6.37)

P1 = αλh/(k1k2) (6.38)

P2 = λαh/(k1k2) (6.39)
where
RSAV is the robot system steady-state availability.
P1 is the steady-state probability of the robot system being in state 1.
P2 is the steady-state probability of the robot system being in state 2.
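The steady-state results above can be cross-checked by integrating Equations (6.26)–(6.28) numerically. The sketch below (function name, step size, and the illustrative rate values are assumptions, not from the text) converges to the value predicted by Equation (6.37):

```python
def markov_steady_state(lam, lam_h, alpha, alpha_h, dt=0.5, t_end=20000.0):
    """Euler integration of Equations (6.26)-(6.28), starting from the initial
    conditions P0(0) = 1, P1(0) = 0, P2(0) = 0; returns (P0, P1, P2) after a long run."""
    p0, p1, p2 = 1.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dp0 = -(lam + lam_h) * p0 + alpha_h * p1 + alpha * p2
        dp1 = lam_h * p0 - alpha_h * p1
        dp2 = lam * p0 - alpha * p2
        p0, p1, p2 = p0 + dp0 * dt, p1 + dp1 * dt, p2 + dp2 * dt
    return p0, p1, p2

# Illustrative rates: lam = 0.0008, lam_h = 0.0004, alpha = alpha_h = 0.005 (per hour)
p0, p1, p2 = markov_steady_state(0.0008, 0.0004, 0.005, 0.005)
print(round(p0, 4))  # ≈ 0.8065, the steady-state availability of Equation (6.37)
```

Note that Euler integration conserves the total probability exactly here, since the three derivatives sum to zero at every step.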

For α = α h = 0, from Equations (6.29), (6.34), and (6.35), we get

P0(t) = e^(−(λ+λh)t) (6.40)

P1(t) = [λh/(λ + λh)][1 − e^(−(λ+λh)t)] (6.41)

P2(t) = [λ/(λ + λh)][1 − e^(−(λ+λh)t)] (6.42)

The robot system reliability from Equation (6.40) is

RSR(t) = e^(−(λ+λh)t) (6.43)

where
RSR(t) is the robot system reliability at time t.

By substituting Equation (6.43) into Equation (6.7), we obtain the following equation
for MTRF:


MTRF = ∫0∞ e^(−(λ+λh)t) dt
= 1/(λ + λh) (6.44)

where
MTRF is the mean time to robot (i.e., robot system) failure.

Using Equation (6.43) in Equation (6.4), we get the following equation for the robot
(i.e., robot system) hazard rate:

1 d e − ( λ+λ h )t 
λr = − .
e − ( λ+λ h ) dt
= λ + λh (6.45)

It is to be noted that the right-hand side of Equation (6.45) is independent of time t, which means that the robot system failure rate is constant and its times to failure are exponentially distributed.
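Equations (6.43) and (6.44) can be evaluated directly; the sketch below uses illustrative (assumed) error and failure rates rather than data from the text:

```python
import math

def robot_reliability(lam, lam_h, t):
    """Equation (6.43): RSR(t) = e^(-(lam + lam_h)t), valid when repair rates are zero."""
    return math.exp(-(lam + lam_h) * t)

def mean_time_to_robot_failure(lam, lam_h):
    """Equation (6.44): MTRF = 1/(lam + lam_h)."""
    return 1 / (lam + lam_h)

# Illustrative rates: lam = 0.0008 failures/hour, lam_h = 0.0004 errors/hour
print(round(robot_reliability(0.0008, 0.0004, 100), 4))      # 0.8869
print(round(mean_time_to_robot_failure(0.0008, 0.0004), 1))  # 833.3 hours
```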

Example 6.10

Assume that a robot (i.e., robot system) can fail either due to a human error or
other failures, and its human errors and other failure rates are 0.0004 errors per
hour and 0.0008 failures per hour, respectively. The robot repair rate from both
the failure modes is 0.005 repairs per hour. Calculate the robot steady-state
availability.
By substituting the specified data values into Equation (6.37) we obtain

RSAV = (0.005)(0.005)/[(0.005)(0.005) + (0.0004)(0.005) + (0.0008)(0.005)]
= 0.8064

Thus, the robot steady-state availability is 0.8064.
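Equation (6.37) can also be packaged as a small function (the function name is illustrative, not from the text); the block below reproduces Example 6.10:

```python
def robot_steady_state_availability(lam, lam_h, alpha, alpha_h):
    """Equation (6.37): RSAV = (alpha*alpha_h)/(alpha*alpha_h + lam_h*alpha + lam*alpha_h)."""
    denominator = alpha * alpha_h + lam_h * alpha + lam * alpha_h
    return (alpha * alpha_h) / denominator

# Example 6.10 data: lam = 0.0008, lam_h = 0.0004, alpha = alpha_h = 0.005 (per hour)
print(round(robot_steady_state_availability(0.0008, 0.0004, 0.005, 0.005), 4))  # 0.8065
```

The exact ratio is 25/31 ≈ 0.80645, which the worked example truncates to 0.8064.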

6.8 PROBLEMS
1. Define the following four terms:
• Robot reliability
• Robot mean time to failure
• Graceful failure
• Error recovery
2. Discuss robot failure categories and their causes.
3. What are the robot effectiveness dictating factors?
4. Write down the formula to obtain the robot hazard rate.
5. Assume that at an industrial installation, robot production hours and
downtime due to robot-related problems are 40,000 hours and 600 hours,
respectively. During that period, there were five robot-related problems.
Calculate the mean time to robot-related problems.
6. Compare a hydraulic robot with an electric robot with respect to reliability.
7. Assume that for a robot facility, the following data values are given:
• θ=4
• Tdi = 0.02 months
• Tdb = 0.8 months
Calculate the optimum number of robot inspections per month and the
minimum total robot downtime.
8. Assume that the initial cost (installed) of a robot is $500,000 and its esti-
mated scrap value is $4,000. The estimated annual increase in its repair
cost is $300. Estimate the time limit beyond which the robot-associated
repairs will not be beneficial.
9. Prove that the sum of Equations (6.29), (6.34), and (6.35) is equal to unity.
10. Write down three formulas that can be used to calculate MTRF.

REFERENCES
1. Jablonowski, J., Posey, J.W., Robotics Terminology, in Handbook of Industrial Robotics,
edited by Nof, S.Y., John Wiley and Sons, New York, 1985, pp. 1271–1303.
2. Zeldman, M.I., What Every Engineer Should Know About Robots, Marcel Dekker,
New York, 1984.
3. Rudall, B.H., Automation and Robotics Worldwide: Reports and Surveys, Robotica,
Vol. 14, 1996, pp. 164–168.
4. Engleberger, J.F., Three Million Hours of Robot Field Experience, The Industrial
Robot, 1974, pp. 164–168.
5. Dhillon, B.S., Fashandi, A.R.M., Liu, K.L., Robot Systems Reliability and Safety: A Review,
Journal of Quality in Maintenance Engineering, Vol. 8, No. 3, 2002, pp. 170–212.
6. Dhillon, B.S., On Robot Reliability and Safety: Bibliography, Microelectronics and
Reliability, Vol. 27, 1987, pp. 105–118.
7. Tver, D.F., Bolz, R.W., Robotics Sourcebook and Dictionary, Industrial Press, New York,
1983.
8. Glossary of Robotics Terminology, in Robotics, edited by Fisher, E.L., Industrial
Engineering and Management Press, Institute of Industrial Engineers, Atlanta, Georgia,
1983, pp. 231–253.
9. American National Standard for Industrial Robots and Robot Systems: Safety
Requirements, ANSI/RIA R15.06-1986, American National Standards Institute (ANSI),
New York, 1986.
10. Susnjara, K.A., Manager’s Guide to Industrial Robots, Corinthian Press, Shaker
Heights, Ohio, 1982.
11. Dhillon, B.S., Robot Reliability and Safety, Springer-Verlag, New York, 1991.
12. Khodanbandehloo, K., Duggan, F., Husband, T.F., Reliability Assessment of Industrial
Robots, Proceedings of the 14th International Symposium on Industrial Robots, 1984,
pp. 209–220.
13. Khodanbandehloo, K., Duggan, F., Husband, T.F., Reliability of Industrial Robots: A
Safety Viewpoint, Proceedings of the 7th British Robot Association Annual Conference,
1984, pp. 133–242.
14. Dhillon, B.S., Human Reliability: With Human Factors, Pergamon Press, New York,
1986.
15. Jones, R., Dawson, S., People and Robots: Their Safety and Reliability, Proceedings of
the 7th British Robot Association Annual Conference, 1984, pp. 243–258.
16. Young, J.F., Robotics, Butterworth, London, 1973.
17. Dhillon, B.S., Design Reliability: Fundamentals and Applications, CRC Press, Boca
Raton, Florida, 1999.
18. Varnum, E.C., Bassett, B.B., Machine and Tool Replacement Practices, in Manufacturing
Planning and Estimating Handbook, edited by Wilson, F.W., Harvey, P.D., McGraw
Hill, New York, 1963, pp. 18.1–18.22.
19. Dhillon, B.S., Applied Reliability and Quality, Springer-Verlag, London, 2007.
20. Dhillon, B.S., Mechanical Reliability: Theory, Models, and Applications, American
Institute of Aeronautics and Astronautics, Washington, D.C, 1988.
21. Wild, R., Essentials of Production and Operations Management, Holt, Rinehart, and
Winston, London, 1985, pp. 356–368.
22. Eidmann, F.L., Economic Control of Engineering and Manufacturing, McGraw Hill,
New York, 1931.
7 Computer and Internet Reliability
7.1 INTRODUCTION
Nowadays, a vast amount of money is being spent annually around the globe to
produce computers for various types of applications ranging from personal use to
control space and other systems. As the computers are composed of both the hard-
ware and software components, for their successful operation, the reliability of both
these components is equally important. The history of computer hardware reliability
may be traced back to the late 1940s and 1950s [1–4]. For example, in 1956 Von Neumann proposed the triple modular redundancy (TMR) scheme for improving computer hardware reliability [3]. It appears that the first serious effort on software reliability started in 1964 at Bell Laboratories [5]. Nonetheless, some of the important
works that appeared in the 1960s on software reliability are available in Refs. [5–7].
The history of the internet goes back to 1969 with the development of the Advanced Research Projects Agency Network (ARPANET), which grew from four hosts in 1969 to over 147 million hosts and 38 million sites in 2002 [8]. Nowadays, billions of people around the globe use internet services [8]. In 2001, there were over
52,000 internet-related failures and incidents. Needless to say, today the reliability
and stability of the internet have become extremely important to the world economy
and other areas, because internet-associated failures can easily generate millions of
dollars in losses and interrupt the daily routines of millions of end users [9].
This chapter presents various important aspects of computer hardware, software,
and internet reliability.

7.2 COMPUTER FAILURE-RELATED CAUSES AND ISSUES IN COMPUTER SYSTEM RELIABILITY
There are many computer failure-related causes. The important ones are as follows
[10–12]:

• Communication network failures


• Peripheral device failures
• Human errors
• Processor and memory failures
• Environmental and power failures
• Mysterious failures
• Gradual erosion of the data base
• Saturation

The first six of the above computer failure-related causes are described below.

DOI: 10.1201/9781003298571-7



Communication network failures are mostly of a transient nature and are associ-
ated with inter-module communication. The application of “vertical parity” logic can
help to cut down approximately 70% of errors in communication lines. Peripheral
device failures are important to consider because they can cause serious problems
but they seldom result in a system shutdown. The frequently occurring errors in
peripheral devices are transient or intermittent, and the devices’ electromechanical
nature is the usual reason for their occurrence.
Human errors, in general, take place due to operator oversights and mistakes,
and frequently occur during starting up, running, and shutting down the system.
Processor and memory failures are associated with processor and memory parity
errors. Although the occurrence of processor errors is quite rare, they are generally
catastrophic. However, there are occasions when the central processor malfunctions
to execute instructions appropriately due to a “dropped bit”. Nowadays, the memory
parity errors take place very rarely because of improvements in hardware reliability
and also they are not necessarily fatal.
Environmental failures take place due to factors such as failure of air conditioning equipment, electromagnetic interference, earthquakes, and fires, whereas power failures occur due to factors such as total power loss from the local utility company and transient fluctuations in frequency or voltage. In real-life systems, the failures that cannot be classified properly are called mysterious failures. An example of such a failure is a normally functioning system that suddenly stops functioning without any indication of a problem (i.e., hardware, software, etc.).
There are many issues, directly or indirectly, concerned with computer system reli-
ability. In this case, some of the important factors to consider are as follows [8, 12, 13]:

• Prior to the production and installation phases, it can be quite difficult to detect errors related to hardware design at the lowest system levels. It is quite possible that oversights in hardware design may result in situations where operational errors due to such oversights are impossible to distinguish from the ones due to transient physical faults.
• Failures in the area of computer systems are highly varied in character. For example, a component/part used in a computer system may, directly or indirectly, experience a transient fault due to its surrounding environment, or it may malfunction permanently.
• Generally, the most powerful type of self-repair in computer systems is
dynamic fault tolerance, but it is quite difficult to analyse. However, for
certain applications it is very important and cannot be ignored totally.
• Modern computers consist of redundancy schemes for fault tolerance, and
advances made over the years have brought various types of improvements,
but there are still many practical and theoretical-associated problems that
remain to be solved properly.
• Computers' main parts/components are the logic elements, which have quite troublesome reliability-related features. In many situations, it is impossible to determine such elements' reliability appropriately, and their defects cannot be remedied properly.

7.3 COMPUTER FAILURE CATEGORIES, HARDWARE AND SOFTWARE ERROR SOURCES, AND COMPUTER RELIABILITY-RELATED MEASURES
Computer failures may be classified under the five categories shown in Fig. 7.1 [14].
Category I: Specification failures. These failures are distinguished by their origin, i.e., defects in the system's specification, rather than in the design or execution of either hardware or software.
Category II: Hardware failures. These failures are just like those in any other piece of equipment, and they take place due to factors such as poor design, poor maintenance, defective parts, and unexpected environmental conditions.
Category III: Software failures. These failures are the result of the inability of a programme to continue processing due to erroneous logic.
Category IV: Human errors. These errors take place due to incorrect actions or lack of actions by humans involved in the process (e.g., the system's designers, builders, and operators).
Category V: Malicious failures. These failures are due to a relatively new phenomenon, i.e., the malicious introduction of programmes intended to cause damage to anonymous users. Quite often these programmes are called computer viruses.
There are many sources for the occurrence of software and hardware errors. Five
of these sources are as follows:

• Inherited errors
• Handwriting errors
• Data preparation errors
• Keying errors
• Optical character reader

It is to be noted that in a computer-based system, the inherited errors can account


for over 50% of the errors [15]. Furthermore, data preparation-related tasks can also

FIGURE 7.1 Computer failure categories.



generate quite a significant proportion of errors. As per Ref. [15], at least 40% of all
errors come from manipulating the data (i.e., data preparation) prior to writing it
down or entering it into the involved computer system.
In the area of the computer system reliability, many measures are being used.
They may be grouped under the following two classifications [12, 16]:

• Classification I: This classification contains the following five measures for handling gracefully degrading systems.
• Measure I: Computation reliability. This is the failure free probability
that the system will, without an error, execute a task of length, say y,
started at time t.
• Measure II: Computation threshold. This is the time at which certain
value of computation reliability is reached for a task whose length is,
say, y.
• Measure III: Mean computation before failure. This is the expected
amount of computation available on the system prior to failure.
• Measure IV: Computation availability. This is the expected computa-
tion capacity of the system at a given time t.
• Measure V: Capacity threshold. This is the time at which certain value
of computation availability is reached.
• Classification II: This classification contains the following four measures
that are considered suitable for configurations such as standby, hybrid, and
massively redundant systems:
• Measure I: System reliability
• Measure II: Mean time to failure
• Measure III: System availability
• Measure IV: Mission time

It is to be noted that to evaluate gracefully degrading systems, these four measures may not be sufficient.

7.4 COMPARISONS BETWEEN COMPUTER HARDWARE AND SOFTWARE RELIABILITY
As it is very important to have a clear understanding of the differences between computer hardware and software reliability, Table 7.1 presents comparisons in some important areas [17–19].

7.5 FAULT MASKING


The term fault masking is used in the area of fault-tolerant computing to state that
a system with redundancy can tolerate a number of failures prior to its own failure.
More clearly, the implication of the term simply is that a problem has surfaced some-
where within the digital system framework, but because of the nature of the design,
the problem does not affect the overall operation of the system.

TABLE 7.1
Hardware and Software Reliability Comparisons

No. | Hardware Reliability | Software Reliability
1 | Mean time to repair has certain significance | Mean time to repair has no significance
2 | Interfaces are visual | Interfaces are conceptual
3 | Wears out | Does not wear out
4 | Many hardware items fail as per the bathtub hazard rate curve | Software does not fail as per the bathtub hazard rate curve
5 | A hardware failure is usually caused by physical effects | A software failure is caused by programming error
6 | Preventive maintenance is performed to inhibit failures | Preventive maintenance has no meaning in software
7 | The failed item/system is repaired by performing corrective maintenance | Corrective maintenance is basically redesign
8 | Hardware can be repaired by using spare modules | Software failures cannot be repaired by using spare modules
9 | Obtaining good quality data is a problem | Obtaining good quality data is a problem
10 | Normally redundancy is effective | Redundancy may not be effective
11 | Hardware reliability has well-developed theory and mathematical concepts | Software reliability still lacks well-developed theory and mathematical concepts

The best-known fault masking method is probably modular redundancy, which is presented in the following sections [17].

7.5.1 Triple Modular Redundancy (TMR)


In this case, three identical units/modules perform the same task simultaneously and the
voter compares their outputs (i.e., the units/modules) and sides with the majority [3, 17].
More specifically, the TMR system fails only when more than one unit/module malfunc-
tions or the voter malfunctions. In other words, the TMR system can tolerate failure of
a single unit/module. An important example of the TMR scheme application was the
SATURN launch vehicle computer, which used TMR with voters in the central processor
and duplication in the main memory [20]. The TMR scheme’s block diagram is shown in
Fig. 7.2 and the blocks in the diagram represent units/modules and the voter. In addition,
the TMR system without the voter is inside the dotted rectangle.
For independently failing units/modules and the voter, the reliability of the sys-
tem shown in Fig. 7.2 is [17]

Rtv = (3R^2 − 2R^3)Rv (7.1)

where
R is the unit/module reliability.
Rv is the voter reliability.
Rtv is the reliability of TMR system with voter.

FIGURE 7.2 Block diagram representing the TMR scheme with voter.

With a 100% reliable voter (i.e., Rv = 1), Equation (7.1) becomes

Rtv = 3R^2 − 2R^3 (7.2)

where
Rtv is the reliability of the TMR system with perfect voter.

It is to be noted that the voter reliability and the single unit’s reliability determine
the improvement in reliability of the TMR system over a single unit system. For
the perfect voter (i.e., Rv = 1) the TMR system reliability given by Equation (7.2) is
only better than the single unit system when the single unit’s reliability is greater
than 0.5.
At Rv = 0.8, the reliability of the TMR system is always less than a single unit's reliability. Furthermore, when Rv = 0.9 the TMR system reliability is only marginally better than the single unit/module reliability when the single unit/module reliability is approximately between 0.667 and 0.833 [21].

7.5.1.1 TMR System Maximum Reliability with Perfect Voter


For the perfect (i.e., 100% reliable) voter, the TMR System reliability is given by
Equation (7.2). Under this scenario, the ratio of Rtv to a single unit reliability, R, is
expressed by [22]

α = Rtv/R = (3R^2 − 2R^3)/R = 3R − 2R^2 (7.3)
By differentiating Equation (7.3) with respect to R and then equating it to zero, we
obtain


dα/dR = 3 − 4R = 0 (7.4)

By solving Equation (7.4), we obtain

R = 0.75

This simply means that the TMR system’s maximum reliability will occur at R = 0.75.
Thus, by substituting this value for R into Equation (7.2), we get

Rtv = 3(0.75)^2 − 2(0.75)^3
= 0.8438

Thus, the maximum value of the TMR system reliability with the perfect voter is
0.8438.
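The TMR expressions above are easy to explore numerically. The following Python sketch (the function name is illustrative, not from the text) implements Equation (7.1) and confirms the maximum found from Equation (7.4):

```python
def tmr_reliability(r, rv=1.0):
    """Equation (7.1): Rtv = (3R^2 - 2R^3)Rv; rv = 1 gives Equation (7.2)."""
    return (3 * r**2 - 2 * r**3) * rv

# With a perfect voter, the maximum of Rtv/R occurs at R = 0.75 (Equation (7.4))
print(tmr_reliability(0.75))  # 0.84375, i.e., approximately 0.8438
print(tmr_reliability(0.5))   # 0.5: the crossover point with a single unit
```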

Example 7.1

Assume that the reliability of a TMR system with a perfect voter is expressed by
Equation (7.2). Determine the points where the single-unit and the TMR-system
reliabilities are equal.
In order to determine the points, we equate the reliability of the single unit (i.e., R)
with Equation (7.2) to obtain

R = Rtv = 3R^2 − 2R^3 (7.5)

By rearranging Equation (7.5), we obtain

2R^2 − 3R + 1 = 0 (7.6)

The above Equation (7.6) is a quadratic equation and its roots are

R = {3 + [9 − (4)(2)(1)]^1/2}/[(2)(2)] = 1 (7.7)

and

R = {3 − [9 − (4)(2)(1)]^1/2}/[(2)(2)] = 1/2 (7.8)

It means that the reliabilities of the TMR system with perfect voter and the single
unit are equal at R = 1 or R = 1/2. Furthermore, the reliability of the TMR system
with perfect voter will only be higher than the single unit reliability when the value
of R is greater than 0.5.

7.5.1.2 TMR System with Voter Time-Dependent Reliability and Mean Time to Failure
With the aid of material presented in Chapter 3 and Equation (7.1), for constant
failure rates of the TMR system units and voter unit, the TMR system with voter
reliability is given by [17, 23]

Rtv (t ) = 3e −2 λt − 2e −3λt  e −λ v t


= 3e − (2 λ+λ v )t − 2e − (3λ+λ v )t (7.9)
where
Rtv (t ) is the TMR system with voter reliability at time t.
λ v is the voter unit constant failure rate.
λ is the unit/module constant failure rate.

By integrating Equation (7.9) over the time interval from 0 to ∞, we get the following
expression for the TMR system with voter mean time to failure [12, 17]:

MTTFtv = ∫0∞ [3e^(−(2λ+λv)t) − 2e^(−(3λ+λv)t)] dt
= 3/(2λ + λv) − 2/(3λ + λv) (7.10)
where
MTTFtv is the mean time to failure of the TMR system with voter.

For perfect voter (i.e., λ v = 0 ), Equation (7.10) reduces to

MTTFtp = 3/(2λ) − 2/(3λ)
= 5/(6λ) (7.11)
where
MTTFtp is the mean time to failure of the TMR system with perfect voter.

Example 7.2

Assume that the constant failure rate of a unit/module belonging to a TMR system
with voter is λ = 0.0002 failures per hour. Calculate the system reliability for a 400-
hour mission if the voter constant failure rate is λv = 0.0001 failures per hour. In addition,
calculate the TMR system mean time to failure.
By substituting the given data values into Equation (7.9), we obtain

Rtv(400) = 3e^(−[2(0.0002)+0.0001](400)) − 2e^(−[3(0.0002)+0.0001](400))
= 0.9446
Similarly, by substituting the specified data values into Equation (7.10), we obtain

MTTFtv = 3/[2(0.0002) + 0.0001] − 2/[3(0.0002) + 0.0001]
= 3142.85 hours

Thus, the TMR system with voter reliability and mean time to failure are 0.9446
and 3142.85 hours, respectively.
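Equations (7.9) and (7.10) can be checked with a short Python sketch (function names are illustrative, not from the text), reproducing Example 7.2:

```python
import math

def tmr_voter_reliability(lam, lam_v, t):
    """Equation (7.9): Rtv(t) = 3e^(-(2*lam+lam_v)t) - 2e^(-(3*lam+lam_v)t)."""
    return 3 * math.exp(-(2 * lam + lam_v) * t) - 2 * math.exp(-(3 * lam + lam_v) * t)

def tmr_voter_mttf(lam, lam_v):
    """Equation (7.10): MTTFtv = 3/(2*lam+lam_v) - 2/(3*lam+lam_v)."""
    return 3 / (2 * lam + lam_v) - 2 / (3 * lam + lam_v)

# Example 7.2 data: lam = 0.0002, lam_v = 0.0001 failures per hour, t = 400 hours
print(round(tmr_voter_reliability(0.0002, 0.0001, 400), 4))  # 0.9446
print(round(tmr_voter_mttf(0.0002, 0.0001), 2))              # 3142.86
```

(The worked example rounds the mean time to failure down to 3142.85 hours.)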

7.5.2 N-Modular Redundancy (NMR)


This is the general form of the TMR (i.e., it contains N identical units/modules
instead of only three units).
The number N is any odd number, and the N-modular redundancy (NMR) sys-
tem can tolerate a maximum of k unit/modular failures if the value of N is equal to
(2k + 1). As the voter acts in series with the N-module system, the entire system fails
whenever a voter unit failure occurs.
For independently failing units/modules and the voter, the reliability of the NMR
system with voter is given by [17, 22]

Rnv = Rv [Σ (i = 0 to k) (N i) R^(N−i) (1 − R)^i] (7.12)

where
(N i) = N!/[(N − i)!i!] is the binomial coefficient.
Rv is the voter reliability.
R is the unit/module reliability.
Rnv is the NMR system with voter reliability.

Finally, it is to be noted that the time dependent reliability analysis of an NMR sys-
tem can be conducted in a manner similar to the TMR system reliability analysis.
Additional information on redundancy schemes is available in Ref. [23].
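Equation (7.12) can be sketched in a few lines of Python (the function name is illustrative, not from the text); for N = 3 it reduces to the TMR expression of Equation (7.2):

```python
from math import comb

def nmr_reliability(n, r, rv=1.0):
    """Equation (7.12): NMR system reliability with voter.

    n must be odd; the system tolerates up to k = (n - 1) // 2 module failures.
    """
    k = (n - 1) // 2
    return rv * sum(comb(n, i) * r**(n - i) * (1 - r)**i for i in range(k + 1))

# For N = 3 this reduces to the TMR expression 3R^2 - 2R^3
print(round(nmr_reliability(3, 0.9), 3))  # 0.972
print(round(nmr_reliability(5, 0.9), 4))  # 0.9914
```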

7.6 SOFTWARE RELIABILITY ASSESSMENT METHODS


There are many quantitative and qualitative methods that can be utilised to assess
software reliability. They may be classified under the following three categories [21]:

• Category I: Analytical methods


• Category II: Software reliability models
• Category III: Software metrics

Each of the above three categories is described in Sections 7.6.1–7.6.3.

7.6.1 Category I: Analytical Methods


There are a number of analytical methods that can be utilised to assess software
reliability. Two of these methods are fault tree analysis (FTA) and failure modes and
effect analysis (FMEA). Both these methods are quite commonly utilised to assess
reliability of hardware, and they can also be utilised to assess reliability of software
as well. Both FTA and FMEA methods are described in Chapter 4.

7.6.2 Category II: Software Reliability Models


There are many software reliability evaluation models [12, 17, 24–26]. This section
presents two such models.

7.6.2.1 Musa Model


The basis of the Musa model is the premise that all reliability assessments in the time domain can only be based upon actual execution time, as opposed to calendar/elapsed time. The main reason for this is that only during the ongoing execution process does a software programme become exposed to failure-provoking stress.
Two of the main assumptions associated with this model are as follows [17, 27]:

• Failure intervals are statistically independent and follow a Poisson


distribution.
• Execution time between failures is piecewise exponentially distributed, and
failure rate is proportional to the remaining defects.

The net number of corrected faults is expressed by [12, 17, 27].

m = n[1 − exp(−θt/nMT)] (7.13)

where
t is time.
MT is the mean time to failure at the beginning of the test.
m is the net number of corrected faults.
θ is the testing compression factor, defined as the average ratio of the failure detection rate during test to the failure detection rate during normal use of the software programme under consideration.
n is the initial number of faults.

Mean time to failure, MTTF, increases exponentially with execution time and is
defined by

MTTF = MT exp(θt/nMT) (7.14)

Thus, the reliability at operational time t is given by

R(t ) = exp(−t /MTTF ) (7.15)

From the above relationships, we get the number of failures that must occur to
increase mean time to failure from, say, MTTF1 to MTTF2 [23]

 1 1 
∆m = nMT  −  (7.16)
 MTTF1 MTTF2 

The additional execution time needed to experience ∆m is expressed by

Δt = (nMT/θ) ln(MTTF2/MTTF1) (7.17)

Example 7.3

Assume that a newly developed software programme is estimated to have around 600 errors. Also, at the start of the testing process, the recorded mean time to failure is 4 hours.
Estimate the amount of testing time needed to reduce the remaining errors to ten, if the value of the testing compression factor is two. Also, calculate the reliability over an 80-hour operational period.
Using the specified data values in Equation (7.16) yields

1 1 
(600 − 10) = (600)(4)  − (7.18)
 4 MTTF2 

By rearranging Equation (7.18), we get

MTTF2 = 240 hours

By substituting the above result and the other given data values into Equation (7.17),
we obtain

 (600)(4)   240 
∆t =  ln
 2   4 
= 4913.21hours

Similarly, using the calculated and given data values in Equation (7.15) yields

 80 
R(80) = exp  −
 240 
= 0.7165

Thus, the needed testing time is 4913.21 hours, and the reliability of the software
for the stated operational period is 0.7165.
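The Musa-model calculations of Example 7.3 can be reproduced with a short Python sketch (function names are illustrative, not from the text):

```python
import math

def musa_extra_failures(n, mt, mttf1, mttf2):
    """Equation (7.16): failures needed to raise MTTF from mttf1 to mttf2."""
    return n * mt * (1 / mttf1 - 1 / mttf2)

def musa_extra_time(n, mt, theta, mttf1, mttf2):
    """Equation (7.17): extra execution time needed for that improvement."""
    return (n * mt / theta) * math.log(mttf2 / mttf1)

# Example 7.3 data: n = 600 faults, MT = 4 hours, theta = 2.
# Reducing the remaining errors to 10 means experiencing 590 more failures,
# so Equation (7.16) is solved for MTTF2:
mttf2 = 1 / (1 / 4 - 590 / (600 * 4))
print(round(mttf2))                                    # 240 hours
print(round(musa_extra_time(600, 4, 2, 4, mttf2), 2))  # 4913.21 hours
print(round(math.exp(-80 / mttf2), 4))                 # 0.7165
```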

7.6.2.2 Mills Model


The basis for the Mills model is that an assessment of the faults remaining in a software programme can be carried out through a seeding process that assumes a homogeneous distribution of a representative group of faults. Prior to the seeding process's initiation, a fault analysis is required to determine the expected types of faults in the code as well as their relative frequency of occurrence.
An identification of seeded and unseeded faults is conducted during the reviews
or testing process, and the indigenous and seeded faults’ discovery allows an assess-
ment of remaining faults for the type of fault under consideration. However, it is to

be noted that the value of this measure can only be calculated if the seeded faults
are found.
The maximum likelihood of the unseeded faults is expressed by [21, 28]

UFm1 = ( SF )(UFu )/( SFf ) (7.19)

where
UFm1 is the maximum likelihood of the unseeded faults.
SFf is the number of seeded faults found.
UFu is the number of unseeded faults uncovered.
SF is the number of seeded faults.

Thus, the number of unseeded faults still remaining in a software programme under
consideration is given by

β = UFm1 − UFu (7.20)

where
β is the number of unseeded faults still remaining in a software programme under
consideration.
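Equations (7.19) and (7.20) combine into a one-line estimate. The sketch below uses hypothetical counts (the function name and data are assumptions, not from the text):

```python
def mills_remaining_unseeded(seeded, seeded_found, unseeded_found):
    """Equations (7.19)-(7.20): estimate of unseeded faults still remaining."""
    uf_ml = seeded * unseeded_found / seeded_found  # Equation (7.19)
    return uf_ml - unseeded_found                   # Equation (7.20)

# Hypothetical data: 100 seeded faults of which 50 were found, alongside
# 20 unseeded faults uncovered during the same reviews/testing
print(mills_remaining_unseeded(100, 50, 20))  # 20.0
```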

7.6.3 Category III: Software Metrics


Software metrics may simply be described as quantitative indicators of the degree to which a software process/item possesses a stated attribute. Quite often, these metrics are used for determining the status of a trend in a software development process as well as for determining the risk of going from one phase to another. Two such metrics considered quite useful for assessing, directly or indirectly, software reliability are presented in Sections 7.6.3.1 and 7.6.3.2.

7.6.3.1 Code and Unit Test Phase Measure


This software metric is concerned with assessing software reliability during the code
and unit test phase and is defined by [12, 17, 21]

CDRc = [Σ (j = 1 to m) γj]/θ (7.21)

where
m is the number of reviews.
γ j is the number of unique defects at or above a specified severity level, found in
the jth code review.
θ is the total number of source lines of code reviewed, expressed in thousands.
CDRc is the cumulative defect ratio for code.

7.6.3.2 Design Phase Measure


This software metric is concerned with determining the degree of reliability growth
during the design phase. The metric/measure requires establishing necessary defect
severity categories and possesses some ambiguity, since its low value may mean
either a good product or a poor review process. The metric is defined by [12, 17, 21]

CDRd = [Σ (j = 1 to m) βj]/α (7.22)

where
m is the number of reviews.
β j is the number of unique defects at or above a specified severity level, found in
the jth design review.
α is the total number of source lines of design statement in the design phase,
expressed in thousands.
CDRd is the cumulative defect ratio for design.
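Both cumulative defect ratios share the same form: total unique defects found across reviews divided by thousands of lines reviewed. A sketch with hypothetical data (function name and numbers are assumptions, not from the text):

```python
def cumulative_defect_ratio(defects_per_review, ksloc):
    """Equations (7.21)/(7.22): unique defects (at or above a severity level)
    summed over all reviews, divided by thousands of source lines reviewed."""
    return sum(defects_per_review) / ksloc

# Hypothetical data: three code reviews finding 4, 7, and 2 qualifying defects
# over 12 thousand source lines of code reviewed
print(round(cumulative_defect_ratio([4, 7, 2], 12), 2))  # 1.08
```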

7.7 INTERNET FACTS, FIGURES, FAILURE EXAMPLES, AND RELIABILITY-RELATED OBSERVATIONS
Some of the important internet facts, figures, and failure examples are as follows:

• In 2011, over 2.1 billion people around the globe were using the internet,
and around 45% of them were below the age of 25 years [29].
• From 2006 to 2011, developing countries in the world increased their share
of the world’s total number of internet users from 44% to 62% [29].
• In 2001, there were 52,658 internet-associated failures and incidents [9].
• In 2000, in the entire United States internet-associated economy generated
approximately $830 billion in revenues [9].
• In 2000, the internet carried around 51% of the information flowing through
two-way telecommunication, and by 2007 over 97% of all telecommuni-
cated information was transmitted through the internet [30].
• On November 8, 1998, a malformed routing control message because of
a software fault triggered an inter-operability problem between a number
of core internet backbone routers produced by different vendors. In turn,
this caused a widespread loss of network connectivity, in addition to an
increase in packet loss and latency [8]. Most of the backbone providers took
many hours to overcome this outage.
• On August 14, 1998, a misconfigured main internet database server incor-
rectly referred all queries for internet machines/systems with names ending
in “net” to the wrong secondary database server. In turn, due to this prob-
lem, most of the connections to “net” internet web servers and other end
stations malfunctioned for a number of hours [8].

• On April 25, 1997, a misconfigured router of a Virginia service provider


injected an incorrect map into the global internet and, in turn, the internet
providers who accepted this map automatically diverted all their traffic to
the Virginia provider [31]. This caused network congestion, instability, and
overload of internet router table memory that ultimately shut down many of
the main internet backbones for about two hours [31].

In 1999, a study reported the following four internet reliability-related observations [32]:

• In the internet backbone infrastructure, only a minute fraction of network
paths contributes disproportionately, directly or indirectly, to the number
of long-term outages and backbone unavailability.
• Mean time to failure (MTTF) and mean time to repair (MTTR) for most
of the internet paths are about 25 days or less and about 20 minutes or less,
respectively.
• MTTF and availability of the internet backbone structure are quite signifi-
cantly lower than the Public Switched Telephone Network.
• Most interprovider path malfunctions result from congestion collapse.

7.8 INTERNET OUTAGE CLASSIFICATIONS AND AN APPROACH
FOR AUTOMATING FAULT DETECTION IN INTERNET-RELATED SERVICES
A case study performed over a period of one year concerning Internet-associated
outages grouped the outages under the following twelve classifications (along with
their occurrence percentages in parentheses) [29]:

• Classification I: Maintenance (16.2%)


• Classification II: Power outage (16%)
• Classification III: Fiber cut/circuit/carrier problem (15.3%)
• Classification IV: Unreachable (12.6%)
• Classification V: Hardware problem (9%)
• Classification VI: Interface down (6.2%)
• Classification VII: Routing problem (6.1%)
• Classification VIII: Miscellaneous (5.9%)
• Classification IX: Unknown/undetermined/no problem (5.6%)
• Classification X: Congestion/sluggish (4.6%)
• Classification XI: Malicious attacks (1.5%)
• Classification XII: Software problem (1.3%)

As many internet services (e.g., e-commerce and search engines) suffer faults, a
quick detection of these faults could be a very important factor in improving system
availability. For this very purpose, an approach known as the pinpoint method is
considered very useful. This method combines the easy deployability of low-level
monitors with the higher-level monitors’ ability to detect application-level faults [32].

This method is based upon the following three assumptions in regard to the system
under observation and its workload [32]:

i. An interaction with the system is relatively short-lived, and its processing
can be broken down as a path: more clearly, a tree of the names of the
elements or components that participate in the servicing of that request.
ii. The software is composed of various interconnected components (modules)
with quite well-defined narrow interfaces. These could be software subsystems,
objects, or simply physical node boundaries.
iii. There is a quite high volume of basically independent requests (i.e., from
different users).

The pinpoint method for detecting and localising anomalies is basically a three-stage
process [32]:

• Stage 1: Observing the system. This is concerned with capturing the run-
time path of each and every request served by the system and then from
these paths extracting two specific low-level behaviors likely to reflect
high-level functionality: path shapes and components’ interactions.
• Stage 2: Learning the patterns in system behaviour. This is concerned
with constructing a reference model representing the usual behaviour of
an application in regard to path shapes and components/parts interactions,
by assuming that most of the system functions correctly most of the time.
• Stage 3: Detecting anomalies in system behaviors. This is concerned with
analysing the system’s current behaviour and detecting anomalies in regard
to the reference model.

Additional information on this method is available in Ref. [32].

7.9 MATHEMATICAL MODELS FOR CONDUCTING
INTERNET RELIABILITY AND AVAILABILITY ANALYSIS
There are many mathematical models that can be used to conduct various types of
reliability and availability analysis of internet-associated services [17, 33–36].
Two of these models are presented below.

7.9.1 Model I
This mathematical model is concerned with evaluating the reliability and availability
of an internet server system. The model assumes that the server system can be in
either an operating or a failed state, that its failure/outage and restoration/repair
rates are constant, that all its failures/outages occur independently, and that the
repaired/restored server system is as good as new.
The server system state space diagram is shown in Fig. 7.3, and the numerals in the
box and circle denote the system states.

FIGURE 7.3 Internet server system state-space diagram.

The following symbols were used to develop equations for the model:

j is the jth internet server system state shown in Fig. 7.3, for j = 0 (internet
server system operating normally), j = 1 (internet server system failed).
λ ss is the internet server system constant failure/outage rate.
µ ss is the internet server system constant repair/restoration rate.
Pj (t ) is the probability that the internet server system is in state j at time t, for
j = 0,1.

Using the Markov method presented in Chapter 4, we write down the following
equations for the diagram shown in Fig. 7.3 [17]:

dP0(t)/dt + λss P0(t) = µss P1(t) (7.23)

dP1(t)/dt + µss P1(t) = λss P0(t) (7.24)

At time t=0, P0 (0) = 1, and P1 (0) = 0.


By solving Equations (7.23) and (7.24), we get the following probability equations:

P0(t) = AVss(t) = µss/(λss + µss) + [λss/(λss + µss)]e^(−(λss + µss)t) (7.25)

P1(t) = UAss(t) = λss/(λss + µss) − [λss/(λss + µss)]e^(−(λss + µss)t) (7.26)

where
AVss (t ) is the internet server system availability at time t.
UAss (t ) is the internet server system unavailability at time t.

As time t becomes very large, Equations (7.25) and (7.26) reduce to

AVss = lim(t→∞) AVss(t) = µss/(λss + µss) (7.27)

UAss = lim(t→∞) UAss(t) = λss/(λss + µss) (7.28)

For µ ss = 0, Equation (7.25) reduces to

Rss(t) = e^(−λss t) (7.29)

where
Rss (t ) is the internet server system reliability at time t.

Thus, the internet server system mean time to failure is given by [17]


MTTFss = ∫₀∞ Rss(t)dt = ∫₀∞ e^(−λss t)dt = 1/λss (7.30)

where
MTTFss is the internet server system mean time to failure.

Example 7.4

Assume that the constant failure/outage and repair/restoration rates of an internet


server system are 0.0005 failures/outages per hour and 0.08 repairs/restorations
per hour, respectively. Calculate the server system steady state availability.
By substituting the specified data values into Equation (7.27), we obtain

AVss = 0.08/(0.0005 + 0.08)
     = 0.9938

Thus, the steady state availability of the internet server system is 0.9938.
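Example 7.4 and Equations (7.25), (7.27), and (7.30) can be checked with a few lines of Python; only the two rates stated in the example are used.

```python
import math

lam = 0.0005   # constant failure/outage rate, per hour (from Example 7.4)
mu = 0.08      # constant repair/restoration rate, per hour

def availability(t):
    """Equation (7.25): time-dependent availability AVss(t)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

steady_state = mu / (lam + mu)   # Equation (7.27)
mttf = 1 / lam                   # Equation (7.30)

print(round(steady_state, 4))    # 0.9938
print(mttf)                      # 2000.0 hours
```

Note that the availability starts at 1 at t = 0 and decays to the steady-state value as t grows.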

7.9.2 Model II
This mathematical model is concerned with evaluating the availability of an
internetworking (router) system with two independent and identical switches. The
model assumes that the switches form a standby-type configuration and that the
system malfunctions when both switches malfunction. In addition, the switch
failure/malfunction and restoration (repair) rates are constant. The system state
space diagram is shown in Fig. 7.4. The numerals in the rectangles and circle
denote the system states.

FIGURE 7.4 System state space diagram.

The following symbols were used to develop equations for the model:

j is the jth state shown in Fig. 7.4, for j = 0 (system operating normally [i.e.,
two switches functional: one operating, other on standby]), j = 1 (one switch
operating, the other failed), j = 2 (system failed [both switches failed]).
λ s is the switch constant failure rate.
µ s is the switch constant repair/restoration rate.
µ 2 is the constant repair/restoration rate from state 2 to state 0.
p is the probability of failure detection and successful switchover upon switch
failure.
Pj (t ) is the probability that the internetworking (router) system is in state j at
time t; for j = 0,1,2.

Using the Markov method presented in Chapter 4, we write down the following
equations for the diagram shown in Fig. 7.4 [17, 37].

dP0(t)/dt + [pλs + (1 − p)λs]P0(t) = µs P1(t) + µ2 P2(t) (7.31)

dP1(t)/dt + (λs + µs)P1(t) = pλs P0(t) (7.32)

dP2(t)/dt + µ2 P2(t) = λs P1(t) + (1 − p)λs P0(t) (7.33)

At time t=0, P0 (0) = 1, P1 (0) = 0, and P2 (0) = 0.


The following steady-state probability solutions are obtained by setting the
derivatives equal to zero in Equations (7.31)–(7.33) and using the relationship
∑(j=0 to 2) Pj = 1:

P0 = µ 2 (µ s + λ s )/X (7.34)

where

X = µ2(µs + pλs + λs) + (1 − p)λs(µs + λs) + pλs² (7.35)

P1 = pλs µ2/X (7.36)

P2 = [pλs² + (1 − p)λs(µs + λs)]/X (7.37)

where
Pj is the steady-state probability that internetworking (router) system is in state j,
for j = 0,1,2.

The internetworking (router) system steady-state availability is given by

AVis = P0 + P1 = [µ2(µs + λs) + pλs µ2]/X (7.38)
where
AVis is the internetworking (router) system steady-state availability.
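Model II's closed-form results can be verified numerically. The sketch below evaluates Equations (7.34)–(7.38) for an illustrative set of rates (the values of λs, µs, µ2, and p are assumptions, not from the text) and checks that the three state probabilities sum to one.

```python
# Illustrative rates for the two-switch internetworking (router) system.
lam_s, mu_s, mu_2, p = 0.001, 0.05, 0.04, 0.95

# Equation (7.35)
X = (mu_2 * (mu_s + p * lam_s + lam_s)
     + (1 - p) * lam_s * (mu_s + lam_s)
     + p * lam_s**2)

P0 = mu_2 * (mu_s + lam_s) / X                              # Equation (7.34)
P1 = p * lam_s * mu_2 / X                                   # Equation (7.36)
P2 = (p * lam_s**2 + (1 - p) * lam_s * (mu_s + lam_s)) / X  # Equation (7.37)

assert abs(P0 + P1 + P2 - 1) < 1e-12   # the probabilities must sum to one
AV = P0 + P1                           # Equation (7.38)
print(round(AV, 4))                    # 0.9983
```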

7.10 PROBLEMS
1. Make a comparison between software reliability and hardware reliability.
2. What are the main causes of computer failures?
3. Discuss at least five categories of computer failures.
4. What are the sources of computer software and hardware errors?
5. What is fault masking?
6. Assume that the constant failure rate of a unit/module belonging to a TMR
system with voter is λ = 0.0004 failures per hour. Calculate the system reli-
ability for a 200-hour mission if the voter constant failure rate is 0.0002 fail-
ures per hour. In addition, calculate the TMR system mean time to failure.
7. Compare the Mills model with the Musa model.
8. Describe the pinpoint method.
9. Assume that the constant failure and repair rates of an internet server sys-
tem are 0.005 failures per hour and 0.04 repairs per hour, respectively.
Calculate the internet server system availability for a 20-hour mission.
10. Prove Equations (7.34), (7.36), and (7.37) by using Equations (7.31)–(7.33).

REFERENCES
1. Shannon, C.E., A Mathematical Theory of Communication, Bell System Technical
Journal, Vol. 27, 1948, pp. 379–423 and 623–656.
2. Hamming, R.W., Error Detecting and Error Correcting Codes, Bell System Technical
Journal, Vol. 29, 1950, pp. 147–160.
3. Von Neumann, J., Probabilistic Logics and the Synthesis of Reliable Organisms from
Unreliable Components, in Automata Studies, edited by Shannon, C.E., McCarthy, J.,
Princeton University Press, Princeton, New Jersey, 1956, pp. 43–48.

4. Moore, E.F., Shannon, C.E., Reliable Circuits Using Less Reliable Relays, Journal of
the Franklin Institute, Vol. 262, 1956, pp. 191–208.
5. Haugk, G., Tsiang, S.H., Zimmermann, L., System Testing of the No. 1 Electronic
Switching System, Bell System Technical Journal, Vol. 43, 1964, pp. 2575–2592.
6. Sauter, J.L., Reliability in Computer Programs, Mechanical Engineering, Vol. 91, 1969,
pp. 24–27.
7. Barlow, R., Scheuer, E.M., Reliability Growth During a Development Testing Program,
Technometrics, Vol. 8, 1966, pp. 53–60.
8. Dhillon, B.S., Computer System Reliability: Safety and Usability, CRC Press, Boca
Raton, Florida, 2013.
9. Goseva-Popstojanova, K., Mazimdar, S., Singh, A.D., Empirical Study of Session-Based
Workload and Reliability for Web Servers, Proceedings of the 15th Int. Symposium on
Software Reliability Engineering, 2004, pp. 403–414.
10. Yourdon, E., The Causes of System Failures-Part 2, Modern Data, Vol. 5, February
1972, pp. 50–56.
11. Yourdon, E., The Causes of System Failures-Part 3, Modern Data, Vol. 5, March 1972,
pp. 36–40.
12. Dhillon, B.S., Reliability in Computer System Design, Ablex Publishing, Norwood,
New Jersey, 1987.
13. Goldberg, J., A Survey of the Design and Analysis of Fault-Tolerant Computers, in
Reliability and Fault Tree Analysis, edited by Barlow, R.E., Fussell, J.B., Singpurwalla,
N.D., Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania,
1975, pp. 667–685.
14. Kletz, T., Chung, P., Broomfield, E., Shen-Orr, C., Computer Control and Human Error,
Gulf Publishing, Houston, Texas, 1995.
15. Bailey, R.W., Human Error in Computer Systems, Prentice Hall, Englewood Cliffs,
New Jersey, 1983.
16. Beaudry, M.D., Performance Related Reliability Measures for Computer Systems,
IEEE Transactions on Computers, Vol. 27, June 1978, pp. 540–547.
17. Dhillon, B.S., Design Reliability: Fundamentals and Applications, CRC Press, Boca
Raton, Florida, 1999.
18. Kline, M.B., Software and Hardware Reliability and Maintainability: What are the
Differences?, Proceedings of the Annual Reliability and Maintainability Symposium,
1980, pp. 179–185.
19. Ireson, W.G., Coombs, C.F., Moss, R.Y., Handbook of Reliability Engineering and
Management, McGraw Hill, New York, 1996.
20. Mathur, F.R., Avizienis, A., Reliability Analysis and Architecture of a Hybrid Redun­
dant Digital System: Generalized Triple Modular Redundancy with Self-Repair, Procee­
dings of the AFIPS Spring Joint Computer Conference, 1970, pp. 375–387.
21. Pecht, M., Ed., Product Reliability, Maintainability, and Supportability Handbook,
CRC Press, Boca Raton, Florida, 1995.
22. Shooman, M.L., Reliability of Computer Systems and Networks: Fault Tolerance,
Analysis, and Design, John Wiley and Sons, New York, 2002.
23. Nerber, P.O., Power-off Time Impact on Reliability Estimates, IEEE International
Convention Record, Part 10, March 1965, pp. 1–8.
24. Sukert, A.N., An Investigation of Software Reliability Models, Proceedings of the
Annual Reliability and Maintainability Symposium, 1977, pp. 478–484.
25. Musa, J.D., Iannino, A., Okumoto, K., Software Reliability, McGraw Hill, New York,
1987.
26. Schick, G.J., Wolverton, R.W., An Analysis of Competing Software Reliability Models,
IEEE Transactions on Software Engineering, Vol. 4, 1978, pp. 104–120.

27. Musa, J.D., A Theory of Software Reliability and Its Applications, IEEE Transactions
on Software Engineering, Vol. 1, 1975, pp. 312–327.
28. Mills, H.D., On the Statistical Validation of Computer Programs, Report No. 72-6015,
IBM: Federal Systems Division, Gaithersburg, MD, 1972.
29. ICT Facts and Figures, International Telecommunication Union, ICT Data and Statistics
Division, Telecommunication Development Bureau, Geneva, Switzerland, 2011.
30. Hilbert, M., Lopez, P., The World’s Technological Capacity to Store, Communicate,
and Compute Information, Science, Vol. 332, No. 6025, April 2011, pp. 60–65.
31. Barrett, R., Haar, S., Whitestone, R., Routing Snafu Causes Internet Outage, Interactive
Week, April 25, 1997, p. 9.
32. Kiciman, E., Fox, A., Detecting Application-Level Failures in Component-Based Internet
Services, IEEE Transactions on Neural Networks, Vol. 16, No. 5, 2005, pp. 1027–1041.
33. Chan, C.K., Tortorella, M., Spares-Inventory Sizing for End-to-End Service Availa­
bility, Proceedings of the Annual Reliability and Maintainability Symposium, 2001,
pp. 98–102.
34. Imaizumi, M., Kimura, M., Yasui, K., Optimal Monitoring Policy for Server System
with Illegal Access, Proceedings of the 11th ISSAT International Conference on
Reliability and Quality in Design, 2005, pp. 155–159.
35. Hecht, M., Reliability/Availability Modeling and Prediction of E-Commerce and Other
Internet Information Systems, Proceedings of the Annual Reliability and Maintainability
Symposium, 2001, pp. 176–182.
36. Aida, M., Abe, T., Stochastic Model of Internet Access Patterns, IEICE Transactions on
Communications, Vol. E84-B, No. 8, 2001, pp. 2142–2150.
37. Dhillon, B.S., Kirmizi, F., Probabilistic Safety Analysis of Maintainable Systems,
Journal of Quality in Maintenance Engineering, Vol. 9, No. 3, 2003, pp. 303–320.
8 Power System Reliability

8.1 INTRODUCTION
An electric power system's main areas are generation, transmission, and distribution,
and a modern electric power system's basic function is to supply its customers with
cost-effective electrical energy at a high degree of reliability. In the context of
electric power systems, reliability may simply be defined as the system's ability to
provide a satisfactory amount of electrical power [1].
The history of power system reliability goes back to the early 1930s, when
probability concepts were applied to electric power system-related problems [2–4].
The first
book on the subject in English appeared in 1970 [5]. Over the years, a large number of
publications, directly or indirectly, related to power system reliability have appeared.
Most of these publications are listed in Refs. [5–7].
This chapter presents various important aspects of power system reliability.

8.2 POWER SYSTEM RELIABILITY-ASSOCIATED
TERMS AND DEFINITIONS
There are many terms and definitions used in the area of power system reliability.
Some of the common ones are as follows [8–11]:

• Forced derating: This is when a piece of equipment or a unit is operated


at a forced derated or lowered capacity because of damage or a component
failure.
• Power system reliability: This is the degree to which the performance of
the elements in a bulk system results in electrical energy being delivered
to customers within the framework of stated standards and in the amount
required.
• Scheduled outage: This is the shutdown of a generating unit, transmission
line, or other facility, for maintenance or inspection, as per an advanced
schedule.
• Forced outage: This is when a piece of equipment or a unit has to be taken
out of service because of damage or a component failure.
• Forced outage hours: These are the total number of hours a piece of equip-
ment or a unit spends in the forced outage condition.
• Forced outage rate: This is (for an equipment) given by the total number of
forced outage hours times 100 over the total number of service hours plus
the total number of forced outage hours.
• Service hours: These are the total number of operation hours of a piece of
equipment or a unit.

DOI: 10.1201/9781003298571-8 125



• Mean time to forced outage: This is analogous to mean time to failure


(MTTF) and is given by the total of service hours over the total number of
forced outages.
• Mean forced outage duration: This is analogous to mean time to repair
(MTTR) and is given by the total number of forced outage hours over the
total number of forced outages.
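The outage-related definitions above can be tied together with a small calculation; the hour and outage counts below are made-up sample data for a single unit over one year.

```python
# Hypothetical one-year record for a generating unit.
service_hours = 7800.0
forced_outage_hours = 200.0
forced_outages = 4

# Forced outage rate: forced outage hours times 100 over service hours
# plus forced outage hours.
forced_outage_rate = forced_outage_hours * 100 / (service_hours + forced_outage_hours)

mean_time_to_forced_outage = service_hours / forced_outages         # MTTF analogue
mean_forced_outage_duration = forced_outage_hours / forced_outages  # MTTR analogue

print(forced_outage_rate)             # 2.5 (percent)
print(mean_time_to_forced_outage)     # 1950.0 hours
print(mean_forced_outage_duration)    # 50.0 hours
```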

8.3 LOSS OF LOAD PROBABILITY


Over the years, loss-of-load probability (LOLP) has been used as the single most
important metric to estimate overall power system reliability. LOLP may simply
be described as a projected value of how much time, in the long run, the load on
a given power system is expected to be higher than the capacity of the generating
resources [8]. Various probabilistic techniques/methods are utilised to calculate
LOLP.
In setting up an LOLP criterion, it is assumed that an electric power system
strong enough to have a low LOLP can probably withstand most foreseeable peak
loads, outages, and contingencies. Thus, a utility is expected to arrange its
resources (i.e., generation, purchases, load management, etc.) so that the
resulting system LOLP is always at or below an acceptable level.
Normally, the common practice is to plan the power system to achieve an LOLP
of 0.1 days per year or less. All in all, some of the difficulties with the use of
LOLP are as follows [8]:

• LOLP does not take into consideration the factor of additional emergency
support that one region or control area may, directly or indirectly, receive
from another, or other emergency actions/measures that control area opera-
tors can exercise to maintain system reliability.
• LOLP itself does not specify the magnitude or duration of the electricity
shortage.
• Major loss-of-load incidents generally occur because of contingencies not
modelled appropriately by the traditional LOLP calculation.
• Different LOLP estimation methods can result in different indices for
exactly the same electric power system.
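As a minimal sketch of how an LOLP figure can be produced, the fragment below enumerates the up/down states of a small, invented set of generating units (in effect a capacity-outage table) and sums the probabilities of states whose available capacity falls short of an assumed peak load. The unit sizes, outage probabilities, and load are all hypothetical.

```python
from itertools import product

units = [(50, 0.02), (50, 0.02), (100, 0.04)]  # (capacity in MW, outage prob.)
peak_load = 120.0                              # assumed peak load, MW

lolp = 0.0
for states in product([0, 1], repeat=len(units)):  # 1 = unit on forced outage
    prob = 1.0
    capacity = 0.0
    for (cap, q), down in zip(units, states):
        prob *= q if down else (1 - q)
        capacity += 0.0 if down else cap
    if capacity < peak_load:       # load exceeds available generation
        lolp += prob

print(round(lolp, 6))  # 0.040384
```

In practice, utilities evaluate this over hourly or daily load data for a year; the enumeration above only illustrates the underlying probability calculation.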

8.4 POWER SYSTEM SERVICE PERFORMANCE-RELATED INDICES
In the electric power system area, various service performance-related indices are
normally calculated for the total system, a specific region or voltage level,
designated feeders, different groups of customers, etc.
Some of the most widely used indices are as follows [1, 12–14]:

8.4.1 Index I
This index is known as system average interruption frequency index and is
defined by

α1 = CItn/Ctn (8.1)

where
α1 is the system average interruption frequency.
CItn is the total number of customer interruptions per year.
Ctn is the total number of customers.

8.4.2 Index II
This index is concerned with measuring service quality (i.e., measuring the continuity
of electricity supply to the customer) and is expressed by [13, 14]

α2 = (8,760)(MTEI)/[(MTTF)(8,760) + (MTEI)] (8.2)

where
α 2 is the mean number of annual down hours (i.e., service outage hours) per
customer.
MTTF is the mean time to failure (i.e., the average time between electricity
interruptions).
MTEI is the mean time to electricity interruption.
(8,760) is the total number of hours in one year.

Example 8.1

Assume that the annual failure rate of the electricity supply is 0.4 and the mean
time of electricity interruption is 4 hours. Calculate the mean number of annual
down hours (i.e., service outage hours) per customer.
In this case, MTTF (i.e., the average time between electricity interruptions) is

MTTF = 1/0.4 = 2.5 years

By inserting the above calculated value and the given values into Equation (8.2),
we obtain

α 2 = (8760)(4)/[(2.5)(8760) + (4)]
= 1.599 hours per year per customer

Thus, the mean number of annual down hours (i.e., service outage hours) per
customer is 1.599 hours.
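Example 8.1 can be reproduced directly from Equation (8.2); this sketch uses only the values stated in the example.

```python
failure_rate = 0.4      # electricity supply failures per year
mtei = 4.0              # mean time of electricity interruption, hours

mttf = 1 / failure_rate                        # 2.5 years between interruptions
alpha_2 = 8760 * mtei / (mttf * 8760 + mtei)   # Equation (8.2)

print(round(alpha_2, 3))   # about 1.6 hours per year per customer
```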

8.4.3 Index III
This index is known as customer average interruption frequency index and is
expressed by

α3 = CItn/CAtn (8.3)

where
α 3 is the customer average interruption frequency.
CI tn is the total number of customer interruptions per year.
CAtn is the total number of customers affected. It is to be noted that the customers
affected should only be counted once, irrespective of the number of inter-
ruptions throughout the year they may have experienced.

8.4.4 Index IV
This index is known as system average interruption duration index and is defined by

α4 = CIDs/Ctn (8.4)

where
α 4 is the system average interruption duration.
CIDs is the sum of customer interruption durations per year.
Ctn is the total number of customers.

8.4.5 Index V
This index is known as customer average interruption duration index and is
defined by

α5 = CIDs/CItn (8.5)

where
α 5 is the customer average interruption duration.
CIDs is the sum of customer interruption durations per year.
CI tn is the total number of customer interruptions per year.

8.4.6 Index VI
This index is known as average service availability index and is expressed by

α6 = CHas/CHd (8.6)

where
α 6 is the average service availability.
CH as is the customer hours of available service.
CH d is the customer hours demanded. These hours are given by the 12-month
average number of customers serviced times 8,760 hours.
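Indices I and III–VI can be computed together for a hypothetical utility year; every count below is an invented assumption, and the customer hours of available service are taken here as the demanded hours minus the summed interruption customer-hours.

```python
customers = 10_000                 # Ctn
interruptions = 4_500              # CItn: customer interruptions per year
customers_affected = 3_000         # CAtn
interruption_hours_sum = 9_000     # CIDs: customer-hours of interruption

saifi = interruptions / customers                  # Equation (8.1)
caifi = interruptions / customers_affected         # Equation (8.3)
saidi = interruption_hours_sum / customers         # Equation (8.4)
caidi = interruption_hours_sum / interruptions     # Equation (8.5)

demanded = customers * 8_760                       # CHd
available = demanded - interruption_hours_sum      # assumed CHas
asai = available / demanded                        # Equation (8.6)

print(saifi, caifi, saidi, caidi, round(asai, 6))
```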

8.5 AVAILABILITY ANALYSIS OF TRANSMISSION
AND ASSOCIATED SYSTEMS
In the power system area various types of systems and equipment are used to trans-
mit electrical energy from one point to another. Two examples of such systems and
equipment are transformers and transmission lines. This section presents three
mathematical models to perform availability analysis of transformers and transmis-
sion lines [5, 7, 9, 11].

8.5.1 Model I
This mathematical model represents a system composed of three active and iden-
tical single-phase transformers with one standby transformer (i.e., unit) [9]. The
system state space diagram is shown in Fig. 8.1. The numerals in circles denote
system states.
The model is subject to the following five assumptions [9, 11]:

• All failures are statistically independent.


• Transformer failure, repair, and replacement (i.e., installation) rates are
constant.

FIGURE 8.1 State space diagram of three identical single-phase transformers with one on
standby.

• Repaired transformers are as good as new.


• The standby transformer or unit cannot fail in its standby mode.
• The entire transformer bank is considered failed when more than one trans-
former fails. In addition, it is assumed that no more transformer failures
occur.

The following symbols are associated with the state space diagram shown in Fig. 8.1
and its associated equations:

Pj is the probability that the system is in state j at time t; for j = 0 (three trans-
formers operating, one on standby), j = 1, (two transformers operating, one
on standby), j = 2 (three transformers operating, none on standby), j = 3 (two
transformers operating, none on standby).
λ is the transformer failure rate.
µ is the transformer repair rate.
α is the standby transformer/unit installation rate.

Using the Markov method presented in Chapter 4, we write down the following
equations for Fig. 8.1 state space diagram [9, 11]:

dP0(t)/dt + 3λP0(t) − µP2(t) = 0 (8.7)

dP1(t)/dt + αP1(t) − 3λP0(t) − µP3(t) = 0 (8.8)

dP2(t)/dt + (3λ + µ)P2(t) − αP1(t) = 0 (8.9)

dP3(t)/dt + µP3(t) − 3λP2(t) = 0 (8.10)

At time t=0, P0 (0) = 1, P1 (0) = 0, P2 (0) = 0, and P3 (0) = 0.


The following steady-state equations are obtained from Equation (8.7) to
Equation (8.10) by setting the derivatives with respect to time equal to zero and
using the relationship ∑(j=0 to 3) Pj = 1:

P0 = [1 + M1(1 + M2 + M1)]⁻¹ (8.11)

where

M1 = 3λ /µ (8.12)

M 2 = (3λ + µ)/α (8.13)



P1 = M1 M 2 P0 (8.14)

P2 = M1P0 (8.15)

P3 = M12 P0 (8.16)

where
P0 , P1 , P2 , and P3 are the steady state probabilities of the system being in states 0,
1, 2, and 3, respectively.

Thus, the system steady state availability is given by

AVss = P0 + P1 + P2 (8.17)

where
AVss is the system steady state availability.
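A numerical sketch of Model I, evaluating Equations (8.11)–(8.17) for illustrative transformer failure, repair, and installation rates and checking that the four state probabilities sum to one:

```python
lam = 0.0002   # transformer failure rate, per hour (assumed)
mu = 0.02      # transformer repair rate, per hour (assumed)
alpha = 0.5    # standby installation rate, per hour (assumed)

M1 = 3 * lam / mu              # Equation (8.12)
M2 = (3 * lam + mu) / alpha    # Equation (8.13)

P0 = 1 / (1 + M1 * (1 + M2 + M1))   # Equation (8.11)
P1 = M1 * M2 * P0                   # Equation (8.14)
P2 = M1 * P0                        # Equation (8.15)
P3 = M1**2 * P0                     # Equation (8.16)

assert abs(P0 + P1 + P2 + P3 - 1) < 1e-12
AV = P0 + P1 + P2                   # Equation (8.17)
print(round(AV, 4))                 # 0.9991
```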

8.5.2 Model II
This mathematical model represents a system composed of two non-identical and
redundant transmission lines subject to common-cause failures. A common-cause
failure may simply be described as any instance where multiple units fail due to
a single cause [15, 16]. In transmission lines, a common-cause failure may take
place due to factors such as tornadoes, aircraft crashes, and severe weather. The
system state space diagram is shown in Fig. 8.2. The numerals in the circles and
box denote the system states.
The following three assumptions are associated with the model:

• All failures are statistically independent.


• All failure and repair rates of transmission lines are constant.
• A repaired transmission line is as good as new.

The following symbols are associated with the state space diagram shown in Fig. 8.2
and its associated equations:

Pi (t ) is the probability that the system is in state i at time t, for i = 0 (both trans-
mission lines operating normally), i = 1 (transmission line a failed, other
operating), i = 2 (transmission line b failed, other operating), i = 3 (both
transmission lines failed).
λ c is the system common-cause failure rate.
λ ta is the transmission line a failure rate.
λ tb is the transmission line b failure rate.
µ ta is the transmission line a repair rate.
µ tb is the transmission line b repair rate.

FIGURE 8.2 State space diagram for two non-identical transmission lines.

Using the Markov method presented in Chapter 4, we write down the following
equations for Fig. 8.2 state space diagram [11, 15, 16]:

dP0(t)/dt + (λta + λtb + λc)P0(t) − µta P1(t) − µtb P2(t) = 0 (8.18)

dP1(t)/dt + (λtb + µta)P1(t) − µtb P3(t) − λta P0(t) = 0 (8.19)

dP2(t)/dt + (λta + µtb)P2(t) − µta P3(t) − λtb P0(t) = 0 (8.20)

dP3(t)/dt + (µta + µtb)P3(t) − λta P2(t) − λtb P1(t) − λc P0(t) = 0 (8.21)

At time t=0, P0 (0) = 1, P1 (0) = 0, P2 (0) = 0, and P3 (0) = 0.


The following steady-state equations are obtained from Equation (8.18) to
Equation (8.21) by setting the derivatives with respect to time t equal to zero and
using the relationship ∑(i=0 to 3) Pi = 1:

P0 = µ ta µ tb N /N 3 (8.22)

where

N = N1 + N 2 (8.23)

N1 = (λ ta + µ ta ) (8.24)

N 2 = (λ tb + µ tb ) (8.25)

N 3 = NN1 N 2 + λ c  N1 ( N 2 + µ ta ) + µ tb N 2  (8.26)

P1 = [Nλta + N4λc]µtb/N3 (8.27)

where

N 4 = (λ ta + µ tb ) (8.28)

P2 = [ Nλ tb + N 5 λ c ] µ ta /N 3 (8.29)

where

N 5 = (λ tb + µ ta ) (8.30)

P3 = [Nλtaλtb + N4N5λc]/N3 (8.31)

P0 , P1 , P2 and P3 are the steady state probabilities of the system being in state 0, 1, 2,
and 3, respectively.
The system steady state availability is given by

AVss = P0 + P1 + P2 (8.32)

where
AVss is the system steady state availability.
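Model II can be checked numerically in the same way; the failure, repair, and common-cause rates below are illustrative assumptions, and the probability-sum check confirms the closed-form expressions are mutually consistent.

```python
l_a, l_b, l_c = 0.001, 0.002, 0.0001   # line a, line b, common-cause failure rates
m_a, m_b = 0.05, 0.04                  # line a, line b repair rates

N1 = l_a + m_a                                          # Equation (8.24)
N2 = l_b + m_b                                          # Equation (8.25)
N = N1 + N2                                             # Equation (8.23)
N3 = N * N1 * N2 + l_c * (N1 * (N2 + m_a) + m_b * N2)   # Equation (8.26)
N4 = l_a + m_b                                          # Equation (8.28)
N5 = l_b + m_a                                          # Equation (8.30)

P0 = m_a * m_b * N / N3                        # Equation (8.22)
P1 = (N * l_a + N4 * l_c) * m_b / N3           # Equation (8.27)
P2 = (N * l_b + N5 * l_c) * m_a / N3           # Equation (8.29)
P3 = (N * l_a * l_b + N4 * N5 * l_c) / N3      # Equation (8.31)

assert abs(P0 + P1 + P2 + P3 - 1) < 1e-9
AV = P0 + P1 + P2                              # Equation (8.32)
print(round(AV, 4))
```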

8.5.3 Model III


This mathematical model represents transmission lines and other equipment operat-
ing in fluctuating outdoor environments (i.e., normal and stormy). The system can
fail under both these conditions. The system state space diagram is shown in Fig. 8.3.
The numerals in circles and boxes denote system states. The following assumptions
are associated with the model:

• All failure, repair, and weather fluctuation rates are constant.


• The repaired system is as good as new.
• All failures are statistically independent.

FIGURE 8.3 State space diagram of a system operating under fluctuating environments.

The following symbols are associated with the state space diagram shown in
Fig. 8.3 and its associated equations:

Pj (t ) is the probability that the system is in state j at time t; for j = 0 (operating


normally in normal weather), j = 1 (failed in normal weather), j = 2 (operat-
ing normally in stormy weather), j = 3 (failed in stormy weather).
λ n is the system constant failure rate in normal weather.
λ s is the system constant failure rate in stormy weather.
µ n is the system constant repair rate in normal weather.
µ s is the system constant repair rate in stormy weather.
θ is the constant transition rate from normal weather to stormy weather.
γ is the constant transition rate from stormy weather to normal weather.

Using the Markov method presented in Chapter 4, we write down the following
equations for Fig. 8.3 state space diagram [11]:

dP0 (t )
+ (λ n + θ) P0 (t ) − γP2 (t ) − µ n P1 (t ) = 0 (8.33)
dt

dP1 (t )
+ (µ n + θ) P1 (t ) − γP3 (t ) − λ n P0 (t ) = 0 (8.34)
dt

dP2 (t )
+ (λ s + γ ) P2 (t ) − µ s P3 (t ) − θP0 (t ) = 0 (8.35)
dt

dP3 (t )
+ ( γ + µ s ) P3 (t ) − λ s P2 (t ) − θP1 (t ) = 0 (8.36)
dt

At time t=0, P0 (0) = 1, P1 (0) = 0, P2 (0) = 0, and P3 (0) = 0.


The following steady-state equations are obtained from Equation (8.33) to Equation (8.36) by setting the derivatives with respect to time t equal to zero and using the relationship ∑(j=0 to 3) Pj = 1:

P0 = γA1/[θ(A2 + A3) + γ(A4 + A1)] (8.37)
where

A1 = µ s θ + µ n A5 (8.38)

A2 = µ n γ + µ s A6 (8.39)

A3 = λ n γ + λ s A6 (8.40)

A4 = λ s θ + λ n A5 (8.41)

A5 = λ s + γ + µ s (8.42)

A6 = λ n + θ + µ n (8.43)

P1 = A4 P0 /A1 (8.44)

P2 = θP0 A2 /γA1 (8.45)

P3 = θP0 A3 /γA1 (8.46)

P0 , P1 , P2 , and P3 are the steady state probabilities of the system being in states 0, 1,
2, and 3, respectively.
The system steady state availability is given by

AVss = P0 + P2 (8.47)

where
AVss is the system steady state availability.
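Problem 8 at the end of this chapter asks for a proof that the probabilities of Equations (8.37) and (8.44)–(8.46) sum to unity. The sketch below, using assumed illustrative rates (the text gives none), checks this numerically, verifies the steady-state balance equation at state 0, and then evaluates Equation (8.47).

```python
# Numerical check of the Model III fluctuating-weather solution, Eqs. (8.37)-(8.47).
# The rate values below are assumed for illustration only (per hour).
l_n, l_s = 0.01, 0.05      # failure rates in normal and stormy weather
m_n, m_s = 0.5, 0.2        # repair rates in normal and stormy weather
theta, gamma = 0.03, 0.4   # normal->stormy and stormy->normal transition rates

A5 = l_s + gamma + m_s                 # Eq. (8.42)
A6 = l_n + theta + m_n                 # Eq. (8.43)
A1 = m_s*theta + m_n*A5                # Eq. (8.38)
A2 = m_n*gamma + m_s*A6                # Eq. (8.39)
A3 = l_n*gamma + l_s*A6                # Eq. (8.40)
A4 = l_s*theta + l_n*A5                # Eq. (8.41)

P0 = gamma*A1/(theta*(A2 + A3) + gamma*(A4 + A1))   # Eq. (8.37)
P1 = A4*P0/A1                                       # Eq. (8.44)
P2 = theta*P0*A2/(gamma*A1)                         # Eq. (8.45)
P3 = theta*P0*A3/(gamma*A1)                         # Eq. (8.46)

# Problem 8: the four steady-state probabilities sum to unity.
assert abs(P0 + P1 + P2 + P3 - 1.0) < 1e-12
# Steady-state balance at state 0: (l_n + theta)*P0 = m_n*P1 + gamma*P2.
assert abs((l_n + theta)*P0 - (m_n*P1 + gamma*P2)) < 1e-12

AVss = P0 + P2   # Eq. (8.47): operating in normal or stormy weather
print(round(AVss, 4))
```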

8.6 AVAILABILITY ANALYSIS OF A SINGLE GENERATOR UNIT


There are many mathematical models that can be used, directly or indirectly, for per-
forming availability analysis of a single generator unit. This section presents three
such models that can also be used for performing availability analysis of equipment
other than a generator unit [11]. Two examples of such equipment are a pulveriser
and a transformer.

8.6.1 Model I
This mathematical model represents a power generator unit that can either be in an operating state or a failed state. The failed power generator unit is repaired. The power generator unit state space diagram is shown in Fig. 8.4. The numerals in the circles denote the power generator unit states.
The following three assumptions are associated with the model:

• The power generator failures are statistically independent.


• The power generator failure and repair rates are constant.
• The repaired power generator unit is as good as new.

The following symbols are associated with the diagram shown in Fig. 8.4 and its
associated equations:

Pi (t ) is the probability that the power generator unit is in state i at time t; for
i = 0 (operating normally), i = 1 (failed).
λ pg is the power generator unit constant failure rate.
θ pg is the power generator unit constant repair rate.

Using the Markov method presented in Chapter 4, we write down the following
equations for the state space diagram shown in Fig. 8.4 [11]:

dP0 (t )
+ λ pg P0 (t ) − θ pg P1 (t ) = 0 (8.48)
dt

FIGURE 8.4 Power generator unit state space diagram.



dP1 (t )
+ θ pg P1 (t ) − λ pg P0 (t ) = 0 (8.49)
dt

At time t=0, P0 (0) = 1, and P1 (0) = 0.


Solving Equations (8.48)–(8.49) by using Laplace transforms we get

P0(t) = θpg/(λpg + θpg) + [λpg/(λpg + θpg)] e^−(λpg + θpg)t (8.50)

P1(t) = λpg/(λpg + θpg) − [λpg/(λpg + θpg)] e^−(λpg + θpg)t (8.51)

The power generator unit availability and unavailability are given by

AVpg(t) = P0(t) = θpg/(λpg + θpg) + [λpg/(λpg + θpg)] e^−(λpg + θpg)t (8.52)
and

UApg(t) = P1(t) = λpg/(λpg + θpg) − [λpg/(λpg + θpg)] e^−(λpg + θpg)t (8.53)

where
AVpg (t ) is the power generator unit availability at time t.
UApg (t ) is the power generator unit unavailability at time t.

For large t, Equations (8.52)–(8.53) reduce to

AVpg = θpg/(λpg + θpg) (8.54)

and

UApg = λpg/(λpg + θpg) (8.55)
where
AVpg is the power generator unit steady state availability.
UApg is the power generator unit steady state unavailability.

Since λpg = 1/MTTFpg and θpg = 1/MTTRpg, Equations (8.54)–(8.55) become

AVpg = MTTFpg/(MTTRpg + MTTFpg) (8.56)

and

UApg = MTTRpg/(MTTRpg + MTTFpg) (8.57)

where
MTTFpg is the power generator unit mean time to failure.
MTTRpg is the power generator unit mean time to repair.

Example 8.2

Assume that constant failure and repair rates of a power generator unit are as
follows:

λ pg = 0.0002 failures/hour

and

θ pg = 0.0006 repairs/hour

Calculate the steady state availability of the power generator unit.


By inserting the specified data values into Equation (8.54), we obtain

AVpg = 0.0006/(0.0002 + 0.0006) = 0.75

Thus, the steady state availability of the power generator unit is 0.75.
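The time-dependent availability of Equation (8.52) is easy to evaluate numerically. The sketch below uses the Example 8.2 rates and shows that AVpg(t) starts at 1 (the unit begins in the operating state) and decays toward the steady-state value of Equation (8.54).

```python
import math

# Time-dependent availability of a single generator unit, Eq. (8.52),
# using the rates of Example 8.2 (per hour).
l_pg = 0.0002    # constant failure rate
th_pg = 0.0006   # constant repair rate

def av_pg(t):
    """Generator unit availability at time t, per Eq. (8.52)."""
    s = l_pg + th_pg
    return th_pg/s + (l_pg/s)*math.exp(-s*t)

print(round(av_pg(0.0), 6))    # -> 1.0, since the unit starts in state 0
print(round(av_pg(1e6), 4))    # -> 0.75, the steady-state value of Eq. (8.54)
```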

8.6.2 Model II
This mathematical model represents a power generator unit that can either be in an operating state, a failed state, or down for preventive maintenance. This scenario is depicted by the state space diagram shown in Fig. 8.5. The numerals in the circles and box denote the system states.

FIGURE 8.5 Power generator unit state space diagram.



The following assumptions are associated with the model:

• The power generator unit failure, repair, preventive maintenance down, and
preventive maintenance performance rates are constant.
• The power generator unit failures are statistically independent.
• After repair and preventive maintenance, the power generator unit is as
good as new.

The following symbols are associated with the state space diagram shown in Fig. 8.5
and its associated equations:

Pj (t ) is the probability that the power generator unit is in state j at time t; for
j = 0 (operating normally), j = 1 (down for preventive maintenance), j = 2
(failed).
λ is the power generator unit failure rate.
λ p is the power generator unit (down for) preventive maintenance rate.
θ is the power generator unit repair rate.
θ p is the power generator unit preventive maintenance performance (repair)
rate.

Using the Markov method presented in Chapter 4, we write down the following
equations for the state space diagram shown in Fig. 8.5 [11]:

dP0 (t )
+ (λ p + λ) P0 (t ) − θ p P1 (t ) − θP2 (t ) = 0 (8.58)
dt

dP1 (t )
+ θ p P1 (t ) − λ p P0 (t ) = 0 (8.59)
dt

dP2 (t )
+ θP2 (t ) − λP0 (t ) = 0 (8.60)
dt

At time t=0, P0 (0) = 1, P1 (0) = 0, and P2 (0) = 0.


Solving Equations (8.58)–(8.60) by using Laplace transforms, we obtain

P0(t) = θpθ/b1b2 + [(b1 + θp)(b1 + θ)/(b1(b1 − b2))] e^(b1 t) − [(b2 + θp)(b2 + θ)/(b2(b1 − b2))] e^(b2 t) (8.61)

P1(t) = λpθ/b1b2 + [(λp b1 + λpθ)/(b1(b1 − b2))] e^(b1 t) − [(θ + b2)λp/(b2(b1 − b2))] e^(b2 t) (8.62)

P2(t) = λθp/b1b2 + [(λb1 + λθp)/(b1(b1 − b2))] e^(b1 t) − [(θp + b2)λ/(b2(b1 − b2))] e^(b2 t) (8.63)

where

b1b2 = θ pθ + λ pθ + λθ p (8.64)

b1 + b2 = −(θ p + θ + λ p + λ) (8.65)

The power generator unit availability at time t, AVpg (t ), is given by

AVpg(t) = P0(t) = θpθ/b1b2 + [(b1 + θp)(b1 + θ)/(b1(b1 − b2))] e^(b1 t) − [(b2 + θp)(b2 + θ)/(b2(b1 − b2))] e^(b2 t) (8.66)

It is to be noted that the above availability expression is valid if and only if b1 and b2
are negative. Thus, for large t, Equation (8.66) reduces to

AVpg = lim(t→∞) AVpg(t) = θpθ/b1b2 (8.67)

where
AVpg is the power generator unit steady state availability.

Example 8.3

Assume that for a power generator unit we have the following data values:

λ = 0.0004 failures/hour
λ p = 0.0007/hour
θ p = 0.0008/hour
θ = 0.0005 repairs/hour

Calculate the power generator unit steady state availability. By substituting the
given data values into Equation (8.67), we get

AVpg = (0.0008)(0.0005)/[(0.0008)(0.0005) + (0.0007)(0.0005) + (0.0004)(0.0008)] = 0.3738

Thus, the steady state availability of the power generator unit is 0.3738.
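Example 8.3 can be reproduced in a few lines. The sketch below evaluates Equation (8.64) for the product b1b2 and then Equation (8.67) with the given data.

```python
# Steady-state availability with repair and preventive maintenance,
# Eqs. (8.64) and (8.67), using the Example 8.3 data (per hour).
lam = 0.0004     # failure rate
lam_p = 0.0007   # preventive-maintenance down rate
th = 0.0005      # repair rate
th_p = 0.0008    # preventive-maintenance performance rate

b1b2 = th_p*th + lam_p*th + lam*th_p   # Eq. (8.64)
AV_pg = th_p*th/b1b2                   # Eq. (8.67)
print(round(AV_pg, 4))                 # -> 0.3738, as in Example 8.3
```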

8.6.3 Model III


This mathematical model represents a power generator unit that can be either oper-
ating normally (i.e., producing electricity at its full capacity), derated (i.e., produc-
ing electricity at a derated capacity, for example, say 200 megawatts instead of
500 megawatts at full capacity), or failed. This scenario is depicted by the state space
diagram shown in Fig. 8.6. The numerals in the circles denote system state.

FIGURE 8.6 Power generator unit state space diagram.

The following three assumptions are associated with the model:

• The power generator unit failures are statistically independent.


• All power generator unit failure and repair rates are constant.
• The repaired power generator unit is as good as new.

The following symbols are associated with Fig. 8.6 diagram and its associated
equations:

Pi (t ) is the probability that the power generator unit is in state i at time t; for
i = 0 (operating normally), i = 1 (derated), i = 2 (failed).
λ is the power generator unit failure rate from state 0 to state 2.
λ d is the power generator unit failure rate from state 0 to state 1.
λ1 is the power generator unit failure rate from state 1 to state 2.
θ is the power generator unit repair rate from state 2 to state 0.
θd is the power generator unit repair rate from state 1 to state 0.
θ1 is the power generator unit repair rate from state 2 to state 1.

Using the Markov method presented in Chapter 4, we write down the following
equations for Fig. 8.6 state space diagram [11]:

dP0 (t )
+ (λ + λ d ) P0 (t ) − θd P1 (t ) − θP2 (t ) = 0 (8.68)
dt

dP1 (t )
+ (θd + λ1 ) P1 (t ) − θ1P2 (t ) − λ d P0 (t ) = 0 (8.69)
dt

dP2 (t )
+ (θ + θ1 ) P2 (t ) − λ1P1 (t ) − λP0 (t ) = 0 (8.70)
dt

At time t=0, P0 (0) = 1, P1 (0) = 0, and P2 (0) = 0.


Solving Equations (8.68)–(8.70), by using Laplace transforms, we get

P0(t) = M1/m1m2 + [M2/(m1(m1 − m2))] e^(m1 t) + [1 − M1/m1m2 − M2/(m1(m1 − m2))] e^(m2 t) (8.71)

where

M1 = θθd + λ1θ + θd θ1 (8.72)

M2 = θd m1 + θm1 + θ1m1 + m1λ1 + m1² + θdθ + λ1θ + θdθ1 (8.73)

m1, m2 = [−M3 ± (M3² − 4M4)^1/2]/2 (8.74)

M 3 = θ + θ1 + θd + λ + λ1 + λ d (8.75)

M 4 = θd θ + λ1θ + θd θ1 + θλ d + λ1λ d + θd λ + λθ1 + λλ1 + λ d θ1 (8.76)

P1(t) = M5/m1m2 + [M6/(m1(m1 − m2))] e^(m1 t) − [M5/m1m2 + M6/(m1(m1 − m2))] e^(m2 t) (8.77)

where

M 5 = λ d θ + λ d θ1 + λθ1 (8.78)

M 6 = m1λ d + M 5 (8.79)

P2(t) = M7/m1m2 + [M8/(m1(m1 − m2))] e^(m1 t) − [M7/m1m2 + M8/(m1(m1 − m2))] e^(m2 t) (8.80)

where

M 7 = λ d λ1 + θd λ + λλ1 (8.81)

M8 = m1λ + M 7 (8.82)

The power generator unit operational availability is given by

AVpgo (t ) = P0 (t ) + P1 (t ) (8.83)

where
AVpgo (t ) is the power generator unit operational availability at time t.

For large t, Equation (8.83) reduces to


AVpgo = lim(t→∞) [P0(t) + P1(t)] = (M1 + M5)/m1m2 (8.84)
where
AVpgo is the power generator unit operational steady state availability.
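A minimal sketch of the Model III computation follows, with assumed illustrative rates (the text gives none). It obtains m1 and m2 from Equation (8.74), confirms that both are negative (the condition for the limit in Equation (8.84) to exist) and that their product equals M4 (since m1 and m2 are the roots of m² + M3 m + M4 = 0), and then evaluates the steady-state operational availability.

```python
import math

# Steady-state operational availability of the three-state generator unit,
# Eqs. (8.72)-(8.84). All rate values (per hour) are assumed for illustration.
lam, lam_d, lam_1 = 0.001, 0.002, 0.005   # full->failed, full->derated, derated->failed
th, th_d, th_1 = 0.5, 0.05, 0.02          # failed->full, derated->full, failed->derated

M3 = th + th_1 + th_d + lam + lam_1 + lam_d                    # Eq. (8.75)
M4 = (th_d*th + lam_1*th + th_d*th_1 + th*lam_d + lam_1*lam_d  # Eq. (8.76)
      + th_d*lam + lam*th_1 + lam*lam_1 + lam_d*th_1)
disc = M3**2 - 4*M4
m1 = (-M3 + math.sqrt(disc))/2                                 # Eq. (8.74)
m2 = (-M3 - math.sqrt(disc))/2

M1 = th*th_d + lam_1*th + th_d*th_1                            # Eq. (8.72)
M5 = lam_d*th + lam_d*th_1 + lam*th_1                          # Eq. (8.78)

assert m1 < 0 and m2 < 0            # needed for the limit in Eq. (8.84) to exist
assert abs(m1*m2 - M4) < 1e-9       # product of the roots of m^2 + M3*m + M4 = 0

AV_pgo = (M1 + M5)/(m1*m2)          # Eq. (8.84)
print(round(AV_pgo, 4))             # close to 1 for these repair-dominated rates
```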

8.7 PROBLEMS
1. Write an essay on power system reliability.
2. Define the following four terms:
• Forced derating
• Power system reliability
• Forced outage rate
• Forced outage
3. Describe loss of load probability.
4. What are the difficulties associated with the use of loss of load probability?
5. Define the following indices:
• System average interruption frequency index
• Customer average interruption duration index
• Average service availability index
6. Assume that the annual failure rate of the electricity supply is 0.5 and the
mean time to electricity interruption is 6 hours. Calculate the mean number
of annual down hours (i.e., service outage hours) per customer.
7. Assume that constant failure and repair rates of a power generator unit are
as follows:
• λ pg = 0.0001 failures/hour
• µ pg = 0.0005 repairs per hour
Calculate the steady state unavailability of the power generator unit.
8. Prove that the sum of Equations (8.37), (8.44)–(8.46) is equal to unity.
9. Prove Equation (8.17) by using Equations (8.7)–(8.10).
10. Prove Equation (8.84).

REFERENCES
1. Billinton, R., Allan, R.N., Reliability of Electric Power Systems: An Overview, in
Handbook of Reliability Engineering, edited by Pham, H., Springer-Verlag, London,
2003, pp. 511–528.
2. Smith, S.A., Service Reliability Measured by Probabilities of Outage, Electrical World,
Vol. 101, 1934, pp. 371–374.

3. Layman, W.J., Fundamental Consideration in Preparing a Master System Plan,


Electrical World, Vol. 101, 1933, pp. 778–792.
4. Smith, S.A., Spare Capacity Fixed by Probabilities of Outage, Electrical World, Vol. 103,
1934, pp. 222–225.
5. Billinton, R., Power System Reliability Evaluation, Gordon and Breach Science
Publishers, New York, 1970.
6. Billinton, R., Bibliography of the Application of Probability Methods in Power System
Reliability Evaluation, IEEE Transactions on Power Apparatus and Systems, Vol. 91,
1972, pp. 649–660.
7. Dhillon, B.S., Applied Reliability and Quality: Fundamentals, Methods, and Procedures,
Springer-Verlag, London, 2007.
8. Kueck, J.D., Kirby, B.J., Overholt, P.N., Markel, L.C., Measurement Practices for
Reliability and Power Quality, Report No. ORNL/TM-2004/91, June 2004. Available
from the Oak Ridge National Laboratory, Oak Ridge, Tennessee, USA.
9. Endrenyi, J., Reliability Modeling in Electric Power Systems, John Wiley and Sons,
New York, 1978.
10. Kennedy, B., Power Quality Primer, McGraw Hill Book Company, New York, 2000.
11. Dhillon, B.S., Reliability Engineering in Systems Design and Operation, Van Nostrand
Reinhold Company, New York, 1982.
12. Billinton, R., Allan, R.N., Reliability Evaluation of Power Systems, Plenum Press,
New York, 1996.
13. Gangel, M.W., Ringlee, R.J., Distribution System Reliability Performance, IEEE
Transactions on Power Apparatus and Systems, Vol. 87, 1968, pp. 1657–1665.
14. Dhillon, B.S., Power System Reliability, Safety, and Management, Ann Arbor Science
Publishers, Ann Arbor, Michigan, 1983.
15. Gangloff, W.C., Common Mode Failure Analysis, IEEE Transactions on Power Apparatus and Systems, Vol. 94, February 1975, pp. 27–30.
16. Billinton, R., Medicherla, T.L.P., Sachdev, M.S., Common-Cause Outages in Multiple
Circuit Transmission Lines, IEEE Transactions on Reliability, Vol. 27, 1978, pp. 128–131.
9 Medical Device Usability

9.1 INTRODUCTION
Each year, a vast sum of money is spent to produce various types of medical devices
around the globe. Their usability has become a very important issue, because vari-
ous studies performed over the years clearly indicate that poorly designed medical
devices’ human-machine interfaces significantly increase the risk for the occurrence
of human errors [1–4]. These errors can result in patient injury or even death.
Medical device usability may simply be defined as the medical device’s inter-
active systems quality in regard to factors such as ease of use, ease of learning,
and user satisfaction [3, 5]. This means that for attaining high user adoption and
to successfully navigate all involved regulatory processes, a medical device must
be properly designed around all types of users and must incorporate appropriate
defences against potential risks under varied conditions. More clearly, the medical
device usability-associated concerns in regard to users such as nurses, physicians,
patients, family members, and professional caregivers must be raised to the same
level as conventional economic, technological, and manufacturing concerns, during
the design phase.
This chapter presents various important aspects of medical device usability.

9.2 MEDICAL DEVICE USERS, USER INTERFACES, USE


DESCRIPTIONS, AND USE ENVIRONMENTS
There are various types of medical device users who need medical devices that
can be used effectively and safely. In order to satisfy these users’ needs, it is very
important to fully understand the abilities and limitations of all medical device
potential users, i.e., professional health care providers, young and old individuals,
and patients. Factors that can affect, directly or indirectly, the ability of such users
include stress, medication, and fatigue.
Some of the important characteristics of the medical device potential users that
must be taken into consideration during the device design process are as follows [6]:

• Memory, strength, physical size, and cognitive ability.


• Sensory capabilities (i.e., hearing, touch, and vision).
• General health and mental state (e.g., relaxed, stressed, and tired) when
using the device under consideration.
• Knowledge about device operation and the associated medical condition.
• Ability to adapt to adverse circumstances.
• Past experience with similar devices or user interfaces.
• Coordination (i.e., manual dexterity) and motivation.

DOI: 10.1201/9781003298571-9 145



The user interfaces of medical devices are very important because they help
to facilitate correct actions and prevent or discourage the hazardous actions’
occurrence. They comprise all elements of medical devices with which users
interact while using devices, preparing them for use (e.g., setup and calibration),
or conducting maintenance-related activities. More specifically, the user inter-
face incorporates all hardware features that control the device operation. Some
examples of these features are knobs, buttons, switches, and user information
providing device features such as indicator lights, displays, and auditory and
visual alarms.
Nowadays, in medical devices, the user interfaces are usually computer based.
Thus, in this case interface characteristics include items such as the following [4, 6]:

• Navigation logic
• Data entry requirements
• Control and monitoring screens
• Alerting mechanisms
• The manner in which data are organised and presented
• Screen elements
• Help functions
• Prompts
• Keyboards
• Mouse

Finally, it is to be noted that items such as device labelling, training materials, pack-
aging, and operation instructions are also considered part of the user interface, and
thus require a careful consideration in regard to their effective usability.
In order to understand a medical device's use completely and accurately, a clearly written use description is essential. The description includes information on items such as the following [4, 6]:

• User needs for effective and safe use of the device and how the device satisfies them
• Device operation
• User population characteristics
• User interface design or preliminary design
• Use environments
• General use scenarios

The medical devices’ use environments can vary quite significantly from one situation
to another and can have major impacts on their usability. The four main factors with
respect to device users that must be considered carefully are as follows [4]:

• Factor I: Mental workload. This is concerned with the degree of concen-


tration and thinking an individual exerts while using a medical device.
When the mental workload imposed on device users by the environments
exceeds their abilities, the probability of error occurrence in using medical

devices increases dramatically. An example of such a situation could be an


operating room having too many alarms on different medical devices, thus
making it difficult for anaesthetists to accurately highlight the source of
any single alarm.
• Factor II: Noise and light. The effectiveness of auditory and visual displays
(e.g., auditory alarms, lighted indicators, and other signals) can be limited
by the use environments if they are designed poorly. For example, in noisy
environments, alarms may not be heard by the device users if they are not
distinctive or sufficiently loud.
• Factor III: Physical workload. This is concerned with the physical effort involved in using a medical device, which adds to user stress. Under high stress, the users of medical
devices are distracted and have less time to make decisions (e.g., consider
multiple device outputs).
• Factor IV: Vibration and motion. In this case, these two things can seri-
ously affect device users’ ability to read displayed information, to conduct
fine physical manipulations such as typing on the keyboard portion of a
medical device, etc.

9.3 MEDICAL DEVICES WITH HIGH INCIDENCE


OF USER/HUMAN ERROR AND A GENERAL
APPROACH FOR DEVELOPING MEDICAL
DEVICES’ EFFECTIVE USER INTERFACES
In 1991, the Food and Drug Administration (FDA) conducted a study of data col-
lected over many years and highlighted the twenty most user/human error-prone
medical devices [7, 8]. These devices (in the order of highest error-prone to lowest
error-prone) were as follows [5, 7, 8]:

• Glucose meter (Highest error-prone)


• Balloon catheter
• Orthodontic bracket aligner
• Administration kit for peritoneal dialysis
• Permanent pacemaker electrode
• Implantable spinal core stimulator
• Intravascular catheter
• Infusion pump
• Urological catheter
• Electrosurgical cutting and coagulation device
• Non-powered suction apparatus
• Mechanical/hydraulic impotence device
• Implantable pacemaker
• Peritoneal dialysate delivery system
• Catheter introducer
• Catheter guide wire
• Transluminal coronary angioplasty catheter

• Low-energy defibrillator (external)


• Continuous ventilator (respirator)
• Contact lens cleaning and disinfecting solutions (lowest error-prone)

All in all, this FDA study indicated that errors in using medical devices cause, each
day in the United States, an average of at least three deaths or serious injuries [7].
Six steps of a general approach for developing medical devices’ effective user
interfaces are as follows [3, 9]:

• Step 1: Define all goals of the project and system functionality.


• Step 2: Conduct analysis of user capabilities, tasks, and work environments.
• Step 3: Document all user requirements and needs.
• Step 4: Conduct usability testing.
• Step 5: Develop appropriate design specifications for device user interface.
• Step 6: Evaluate device interface designs during their field use.

It is to be noted that Step 2 is also concerned with allocating tasks between the
humans and the system and in Step 3, design prototypes and usability goals are
also developed. In Step 4, the results of usability testing are also evaluated against
performance objectives and goals, and a loop back to Step 3 is made as appropriate.
In Step 6, a loop back to Step 3 is made as necessary.

9.4 USEFUL GUIDELINES FOR MAKING INTERFACES


OF MEDICAL DEVICE MORE USER-FRIENDLY
Medical devices such as ventilators, patient monitors, kidney dialysis machines,
blood chemistry analysers, and infusion pumps often have various superficial user-
interface design-associated problems. These problems can negatively affect usability
and appeal of a medical device. Past experiences over the years clearly indicate that
such problems are relatively easy to remedy. The following ten guidelines are quite
useful to address these design-associated problems [10]:

• Guideline I: Simplify typography as much as possible. User interfaces of


efficient medical devices are based on typographical rules that make screen
contents easy to read and direct users effectively toward the important
information first. This can be achieved by having a single font and a few
character sizes.
• Another mechanism used for simplifying typography is to minimise as
much as possible excessive highlighting such as bolding, italicizing, and
underlining.
• Guideline II: Reduce screen density as much as possible. This is concerned
with lowering the over-stuffing of medical device displays with informa-
tion and controls. The empty space created by such reduction is very use-
ful in a user-interface because it helps to separate information into related

categories/groups and provides a resting place for users’ eyes. Otherwise,


overly dense-looking device user-interfaces can be intimidating to medical
professionals such as nurses, physicians, and technicians, making it difficult
for these professionals to retrieve required information at a glance.

Nonetheless, actions such as those listed below can be useful to eliminate extraneous


information on medical device displays:

• Present secondary information on demand through pop-ups or relocate it to


other screens (if possible).
• Decrease text size by stating things in a more simplified manner.
• Reduce the size of graphics related to identity (i.e., brand names and logos).
• Utilise empty space rather than lines to separate content.
• Use simplified graphics.

• Guideline III: Provide appropriate navigation cues and options. In a medical device user-interface, a user can sometimes become lost when moving from one place to another. Actions such as those shown in Fig. 9.1 are considered useful with respect to providing navigation cues and options.

• Guideline IV: Ascribe to a Grid. Past experiences, over the years, indi-
cate that most screens generally operate and look better when their screen
components are aligned and they serve a utilitarian objective effectively.
Grid-based screens are generally easier to implement in computer code
because of visual elements’ predictability. The following two guidelines
are considered quite useful with respect to ascribing to a grid:
• Keep on-screen elements at a fixed distance from the grid lines.
• Begin by defining the screen’s dominant elements and approximate
space-related requirements when developing a grid structure.
• Guideline V: Harmonise and refine icons. This is a very important
guideline and some of the actions that can be taken to give the icons a

FIGURE 9.1 Useful actions to provide appropriate navigation cues and options.

family resemblance to each other and maximise icon comprehension are


as follows [10]:
• Conduct user testing for ensuring that no two icons are so similar they
create confusion.
• Simplify icon element for eliminating unnecessary and confusing details.
• Develop a limited set of icon elements that denote nouns only.
• Reinforce all icons with text labels.
• Make similar-purpose icons of the same overall size.
• Use the same style for all similar-purpose icons.
• Guideline VI: Limit the usage of colours. This is concerned with limiting
the colour palette of medical devices’ user interfaces. Two guidelines con-
sidered quite useful in this regard are as follows:
• Limit the number of colours of the background and major on-screen
components to between three and five, including shades of grey.
• Ensure that the selection of colours is consistent with the existing med-
ical components. For example, red is often used to symbolise alarm
related-information or to communicate arterial blood pressure values.
• Guideline VII: Eliminate design inconsistencies as much as possible. These
inconsistencies are quite toxic to usability and user-interface appeal, and for
some medical devices, they can compromise safety.
• Guideline VIII: Create effective visual balance. Generally, this is created
about the vertical axis by arranging visual elements on either side of an
assumed axis with each side containing about the same amount of contents
as empty space. There are a number of methods that can be used for evaluat-
ing the balance of a composition, and perceived imbalances can be corrected
through means such as relocating elements to other screens, reorganising
information, adjusting the gaps between labels and fields, and popping up
elements only upon request.
• Guideline IX: Make use of simple language as much as possible. Often
medical device user-interfaces are characterised by overly complex words
and phrases that lead to usability problems. In this regard, some of the cor-
rective measures are writing shorter sentences, giving meaningful subhead-
ings and headings, using consistent syntax, and breaking rather difficult
procedures into a number of ordered steps.
• Guideline X: Maximise the use of hierarchical labels. As the use of redun-
dant labels leads to congested screens that can take a long time to scan,
hierarchical labelling is quite useful to save space and speed scanning by
displaying items such as respiratory rate, heart rate, and arterial blood
pressure in a more efficient manner.

9.5 DESIGNING MEDICAL DEVICES FOR OLD USERS


The population of older people in the United States is growing at a significant rate.
For example, for the period from 1990 to 2000, the number of persons aged 55 years and over was forecast to increase by about 11.5% (i.e., a gain of almost 5 million people) [7, 11]. Furthermore, it was forecast that by the year 2020, one out of every

FIGURE 9.2 Factors to be considered in designing medical devices for use by older personnel.

five or six Americans will be over 65 years of age [7, 11]. This means there is a definite
need to design medical devices for use by older people by considering factors such as
those shown in Fig. 9.2 during the device design phase [7, 12–14]. The factors shown
in Fig. 9.2 are sensory limitations, cognitive limitations, and physical limitations.
In regard to sensory limitations, the two common limitations among older person-
nel are impaired vision and hearing. To overcome impaired vision associated limi-
tations, the designers must give careful consideration to using somewhat oversized
fonts for displays, readouts, and labels on medical devices. Decline in the hearing
ability of a person is generally a function of age. Males and females, as they age, suf-
fer greater hearing loss at frequencies in the 3000–6000 Hz range and 550–1000 Hz
range, respectively. Therefore, designers must carefully consider these factors in medi-
cal devices to be used by older people.
In regard to cognitive limitations, past experiences clearly indicate that the cog-
nitive abilities of older people can vary quite significantly. Some may experience
attention deficits often referred to as “cognitive rigidity”, a condition that makes it
quite difficult to learn new procedures and approaches, so designers must lower the
number of steps in a given procedure concerning a medical device for improving its
usability effectiveness among the older population [4, 7].
Finally, in regard to physical limitations, a significant number of people generally lose
10%–20% of their strength by the time they reach 60–70 years of age. Their mobility
may also be limited by various types of joint-associated problems. In order to overcome
physical limitations such as these, the medical devices’ designers should incorporate
controls with large-diameter knobs so that rotation needs lesser fine control, textured
knob surfaces that need less pinching strength for overcoming finger slippage, and so on.

9.6 CUMULATIVE TRAUMA DISORDER (CTD) IMPLICATIONS


IN MEDICAL DEVICE DESIGN
Cumulative trauma disorder (CTD) is most prevalent among people with occupa-
tions requiring the performance of the same task repeatedly. The affliction rate for
CTDs is as high as 25% among people performing motion-intensive tasks with their

hands [15]. CTD presents a significant risk to healthcare workers, but it can be pre-
vented through actions such as better medical device design and better work habits
reinforced through effective warning labels and instructions [4, 16].
Some guidelines for designing hand-operated medical devices with respect to
CTD are as follows [4, 7, 17]:

• Select appropriate materials for handles that provide a non-slip grip and
protection to the hands of users from electrical conduction, vibrations, and
cold temperatures.
• Design objects in such a way that they can easily be grasped by the entire
hand, rather than pinched between fingers and the thumb, in circumstances
when high precision is not required.
• Position work surfaces in such a way that permits forearms to extend at an
angle of around 90 degrees with respect to the user’s body, with the elbows
held at one’s side.
• Provide padding and ergonomically contoured surfaces for reducing con-
centration of mechanical stresses on the user’s skin and underlying tissues.
• Ensure that an adequate amount of space is provided for forearm and hand
movements for preventing potential device users from assuming poor hand
postures while performing tasks.
• Ensure that gripping controls and surfaces are designed in such a way that
they enable potential users to keep their hands in a neutral, resting position.
• Provide force-assist mechanisms as necessary for decreasing the muscle
exertion needed to operate a device.
• Perform analysis of the range of user hand motion as a basis to determine
the dynamic characteristics of controls and handles.
• Design heavy/awkwardly shaped objects in such a way that they can easily
be lifted/grasped by device users with both hands.
• Design hand-operated devices in such a way that they will be comfortable
for individuals with different hand sizes.
• Develop operational sequences that prevent the frequent occurrence of a
repetitive movement.
• Provide effective advisory instructions/visual ones with respect to holding
a device.
• Reduce the weight of objects so that they can easily be picked up or moved.
• Avoid those designs that will require device users to exert force continuously.
• Provide appropriate instructions to device users on how to prevent the
occurrence of CTDs.
• Shield devices to reduce vibrations they will transmit to their potential users.
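Guidelines such as these lend themselves to a simple design-review checklist. The sketch below is purely illustrative: the item wordings paraphrase the guidelines above, and the met/not-met scoring scheme is an assumption rather than part of any standard.

```python
# Illustrative CTD design-review checklist. Item wordings paraphrase the
# guidelines above; the simple met/not-met scoring is an assumption.

CTD_CHECKLIST = [
    "non-slip, insulating handle materials",
    "whole-hand grasp rather than pinch grip",
    "neutral hand position for gripping controls and surfaces",
    "padded, contoured surfaces to spread mechanical stress",
    "adequate clearance for forearm and hand movements",
    "force-assist mechanisms to reduce muscle exertion",
    "two-handed lifting for heavy or awkwardly shaped objects",
    "accommodates a range of hand sizes",
    "operational sequences avoid frequent repetitive movement",
    "no continuous force exertion required",
    "vibration shielding",
]


def review_design(satisfied):
    """Return (number of criteria met, list of criteria still open)."""
    met = sum(1 for item in CTD_CHECKLIST if item in satisfied)
    open_items = [item for item in CTD_CHECKLIST if item not in satisfied]
    return met, open_items


met, open_items = review_design({
    "non-slip, insulating handle materials",
    "vibration shielding",
})
print(f"{met}/{len(CTD_CHECKLIST)} CTD criteria met")
```

A review team could extend each item with severity weights or links back to the relevant guideline text.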

9.7 USEFUL DOCUMENTS FOR IMPROVING
MEDICAL DEVICE USABILITY
Over the years, many publications on engineering usability and related areas have
appeared [4]. Some of these publications that can directly or indirectly be useful to
improve medical device usability are as follows:
• Liljegren, E., Osvalder, A.L., Cognitive Engineering Methods as Usability
Evaluation Tools for Medical Equipment, International Journal of Industrial
Ergonomics, Vol. 34, No. 1, 2004, pp. 49–62.
• Garmer, K., Liljegren, E., Osvalder, A.L., Dahlman, S., Application of
Usability Testing to the Development of Medical Equipment: Usability
Testing of a Frequently Used Infusion Pump and a New User Interface
Developed with a Human Factors Approach, International Journal of
Industrial Ergonomics, Vol. 29, No. 3, 2002, pp. 145–159.
• Navai, M., Guo, X., Caird, J.K., Dewar, R.E., Understanding of Prescription
Medical Labels as a Function of Age, Culture, and Language, Proceedings of
the Human Factors and Ergonomics Society Conference, 2000, pp. 1487–1491.
• Chaffin, D.B., Faraway, J.J., Zhang, X., Woolley, C., Stature, Age, and
Gender Effects on Reach Motion Postures, Human Factors, Vol. 42, 2000,
pp. 408–420.
• Burgess-Limerick, R., Mon-Williams, M., Coppard, V.L., Visual Display
Height, Human Factors, Vol. 42, 2000, pp. 140–150.
• Wiklund, M., Editor, Usability in Practice: How Companies Develop User-
Friendly Products, Academic Press, Cambridge, Massachusetts, 1994.
• Clans, P.L., Gibbons, P.S., Kaihoi, B.H., Usability Laboratory: A New Tool
for Process Analysis at the Mayo Clinic, Proceedings of the Healthcare
Information Management Systems Society Conference, 1997, pp. 149–159.
• Gosbee, J., The Discovery Phase of Medical Device Design: A Blend of
Intuition, Creativity, and Science, Medical Device and Diagnostic Industry
Magazine, October 1997, pp. 113–118.
• Gosbee, J., Richie, E.M., Human-Computer Interaction and Medical Software
Development, Interactions, Vol. 4, No. 4, 1997, pp. 13–18.
• Obradovich, J.H., Woods, D.D., Users as Designers: How People Cope with
Poor HCI Design in Computer-Based Medical Devices, Human Factors,
Vol. 38, No. 4, 1996, pp. 574–592.
• Brown, S., The Challenges of User-Based Design in a Medical Equipment
Market, in Field Methods Casebook for Software Design, John Wiley and
Sons, New York, 1996, pp. 157–176.
• ANSI/AAMI-HE-48, Human Factors Engineering Guidelines and Preferred
Practices for the Design of Medical Devices, American National Standards
Institute (ANSI), New York, 1993. This standard was developed by the
Association for the Advancement of Medical Instrumentation (AAMI) and
approved by ANSI, AAMI, Arlington, Virginia, 1993.
• Seagull, F.J., Sanderson, P.M., Anesthesia Alarms in Context: An Observational
Study, Human Factors, Vol. 43, 2001, pp. 66–78.
• Cook, R.I., Woods, D.D., Adapting to New Technology in the Operating
Room, Human Factors, Vol. 38, No. 4, 1996, pp. 593–613.
• Morrow, D.G., Leirer, V.O., Andrassy, J.M., Using Icons to Convey Medication
Schedule Information, Applied Ergonomics, Vol. 27, 1996, pp. 267–275.
• Hyman, W.A., Errors in the Use of Medical Equipment, in Human Error
in Medicine, Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1994,
pp. 327–347.
• Designer’s Handbook: Medical Electronics, Canon Communications, Santa
Monica, California, 1995.
• Hix, D., Hartson, R., Developing User Interfaces: Ensuring Usability through
Product and Process, John Wiley and Sons, New York, 1993.
• Volaitis, L.E., Chou, R.S., Wiklund, M., Consumers’ Expectation of the
Weights of Portable Communications Devices, Proceedings of the 42nd
Human Factors and Ergonomics Society Annual Meeting, 1997, pp. 140–146.

9.8 PROBLEMS
1. List at least seven important characteristics of potential medical device
users that should be taken into consideration during the design process.
2. Discuss the main factors with respect to device users that must be consid-
ered carefully.
3. List at least twelve medical devices with a high incidence of user/human
error.
4. Describe a general approach for developing effective user interfaces for
medical devices.
5. List at least nine guidelines for making medical device interfaces more
user-friendly.
6. Discuss the following three factors with respect to medical device use
environments:
• Noise and light
• Mental workload
• Physical workload
7. What are the important factors that must be considered during the medical
device design phase when the device is going to be used by persons over
65 years of age?
8. List at least twelve useful guidelines for designing hand-operated medical
devices with respect to CTD.
9. Write an essay on medical device usability.
10. List at least six of the most useful documents for improving the usability
of medical devices.

REFERENCES
1. Obradovich, J.H., Woods, D.D., Users as Designers: How People Cope with Poor HCI
Design in Computer-Based Medical Devices, Human Factors, Vol. 38, No. 4, 1996, pp. 574–592.
2. Hyman, W.A., Errors in the Use of Medical Equipment, in Human Error in Medicine,
edited by Bogner, M.S., Lawrence Erlbaum Associates, New York, 1994, pp. 327–347.
3. Garmer, K., et al., Arguing for the Need of Triangulation and Iteration When Designing
Medical Equipment, Journal of Clinical Monitoring and Computing, Vol. 17, 2002,
pp. 105–114.
4. Dhillon, B.S., Engineering Usability: Fundamentals, Applications, Human Factors, and
Human Error, American Scientific Publishers, Stevenson Ranch, California, 2004.
5. ISO 13407, User-Centered Design Process for Interactive Systems, International
Organization for Standardization (ISO), Geneva, Switzerland, 1999.
6. Medical Device Use-Safety: Incorporating Human Factors Engineering into Risk
Management, Draft Guidance Document, Center for Devices and Radiological Health,
Food and Drug Administration, Washington, D.C., 2000.
7. Wiklund, M.E., Medical Device and Equipment Design: Usability Engineering and
Ergonomics, Interpharm Press, Inc., Buffalo Grove, Illinois, 1995.
8. Medical Device Reporting (MDR) System, Center for Devices and Radiological
Health (CDRH), Food and Drug Administration, Washington, D.C., 1991.
9. Salvemini, A.J., Challenge for User-Interface Designers of Telemedicine Systems,
Telemedicine Journal, Vol. 5, No. 2, 1999, pp. 10–15.
10. Wiklund, M.E., Making Medical Device Interfaces More User Friendly, Medical
Device and Diagnostic Industry (MDDI) Magazine, May 1998, pp. 177–184.
11. Czaja, S., Special Issue Preface, Human Factors, Vol. 32, No. 5, 1990, p. 505.
12. Czaja, S., Clark, M., Weber, R., Computer Communication Among Older Adults,
Proceedings of the Human Factors Society 34th Annual Meeting, 1990, pp. 304–309.
13. Small, A., Design for Older People, in Handbook of Human Factors, edited by Salvendy,
G., John Wiley and Sons, New York, 1987, pp. 125–140.
14. CPSC Publication No. 702, Product Safety and the Older Consumers: What
Manufacturers/Designers Need to Consider, Consumer Product Safety Commission
(CPSC), Washington, D.C., 1988.
15. Armstrong, T., Radwin, R.G., Hansen, D.J., Repetitive Trauma Disorders: Job Evaluation
and Design, Human Factors, Vol. 28, No. 3, 1986, pp. 325–330.
16. Herbert, L., Living with CTD, IMPACC, Inc., Bangor, Maine, 1990.
17. Putz-Anderson, V., Cumulative Trauma Disorders, Taylor and Francis, Inc., New York,
1988.
10 Software Usability

10.1 INTRODUCTION
Over the years, with the increasing development of software for interactive applications,
attention to the requirements and preferences of potential end users has intensified
quite significantly. Nowadays, the user interface quite often plays a very important
role in the success or failure of a software project. Furthermore, as per Ref. [1],
around 50%–80% of all source code is devoted, directly or indirectly, to the user
interface.
Generally, user-friendly software enables all of its potential users to conduct their
tasks easily and intuitively, and it clearly supports rapid learning and high skills
retention of all the involved individuals. Furthermore, in today’s competitive global
environment, usability is not a luxury, but a basic ingredient in software systems, as
the users’ productivity as well as comfort relate directly to it.
Software usability may simply be defined as quality in software application or
use: specifically, how productively its users will be able to conduct their tasks, how
much support the users will require, and how easy and straightforward the software
is to learn and use [2, 3].
This chapter presents various important aspects of software usability.

10.2 NEED FOR CONSIDERING USABILITY DURING
THE SOFTWARE DEVELOPMENT PROCESS AND
THE HUMAN-COMPUTER INTERFACE
FUNDAMENTAL PRINCIPLES
In order to produce user-friendly software products, careful consideration to usability
during the development phase is absolutely essential. Four of the important factors
that clearly dictate the need for considering usability during the software develop-
ment process are as follows [4]:

• Factor I: Mixed users. The software products’ users could be professionals
or non-professionals with limited or no computer skills at all.
• Factor II: Competition. Failure to address usability-associated issues prop-
erly can result in the loss of market share, should competitors release their
software products with better usability.
• Factor III: Global market. Software products generally cover a global
market with varying language proficiencies, cultures, etc.
• Factor IV: Cost. Software products with poor usability reduce users’ produc-
tivity and increase developer costs in regard to customer support services,
hotlines, etc.

DOI: 10.1201/9781003298571-10


Five fundamental principles of the human-computer interface in regard to software
are as follows [4, 5]:

• The software must satisfy all types of task-associated requirements.


• The software must be easy to use, user friendly, and adaptable to the levels
of knowledge or experience of its potential users.
• The software system must be able to provide an effective feedback to its
potential users in regard to their performance.
• The software-associated ergonomics principles must be applied properly, in
particular, to human data processing.
• The software system must be able to display information in a format as well
as at a pace adapted for its potential operators.

10.3 SOFTWARE USABILITY ENGINEERING PROCESS


Experience over the years clearly indicates that the software usability engineering
process may be viewed differently by different organisations. Nonetheless, a typical
process followed in product development is essentially composed of the following
three principal activities (i.e., activity A, activity B, and activity C) conducted in parallel [6]:

• Activity A: Visiting customers to understand their requirements. This is
concerned with gaining insight into customers’ ongoing experience with a
system. Data on users’ experience are obtained basically through contex-
tual interviews (i.e., relevant interviews are carried out while users conduct
their tasks). During the interviews, users are asked about their system inter-
faces, perception of the system, the type of work being performed, etc. The
contextual interviews’ one important advantage is that they produce large
amounts of data quite rapidly. However, it is to be noted with care that dif-
ferent users in different contexts have quite different needs. The aspects of a
user’s context that influence a system’s usability for each user include items
such as the type of work being performed, organisational culture, physical
workplace environment, and interactions with other software systems.
• Activity B: Establishing an operational usability specification for the soft-
ware system under consideration. A software usability specification may
simply be described as a measurable definition of usability that is shared by
all of the personnel involved. This is based on the understanding of the user
needs, the resources needed for producing the software system, and com-
petitive analysis. In developing the usability specification, two important
points to be considered with care are as follows [6]:
• All personnel involved with development of the usability specification
must evaluate it continuously during the development process and make
necessary changes for reflecting up-to-date information on the needs
of users.
• Failure to understand clearly the user needs prior to developing a speci-
fication can result in a specification document that does not properly
reflect the needs of users.
• Activity C: Adopting an evolutionary delivery method to software system
development. This activity means first building a small subset of the soft-
ware system and then “growing” it during the development process and
studying users on a continuous basis as the system evolves. Furthermore, it
is to be noted that evolutionary delivery exploits, rather than overlooks, the
dynamic nature of software-associated requirements. Some of the methods
used for improving software system usability during evolutionary delivery
stages include collecting user feedback during early field tests, building and
testing early prototypes, and conducting analysis of the impact of design
solutions [7–9]. An important benefit of the evolutionary delivery method
is that it helps to build the shared understanding of project team members
of the user-interface design of the software system.
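Activity B’s notion of an operational usability specification, i.e., a measurable definition of usability shared by all involved personnel, can be sketched in code. The attribute names, measuring methods, and numeric levels below are illustrative assumptions, not values from the text.

```python
# Sketch of an operational usability specification: each attribute gets a
# concrete measuring method and target levels, so "usable" becomes testable.
# Attribute names and numeric levels are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class UsabilityAttribute:
    name: str
    measuring_method: str
    worst_acceptable: float  # level at which release is blocked
    planned: float           # target level for this release
    best_possible: float     # bound used in engineering trade-offs

    def meets_plan(self, observed, higher_is_better=False):
        """Check an observed measurement against the planned level."""
        if higher_is_better:
            return observed >= self.planned
        return observed <= self.planned


spec = [
    UsabilityAttribute("initial task time", "minutes to place first order", 10.0, 5.0, 2.0),
    UsabilityAttribute("error rate", "errors per task in lab test", 3.0, 1.0, 0.0),
]

print(spec[0].meets_plan(4.2))  # 4.2 min is within the planned 5.0 min
print(spec[1].meets_plan(2.0))  # 2.0 errors per task exceeds the planned 1.0
```

As the text notes, such a specification should be re-evaluated continuously during development as the understanding of user needs improves.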

10.4 STEPS TO IMPROVE SOFTWARE PRODUCT USABILITY


Software usability can be improved by following steps A, B, C, and D presented
below [4, 10].

• Step A: Understand users. Two particular items essential for understanding
users are the software product usage environment and the user profile.
The software product use environment is useful for understanding the range
of factors that, directly or indirectly, impact the utilisation of the product.
Some of the environmental factors taken into consideration are the type of
network and security, system configuration, connection speed (if applicable),
browser types and settings (if applicable), degree of privacy/noise levels,
and location (i.e., office, mobile, home, etc.).
The user profile is quite useful to focus design-related efforts on actual
issues concerning potential users and avoid wasting resources and time on
sideline issues. However, building the user profile requires the collection of
information on items such as potential users’ interests, demographics, and needs.
Some of the sources for obtaining this type of information are product
registrations, market research data, training and customer support staff,
and marketing and sales staff.
• Step B: Evaluate the software product under consideration. Usability is rela-
tive: a software product may be simple and straightforward to use for one
type of user, but quite confusing or unintuitive for another. Thus, an
evaluation from the users’ perspective seeks to determine if
there is compatibility between the software product and its potential users.
A good method of evaluating a software product includes actions such as
testing usability, determining the user-product “fit”, analysing the available
user data, and conducting user field research (i.e., ethnographic research).
The action “testing usability” is basically concerned with determining
where users are having problems in using the software product. Usability
testing can be conducted in different settings such as on customer premises,
a designated area within the framework of company establishment, and an
outside market research facility. The action “determining the user-product
‘fit’” is concerned with assessing and prioritizing all usability-associated
issues. This should be conducted by keeping in mind that the most critical
interface elements that, directly or indirectly, impact the success of a software
product include screen language clarity, presentation performance, graphic
quality, error handling, intuitive navigation, underlying behavioural meta-
phor, and personalisation of content.
The action “analyzing the available user data” is concerned with the data
obtained from sources such as customer service, sales support, marketing
personnel, and technical support. Finally, the action “conducting user field
research” is concerned with obtaining first-hand information on the software
product’s functioning with its intended users.
• Step C: Assess available resources. This step is basically concerned with
assessing all the available resources and the capability of all involved indi-
viduals for executing the software product design/redesign project. The
diverse capabilities and talents of all the involved design team members
should also be taken into consideration with care. Furthermore, it is very
important to assess with care the existence of user advocacy among the
personnel forming the design team, because this is key to producing an
effective user-centred design. There should always be at least one member
of the design team to advocate for the users’ point of view.
• Step D: Update development process. In order to have an effective usabil-
ity process for future uses/releases, an evaluation of the overall development
cycle is absolutely necessary. The critical elements of a user-centred design
process are developing cycles of user feedback, establishing approaches of
tracking results, and keeping satisfactory design-associated documentation.
Nonetheless, it should be noted with care that the documents that are
considered quite useful for software design/redesign work include user-
interface specification, application-related specifications, marketing (busi-
ness) requirements, flowcharts and functional specifications. Finally, it is
worth emphasising that the evaluation of documents such as these is a very
important step toward updating the ongoing development process.
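As a concrete illustration of Step A, the use-environment factors and user-profile items mentioned above can be recorded in small structures that a design review checks for completeness. The field names and the completeness check below are assumptions for illustration, not an artifact the text prescribes.

```python
# Sketch of Step A artifacts: a use-environment record and a user profile,
# plus a completeness check for design reviews. Field names follow the
# factors listed in the text; concrete values are invented examples.

from dataclasses import dataclass, fields


@dataclass
class UseEnvironment:
    location: str              # office, mobile, home, etc.
    privacy_noise_level: str
    connection_speed: str
    system_configuration: str
    browser_type_settings: str
    network_and_security: str


@dataclass
class UserProfile:
    demographics: str
    interests: str
    needs: str
    information_sources: str   # e.g., product registrations, market research


def missing_fields(record):
    """Names of fields left empty -- gaps to close before design work."""
    return [f.name for f in fields(record) if not getattr(record, f.name)]


env = UseEnvironment("home", "private, quiet", "broadband", "laptop", "", "home Wi-Fi")
print(missing_fields(env))  # the browser field was left empty
```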

10.5 SOFTWARE USABILITY INSPECTION METHODS
AND CONSIDERATIONS FOR THEIR SELECTION
There are many software usability inspection methods. The five most widely used
methods for inspecting/evaluating software usability are shown in Fig. 10.1 [4, 10, 11].
All five methods shown in Fig. 10.1 are described separately below.

• Pluralistic walk-through: This method makes use of group meetings
where individuals such as usability developers, specialists, and students
step through a learning scenario and discuss each and every dialogue ele-
ment. Furthermore, this feature inspection lists items such as sequences
of features used for conducting typical tasks, steps that require extensive
experience for properly evaluating a proposed set of features, checks for
long sequences, and difficult steps. Some of the main benefits of this
FIGURE 10.1 Widely used software usability inspection methods.

method are that it is quite straightforward to learn and use, allows itera-
tive evaluation/testing, and is quite useful in satisfying the criteria of all
involved parties.
• Cognitive walkthrough: This method makes use of a detailed procedure
for simulating task execution at each step of the dialogue for determining
if the simulated user’s memory content and goals can be safely assumed
to lead to the next correct anticipated measure or action. The principal
benefits of this method are that it is a quite effective approach for predict-
ing problems and capturing the cognitive process. In contrast, its two main
drawbacks are that it is focused on only one attribute of usability and that
there is a need to train a skilled evaluator [12].
• Guidelines checklists: These are quite useful for ensuring that the appropriate
usability principles will be considered in software design work. A checklist
provides involved inspectors with a basis for comparing the software product.
Generally, checklists are employed in conjunction with a usability inspection/
evaluation method.
• Heuristic evaluation: This method involves usability specialists who deter-
mine whether each and every dialogue element satisfies set usability principles
effectively. Some of the main benefits of this method are that it is straightforward
to learn and use, useful for highlighting problems early in the design process,
and inexpensive to implement.
• Standards inspection: This method involves usability experts who inspect
the interface for compliance with given standards. These standards could
be user-interface standards, domain-specific software standards, or depart-
mental standards (if any).

Additional information on these five methods is available in Refs. [10, 11].
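As an illustration of how findings from a heuristic evaluation by several specialists might be aggregated, the sketch below groups reported problems and ranks them by mean severity. The 0–4 severity scale and the example findings are assumptions from common usability practice, not something this text specifies.

```python
# Sketch of aggregating heuristic-evaluation findings from several
# evaluators: group reported problems and rank them by mean severity.
# The 0-4 severity scale and the findings themselves are assumptions.

from collections import defaultdict
from statistics import mean

# (evaluator, dialogue element, violated principle, severity 0-4)
findings = [
    ("evaluator-1", "delete dialog", "error prevention", 4),
    ("evaluator-2", "delete dialog", "error prevention", 3),
    ("evaluator-1", "search page", "consistency", 2),
    ("evaluator-3", "delete dialog", "error prevention", 4),
]


def rank_problems(findings):
    """Group findings by (element, principle); rank by mean severity."""
    grouped = defaultdict(list)
    for _evaluator, element, principle, severity in findings:
        grouped[(element, principle)].append(severity)
    ranked = sorted(grouped.items(), key=lambda kv: mean(kv[1]), reverse=True)
    return [(element, principle, round(mean(sevs), 2), len(sevs))
            for (element, principle), sevs in ranked]


for element, principle, severity, n in rank_problems(findings):
    print(f"{element} / {principle}: mean severity {severity} ({n} evaluators)")
```

Ranking by mean severity across independent evaluators helps separate widely agreed, serious problems from one-off observations.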


During the selection of an appropriate usability evaluation/inspection method
or combination of methods for a specific application, the person involved in the
selection process must take into careful consideration the different foci of the
evaluation. Most of these considerations/foci are as follows [13]:

• The type of measures provided.


• The stage in the life cycle at which the evaluation is conducted.
• The resources required.
• The method’s subjectivity/objectivity level.
• The information provided.
• The immediacy of the response.
• The level of interference implied.
• Evaluation style.

10.6 SOFTWARE USABILITY TESTING METHODS
AND IMPORTANT FACTORS WITH
RESPECT TO SUCH METHODS
There are a number of usability testing methods that measure system performance in
regard to predefined criteria according to the usability attributes stated by the usabil-
ity standards and empirical metrics [5]. Generally, in these methods/approaches,
users conduct certain tasks with the software system/product. Furthermore, the
required data are collected on measured performance (e.g., the time required to
conduct the task).
Four commonly used software usability testing methods are presented below [5].

• Method 1: In-field studies. This method is basically concerned with observing
the users conducting their assigned tasks in their normal study/work
environment. The principal benefits of the method are the natural user
performance and group interaction. However, it is to be noted that the
method has certain limitations in terms of measuring performance because
appropriate testing equipment cannot be utilised properly in normal work
environments.
• Method 2: Thinking-aloud protocol. This is a commonly used method in the
software testing area because it is quite helpful in conducting formative eval-
uations [14]. During the test, participants are asked to express their thoughts,
opinions, and feelings while interacting with the software product and con-
ducting tasks. Their remarks provide quite significant insight into the most
effective method of designing the system interaction. However, it is to be
noted that the thinking-aloud protocol could be quite difficult to use with cer-
tain groups of people (e.g., young students), who are distracted by the process.
• Method 3: Codiscovery. In this method, a group of users conduct tasks
together that simulate a work process under consideration. Furthermore,
most of these users have someone else available for help. The method is
clearly considered quite useful in various work scenarios.
• Method 4: Performance measurement. This is another widely used method
in which usability-related tests are directed at determining hard, quantitative
data. Normally, these data are in the form of performance metrics (e.g.,
required time to conduct specific tasks). Finally, it is to be noted that the
International Organization for Standardization (ISO) promotes the usability
evaluation method based on measured performance of predetermined
usability metrics [15].

FIGURE 10.2 Important factors in regard to software usability testing methods.

Finally, factors such as those shown in Fig. 10.2 are considered very important in
regard to software usability testing methods [16].
Additional information on the above four methods is available in Refs. [5, 13, 16].
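A minimal sketch of Method 4 (performance measurement) is shown below: raw test-session records are reduced to a task completion rate (effectiveness) and a mean time on successful attempts (efficiency), in the spirit of the ISO-style measured-performance metrics mentioned above. The session data are invented for illustration.

```python
# Sketch of Method 4: reducing raw test-session records to performance
# metrics -- a completion rate (effectiveness) and a mean time on successful
# attempts (efficiency). Session data are invented for illustration.

from statistics import mean

# (participant, task, completed?, seconds taken)
sessions = [
    ("P1", "create report", True, 210.0),
    ("P2", "create report", True, 185.0),
    ("P3", "create report", False, 420.0),
    ("P4", "create report", True, 240.0),
]


def task_metrics(sessions, task):
    """Return (completion rate, mean time of successful attempts) for a task."""
    rows = [s for s in sessions if s[1] == task]
    completed = [s for s in rows if s[2]]
    completion_rate = len(completed) / len(rows)
    mean_time = mean(s[3] for s in completed)
    return completion_rate, mean_time


rate, seconds = task_metrics(sessions, "create report")
print(f"completion rate {rate:.0%}, mean time on success {seconds:.0f} s")
```

Restricting the time metric to successful attempts is one common convention; whether failed attempts should be timed as well is a design decision the test plan must state up front.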

10.7 USEFUL GUIDELINES TO PERFORM SOFTWARE
USABILITY TESTING
Over the years, professionals working in the area of usability have developed many
useful guidelines for performing software usability testing. Seven of these guidelines
are as follows [4, 12]:

• Guideline 1: Keep the testing session as neutral as possible. This means
that there should be no vested interest, whatsoever, in the outcomes of a
given test.
• Guideline 2: Ensure that the person conducting/directing the usabil-
ity test is clearly conscious of his/her body language and voice. This is
very important because it is fairly easy to unintentionally influence others
through body language and voice.
• Guideline 3: Treat each individual participating in the test as a totally
new case. This means that each and every participant should be considered
unique irrespective of his/her previous performance in usability testing
sessions or background.
• Guideline 4: Ensure that the individual directing/conducting the usability
test keeps going even when he/she makes an error. This individual must
not panic even when he/she has inadvertently revealed some information
or in some other way has clearly biased the usability test in process. If
the involved individual just continues, then his/her action may not even be
observed by the personnel participating in the test.
• Guideline 5: Use the “thinking aloud” method as considered appropriate.
This method has proved to be quite useful in capturing the thinking of the
participating personnel while working with the interactive software.
• Guideline 6: Use humour as necessary for keeping the test environment
fairly relaxed. Humour has proved to be very useful in counteracting par-
ticipants’ self-consciousness, and it can also help them to relax.
• Guideline 7: Assist individuals taking part in the usability test only in
exceptional circumstances. More clearly, let these people struggle as much
as possible.

10.8 PROBLEMS
1. What are the important factors that dictate the need for considering usabil-
ity during the software development process?
2. What are the fundamental principles of the human-computer interface in
regard to software?
3. Describe the software usability engineering process.
4. Discuss the steps for improving software product usability.
5. What are the widely used methods for inspecting/evaluating software
usability? Describe at least two of these methods.
6. Describe the following two software usability testing methods:
• Performance measurement
• Thinking-aloud protocol
7. What are the important factors in regard to software usability testing methods?
8. Discuss at least four useful guidelines to perform software usability testing.
9. List at least six factors that must be considered during the selection of soft-
ware usability inspection methods.
10. Write an essay on software usability.

REFERENCES
1. Myers, B., Robson, M.B., Survey on User Interface Programming, Proceedings of the
ACM CHI’92 Human Factors in Computing Systems Conference, 1992, pp. 195–202.
2. International Organization for Standardization (ISO), Software Product Evaluation:
General Overview, ISO/IEC 14598-1, Geneva, Switzerland.
3. Juristo, N., Windl, H., Constantine, L., Introducing Usability, IEEE Software, Vol. 18
(January/February), 2001, pp. 20–21.
4. Dhillon, B.S., Engineering Usability: Fundamentals, Applications, Human Factors, and
Human Error, American Scientific Publishers, Stevenson Ranch, California, 2004.
5. Avouris, N.M., An Introduction to Software Usability, ECE Department, University of
Patras, Greece, 2002.
6. Good, M., Software Usability Engineering, Digital Technical Journal, Vol. 6, February
1988, pp. 125–133.
7. Whiteside, J., Archer, N., Wixon, D., Good, M., How Do People Really Use Text
Editors? SIGOA Newsletter, Vol. 3, No. 1–2, 1982, pp. 29–40.
8. Good, M., Whiteside, J., Wixon, D., Jones, S., Building a User-Derived Interface,
Communications of the ACM, Vol. 27, October 1984, pp. 1032–1043.
9. Wixon, D., Bramhall, M., How Operating Systems are Used: A Comparison of VMS
and UNIX, Proceedings of the Human Factors Society 29th Annual Meeting, 1985,
pp. 245–249.
10. Emerson, M., Porter, D., Rudman, F., Improving Software Usability: A Manager’s
Guide, Enervision Media, East Chatham, New York, 2002.
11. Fitzpatrick, R., Strategies for Evaluating Software Usability, Department of Mathematics,
Statistics, and Computer Science, Dublin Institute of Technology, Dublin, Ireland, 2002.
12. Lee, S.H., Usability Testing for Developing Effective Interactive Multimedia Software:
Concepts, Dimensions, and Procedures, Educational Technology and Society, Vol. 2,
No. 2, 1999, pp. 100–113.
13. Dix, A., Finlay, J., Abowd, G., Beale, R., Human Computer Interaction, Prentice Hall,
Hemel Hempstead, U.K., 1998.
14. Ferre, X., Juristo, N., Windl, H., Constantine, L., Usability Basics for Software
Developers, IEEE Software, Vol. 18, No. 1, 2001, pp. 22–29.
15. International Organization for Standardization (ISO), Ergonomics Requirements for
Office Work with Visual Display Terminals (VDT), Part II: Guidance on Usability
(ISO 9241-11, draft international standard), Geneva, Switzerland, 1997.
16. Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., Carey, T., Human Computer
Interaction, Addison-Wesley, Reading, Massachusetts, 1994.
11 Web Usability

11.1 INTRODUCTION
A few decades ago, the World Wide Web (WWW) was released by the European
Laboratory for Particle Physics (CERN), and it has since grown from just 623 sites
in 1993 to hundreds of millions of sites around the globe [1]. Nowadays, the web
has become an instrumental factor in the global economy. For example, in 2001, the
global e-commerce market was estimated to be around $1.2 trillion, and today it has
grown to many trillions of dollars [1, 2]. Moreover, there are billions of web users
throughout the world.
Web usability may simply be expressed as allowing the users to manipulate fea-
tures of a website for accomplishing certain goals [1, 2]. Some of the main goals of
web usability are as follows [2, 3]:

• Provide the correct choices to all the potential users, and do so in a very
obvious way.
• Present the information to all potential users in a clear and concise fashion.
• Put the most important thing in the appropriate place on a web page or a
web application.
• Remove any ambiguity whatsoever concerning an action’s consequences
(e.g., clicking on remove/delete/purchase).

Nowadays, usability rules the web: if a website is not easy and straightforward to
use, people simply leave and move on to something else. This chapter presents various
important aspects of web usability.

11.2 WEB USABILITY-ASSOCIATED FACTS AND FIGURES


Some of the web usability-associated facts and figures are as follows:

• A study of e-commerce sites reported that only around 56% of intended
tasks were conducted successfully by the site users [4].
• A study reported that about 40% of website users elected not to return to a
site because of design-related problems [5].
• A study reported that approximately 65% of all online shopping trips result
in failure [6].
• A study revealed that around 70% of retailers lacked a well-defined
e-commerce strategy and firmly believed that they were using their web-
sites to test the waters for online-associated demand [7].
• A number of studies have shown that web usability is improving by around
2%–8% per year [2, 8, 9].

DOI: 10.1201/9781003298571-11 167


168 Applied Reliability, Usability, and Quality for Engineers

• User interface accounts for about 47%–60% of the lines of system or appli-
cation code [10].
• A number of studies have shown that approximately 10% of all users scroll
beyond the information that is visible on the screen when a web page
appears [2, 9, 11].
• In 2000, over 50% of the companies in the United States sold their products
online, and there were over 800 million pages on the web throughout the
United States [12, 13].

11.3 COMMON WEB DESIGN-RELATED ERRORS


There are many errors that are quite common on all levels of web design. Some of
these errors are as follows [9]:

• Content authoring error. This type of error occurs when the designer
writes in the normal linear style instead of writing for potential online read-
ers who often scan text and need short pages, where secondary information
is best relegated to supporting pages.
• Page design error. This type of error frequently occurs when the emphasis
is on creating attractive pages for evoking good feelings about the organisa-
tion/company. Instead, the emphasis should be on designing for an optimal
user experience under a day-to-day environment. Furthermore, utility is
more important than attractive pages.
• Linking strategy error. This type of error occurs when the designer treats
the website as an indispensable entity and does not provide appropriate
links to other sites. Beyond not having proper entry points where others can
link, many organisations/companies even overlook the use of essential links
to their own site in their very own advertisements.
• Project management-associated error. This type of error occurs when a
web project is managed simply as a conventional corporate project rather
than as a single-customer-interface project. The main drawback of the
traditional method is that it generally leads to a rather internally focused
design with an inconsistent user interface.
• Information architecture-associated error. This type of error occurs when
the website is constructed for mirroring the organisational structure rather
than structuring it to mirror the tasks of users and their specific views of
the information space.
• Business model-associated error. This type of error occurs when the
designer treats the web as a marketing communication (Marcom) brochure,
rather than recognizing it as a paradigm shift that will ultimately change
the way that business-associated transactions are conducted in this age of a
networked economy.

11.4 WEB PAGE DESIGN


The design of a web page is a very important factor in the effectiveness of web
usability, as it is the most immediately visible element of web design. Some of

TABLE 11.1
Web Page Design: Important Usability Dos and Don’ts

1. Do: Design web pages such that potential browsers can easily resize them to meet their specific needs.
   Don’t: Assume that users can see what you see.
2. Do: Fit main page contents within the browser window’s width, even when the window is not maximised to fill the total screen.
   Don’t: Get carried away with creative or “artistic” fonts.
3. Do: Keep the size of most web pages to a level that can be downloaded within 10 seconds.
   Don’t: Specify fonts using absolute sizes.
4. Do: Tailor images to elements that are clearly meaningful; past experience clearly indicates that dense graphics generally alienate users.
   Don’t: Use animation unless it is absolutely essential.
5. Do: Consider screen real estate as a highly valuable commodity.
   Don’t: Use all capital letters.
6. Do: Make use of visual highlighting as necessary to draw the attention of users to important information.

the important usability-related dos and don’ts in regard to page design are presented
in Table 11.1 [14].
In web page design, some of the important factors that should be considered
with utmost care are shown in Fig. 11.1 [14]. Each of the important factors shown
in Fig. 11.1 is described in detail in Sections 11.4.1–11.4.5.

11.4.1 Page Size


Page size is important to usability in the following two ways:

i. The downloading and displaying speed of pages.
ii. The flexibility of the pages to fit the available display area.

FIGURE 11.1 Some important factors to be considered in web page design.

In regard to “The downloading and displaying speed of pages”, the length of time
taken to download a page from the server and display it in the browser window is
a very important factor in sizing web pages effectively. Here, response time may
simply be expressed as the time from when a user requests a page to when it has
been displayed totally. Some of the useful guidelines concerned with response time
are as follows:

• Ensure that the response time is well within ten seconds for keeping the
attention of potential users.
• Ensure that the response time is within one second to fit into the chain of
potential users’ thoughts.
• Ensure that the response time is within 0.1 second in order to make the
system feel interactive.
• Provide an adequate warning to potential users when a web page will
require more than 10 seconds to download.
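The three response-time thresholds above can be captured in a small helper. The following Python sketch is purely illustrative (the function name and the wording of the labels are our own); only the 0.1-second, 1-second, and 10-second limits come from the guidelines above.

```python
# Hypothetical helper illustrating the response-time guidelines above;
# the 0.1 s / 1 s / 10 s thresholds are taken directly from the text.

def classify_response_time(seconds: float) -> str:
    """Map a measured page response time to the guideline band it falls in."""
    if seconds <= 0.1:
        return "feels interactive"
    if seconds <= 1.0:
        return "keeps the user's train of thought"
    if seconds <= 10.0:
        return "keeps the user's attention"
    return "warn the user: page is slow"

if __name__ == "__main__":
    for t in (0.05, 0.6, 4.0, 15.0):
        print(f"{t:5.2f} s -> {classify_response_time(t)}")
```

In practice the measured value would come from timing the request in the browser or a monitoring tool; the classification itself is the part the guidelines fix.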

In regard to “The flexibility of the pages to fit the available display area”, some of
the guidelines considered quite useful for its successful achievement are as follows:

• Design web pages in such a way that they can easily be resized (i.e., to fit
within a wide range of window sizes).
• Ensure that no horizontal scrolling is needed when the window is 800 pixels
wide.
• Ensure that all the key page elements are clearly visible with scrolling when
the window is 400 pixels in height.
• Use relative, instead of absolute, sizes for all elements that fall under a
browser’s resizing capability.
• During the design process, pay close attention to the ability for resizing
footers, headers, etc.

11.4.2 Font Usage
Fonts are used for creating a variety of web page elements, including buttons, naviga-
tion bars, links, menus, footers and headers, and tables, in addition to the text that
conveys most of the content of a website. Font faces fall under two basic categories:
serif and sans-serif. Serif fonts have small appendages at the bottoms and tops of
letters. Three examples of such fonts are Times Roman, Courier, and Century. These
fonts are useful as they make it easier to read long lines.
Sans-serif fonts are simpler in shape because they consist of only basic line
strokes. Two examples of sans-serif fonts are Helvetica and Arial. Some of the use-
ful pointers directly or indirectly concerned with font usage are as follows [2, 14]:

• Avoid, as much as possible, specifying absolute font sizes, and avoid getting
carried away in the use of font sizes, faces, and styles.
• Note that different browsers support different font faces.
• Use italics for defining terms or emphasising an occasional word.

11.4.3 Textual Element Usage


Most of a website’s key content is conveyed through the use of text, tables, and
lists. Thus it is very important to write in a style that not only transmits appropriate
information, but also clearly reflects how websites are used. Some of the guidelines
considered quite useful in writing effective web page text are as follows [14]:

• Ensure that all of the text is as concise as possible.
• Ensure that all exaggerated/subjective language is converted clearly and
effectively to more neutral terms.
• Ensure that all of the text layout is converted appropriately to a format that
is more scannable.

11.4.4 Image Usage
Users quite often cite webpage images as an impediment to successful web access. In
this regard, the following guidelines to the use of images may be quite helpful [2, 9, 14]:

• Use a commercial image-compression tool to reduce the size of image files
as much as possible.
• Avoid using a different photograph on each and every page of a website, as
this will degrade the website’s performance.
• Make use of animation only when it clearly adds to the meaning of the
information.
• Use a thumbnail image on web pages that link to the larger image.
• Limit graphics to those elements that are really required.
• Reuse graphics on other pages if the need arises.
• Limit the use of different colours.
• Use the most efficient format for an image.
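A rough way to check these guidelines against the 10-second download target is to budget the page’s total weight, HTML plus images, against the link speed. The following Python sketch is an assumption-laden illustration (function names are our own, and the model ignores latency, compression, and parallel connections); 56 kbit/s approximates the dial-up connections the chapter later recommends testing with.

```python
# Illustrative page-weight budget (hypothetical helper names). The model is
# idealised: download time = payload size / raw link speed, ignoring latency,
# HTTP overhead, and parallel connections.

def download_time_seconds(total_bytes: int, link_kbps: float = 56.0) -> float:
    """Idealised time to move total_bytes over a link of link_kbps kilobits/s."""
    return (total_bytes * 8) / (link_kbps * 1000)

def within_budget(total_bytes: int, budget_s: float = 10.0,
                  link_kbps: float = 56.0) -> bool:
    """True if the page weight fits the 10-second guideline on this link."""
    return download_time_seconds(total_bytes, link_kbps) <= budget_s

if __name__ == "__main__":
    html_bytes = 30_000                 # 30 KB of HTML
    image_bytes = [20_000, 15_000]      # two compressed images
    total = html_bytes + sum(image_bytes)
    print(f"{download_time_seconds(total):.1f} s on dial-up; "
          f"within budget: {within_budget(total)}")
```

On this model a 70 KB page takes exactly 10 seconds over 56 kbit/s, which is one way to see why the chapter keeps pressing for image compression and graphic restraint.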

11.4.5 Help Users
Past experience over the years clearly indicates that web users generally do not read
web pages in a serial manner; rather, they hop from one visual element to another.
Thus, the biggest challenge faced by web designers is to use the visual elements most
effectively for drawing the attention of potential users to key elements. Furthermore, it
should be emphasised that users will not read text unless they are specifically enticed
to do so. The following list presents some of the important guidelines/pointers [14]:

• Use the same colour for developing a common thread among elements that
cannot be placed next to each other.
• Ensure that the visual-highlighting approaches’ application is consistent
throughout the website.
• Test each web page’s final design by eliminating all visual elements in ques-
tion, one at a time.
• Past experience clearly indicates that users generally assume that a row of
similar elements should be “read” from left to right or from top to bottom.
• Reinforce the hierarchy of web page contents with a visual dominance
hierarchy.

TABLE 11.2
Website Design: Important Usability Dos and Don’ts

1. Do: Ensure that web pages effectively support browser resizing as much as possible.
   Don’t: Have a copyright notice; it is not required for establishing ownership of web materials.
2. Do: Ensure that the web pages clearly honour the user’s browser settings.
   Don’t: Have a banner page.
3. Do: Ensure that each and every web page incorporates real content.
   Don’t: Say “welcome” on web pages.
4. Do: Ensure that all pages of a single website share a common look and feel.
   Don’t: Pop up windows without the consent of the user.
5. Don’t: Use frames.

• Ensure that the key text has the highest possible contrast.
• Use size for making users understand which elements fall where in regard
to the content hierarchy.
• Past experience clearly indicates that items above and to the left of the
page centre appear to be noticed first.

11.5 WEBSITE DESIGN


Generally, more attention is paid to page design than to site design. However, from the
usability point of view, site design is more important and challenging. Some of
the important usability-related “dos” and “don’ts” in regard to website design are
presented in Table 11.2 [14].
Some of the important factors to be considered with care in website design
are shown in Fig. 11.2 [9, 14]. Each of the important factors shown in Fig. 11.2 is
described in Sections 11.5.1–11.5.3.

FIGURE 11.2 Some important factors to be considered in website design.



11.5.1 Site Organisation
Past experience over the years clearly indicates that site organisation needs careful
consideration during design, because users generally do not read web pages the way they
read books. Some useful guidelines concerning website organisation are as follows [9, 14]:

• Ensure that the pertinent information is positioned in such a way that is still
clearly visible even in a situation when the browser window is shrunk to
around 50% of the screen width.
• Organise the site into many bite-size pieces capable of being traversed in
varying ways to take advantage of the web’s navigational flexibility.
• Ensure that pointers to related topics are clearly visible somewhere in the
upper half of the page in question.
• Provide users some content on each and every page.
• Do not display blocks of text in a large font.

11.5.2 Shared Elements of Site Pages


In regard to the effective usability of a site, it is very important to help potential users
become familiar with the site with minimal effort on their part. This can be achieved
by adopting a consistent page style that repeats all common elements throughout the
website. This approach can also be quite helpful in improving user speed. Another
approach that quite significantly improves usability is to concentrate all common
elements at the bottom and top of each page or along the left-hand side.
Some of the user expectations of common elements are as follows [9, 14]:

• Incorporation of a search mechanism only when the site includes a rather
large number of pages.
• Incorporation of a help feature only for those situations in which it provides
quite substantive information.
• Very clear display of a “contact us” mechanism.
• Capability to go to the home page by simply clicking on the website’s icon.
• Gathering of information intended for all sponsoring agencies under
“about us”.

11.5.3 Site Testing and Maintenance


Regular testing and maintenance of the website are very important for maintaining
usability effectiveness. To maintain a website’s quality, test the design and each and
every new page by using Internet Explorer and Netscape, disabling images, using
different browser window widths, and making use of a dialup connection [14].
As web pages tend to change over time, it is very important to carry out appro-
priate maintenance activities on a regular basis. At a minimum of once a month,
all links must be verified to confirm that they are still active. Furthermore,
whenever a web page is changed, every effort should be made to double-check
that its links are functioning normally.
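A monthly link audit of this kind is easy to start automating. The sketch below is a hypothetical helper built only on Python’s standard html.parser: it extracts every href from a page’s HTML; actually probing each URL for liveness (e.g., with urllib.request) is left to the caller, since that needs network access.

```python
# Sketch of the first half of a link audit (assumed helper names): collect
# every <a href="..."> in a page. A real monthly check would then issue an
# HTTP request per URL and report the failures.

from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html_text: str) -> list:
    parser = LinkCollector()
    parser.feed(html_text)
    return parser.links

if __name__ == "__main__":
    sample = '<p><a href="/about">About us</a> <a href="contact.html">Contact</a></p>'
    print(extract_links(sample))  # ['/about', 'contact.html']
```

Running the extractor over every page of a site and diffing the result month to month also catches links that quietly disappear when pages are edited.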

TABLE 11.3
Navigation Aids: Important Usability Dos and Don’ts

1. Do: Give appropriate importance to bread-crumb trails and navigation bars, because they provide users with an understanding of their location on a site.
   Don’t: Assume that all users will be able to familiarise themselves with the website.
2. Do: Organise all navigation aids by considering the user tasks to be conducted.
   Don’t: Label items, buttons, or links with meaningless phrases.
3. Do: Ensure appropriate conformance to the standard practice of having all links underlined.
   Don’t: Change the standard colours used for links.
4. Do: Ensure that menu items, links, or navigation bar items that lead to the “current location” are deactivated in an appropriate manner.
   Don’t: Confuse potential users by implementing menus in a “creative” way.

11.6 NAVIGATION AIDS


There are various types of navigation aids utilised by users to find their way around
websites. Some of the important usability “dos” and “don’ts” concerning navigation
aids are presented in Table 11.3 [14, 15].
Three important factors that need to be considered with utmost care in regard to
the navigation aids are shown in Fig. 11.3 [2, 14, 15].

11.6.1 Link Usage
Links are probably the most common mechanism that directly or indirectly supports
website navigation, and web pages make use of links in the following three ways [2, 14, 15]:

• To direct users to an alternative source when the current page does not
contain the required information.
• To provide efficient access to the website’s other pages.
• To direct users to pages that contain additional information on the text/
graphic stated in the link.

FIGURE 11.3 Three important factors to be considered in regard to navigation aids.



The following list presents some guidelines considered quite useful in regard to
the effective use of links [14, 15]:

• Underline the words that really matter for improving the link’s readability.
• Make the image itself the link when there is a definite need to link to a
larger copy of an image.
• Group the links into categories when multiple links show up in a list.
• Make use of standard colours and underline all links.
• Format all links by utilising lowercase and uppercase letters.
• Locate all the alternative links at the top of the page.
• Select link text with utmost care.

11.6.2 Menus and Menu Bar Usage


Generally, websites use menu bars for providing fundamental navigation-associated
functionality. They may incorporate various links or include menu titles that drop down
as the user’s cursor clicks on them or passes over them. The following list presents
some guidelines considered quite useful for the use of menus and menu bars [2, 14, 15]:

• Ensure that all menus are anchored properly to a menu bar across the top
of the web page.
• Ensure that all menu titles form a consistent group and are short.
• Format all menu titles and menu items by utilising uppercase and lowercase
letters.
• Ensure that all menu items are grouped together logically.
• Avoid using cascading (i.e., multilevel) menus.

11.6.3 Navigation Bar Usage


The main objective of having navigation bars is to lay out the website structure in
a hierarchical form, and they are normally located along one side of the web page.
The following two factors have proved to be very helpful in using the navigation
bar effectively [2]:

• Factor I: The selection of navigation labels and structure with utmost care.
• Factor II: The selection of the top ten things (during the navigation bar
development process) that all users are most likely to do on the site in
question.

The following seven steps are considered quite useful in selecting navigation
structure and labels [14, 15]:

• Step I: Establish a list of operations/functionalities that the website has to
support if potential users are to conduct the highlighted top ten tasks/things
effectively.
• Step II: Write down clearly the operations on individual index cards.

• Step III: Spread out the cards with care and group them into logical
categories.
• Step IV: Highlight at least five users and have them repeat the preceding
three steps.
• Step V: Compare the findings of all the sortings. If a pattern
of classifications/categories is not emerging, then repeat the entire process
(i.e., all the steps) again.
• Step VI: When the classifications/categories arrived at by most users look
similar, take advantage of them for developing an outline of the website
structure.
• Step VII: Present the structure’s outline to at least five other users and ask
their inputs. Repeat the process as the need arises.

11.7 WEB USABILITY EVALUATION TOOLS


There are many tools or methods that can be used to highlight web usability-
related problems. Over the years, these methods have proved to be very useful in
checking routine site-design elements for consistency and in encouraging
the application of good design-related practices. Four of these tools are Web SAT,
Max, NetRaker, and Lift [2, 16]. Each of these four tools or methods is described
in Sections 11.7.1–11.7.4.

11.7.1 Web SAT


The Web Static Analyzer Tool (Web SAT) is used for checking web page html
against typical usability-related guidelines for potential problems. It is one of the
tools/methods that belong to Web Metrics Suite and it was developed by the National
Institute of Standards and Technology (NIST) [2]. Web SAT generally allows up
to five individual Uniform Resource Locators (URLs) to be checked against its
usability-associated guidelines. At the end of the evaluation, Web SAT provides a
comprehensive report of problems found on each web page entered. The problems are
grouped under the following six categories [2, 16]:

• Category I: Form use. Under this category, the problems are concerned
with the form Submit and Reset buttons.
• Category II: Readability. Under this category, the problems are concerned
with content readability.
• Category III: Performance. Under this category, the problems are concerned
with the size and coding of graphics in regard to page download speeds.
• Category IV: Accessibility. Under this category, the problems are concerned
with the page making appropriate use of tags for visually impaired users.
• Category V: Maintainability. Under this category, the problems are con-
cerned with tags and coding information that would make the page easier
to port to another server.
• Category VI: Navigation. Under this category, the problems are concerned
with the coding of links.

All in all, the main limitation of the Web SAT is that it can examine or check only
individual web pages.
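To make the idea of static usability checks concrete, the following toy Python checker, which is not the NIST tool itself, implements one rule in the spirit of Web SAT’s accessibility category: flag every img tag whose alt attribute (relied on by visually impaired users) is missing.

```python
# Toy static check in the spirit of Web SAT's accessibility category (an
# illustration only, not the NIST tool): report every <img> lacking alt text.

from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Record a problem for each <img> start tag without an alt attribute."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                src = attr_map.get("src", "<unknown>")
                self.problems.append(f"img missing alt: {src}")

def check_page(html_text: str) -> list:
    checker = MissingAltChecker()
    checker.feed(html_text)
    return checker.problems

if __name__ == "__main__":
    page = '<img src="logo.gif" alt="Company logo"><img src="deco.gif">'
    print(check_page(page))  # ['img missing alt: deco.gif']
```

A fuller checker would add rules for the other categories (form Submit/Reset buttons, link coding, page weight), but each rule follows the same pattern: parse the HTML once and accumulate guideline violations.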

11.7.2 Max
This is another useful usability tool that scans through a website for collecting
information about vital statistics and rates concerning a site’s usability. Max uses a
statistical model for simulating the experience of a user in calculating ratings in the
following three areas [2, 16]:

• Area I: Accessibility. In this case, Max estimates the mean time a user takes
to find something on the site under consideration.
• Area II: Content. In this case, Max summarises the percentage of different
media elements (i.e., graphics, multimedia, and text) in addition to client-side
technologies utilised (e.g., Flash and Portable Document Format [PDF]) that
comprise the website.
• Area III: Load time. In this case, Max estimates the mean time to load
website pages.

The principal weakness of Max is that it does not provide many suggestions for mak-
ing changes to design. In contrast, its main strength is that it provides a performance
benchmark.

11.7.3 NetRaker
NetRaker consists of a number of online tools that help to highlight usability-related
problems and conduct market research. NetRaker provides a set of comprehensive
guidelines for composing objective survey questions and a customisable set of
usability survey templates. The questions are randomly made available to a website’s
users by providing them with an option to participate.
The survey requires users to conduct tasks on the website and then provide feedback
on how easy it was to carry out the tasks. Some of the main benefits of NetRaker
are as follows [2, 16]:

• This is a very useful tool for obtaining feedback in the context of a website’s
intended purpose, as opposed to relying totally on generic hypertext markup
language (HTML) checks or statistical analysis.
• This is a quite useful tool to survey users and gather usability-associated
feedback quickly.
• The NetRaker automation ensures that all users are surveyed consistently.

Finally, it is added that NetRaker is one of the best tools/methods for identifying
usability-related issues because it is based on users’ direct feedback.

11.7.4 Lift
This is another usability tool used for analysing a web page to uncover
usability-related problems. There are the following two types of Lift [2]:

• Type I: Lift Online. This carries out HTML checks derived from usability
principles in a similar way to Web SAT. More specifically, it checks one
page at a time and then provides a report on the usability-related issues of a
page. Furthermore, Lift Online goes a step further than Web SAT because
it provides appropriate code change-related recommendations.
• Type II: Lift Onsite. This can be easily run from a personal computer (PC),
and it provides the very compelling feature of directly fixing the HTML-
associated problems as they are being reviewed in the usability evaluation
report.

All in all, Lift provides usability-based HTML validations for ensuring good coding
practices.

11.8 QUESTIONS FOR EVALUATING WEBSITE MESSAGE COMMUNICATION EFFECTIVENESS

This section presents a checklist of questions considered quite useful in evaluating
the effectiveness of messages communicated by a website. These questions will be
quite helpful (directly or indirectly) in improving web usability, and they are grouped
under six distinct areas [13, 17–19]. These areas are concept, content, text, mechan-
ics, design, and navigation.
Questions pertaining to each of these six areas are presented in Sections 11.8.1–11.8.6.

11.8.1 Concept
Some of the questions pertaining to this area are as follows [13, 17–19]:

• What existing websites can be compared with the one in question or under
consideration?
• What expectations will the website raise for its visitors?
• What basic image of the company/organisation does the site project?
• What does the first page clearly promise concerning the rest of the website?
• Can the rest of the site satisfy this promise appropriately?

11.8.2 Content
Some of the questions pertaining to this area are as follows [13, 17–19]:

• What information is the website expected to convey to its users?
• Is the content easily accessible and the purchasing procedure (if applicable)
clearly user friendly?

• Is the information on the website easily accessible, attractive, accurate,
complete, and clear?
• What portion of the web page(s) was allocated for content in comparison to
other factors?
• How unique and accurate are the website contents?
• Does the site provide any communication-associated options?

11.8.3 Text
Some of the questions pertaining to this area are as follows [13, 17–19]:

• Does the first page effectively convey a clear message to potential visitors,
including what they can expect to find in the website?
• Are the titles and subheadings informative enough for their effective
application?
• Is the text on the first and feature pages short enough for its effective
application?
• Are all of the titles appropriate for their effective use by search engines?
• Are the hyperlinks and button titles straightforward and clear?
• Can the text be read appropriately in a cursory manner?
• Is the text under consideration grammatically checked?
• Is the text sufficiently attractive for reading?

11.8.4 Mechanics
Some of the questions pertaining to this area are as follows [13, 17–19]:

• Do tools such as roll-down menus and mouse-over events (if utilised) clearly
support the site’s use?
• Are all hyperlinks and buttons operating as per requirements?
• How quickly does the site react; how quickly do the pages load?
• How functional is the website under consideration?
• Are there any error-associated messages?

11.8.5 Design
Some of the questions pertaining to this area are as follows [13, 17–19]:

• Is there an appropriate level of contrast between background and text
colour?
• Are the users directed appropriately to the most important page elements?
• Is the site’s design clearly impressionable?
• Is there proper balance between design and content?
• Can the design in any way deter potential users?
• Is the site’s style unique?

11.8.6 Navigation
Some of the questions pertaining to this area are as follows [13, 17–19]:

• Is it possible to appropriately predict the contents of the options on the
menu without clicking them?
• How simple and straightforward is it to use the hyperlinks, buttons, and
menu for browsing the site?
• Is it possible to find anything using simple keywords on the engine?
• Does the site contain a local search engine?
• Will the browsing location be clear to users?
• How does the site conform to all existing web standards?

11.9 PROBLEMS
1. Define the term “Web usability” and write an essay on web usability.
2. List at least five web usability-associated facts and figures.
3. Discuss commonly occurring web design-related errors.
4. What are the important factors that must be considered in the web page
design?
5. List at least four web page design usability-related dos and don’ts.
6. List at least three website design usability-associated dos and don’ts.
7. What are the important factors to be considered in website design? Discuss
at least two of these factors.
8. List at least three navigation-aids-related usability dos and don’ts.
9. What are the important factors to be considered with respect to navigation
aids? Discuss at least one such factor.
10. Describe the following two tools used for evaluating web usability:
• Max
• Web SAT

REFERENCES
1. Powell, T., Web Design: The Complete Reference, Osborne McGraw-Hill, Berkeley,
California, 2000.
2. Dhillon, B.S., Engineering Usability: Fundamentals, Applications, Human Factors, and
Human Error, American Scientific Publishers, Stevenson Ranch, California, 2004.
3. Cloyd, M.H., Designing User-Centered Web Applications in Web Time, IEEE Software,
Vol. 18, No. 1, 2001, pp. 62–69.
4. Chi, E.H., Improving Web Usability Through Visualization, IEEE Internet Computing,
Vol. 6, No. 2, 2002, pp. 64–71.
5. Manning, H., McCarthy, J.C., Souza, R.K., Why Most Web Sites Fail (White Paper),
Forrester Research, Cambridge, Massachusetts, September 1998.
6. Souza, R.K., Manning, H., Goldman, H., Tong, J., The Best of Retail Site Design
(White Paper), Forrester Research, Cambridge, Massachusetts, October 2000.
7. Becker, S.A., Mottay, F.E., A Global Perspective on Web Site Usability, IEEE Software,
Vol. 18, No. 1, 2001, pp. 54–61.

8. Nielsen, J., PR on Websites: Increasing Usability, Retrieved on December 10, 2012,


from Alertbox Web Site: www.useit.com/alertbox/20030310.html.
9. Nielsen, J., Designing Web Usability, New Riders Publishing, Indianapolis, Indiana,
2000.
10. Trenner, L., Bawa, J., Eds., The Politics of Usability: A Practical Guide to Designing
Usable Systems in Industry, Springer-Verlag, London, 1998.
11. Nielsen, J., Top Ten Mistakes in Web Design, Retrieved on December 10, 2012, from
Alertbox Web Site: www.useit.com/alertbox/9605.html.
12. Preece, J., Online Communities: Designing Usability, Supporting Sociability, John Wiley
and Sons, New York, 2000.
13. McDonald, S., Waern, Y., Cockton, G., Eds., People and Computers XIV: Usability or
Else!, Springer-Verlag, London, 2000.
14. Brown, G.E., Web Usability Guide, NEES Consortium, Richmond, California, 2003.
Retrieved on December 10, 2012, from NEEShub Web site: www.nees.org/info/contact-us.html.
15. Fleming, J., Web Navigation: Designing the User Experience, O’Reilly and Associates,
Sebastopol, California, 1998.
16. Chak, A., Usability Tools: A Useful Start (New Architect: Strategy Product Review).
Retrieved on December 10, 2012, from www.webtechniques.com/archives/2000/08/
stratevu/.
17. Williams, R., Tollett, J., The Non-Designer’s Web Book: An Easy Guide to Creating,
Designing, and Posting Your Own Website, Peachpit Press, Berkeley, California, 2000.
18. Price, J., Price, L., Hot Text: Web Writing that Works, New Riders Publishing,
Indianapolis, Indiana, 2002.
19. Niederst, J., Web Design in a Nutshell: A Desktop Quick Reference, O’Reilly and
Associates, Sebastopol, California, 2001.
12 Quality in Health Care

12.1 INTRODUCTION
Each year a vast sum of money is spent on health care around the globe. For
example, in 1992 the United States alone spent around $840 billion on health care,
or about 14% of its gross domestic product (GDP) [1]. Furthermore, since 1960 the
health care-related spending in the United States has increased from 5.3% of the gross
national product (GNP) to about 13% in 1991 [2].
The history of quality in health care goes back to the 1860s, when Florence
Nightingale (1820–1910), a British nurse, helped to lay the foundation for health
care quality assurance programmes by advocating the need for a uniform system for
the collection and evaluation of hospital-associated statistics [1]. Her analysis of the
data collected clearly showed that mortality rates varied quite significantly from one
hospital to another.
In 1914, E.A. Codman (1869–1940) in the United States studied the results of
health care in regard to quality and clearly emphasised the issues that arise when
examining the quality of care, such as the accreditation of institutions, the importance
of licensure or certification of providers, the need to properly take into consideration
the severity or stage of the disease, the economic-related barriers to receiving care,
and patients' health and illness behaviours [1, 3].
This chapter presents various important aspects of quality in health care.

12.2 HEALTH CARE QUALITY-RELATED TERMS
AND DEFINITIONS AND REASONS FOR
THE RISING COST OF HEALTH CARE
Some of the commonly used terms and definitions in health care quality are as
follows [4, 5]:

• Health care. These are the services provided to individuals/communities for
maintaining, monitoring, promoting, or restoring health.
• Quality of care. This is the level to which delivered health services satisfy
established professional standards and judgements of value to consumers.
• Adverse event. This is an incident in which unintended harm resulted to an
individual receiving health care.
• Quality assessment. This is the measurement of the degree of quality at
some point in time, without any effort for improving or changing the degree
of care.
• Quality. This is the extent to which the properties of a product or service
produce/generate a desired outcome.

DOI: 10.1201/9781003298571-12


184 Applied Reliability, Usability, and Quality for Engineers

• Clinical audit. This is the process of reviewing the delivery of care against
established standards to highlight and remedy all deficiencies through a
process of continuous quality improvement.
• Quality assurance. This is the measurement of the degree of care given
(assessment) and, when appropriate, mechanisms for improving it.
• Dimensions of quality. These are the measures of health system per-
formance, including measures of effectiveness, appropriateness, safety,
capability, sustainability, accessibility, responsiveness, continuity, and
efficiency.
• Total quality management (TQM). This is a philosophy of pursuing contin-
uous improvement in each and every process through the integrated efforts
of all concerned persons associated with the organisation.
• Quality improvement. This is the total of all the appropriate activities that
create a desired change in quality.
• Cost of quality. This is the expense of not doing effectively all the right
things right the first time.

There are many reasons for the rising health care-related cost. Six of these reasons
are as follows [6]:

• Reason I: Medical malpractice.


• Reason II: Over specialisation of physicians and other providers.
• Reason III: Cost of poor quality (i.e., waste, rework, and human error).
• Reason IV: Use of new technology.
• Reason V: Variation in practice and poor incentives to control cost.
• Reason VI: Aging population.

All of the above six reasons are discussed in detail in Refs. [2, 6].

12.3 COMPARISONS OF TRADITIONAL QUALITY
ASSURANCE AND TOTAL QUALITY MANAGEMENT
(TQM) IN REGARD TO HEALTH CARE AND QUALITY
ASSURANCE VERSUS QUALITY IMPROVEMENT
IN HEALTH CARE INSTITUTIONS
A comparison of traditional quality assurance and TQM, directly or indirectly, in
regard to different areas of health care is presented in Table 12.1 [2].
Over the years, many authors have discussed the differences between quality
assurance and quality improvement in health care institutions [7–11]. A clear com-
prehension of these differences is very important, as they directly or indirectly con-
tribute to differing information needs. Eleven of these differences are presented in
Table 12.2 [7–11].

TABLE 12.1
Comparisons of Traditional Quality Assurance and Total Quality
Management in Regard to Health Care

| No. | Area (Characteristic) | Traditional Quality Assurance | Total Quality Management |
|-----|-----------------------|-------------------------------|--------------------------|
| 1. | Scope | Clinical processes and outcomes | All processes and systems (i.e., clinical and non-clinical) |
| 2. | Purpose | Enhance quality of patient care for patients | Enhance all products and services quality for patients and other customers |
| 3. | Focus | Peer review vertically focused by clinical process or department (i.e., each department looks after its own quality assurance) | Horizontally focused peer review for improving all processes and individuals that affect outcomes |
| 4. | Leadership | Physician and clinical leaders (i.e., clinical staff chief and quality assurance committee) | All leaders (i.e., clinical and non-clinical) |
| 5. | Aim | Problem solving | Continuous improvement, even when no deficiency/problem is identified |
| 6. | Customer | Customers are review organisations and professionals | Customers are review organisations, professionals, patients, and others, with focus on patients |
| 7. | Outcomes | Includes measurement and monitoring | Also includes measurement and monitoring |
| 8. | People involved | Appointed committees and quality assurance programme | Each and every individual involved with the process |
| 9. | Methods | Includes hypothesis testing, nominal group techniques, chart audits, and indicator monitoring | Includes Pareto chart, force field analysis, checklist, fishbone diagram, flow charts, control chart, Hoshin planning, etc. |

12.4 ASSUMPTIONS FOR GUIDING THE DEVELOPMENT
OF QUALITY-RELATED STRATEGIES IN HEALTH
CARE AND HEALTH CARE-ASSOCIATED
QUALITY GOALS AND STRATEGIES
A clear comprehension of the assumptions guiding the development of quality-
related strategies in health care is absolutely necessary for their ultimate success.
Five of these assumptions are as follows [1]:

• Assumption I: Quality improvement definitely requires timely access to
reliable clinical data and an effective capability for analysing and
interpreting clinical pathways.
• Assumption II: Total quality management (TQM) is a good means of
furthering the organisational mission and culture. More clearly, this
basically means that quality results from continuously improving care and
work-related processes; patients and others served are the highest priority
and should have a strong voice in the design and delivery of care; quality
must flow from leadership and permeate each and every level of the
organisation; decisions should be based on facts but reflect compassion and
caring; and processes are improved by teamwork and involvement.

TABLE 12.2
Comparisons of Quality Assurance and Quality Improvement in Health Care
Institutions

| No. | Area (Characteristic) | Quality Improvement | Quality Assurance |
|-----|-----------------------|---------------------|-------------------|
| 1. | Customers | Patients, caregivers, payers, enrolees, support staff, managers, technicians, etc. | Regulators |
| 2. | Goal | Satisfy customer requirements | Regulatory compliance |
| 3. | Direction | Decentralised through the management line of authority | Committee or central coordinator |
| 4. | Participants | Every associated individual | Peers |
| 5. | Action taken | Implement appropriate improvements | Recommend appropriate improvements |
| 6. | Focus | All involved processes | Physician |
| 7. | Defects studied | Special and common causes | Outliers (special causes) |
| 8. | Functions involved | Many (clinician and support system) | Few (mainly doctors) |
| 9. | Viewpoint | Proactive | Reactive |
| 10. | Performance measure | Need/capability | External standards |
| 11. | Review technique | Analysis | Summary |
• Assumption III: TQM is a quite important unifying leadership philosophy
that encompasses all functions of a health care organisation, not just the
quality assurance function and clinical care.
• Assumption IV: The system will be increasingly responsible for delivering
the quality of care to all enrolled individuals on a regional basis.
• Assumption V: The measurement of quality care must include items such as
the determination of patient outcomes, patient feedback and involvement,
review of key internal processes, cost effectiveness, and appropriate coordi-
nation of care across a continuum of services and providers.

Four important health care-associated quality goals are as follows [1]:

• Goal I: Establish a good system perspective on analysing and communicating
information and data on the quality, outcomes, appropriateness, and
cost of care.

Three strategies pertaining to this goal are as follows [1]:

• Establish a system plan for addressing information needs concerning quality
management, including common definitions, enhanced analysis of available
information, and a pivotal clinical data set.
• Further develop the competencies and skills of persons associated with
quality through user conferences and other appropriate means.
• Document and share critical quality-related performance and outcome
studies throughout the system and assess the implications of new develop-
ments in the evolution of electronic medical records.
• Goal II: Provide good person-centred compassionate care that respects
dignity of all individuals and is responsive to the needs of patients, resi-
dents, families, etc.

Three strategies pertaining to this goal are as follows [1]:

• Ensure, in an effective manner, the assessment of patient, employee, and
medical staff satisfaction periodically by incorporating survey standards
and benchmarking.
• Aim to maximize families’ and patients’ involvement in the care experi-
ence by using shared decision making as well as improving patient involve-
ment in care choices.
• Implement recommendations concerning compassionate care of dying
and carefully address the spiritual-related needs of patients and families
through pastoral care.
• Goal III: Engage all physicians, employees, and board members in system
efforts for implementing TQM.

Three strategies pertaining to this goal are as follows [1]:

• Develop and apply appropriate management-related models that help to
promote effective teamwork and participatory decision making.
• Actively involve physicians when developing treatment-related protocols
and improving care systems.
• Develop appropriate programmes on TQM for individuals such as physi-
cians, employees, and board members.
• Goal IV: Effectively support a quality management mechanism that is
useful to further coordination of care across the continuum of providers
and services.

Two strategies pertaining to this goal are as follows [1]:

• Determine the ways the development of integrated delivery systems can
help to promote access and quality of care.
• Develop further and apply case management-related models across the con-
tinuum of services.

12.5 STEPS FOR QUALITY IMPROVEMENT IN HEALTH CARE
AND PHYSICIAN REACTIONS TO TOTAL QUALITY
The ten steps that can be used in improving quality in the health care system are as
follows [12]:

• Step I: Assign responsibility.


• Step II: Delineate scope of care in an effective manner.
• Step III: Highlight main aspects of care.
• Step IV: Highlight all appropriate indicators.
• Step V: Develop appropriate thresholds for evaluation.
• Step VI: Collect and organise all relevant data.
• Step VII: Assess care when thresholds are reached.
• Step VIII: Initiate appropriate actions for improving care.
• Step IX: Determine effectiveness and maintain the gain.
• Step X: Communicate final results to all concerned individuals.
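Steps IV through VII above amount to monitoring quality indicators against evaluation thresholds and assessing care only where a threshold is reached. A minimal sketch of that logic follows; the indicator names and threshold values are hypothetical:

```python
# Threshold-based indicator monitoring (cf. Steps IV-VII): flag each quality
# indicator whose observed rate meets or exceeds its evaluation threshold.

def indicators_to_assess(observed, thresholds):
    """Return the indicators whose observed value reaches the threshold."""
    return [name for name, value in observed.items()
            if value >= thresholds[name]]

# Hypothetical indicators, expressed as events per 100 cases.
thresholds = {"medication errors": 2.0, "post-operative infections": 1.5}
observed = {"medication errors": 2.6, "post-operative infections": 0.9}

print(indicators_to_assess(observed, thresholds))  # ['medication errors']
```

Only the flagged indicators trigger the assessment of care in Step VII; the remaining ones simply continue to be monitored.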

Over the years, there have been varying reactions of physicians to TQM. Seven of
the typical ones are as follows [2]:

• Reaction I: The application of the TQM concept is a further encroachment
on the physician-patient relationship, as patient care cannot be standardised
like industrial processes.
• Reaction II: Physicians have always used the scientific method; thus the
scientific method advocated by TQM is nothing new.
• Reaction III: The TQM concept is another cost-cutting mechanism by man-
agement that will limit access to resources physicians need for their patients.
• Reaction IV: The application of the TQM concept will result in additional
committee meetings for time-constrained physicians.
• Reaction V: The TQM concept is applicable to administrative systems and
industrial processes, but not to the patients’ clinical care.
• Reaction VI: TQM basically is quality assurance in different clothing.
• Reaction VII: The application of the TQM concept will wrest control of the
patient care process from physicians.

12.6 QUALITY TOOLS FOR USE IN HEALTH CARE


There are many methods that can be used for improving quality in health care. Most
of these methods are as follows [4, 12, 13]:

• Cost-benefit analysis
• Brainstorming
• Check sheets
• Multivoting
• Force field analysis
• Affinity diagram
• Cause and effect diagram

• Control charts
• Proposed options matrix
• Pareto chart
• Prioritisation matrix
• Scatter diagram
• Histogram
• Process flowchart

The first five of the above methods are described below.

12.6.1 Cost-Benefit Analysis


Cost-benefit analysis may simply be described as a weighing-scale approach for
decision making, where all plusses (i.e., cash flows and other intangible benefits) are
grouped and put on one side of the balance and all the minuses (i.e., drawbacks and
costs) are grouped and put on the other side. At the end the heavier side wins.
The main objective of applying the cost-benefit analysis method in the area of
health care quality is to ensure that all the quality team members carefully con-
sider the total impact of their recommended actions. Additional information on this
method is available in Refs. [2, 14, 15].
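The weighing-scale idea can be shown with a small sketch: each benefit and cost is given an estimated value, the two sides are summed, and the heavier side wins. All item names and figures below are invented for illustration:

```python
# Weighing-scale sketch of cost-benefit analysis: sum the valued plusses and
# minuses and compare the two sides. Figures are illustrative only.

def cost_benefit(benefits, costs):
    """Return (total benefits, total costs, net benefit)."""
    total_b = sum(benefits.values())
    total_c = sum(costs.values())
    return total_b, total_c, total_b - total_c

benefits = {"fewer readmissions": 120_000, "staff time saved": 45_000}
costs = {"training": 30_000, "new software": 60_000}

b, c, net = cost_benefit(benefits, costs)
winner = "benefits" if net > 0 else "costs"
print(f"Benefits {b} vs costs {c}: the {winner} side is heavier (net {net}).")
```

In practice the intangible items are the hard part; the method's value lies in forcing the team to put every plus and minus on the scale, not in the arithmetic.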

12.6.2 Brainstorming
The objective of brainstorming in health care quality is to generate ideas and
options or to highlight problems and concerns. It is quite often referred to as a form of divergent
thinking because the basic objective is to enlarge the number of ideas being con-
sidered. Thus, brainstorming may simply be described as a group decision-making
approach designed for generating many creative ideas by following an interactive
process. The team concerned with health care quality can make use of brainstorming
for getting its ideas organised into a quality method such as a process flow diagram
or a cause and effect diagram.
Past experience over the years clearly indicates that questions such as those presented
below can be very useful for starting a brainstorming session concerned with health care
quality [12].

• What are the major obstacles to improving quality?
• What are the health care organisation's three most pressing unsolved
quality-related problems?
• What type of action plan is required for overcoming these problems?
• What are the most pressing areas that need such an action plan?

Six guidelines that are considered very useful for conducting effective brain-
storming sessions are as follows [16, 17]:

• Guideline I: Think of some possible solutions to the brainstorming problem
under consideration ahead of time.
• Guideline II: Keep the ranks of participants fairly equal.

• Guideline III: Do not allow criticism.


• Guideline IV: Welcome freewheeling as much as possible.
• Guideline V: Record each and every idea.
• Guideline VI: Combine and improve ideas.

12.6.3 Check Sheets


Check sheets are basically utilised for collecting data on specified events’ occur-
rence frequency. A check sheet, for example, can be utilised in determining the
occurrence frequency of, say, two to four problems highlighted during multivot-
ing [2]. In the quality areas, check sheets are generally used in a quality improve-
ment process for collecting frequency-associated data later displayed in a Pareto
diagram.
Although there is no standard design of check sheets, the basic idea is to docu-
ment all types of important information related to nonconformities and nonconform-
ing items, so that the sheets can directly or indirectly facilitate improvement in the
process. Additional information on check sheets is available in Refs. [18–20].
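The check-sheet-to-Pareto flow described above can be sketched in a few lines: tally the frequency of each event, then list events from most to least frequent with cumulative percentages, which is exactly the ordering a Pareto diagram displays. The event names are invented:

```python
from collections import Counter

# Tally check-sheet observations, then order them Pareto-style (most frequent
# first) with cumulative percentages. Event names are invented.

events = ["missing chart", "late lab result", "missing chart",
          "billing error", "missing chart", "late lab result"]

tally = Counter(events)                    # the check sheet
total = sum(tally.values())
cumulative = 0
for cause, count in tally.most_common():   # Pareto ordering
    cumulative += count
    print(f"{cause:16s} {count:2d}  {100 * cumulative / total:5.1f}%")
```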

12.6.4 Multivoting
This is a quite useful method for reducing a large number of ideas to a manageable
few judged important by the participating personnel. Generally, by following this
approach, the number of ideas is reduced to three to five [2]. Multivoting is a form
of convergent thinking because the objective is to lower the number of ideas being
considered. Needless to say, multivoting is considered to be a very useful tool for
application in the area of health care quality; additional information on the method
is available in Ref. [21].
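A single multivoting round can be sketched as follows: each participant casts votes for several ideas, the votes are tallied, and only the top few ideas survive to the next round. The ballots and idea names are hypothetical:

```python
from collections import Counter

# One multivoting round: tally all votes and keep only the top-ranked ideas.

def multivote(ballots, keep=3):
    """Return the `keep` ideas that received the most votes."""
    tally = Counter(vote for ballot in ballots for vote in ballot)
    return [idea for idea, _ in tally.most_common(keep)]

# Hypothetical ballots from five participants.
ballots = [
    ["wait times", "signage", "discharge forms"],
    ["wait times", "parking"],
    ["signage", "wait times"],
    ["discharge forms", "signage"],
    ["wait times", "discharge forms"],
]
print(multivote(ballots))
```

Repeating the round on the surviving ideas converges on the three to five ideas mentioned above.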

12.6.5 Force Field Analysis


This method was developed by Kurt Lewin for highlighting the forces that are related
to a certain issue under consideration [13, 22]. The method is also referred to as bar-
riers and aids analysis [2]. In this approach, the problem/issue statement is written at
the top of a sheet and two columns are created below it for writing positive forces on
one side and the negative forces on the other.
Additional information on this method is available in Refs. [2, 13, 22].
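The two-column worksheet can be sketched directly; the issue, the driving and restraining forces, and their subjective strength scores below are all invented for illustration:

```python
# Force field worksheet sketch: driving (positive) forces on one side,
# restraining (negative) forces on the other, each with a subjective
# strength score. All names and scores are invented.

def net_force(driving, restraining):
    """Positive result: the driving forces outweigh the restraining ones."""
    return sum(driving.values()) - sum(restraining.values())

driving = {"management support": 4, "staff enthusiasm": 3}
restraining = {"training cost": 3, "time pressure": 2}

print("Issue: adopt bedside barcoding")
for name, strength in driving.items():
    print(f"  + {name} ({strength})")
for name, strength in restraining.items():
    print(f"  - {name} ({strength})")
print("Net force:", net_force(driving, restraining))  # Net force: 2
```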

12.7 IMPLEMENTATION OF SIX SIGMA METHODOLOGY
IN HOSPITALS AND ITS POTENTIAL BENEFITS
AND IMPLEMENTATION BARRIERS
The history of Six Sigma as a measurement standard may be traced back to Carl
Friedrich Gauss (1777–1855), the father of the concept of the normal curve. In the
1980s Motorola explored this very standard and created the methodology and neces-
sary cultural change associated with it.

Six Sigma may simply be described as the implementation of a measurement-based
strategy that develops process improvements as well as varied cost reductions
throughout an organisational set-up. It is to be noted that in many organisations,
Six Sigma simply means a measure of quality that strives for near perfection.
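The "near perfection" yardstick is commonly quantified as defects per million opportunities (DPMO) and a corresponding sigma level; under the conventional 1.5-sigma shift, 3.4 DPMO corresponds to six sigma. A sketch of the arithmetic follows; the defect counts are invented:

```python
from statistics import NormalDist

# DPMO and sigma level under the conventional 1.5-sigma shift. The defect
# figures below are invented for illustration.

def dpmo(defects, units, opportunities_per_unit):
    return 1_000_000 * defects / (units * opportunities_per_unit)

def sigma_level(dpmo_value):
    """Short-term sigma level for a given DPMO (1.5-sigma shift convention)."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

d = dpmo(defects=12, units=4_000, opportunities_per_unit=5)  # 600.0 DPMO
print(f"{d:.0f} DPMO is about {sigma_level(d):.2f} sigma")
```

As a sanity check on the convention, `sigma_level(3.4)` evaluates to approximately 6.0.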
Over the years, many organisations in the area of health care have also started
to apply the Six Sigma methodology to their operations. A total of nine steps are
involved in the implementation of define, measure, analyse, improve, and control
(DMAIC) Six Sigma methodology in an industrial organisation [23]. All these steps
can be tailored accordingly for the implementation of the methodology in hospitals.
These nine steps are as follows [23]:

• Step I: Provide appropriate training and start project.


• Step II: Highlight all stakeholders and collect relevant data.
• Step III: Map and analyse processes including important sub processes.
• Step IV: Highlight appropriate metrics for process and set targets for
improvement.
• Step V: Estimate costs of defects and establish recommended alternative
solutions.
• Step VI: Select/implement solutions.
• Step VII: Rework the process.
• Step VIII: Determine process improvements’ sustainability.
• Step IX: Estimate process improvements.

There are many potential benefits of implementing the Six Sigma methodology in
hospitals. Some of the important ones are as follows [23]:

• Establishment of shared accountability in regard to continuous quality
improvement.
• The methodology's implementation, with its emphasis on improving customers'
lives, could result in the involvement and support of more health care
professionals and support personnel in the quality improvement effort.
• Better job satisfaction of health care employees.
• Measurement of essential health care performance-related requirements on
the basis of commonly used standards.

Past experience over the years indicates that there are many potential barriers to
the implementation of Six Sigma programmes in hospitals. Some of these barriers
are as follows [23]:

• Risk of the methodology being applied only to easily measured non-patient
care processes.
• Difficulty in obtaining base-line data on process performance.
• Rather long project ramp-up times (i.e., typically six or more months).
• Poor support from physicians.

• Governmental regulations.
• Nursing shortage.
• Costs (start-up and maintenance).

12.8 PROBLEMS
1. Define the following four terms:
• Clinical audit
• Quality of care
• Adverse event
• Health care
2. Compare quality assurance and quality improvement in health care
institutions.
3. What are the main reasons for the rising health care-related cost?
4. Discuss important health care-associated quality goals.
5. What are the ten steps that can be used in improving quality in the health
care system?
6. Discuss physician reactions to TQM.
7. List at least 12 quality tools for use in health care.
8. Discuss the following two methods considered useful to improve quality in
health care:
• Brainstorming
• Force field analysis
9. Discuss the implementation of Six Sigma methodology in hospitals and its
advantages.
10. Write a short essay on the historical developments in health care quality.

REFERENCES
1. Graham, N.O., Quality Trends in Health Care, in Quality in Health Care, edited by
N.O. Graham, Aspen Publishers, Gaithersburg, Maryland, 1995, pp. 3–14.
2. Gaucher, E.J., Coffey, R.J., Total Quality in Health Care: From Theory to Practice,
Jossey-Bass Publishers, San Francisco, California, 1993.
3. Codman, E.A., The Product of the Hospital, Surgery, Gynaecology and Obstetrics,
Vol. 28, 1914, pp. 491–496.
4. Graham, N.O., Ed., Quality in Health Care: Theory, Application, and Evolution, Aspen
Publishers, Gaithersburg, Maryland, 1995.
5. Glossary of Terms Commonly Used in Health Care, Prepared by the Academy Health,
Suite 701-L, 1801 K St. NW, Washington, D.C., 2004.
6. Marszalek-Gaucher, E., Coffey, R.J., Transforming Health Care Organizations: How to
Achieve and Sustain Organizational Excellence, John Wiley and Sons, New York, 1990.
7. Coltin, K.L., Aronow, D.B., Quality Assurance and Quality Improvement in the
Information Age, In Quality in Health Care: Theory, Application, and Evolution, edited
by N.O. Graham, Aspen Publishers, Gaithersburg, Maryland, 1995.
8. Berwick, D.M., Peer Review and Quality Management: Are They Compatible? Quality
Review Bulletin, Vol. 16, 1990, pp. 246–251.
9. Laffel, G., Blumenthal, D., The Case for Using Industrial Quality Management Science
in Health Care Organization, Journal of the American Medical Association, Vol. 262,
1989, pp. 2869–2873.

10. Fainter, J., Quality Assurance Not Quality Improvement, Journal of Quality Assurance,
January/February 1991, pp. 8, 9, and 36.
11. Andrews, S.L., QA versus QI: The Changing Role of Quality in Health Care, January/
February 1991, pp. 14, 15, 38.
12. Stamatis, D.H., Total Quality Management in Health Care, Irwin Professional
Publishing, Chicago, Illinois, 1996.
13. Dhillon, B.S., Creativity for Engineers, World Scientific Publishing, River Edge,
New Jersey, 2006.
14. Levin, H.M., McEwan, P.J., Cost-Effectiveness Analysis: Methods and Applications,
Sage Publications, Thousand Oaks, California, 2001.
15. Boardman, A.E., Cost-Benefit Analysis: Concepts and Practice, Prentice Hall, Upper
Saddle River, New Jersey, 2006.
16. Osborn, A.F., Applied Imagination, Charles Scribner’s Sons, New York, 1963.
17. Dhillon, B.S., Engineering and Technology Management Tools and Applications,
Artech House, Inc, Boston, Massachusetts, 2002.
18. Montgomery, D.C., Introduction to Statistical Quality Control, John Wiley and Sons, New York,
1996.
19. Ishikawa, K., Guide to Quality Control, Asian Productivity Organization, Tokyo, 1976.
20. Leitnaker, M.G., Sanders, R.D., Hild, C., The Power of Statistical Thinking: Improving
Industrial Processes, Addison-Wesley, Reading, Massachusetts, 1996.
21. Tague, N.R., The Quality Toolbox, ASQ Quality Press, Milwaukee, Wisconsin, 2005.
22. Jay, R., The Ultimate Book of Business Creativity: 50 Great Thinking Tools for
Transforming Your Business, Capstone Publishing Limited, Oxford, U.K., 2000.
23. Frings, G.W., Graut, L., Who Moved My Sigma? Effective Implementation of the Six
Sigma Methodology to Hospitals, Quality and Reliability Engineering International,
Vol. 21, 2005, pp. 311–328.
13 Medical Device
Quality Assurance
13.1 INTRODUCTION
Nowadays, because quality is very important in the manufacture of medical devices,
manufacturers are under increasing pressure to follow more closely the quality
systems for ensuring that the manufactured items are effective, reliable, and safe,
and that they clearly meet applicable specifications and standards.
Although the history of quality assurance may be traced back to ancient times,
in regard to medical devices the 1976 Medical Device Amendments to the Federal
Food, Drug, and Cosmetic Act established a complex statutory framework allowing
the Food and Drug Administration (FDA) to regulate almost all aspects of medical
devices, from testing to marketing, thus putting more pressure on the quality
assurance programmes concerning medical devices.
This chapter presents various important aspects of medical device quality
assurance.

13.2 REGULATORY COMPLIANCE OF MEDICAL
DEVICE QUALITY ASSURANCE
In order to produce better quality medical devices, agencies such as the FDA and the
International Organization for Standardization (ISO) are playing a very important
role through their good manufacturing practices (GMP) regulation and ISO 9000
family of quality system standards, respectively. Both demand directly or indirectly
a comprehensive approach to medical device quality from manufacturers. The GMP
regulation went into effect on June 1, 1997, but the FDA granted a grace period to
manufacturers until June 14, 1998. During this period, the FDA could have inspected
a manufacturer's facilities in regard to the Quality System Regulation, but it would
not have listed any deficiencies on Form 483, nor would it have brought any sanctions
against manufacturers for noncompliance.
A detailed description of the ISO 9000 requirements is available in Ref. [1].
A mechanism for meeting both the GMP regulation and applicable ISO 9000
requirements is described below.

13.2.1 Procedure for Satisfying GMP Regulation and
ISO 9000 Requirements in Regard to Quality Assurance
DOI: 10.1201/9781003298571-13

This procedure is published in the form of a quality assurance manual and assists
medical device manufacturers to comply with regulatory-related requirements in a
straightforward, organised, and effectively documented manner [2]. The approach is
divided into the following three areas:

• Area I: It outlines the company policy in regard to the manufacture of
medical devices and the authority and responsibility of the quality assurance
department for implementing the policy, and defines the type of record to
be maintained.
• Area II: It involves defining the company policy in regard to the qual-
ity assurance department’s administration and its subdivisions such as
receiving inspection, standards laboratory, and tool and gage inspection.
Furthermore, it is also concerned with outlining policy for internal audits,
product qualification, and the organisational chart.
• Area III: It concerns outlining quality assurance-related directives useful
for implementing and monitoring device conformance and procedural-
related compliance according to the GMP regulation. It is to be noted that
these directives also apply to the ISO 9000 requirements. These directives
cover the following nineteen distinct sub-areas [2]:
• Design control
• Quality audits
• Statistical quality control and sampling
• Field corrective action
• Sterilisation process and control
• Failure analysis
• Facility control for the manufacturing of medical devices
• Control of measuring equipment and tooling
• Receiving inspection
• Procedure for FDA inspection
• Component/warehouse control
• Control of inspection stamps
• Approval and control of labels, labelling, and advertisement
• Equipment
• Complaint report procedure
• Supplier and subcontractor quality audits
• Lot numbering and traceability
• Personnel
• In-process and final inspection

13.3 MEDICAL DEVICE DESIGN QUALITY
ASSURANCE PROGRAMME
As in the case of any other engineering system/product, the design phase in the med-
ical device’s life cycle is the most important phase. It is the phase when the device’s
inherent safety, reliability, and effectiveness are established. Furthermore, it may be
said that regardless of the degree of carefulness exercised during the manufacture
or the effectiveness of the GMP programme, a medical device’s inherent safety and
effectiveness cannot be improved except through design enhancement.

FIGURE 13.1 Elements in the preproduction or design quality assurance programme
recommended by the FDA.

The design quality assurance programme is a very important factor in this regard. The FDA has
played a pivotal role in getting manufacturers to develop design quality assurance
programmes by publishing a document entitled “Preproduction Quality Assurance
Planning: Recommendations for Medical Device Manufacturers” [3]. This document
clearly outlines useful design-related practices applicable to medical devices, thus
assisting manufacturers in planning and implementing their preproduction quality
assurance programmes.
There are twelve elements, shown in Fig. 13.1, in the preproduction or design
quality assurance programme recommended by the FDA.
All the 12 elements shown in Fig. 13.1 are described in Sections 13.3.1–13.3.12.

13.3.1 Organization
This is concerned with the organisational aspects of the preproduction or design
quality assurance programme: for example, the organisational elements and
authorities appropriate for developing the programme and executing
programme-related requirements, the formal establishment of an audit programme,
and the formal documentation of the specified programme-related goals.

13.3.2 Specifications
After establishing physical, performance, and chemical-related characteristics for the
proposed device, the characteristics should be translated into formally documented
design specifications through which the design can be developed, controlled, and
evaluated. These specifications should clearly address factors such as reliability,

safety, precision, and stability. Furthermore, in establishing the physical configuration,
performance, safety, and effectiveness goals of the design, factors such as the user, the
user environment, and the expected use of the device should be properly considered.
It is also very important for professionals belonging to areas such as quality assurance,
research and development, reliability, manufacturing, and marketing to review
and evaluate the specification document. In this case, careful consideration should
be given in the specification to the following two factors:

• Factor I: Design changes. These are changes made to the specification
during the research and development process that are accepted as design
changes. Such changes must be well-documented and carefully reviewed
so that they do not compromise safety or effectiveness and they effectively
accomplish all the intended goals.
• Factor II: System compatibility. This involves the device’s compatibility
with all other devices in the intended operating system to assure proper
functioning of the overall system. For example, the compatibility of breathing
circuits with ventilators and of disposable electrodes with cardiac monitors
should be assured.

13.3.3 Design Review
The purpose of design review is to highlight and rectify design-related deficiencies
as early as possible, when corrections are least costly to implement. The design
review programme should be well-documented and include items such as organisa-
tional units, procedures, variables’ checklist, schedule, and process flow diagrams.
Although the extent and frequency of design reviews will very much depend on
the complexity and significance of the device under study, the assessment should
include items such as subsystems, packaging, software (if applicable), labelling,
components, and support documentation (i.e., instructions, test specifications,
drawings, etc.).
The design review team members should be from areas such as quality assurance,
research and development, engineering, manufacturing, purchasing, servicing, and
marketing. Also, when considered appropriate, design reviews should include the
performance of failure modes and effect analysis (FMEA) and fault tree analysis
(FTA). Both these methods are described in Chapter 4.

13.3.4 Reliability Assessment
This may simply be described as the process of prediction and demonstration used
for estimating the basic reliability of an item or device. Reliability assessment should
be conducted for new and modified designs, and its appropriateness and extent
should be determined by the degree of risk the device presents to its user. Reliability
assessment begins with statistical and theoretical approaches, first determining
the reliability of each part/component/element and ultimately of the entire
device/system. It is to be noted that this approach provides only an estimate of
reliability. For a better assessment, the device/system should be tested under a
simulated use environment. However, the most meaningful reliability-related data
can be obtained only from actual field use.
All in all, reliability assessment is a very important element of the preproduction
quality assurance programme: a continual process of reliability prediction,
demonstration, and data analysis, followed by reprediction, redemonstration, and
data reanalysis.

13.3.5 Parts and Materials Quality Assurance


This is concerned with assuring that parts and materials used in device or product
designs have an appropriate level of reliability for achieving their set goals. This
needs the establishment and implementation of comprehensive parts and materials
quality assurance-related programmes by the medical device manufacturers. These
programmes should appropriately encompass areas such as specification, selection,
qualification, and ongoing verification of the parts' and materials' quality, whether
the parts are fabricated in-house or purchased from vendors.
All parts and materials should be categorised according to the severity of their
effect on effectiveness, reliability, and safety in the event of their failure to achieve
the set goal. Furthermore, their acceptability for chosen applications should be eval-
uated and supported by both observed and computed test data. All in all, parts’ and
materials’ failure during qualification to satisfy stated effectiveness, safety, and per-
formance goals should be thoroughly examined, and all the conclusions should be
described in well-written documents.

13.3.6 Software Quality Assurance


Because software is a very important element of a medical device, a software quality
assurance programme is absolutely essential when a design incorporates software
developed in-house. To ensure the device software’s overall functional reliability,
the programme should include a protocol for formal review and validation. The
software quality assurance programme's main objectives should be reliability, testability,
maintainability, and correctness.
Furthermore, the programme should also assure (if applicable) that the subcon-
tractor has a satisfactory software quality assurance programme so that the software
reliability, testability, maintainability, and correctness are effectively looked after.

13.3.7 Labelling
This includes display labels, manuals, charts, inserts, panels, and recommended
test and calibration protocols. The design review process should also review label-
ling to assure it appropriately complies with all applicable laws and regulations and
contains easy-to-understand directions. The verification of instructions’ accuracy
contained in the labelling should be a part of the qualification testing of the device
under consideration.
Maintenance manuals (if applicable) must be written clearly so that the device
under consideration can be maintained in an effective and safe condition.

13.3.8 Design Transfer
After translating the design into a physical entity, the design’s technical adequacy,
safety, and reliability should be appropriately verified through intensive testing under
simulated or real-life use environments. After verifying technical adequacy through
appropriate testing, the design is generally approved. It is to be noted that when
moving from the laboratory to scaled-up production, standards, methods, and
procedures may not be properly transferred, and additional manufacturing
processes may be needed. Thus, this scenario requires careful consideration.

13.3.9 Certification
After the initial production units successfully pass preproduction qualification
testing, a formal technical review should be carried out to assure the adequacy
of the design, production, and quality assurance-related procedures. In addition,
the review should determine the following six factors:

• Factor I: Adequacy of specifications.
• Factor II: Suitability of testing methods employed for evaluating
compliance with respect to approved specifications.
• Factor III: Resolution of any discrepancy between the standards and proce-
dures employed for producing the design during research and development
and those recommended for the production phase.
• Factor IV: Adequacy of the specification change control programme.
• Factor V: Resolution of any discrepancy between the final approved device
specifications and the actual end device.
• Factor VI: Adequacy of the entire quality assurance programme.

13.3.10 Test Instrumentation
This involves effectively calibrating and maintaining all equipment employed in the
qualification of the design. More clearly, such equipment should be kept under a
formal calibration and maintenance programme.

13.3.11 Personnel
This calls for the performance of design-related activities, including design review,
analysis, and testing by properly trained professionals.

13.3.12 Quality Monitoring After the Design Phase


The effort for producing safe, reliable, and effective medical devices does not end
when the design phase is completed; it continues during the manufacturing and
field use phases as well. Thus, the manufacturers of medical devices should have an
effective programme for purposes such as the following:

• To highlight failure patterns.
• To analyse quality-related problems.
• To have timely internal reporting of problems discovered either in-house or
in the field use environment.
• To take appropriate corrective measures for preventing recurrence of
highlighted problems.

Medical device manufacturers should also make a special effort for assuring that
failure data collected from service and complaint records relating to design-associ-
ated problems are reviewed by the design professionals.

13.4 TOOLS FOR ASSURING MEDICAL DEVICE QUALITY


Over the years, many tools/methods have been developed for use in quality-related
work [4]. Such tools can be applied equally well for assuring the quality of medical
devices. As per Ref. [5], 95% of quality-associated problems can be resolved by using
seven basic tools: flow charts, cause-and-effect diagrams, Pareto diagrams, control
charts, histograms, scatter diagrams, and check sheets. Six of these tools, in addition
to quality function deployment, are described below [4–8]:

13.4.1 Cause-and-Effect Diagram


This method/diagram was developed by Kaoru Ishikawa in 1943 and is therefore
sometimes referred to as the Ishikawa diagram. It is also called the "fishbone
diagram" because of its resemblance to the bones of a fish. The diagram shows a
desirable or undesirable outcome as an effect, and
associated causes as leading to or potentially leading to that effect. Thus, the dia-
gram can be used for investigating either (i) a “bad” effect and taking appropriate
measures for rectifying the causes, or (ii) a “good” effect and learning about the
causes responsible for it. For example, in the cause-and-effect diagram for the total
quality management (TQM) effort, the effect could be customer satisfaction and the
major causes could be methods, machines, materials, and manpower. The analysis
of such causes can serve as an effective tool to highlight possible quality-associated
problems and inspection points.
Fig. 13.2 shows the five main steps involved in developing a cause-and-effect
diagram. Visually, the diagram’s right side, or the “fish head”, denotes effect, and its
left side shows all possible causes linked to the central “fish” spine.
There are many advantages of the cause-and-effect diagram: it is quite useful for
generating ideas and highlighting root causes, it presents an orderly arrangement of
theories, and it is an effective tool for guiding further inquiry. Its main drawback is
that users can overlook critical, complex interactions between causes.

13.4.2 Quality Function Deployment


Quality function deployment (QFD) was developed in the early 1970s in Japan and in
1984, QFD was applied for the first time in the United States by Xerox Corp. [4, 6, 9].
QFD may simply be described as a planning tool used to satisfy customer expectations.

FIGURE 13.2 Five steps for developing a cause-and-effect diagram.

Furthermore, it is a systematic approach to product design and manufacture that
also provides an in-depth evaluation of a product/item. QFD emphasises customer
expectations or requirements and is often called the voice of the customer. It uses a set of
matrices for relating customer expectations to counterpart characteristics expressed
as technical-related specifications and process control requirements. In a nutshell,
the customer- or consumer-need planning matrix forms the QFD approach's critical
element. Quite often, because of its resemblance to a house, QFD is referred to as
“The House of Quality”.
Some of the main steps associated with QFD are as follows:

• Highlight consumer expectations.
• Highlight the product characteristics that will satisfy the consumer's needs.
• Associate consumer needs and counterpart characteristics.
• Evaluate competing products.
• Evaluate the competing products’ counterpart characteristics and develop
objectives.
• Choose counterpart characteristics to be used in the remaining process.
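The "associate consumer needs and counterpart characteristics" step above can be sketched as a simple relationship matrix. This is only an illustration of the idea: the needs, characteristics, importance ratings, and the helper function `characteristic_scores` are hypothetical, not taken from the text; the 9/3/1 cell weights follow the common QFD convention.

```python
# Minimal sketch of a QFD relationship matrix (hypothetical example data).
# Rows are customer expectations; columns are counterpart (technical)
# characteristics; cell values use the conventional relationship weights
# (9 = strong, 3 = moderate, 1 = weak, absent = none).

needs = ["easy to clean", "long battery life", "accurate reading"]
characteristics = ["housing material", "battery capacity", "sensor tolerance"]

relationship = {
    ("easy to clean", "housing material"): 9,
    ("long battery life", "battery capacity"): 9,
    ("long battery life", "sensor tolerance"): 1,
    ("accurate reading", "sensor tolerance"): 9,
}

# Customer importance ratings (1-5) for each need (hypothetical).
importance = {"easy to clean": 3, "long battery life": 4, "accurate reading": 5}

def characteristic_scores():
    """Weight each technical characteristic by customer importance."""
    return {
        c: sum(importance[n] * relationship.get((n, c), 0) for n in needs)
        for c in characteristics
    }

scores = characteristic_scores()
# The highest-scoring characteristics are the ones to carry forward.
ranked = sorted(scores, key=scores.get, reverse=True)
```

A full house of quality adds competitive evaluations and a correlation "roof"; the matrix above captures only the core needs-to-characteristics mapping.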

Some of the benefits of QFD include improvement in engineering knowledge,
productivity, and quality, and reduction in product development time, cost, and
engineering changes.
In contrast, its main drawback is that the exact needs must be highlighted in
complete detail.

13.4.3 Pareto Diagram


This diagram is named after Italian economist Vilfredo Pareto (1848–1923) who
quite extensively studied the distribution of wealth in Europe and then concluded
that a large percentage of wealth is owned by a small percentage of the population.
Nevertheless, it was Joseph Juran, one of the quality gurus, who recognised its
application in quality-related work and stated that 80% of quality-related problems
are the result of only 20% of the causes.
A Pareto diagram (i.e., type of frequency chart) arranges data in a hierarchical
order, thus helping to highlight the most significant problem to be corrected first.
Although the Pareto approach can summarise all types of data, it is utilised basically
to highlight and determine nonconformities. The five main steps involved in the
construction of a Pareto diagram are as follows [6]:

• Step I: Determine the approach to classify data, i.e., by cause, problem, or
nonconformity.
• Step II: Decide what to use to rank characteristics, i.e., dollars or frequency.
• Step III: Obtain data for appropriate time intervals.
• Step IV: Summarise the data and rank classifications from largest to smallest.
• Step V: Construct the diagram and determine the significant few.

It is to be noted that this diagram could be extremely useful for improving the quality
of medical device designs.
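Steps IV and V above (summarise, rank, and pick out the significant few) can be sketched in a few lines of code. The defect categories and counts below are hypothetical illustration data, and the 80% cut-off follows the Juran rule of thumb quoted earlier.

```python
# Sketch of Pareto analysis: rank classifications from largest to smallest
# (Step IV) and find the "significant few" causes that account for roughly
# 80% of the total (Step V). All defect data here are hypothetical.

defect_counts = {
    "labelling error": 12,
    "solder joint failure": 90,
    "housing crack": 8,
    "calibration drift": 60,
    "connector damage": 30,
}

total = sum(defect_counts.values())
ranked = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)

# Walk down the ranked list until the cumulative share reaches 80%.
significant_few, cumulative = [], 0
for cause, count in ranked:
    significant_few.append(cause)
    cumulative += count
    if cumulative / total >= 0.80:
        break
```

Plotting `ranked` as descending bars with a cumulative-percentage line gives the familiar Pareto diagram; the loop merely extracts the causes to be corrected first.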

13.4.4 Flowcharts
Flowcharts are used for describing processes in as much detail as feasible by graph-
ically showing the steps in proper order. A good flowchart generally displays all
process steps under consideration or analysis by the quality improvement team,
highlights crucial process points for control, suggests areas for improvement, and
serves as a useful tool for explaining and solving a problem.
A flowchart could be simple or quite complex, composed of many symbols, boxes,
etc. More clearly, the complex version indicates the process steps in the appropriate
sequence, the associated step conditions, and the related constraints by making
use of elements such as arrows, yes/no choices, or if/then statements.

13.4.5 Scatter Diagram
This is the simplest way for determining how two variables are related or if a cause-
and-effect relationship exists between the two variables. However, it is to be noted
that the scatter diagram cannot prove that one variable causes the change in the
other, but only the existence of their relationship and its strength. In this diagram,
the horizontal axis denotes the measurement values of one variable and the vertical
axis denotes the measurements of the other variable.
If sometimes it is desirable to fit a straight line to the plotted data points for
obtaining a prediction equation, a line can be drawn on the scatter diagram either
visually or mathematically utilizing the least squares approach. Whenever the line
is extended beyond the plotted data points, a dashed line is used for indicating that
there are no data for the concerned area.
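The mathematical (least squares) fit mentioned above can be computed directly from the plotted points using the standard normal-equation formulas for a straight line; the (x, y) data pairs below are hypothetical.

```python
# Sketch of fitting a prediction line y = a + b*x to scatter-diagram data
# by the least squares approach. The data points are hypothetical.

points = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.1), (4.0, 8.0)]
n = len(points)

sum_x = sum(x for x, _ in points)
sum_y = sum(y for _, y in points)
sum_xy = sum(x * y for x, y in points)
sum_xx = sum(x * x for x, _ in points)

# Standard least-squares estimates of slope (b) and intercept (a).
b = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
a = (sum_y - b * sum_x) / n

def predict(x):
    """Prediction equation from the fitted line. As noted above, values
    beyond the range of the plotted data are extrapolations and should be
    flagged (the dashed-line convention)."""
    return a + b * x
```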

13.4.6 Control Charts


In quality control work, various types of control charts are used for monitoring the
processes’ state. A control chart simply shows statistically calculated upper and
lower limits on either side of a process mean value. More specifically, the control
chart displays if the collected data are within upper and lower limits calculated pre-
viously by using raw data obtained from earlier trials.
It is to be noted that the basis for a control chart’s construction is statistical prin-
ciples and distributions, in particular, the normal distribution. When utilised in con-
junction with a manufacturing process, the control chart is very useful to indicate
trends and signal when the process goes out of control. The results of the process are
monitored over a period of time. When they are not within the stated control limits,
an investigation is conducted to determine the cause and subsequently corrective
measures are taken.
All in all, a control chart is quite useful for determining variability and its reduc-
tion as much as economically possible.
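The statistically calculated limits described above can be sketched for the simplest case, an individuals-type chart using the common "process mean ± 3 standard deviations" convention based on the normal distribution. The measurement values are hypothetical, and real control-chart practice would use the appropriate chart constants for the chart type chosen.

```python
# Sketch of computing upper and lower control limits around a process mean
# using the common three-sigma convention. Measurements are hypothetical.

measurements = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 9.7, 10.0]

n = len(measurements)
mean = sum(measurements) / n
variance = sum((x - mean) ** 2 for x in measurements) / (n - 1)
sigma = variance ** 0.5

ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

def out_of_control(x):
    """Signal when a new observation falls outside the control limits,
    triggering an investigation for an assignable cause."""
    return x < lcl or x > ucl
```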

13.4.7 Histogram
A histogram is employed when good clarity is desired; it presents data from a
frequency distribution table. Its main distinction from a check sheet is that its
data are grouped into classes, so that the identity of individual values is lost. It may be
said that the histogram is the first “statistical” process control method because it can
appropriately describe the variation in the process.
A histogram can provide a satisfactory amount of information concerning a qual-
ity-related problem, thus providing a basis for making decisions without additional
analysis. The histogram’s shape shows the nature of the distribution of the data, in
addition to central tendency and variability. Furthermore, specification limits may
be used for showing process capability.
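The grouping of individual measurements into the frequency distribution underlying a histogram can be sketched as follows; the data values, class width, and starting boundary are hypothetical.

```python
# Sketch of building a histogram's frequency distribution: individual
# measurements are grouped into equal-width classes, losing the identity
# of individual values. Data and class boundaries are hypothetical.

data = [4.2, 4.8, 5.1, 5.3, 5.5, 5.6, 5.9, 6.2, 6.4, 7.1]
class_width = 1.0
start = 4.0  # lower boundary of the first class

frequency = {}
for x in data:
    # Index of the class this value falls into.
    k = int((x - start) // class_width)
    lower = start + k * class_width
    label = f"[{lower:.1f}, {lower + class_width:.1f})"
    frequency[label] = frequency.get(label, 0) + 1
```

Plotting `frequency` as bars shows the shape of the distribution (central tendency and variability); drawing the specification limits on the same axis shows process capability, as noted above.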

13.5 QUALITY INDICES


Over the years, many indices have been developed to aid product manufacturers with
respect to quality. This section presents four of these indices that could be quite use-
ful to medical device or equipment manufacturers.

13.5.1 Quality Inspector Accuracy Index


As quality inspectors can accept bad items and reject good ones, the check inspec-
tors may be used for reexamining both the accepted and rejected items. Thus, this
index is concerned with measuring the regular inspectors’ accuracy. The index is
expressed by [10]
γ = (α − β)(100)/(α − β + m) (13.1)

where
γ is the percentage of defects accurately identified by the regular inspector.
α is the total number of defects found by the regular inspector.
β is the total number of items or units without defects rejected by the regular
inspector as found by the check inspector.
m is the total number of defects missed by the regular inspector as discovered by
the check inspector.

Example 13.1

Assume that a lot of medical devices was inspected by a regular inspector who
found 50 defects. Subsequently, the same lot was re-examined by the check
inspector, and the values of m and β were 10 and 4, respectively. Determine the
percentage of defects accurately discovered by the regular inspector.
By substituting the specified data values into Equation (13.1), we obtain

γ = (50 − 4)(100)/(50 − 4 + 10) = 82.14%

Thus, the percentage of defects accurately discovered by the regular inspector is
82.14%.
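Equation (13.1) and Example 13.1 translate directly into a few lines of code. This is a sketch: the function name and its cross-check values are ours, with the example's data (α = 50, β = 4, m = 10) used for verification.

```python
# Sketch of Equation (13.1): quality inspector accuracy index.

def inspector_accuracy(alpha, beta, m):
    """Percentage of defects accurately identified by the regular inspector.

    alpha -- total number of defects found by the regular inspector
    beta  -- defect-free units rejected by the regular inspector,
             as found by the check inspector
    m     -- defects missed by the regular inspector,
             as discovered by the check inspector
    """
    return (alpha - beta) * 100 / (alpha - beta + m)

# Example 13.1: alpha = 50, beta = 4, m = 10 -> approximately 82.14%.
gamma = inspector_accuracy(50, 4, 10)
```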

13.5.2 Vendor Rating Programme Index


This index is concerned with evaluating supplier quality cost performance by using
the quality cost and is expressed by [11]

Iqcp = (QCv + Cp)/Cp (13.2)

where
Iqcp is the value of the quality cost performance index.
QCv is the vendor quality cost.
Cp is the purchased cost.

It is to be noted that the value of this index equals unity only for a perfect vendor,
i.e., one with no vendor quality cost: no complaints to investigate, no receiving
inspection, no defective rejections, etc. When the value of Iqcp is 1.1 or higher, it
clearly indicates an immediate need for corrective action.
Interpretations for other values of the index are as follows [11]:

• 1.00 < Iqcp < 1.009: Excellent performance
• 1.010 < Iqcp < 1.03: Good performance

Example 13.2

Assume that we have the following data values:

• Cp = $100,000
• QCv = $4,000

Calculate the value of the quality cost performance index, and comment on the
end result.
By inserting the specified data values into Equation (13.2), we obtain

Iqcp = (4,000 + 100,000)/(100,000) = 1.04

It means that the vendor’s quality cost performance can be rated as fair.
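Equation (13.2) and Example 13.2 can be sketched as follows. The function names are ours; the "excellent" and "good" bands follow the ranges quoted above, while the "fair" band between them and the 1.1 corrective-action threshold is inferred from the example's rating of 1.04 as fair.

```python
# Sketch of Equation (13.2): vendor quality cost performance index.

def vendor_quality_index(qc_vendor, purchase_cost):
    """Iqcp = (QCv + Cp) / Cp; equals 1.0 for a perfect vendor."""
    return (qc_vendor + purchase_cost) / purchase_cost

def interpret(iqcp):
    """Rough interpretation using the ranges quoted in the text; the
    'fair' band is an inference from Example 13.2."""
    if iqcp <= 1.009:
        return "excellent"
    if iqcp <= 1.03:
        return "good"
    if iqcp < 1.1:
        return "fair"
    return "corrective action needed"

# Example 13.2: QCv = $4,000, Cp = $100,000 -> 1.04, rated fair.
iqcp = vendor_quality_index(4_000, 100_000)
```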

13.5.3 Quality Inspector Inaccuracy Index


This index is concerned with calculating the percentage of good items or devices
rejected by the regular inspector and is expressed by [10, 11]

λ = β(100)/[θ − (α − β + m)] (13.3)

where
λ is the percentage of good items or devices rejected by the regular inspector.
θ is the total number of items or devices inspected.
α, β, and m are as defined in Equation (13.1).
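Equation (13.3) can be sketched in the same way as Equation (13.1). The function name is ours, and the numerical values used below (1,000 items inspected, with the counts from Example 13.1) are a hypothetical illustration.

```python
# Sketch of Equation (13.3): quality inspector inaccuracy index, i.e., the
# percentage of good items rejected by the regular inspector.

def inspector_inaccuracy(theta, alpha, beta, m):
    """lambda = beta * 100 / [theta - (alpha - beta + m)].

    theta          -- total number of items inspected
    alpha, beta, m -- as defined for Equation (13.1)
    """
    good_items = theta - (alpha - beta + m)  # items with no true defect
    return beta * 100 / good_items

# Hypothetical illustration: 1,000 items, with the Example 13.1 counts.
lam = inspector_inaccuracy(1_000, 50, 4, 10)
```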

13.5.4 Quality Cost Index


This index involves measuring a manufacturer’s quality cost performance and is
expressed by [12]

µ = TQC(100)/TVO + 100 (13.4)

where
µ is the value of the quality cost index.
TVO is the total value of output.
TQC is the total quality cost.

The value of this index may be estimated in six steps presented below:

• Step I: Establish time base, i.e., quarter, month, etc.


• Step II: Determine the total output’s value, i.e., the total value of all finished
products/items of an acceptable level of quality.
• Step III: Calculate the value of the scrap produced/generated.
• Step IV: Estimate the total value of all labour-related costs, for example,
cost of quality control inspection.
• Step V: Add the end results of the previous two steps (i.e., Steps III and IV).
• Step VI: Compute the value of the quality cost index, µ, by using Equation
(13.4) as well as the resulting values of Steps II and V.

Interpretations of some µ values are presented below:

• 100 < µ < 130: This is the common range when quality-related costs are
ignored by manufacturers.
• µ = 105: This value is achievable in real life.
• µ = 100: There is no defective output, thus no money is spent to conduct
quality checks.
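Steps I–VI above map directly onto Equation (13.4); the sketch below follows them for one time base. All dollar figures are hypothetical illustration data.

```python
# Sketch of the six-step computation of the quality cost index, Equation
# (13.4), for one time base (Step I: e.g., one month). All dollar figures
# are hypothetical.

total_value_of_output = 500_000  # Step II: value of acceptable finished output
scrap_value = 8_000              # Step III: value of scrap produced
quality_labour_cost = 12_000     # Step IV: e.g., quality control inspection

# Step V: total quality cost.
total_quality_cost = scrap_value + quality_labour_cost

# Step VI: Equation (13.4).
mu = total_quality_cost * 100 / total_value_of_output + 100
```

Here µ = 104, which sits in the 100–130 range noted above and is close to the µ = 105 value described as achievable in real life.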

13.6 PROBLEMS
1. Write an essay on ISO 9000.
2. Discuss historical developments in quality control in general and medical
device quality assurance in particular.
3. List the elements of the FDA's "Preproduction Quality Assurance Planning:
Recommendations for Medical Device Manufacturers” programme.
4. Discuss in detail at least four elements listed in question 3.
5. List at least six tools/methods that can be used to assure medical device
quality.
6. Describe the following two tools/methods that can be used to assure medi-
cal devices’ quality:
• Pareto diagram
• Quality function deployment (QFD)
7. What are the main steps involved in developing a cause-and-effect dia-
gram? Also, what are its benefits and drawback?
8. Mathematically define the quality inspector inaccuracy index.
9. Assume that a lot of medical devices was inspected by a regular inspector
who discovered 70 defects. Subsequently, the same lot was re-examined by
the check inspector and the values of m and β were 12 and 6, respectively.
Determine the percentage of defects accurately discovered by the regular
inspector using Equation (13.1).
10. Assume that we have the following data values:
• QCv = $5,000
• Cp = $110,000

Calculate the value of the quality cost performance index using Equation (13.2), and
comment on the final result.

REFERENCES
1. Fries, R.C., Medical Device Quality Assurance and Regulatory Compliance, Marcel
Dekker Inc, New York, 1998.
2. Montanez, J., Medical Device Quality Assurance Manual, Interpharm Press Inc,
Buffalo Grove, Illinois, 1996.
3. Hooten, W.F., A Brief History of FDA Good Manufacturing Practices, Medical Device
and Diagnostic Industry Magazine, Vol. 18, No. 5, 1996, p. 96.
4. Mears, P., Quality Improvement Tools and Techniques, McGraw-Hill Inc, New York,
1995.
5. Sahni, A., Seven Basic Tools that Can Improve Quality, Medical Device and Diagnostic
Industry Magazine, April 1998, pp. 89–98.
6. Besterfield, D.H., Besterfield-Michna, C., Besterfield, G.H., Besterfield-Sacre, M., Total
Quality Management, Prentice-Hall Inc, Englewood Cliffs, New Jersey, 1995.
7. Dhillon, B.S., Advanced Design Concepts for Engineers, Technomic Publishing Com­
pany, Lancaster, PA, 1998.
8. Bracco, D., How to Implement a Statistical Process Control Program, Medical Device
and Diagnostic Industry Magazine, March 1998, pp. 129–139.
9. Akao, Y., Ed., Quality Function Deployment, Productivity Press, Cambridge, MA, 1990.
10. Juran, J.M., Gryna, F.M., Bingham, R.S., Quality Control Handbook, McGraw-Hill,
New York, 1974.
11. American Society for Quality Control, Guide for Managing Vendor Quality Costs,
American Society for Quality Control, Milwaukee, WI, 1980.
12. Lester, R.H., Enrick, N.L., Mottley, H.E., Quality Control for Profit, Industrial Press
Inc, New York, 1977.
14 Software Quality

14.1 INTRODUCTION
Nowadays, computers are widely used for applications ranging from day-to-day per-
sonal use to the control of space systems. As computers are made up of both hard-
ware and software elements, the percentage of the total computer cost spent on
software has changed dramatically over the decades. For example, in 1955 the soft-
ware element (including software maintenance) accounted for approximately 20% of
the total computer cost; three decades later, in 1985, this percentage had increased to
about 90% [1]. Needless to say, the introduction of computers into systems/products
in the late 1970s, directly or indirectly, led to the need for quality assurance for
all types of software [2].
Thus, the main objective of a quality assurance programme with respect to software
quality is to ensure that the final software products are of good quality, through
properly planned and systematic actions for achieving, maintaining, and determining that
quality [3, 4]. This chapter presents various important aspects of software quality.

14.2 SOFTWARE QUALITY-RELATED TERMS AND DEFINITIONS


There are many terms and definitions used in the area of software quality. Some of
these terms and definitions are as follows [2, 5–7]:

• Software. This is the computer programs, procedures, and possibly related data
and documentation pertaining to the operation of a computer.
• Software quality. This is the fitness for use of the software item/product.
• Software quality testing. This is a systematic series of evaluation activi-
ties or actions performed to validate that the software fully satisfies perfor-
mance and technical requirements.
• Software process management. This is the effective utilisation of avail-
able resources both to produce properly engineered products/items and to
enhance the software engineering capability of the organisation.
• Software quality assurance. This is the set of systematic activities or actions
providing evidence of software process’s capability to produce a software
item/product that is fit to use.
• Verification and validation. This is the systematic process of analysing,
evaluating, and testing system and software code and documentation for
ensuring maximum possible quality, reliability, and satisfaction of system
needs and goals.
• Software quality control. This is the independent evaluation of the capabil-
ity of the software process to produce a usable software product/item.
• Software reliability. This is the ability of the software to conduct its speci-
fied function under stated conditions for a given period of time.
DOI: 10.1201/9781003298571-14

14.3 SOFTWARE QUALITY FACTORS AND THEIR CATEGORIES


The large number of issues concerning the various attributes of computer software
and its maintenance and use, as outlined in software requirement documents, may
be categorised under content groups called quality factors. In turn, all software
quality factors can be grouped under the three categories shown in Fig. 14.1 [8].
The three categories shown in Fig. 14.1 are described in Sections 14.3.1–14.3.3.

14.3.1 Product Operation Factors


Five quality factors pertaining to this category are as follows [9, 10]:

• Factor I: Reliability. Reliability-associated requirements are concerned
with failures to provide an appropriate level of service. Moreover, they
determine the maximum allowed failure rate for the software system and
can refer to the whole system or to one or more of its distinct functions. Four
subfactors/elements of the reliability are hardware failure recovery, applica-
tion reliability, computational failure recovery, and system reliability.
• Factor II: Efficiency. Efficiency-associated requirements are concerned
with hardware resources needed for performing the entire software sys-
tem functions in conformance with all other requirements. Four subfactors/
elements of the efficiency are processing efficiency, storage efficiency, com-
munication efficiency, and power usage efficiency (for portable units).
• Factor III: Integrity. Integrity-associated requirements are concerned with
the software system's security, i.e., requirements for preventing access by
unauthorised people as well as for distinguishing between the majority of
individuals allowed to view the information ("read permit") and the limited
number of individuals allowed to change and add data ("write permit").
Two subfactors/elements of the integrity are access control and access audit.

FIGURE 14.1 Categories of the software quality factors.


• Factor IV: Correctness. Correctness-associated requirements are outlined
in a list of the software system's required outputs. Six subfactors/elements of
the correctness are coding and documentation guidelines, completeness,
availability (response time), accuracy, up-to-dateness, and compliance
(consistency).
• Factor V: Usability. Usability-associated requirements are concerned with
the scope of the staff resources required for training a newly hired employee
to operate the software system. Two subfactors/elements of the usability are
operability and training.

14.3.2 Product Revision Factors


Three quality factors pertaining to this category are as follows [9, 10]:

• Factor I: Maintainability. Maintainability-associated requirements are
concerned with determining the efforts that will be needed by maintenance
personnel and users for identifying the reasons for the occurrence of soft-
ware failures, for correcting or rectifying the failures, and for verifying the
success of the corrections. Six subfactors/elements of the maintainability
are document accessibility, coding and documentation guidelines, simplic-
ity, compliance (consistency), self-descriptiveness, and modularity.
• Factor II: Flexibility. Flexibility-associated requirements are concerned
with the efforts and capability needed for supporting adaptive mainte-
nance-associated activities. Four subfactors/elements of the flexibility are
modularity, generality, simplicity, and self-descriptiveness.
• Factor III: Testability. Testability-associated requirements are concerned
with an information system’s testing as well as with its stated operation.
Three subfactors/elements of the testability are traceability, user testability,
and failure maintenance testability.

14.3.3 Product Transition Factors


Three quality factors pertaining to this category are as follows [9, 10]:

• Factor I: Reusability. Reusability-associated requirements are concerned
with the software modules' use, originally designed for one specific project,
in a new project under development. Seven subfactors/elements of reusabil-
ity are document accessibility, simplicity, software system independence,
application independence, generality, self-descriptiveness, and modularity.
• Factor II: Portability. Portability-associated requirements are concerned
with the adaptation of a software system under consideration to other envi-
ronments made up of different hardware, operating systems, etc. Three
subfactors/elements of the portability are modularity, software system inde-
pendence, and self-descriptiveness.
• Factor III: Interoperability. Interoperability-associated requirements are
concerned with creating appropriate interfaces with other software systems
or with other product/equipment firmware. Four subfactors/elements of the
interoperability are commonality, modularity, system compatibility, and
software system independence.

14.4 USEFUL QUALITY METHODS FOR USE DURING
THE SOFTWARE DEVELOPMENT PROCESS
Over the years, many quality methods have been developed for improving software
quality during the software development process. Seven of these methods are as
follows [11]:

• Run charts
• Pareto diagram
• Scatter diagram
• Histogram
• Control chart
• Cause and effect diagram
• Checklist

The first two of the above seven methods are described in Sections 14.4.1 and 14.4.2,
and detailed information on the remaining five methods is available in Refs. [12–14].

14.4.1 Run Charts
These charts are normally used for software project management, serving as real-
time statements of quality and workload. An example of run charts’ application is
tracking the percentage of software fixes that exceed the stated response-time crite-
ria, in order to ensure deliveries of fixes to all involved customers in a timely manner.
Run charts are also used for monitoring the weekly arrival of software defects as
well as the defect backlog during the formal testing phases of a machine under con-
sideration. During the software development process, run charts are often compared
to the relevant projection models and historical data so that the related interpreta-
tions can be placed into appropriate perspective.
Additional information on run charts with respect to their application during the
software development process is available in Ref. [11].
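As an illustration (not from the text), the weekly bookkeeping behind such a run chart can be sketched as follows; the weekly arrival and closure figures are invented example data:

```python
# Illustrative bookkeeping behind a defect run chart: weekly arrivals,
# weekly closures, and the resulting end-of-week backlog. The weekly
# figures are invented example data, not from the text.

def backlog_series(arrivals, closures, start=0):
    """Return the end-of-week defect backlog for each week."""
    backlog, level = [], start
    for arrived, closed in zip(arrivals, closures):
        level += arrived - closed   # new defects in, fixed defects out
        backlog.append(level)
    return backlog

weekly_arrivals = [12, 9, 15, 7, 11]
weekly_closures = [8, 10, 12, 9, 13]
print(backlog_series(weekly_arrivals, weekly_closures))  # [4, 3, 6, 4, 2]
```

Plotted week by week, such a series is exactly the kind of run chart that can then be compared against projection models and historical data.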

14.4.2 Pareto Diagram

This is probably the most effective method in the software quality area, because past experience over the years clearly indicates that software defects or defect density never follow a uniform distribution. Thus, a Pareto diagram is a very useful tool for highlighting the focus areas that cause most of the problems in a given software project.
For example, Hewlett-Packard has used Pareto diagrams for achieving significant improvements in software quality [15]. Motorola has also successfully utilised Pareto diagrams for highlighting the main sources of software-requirements-associated changes, which enabled in-process corrective actions to be conducted [16].

Additional information on Pareto diagrams is available in Chapter 13, and additional information on their application during the software development process is available in Ref. [11].
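A minimal Pareto computation can be sketched as follows; the defect categories and counts are invented example data, not figures from the cited Hewlett-Packard or Motorola studies:

```python
# Illustrative Pareto analysis: rank defect causes by count and report the
# cumulative share of all defects they account for. The categories and
# counts are invented example data, not figures from the cited studies.

def pareto(counts):
    """Return (cause, count, cumulative %) rows, largest cause first."""
    total = sum(counts.values())
    rows, running = [], 0
    for cause, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        running += n
        rows.append((cause, n, round(100 * running / total, 1)))
    return rows

defects = {"interface": 45, "logic": 25, "data handling": 20, "docs": 10}
for cause, n, cum in pareto(defects):
    print(f"{cause:14s}{n:4d}{cum:7.1f}%")
```

The cumulative-percentage column is what makes the focus areas visible: the top one or two causes typically account for the bulk of the defects.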

14.5 QUALITY-RELATED MEASURES DURING THE SOFTWARE DEVELOPMENT LIFE CYCLE
In order to produce a quality software product, it is very important to take appro-
priate quality-related measures during the software development life cycle (SDLC).
An SDLC is made up of the five stages shown in Fig. 14.2; these stages are described in Sections 14.5.1–14.5.5 [17].

14.5.1 Stage I: Requirements Analysis

Past experience over the years indicates that around 60%–80% of system development failures are due to poor understanding of user-related requirements [18].
In this regard, during the software development process, major software ven-
dors generally use quality function deployment (QFD). Software quality func-
tion deployment (SQFD) is a very useful method for focusing on improving the
quality of the software development process by implementing appropriate qual-
ity improvement-related approaches to the SDLC requirements solicitation phase.
More specifically, SQFD is a front-end requirements collection method that quanti-
fiably solicits and defines the customer’s critical requirements.
Thus, during SDLC, SQFD is considered a quite useful tool for solving the problem
of poor systems specification. Some of the main advantages of SQFD are quantify-
ing qualitative customer requirements, fostering better attention to the requirements
of customers, and establishing better communications among departments and with
customers [17].

FIGURE 14.2 Software development life cycle stages.



14.5.2 Stage II: Systems Design


This is considered quality software development's most critical stage because a defect or problem in design is many times more costly to rectify than a defect during the production phase. More clearly, every dollar spent on increasing design quality has at least a hundred-fold payoff
during the implementation and operation stages [19]. Concurrent engineering is an
often used method for making changes to systems design, and it is also considered
a quite useful tool in implementing total quality management (TQM) [17].
Additional information on this method (i.e., concurrent engineering) is available
in Refs. [20–22].

14.5.3 Stage III: Systems Development


Software TQM needs the appropriate integration of quality into the total software
development process. After the establishment of an effective quality process in Stage I and Stage II of the SDLC, the coding task becomes quite simple and straightforward [17]. However, for document inspections, the design and code-inspections approach can be utilised [23]. Furthermore, control charts can be utilised for tracking metrics of the effectiveness of code inspections.

14.5.4 Stage IV: Testing


In addition to designing testing-related activities with care at each stage of the
SDLC, such activities must be planned and managed appropriately right from the
start of software development [24]. Furthermore, a TQM-based software develop-
ment process must have a set of testing-related objectives. A six-step metric-driven
approach can fit quite well with such testing-related objectives [17]. Its six steps are
as follows [17, 25]:

• Step I: Establish structured test objectives.


• Step II: Choose appropriate functional methods to derive test-case suites.
• Step III: Run functional tests as well as assess the degree of structured
coverage achieved.
• Step IV: Extend the test suites until the desired coverage is fully achieved.
• Step V: Calculate the test scores.
• Step VI: Validate testing by recording errors not discovered during the
testing process.
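The coverage-driven loop implied by Steps III and IV can be sketched roughly as below; the suite names, branch sets, and the 90% coverage target are all illustrative assumptions, not values from the text:

```python
# Rough sketch of Steps III and IV: run test suites and extend coverage
# until a structured-coverage target is met. Suite names, branch sets, and
# the 90% target are invented assumptions, not from the text.

def extend_until_covered(suites, total_branches, target=0.90):
    """Accumulate covered branches suite by suite until the target is hit."""
    covered, used = set(), []
    for name, branches in suites:
        used.append(name)
        covered |= branches
        if len(covered) / total_branches >= target:
            break
    return used, len(covered) / total_branches

suites = [
    ("functional", set(range(0, 70))),    # functional tests reach 70% alone
    ("boundary", set(range(60, 85))),     # adds branches 70-84
    ("extension", set(range(80, 100))),   # closes the remaining gap
]
used, cov = extend_until_covered(suites, total_branches=100)
print(used, cov)
```

The point of the sketch is the stopping rule: suites keep being added (Step IV) only until the measured structured coverage (Step III) reaches the stated objective.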

14.5.5 Stage V: Implementation and Maintenance


Most of the software maintenance-associated activities are generally reactive.
Programmers quite often zero in on the immediate problem, fix it, and wait for the next one [17, 25]. As statistical process control (SPC) can be used for monitoring
the quality of software system maintenance, a TQM-based system must be able to
adapt to the SPC process for ensuring quality maintenance. Additional information on quality software maintenance is available in Refs. [17, 26].

14.6 SOFTWARE QUALITY-ASSOCIATED METRICS


There are a large number of metrics that can be used to improve/assure software quality.
Two principal objectives of software quality metrics are as follows [10, 25]:

• To facilitate an appropriate level of management control, including planning and executing necessary management interventions.
• To highlight conditions that need or enable development or maintenance process-related improvements in the form of preventive or corrective actions initiated within the organisational structure.

For the successful achievement of the above two objectives, it is absolutely essential
that the metrics satisfy the following eight requirements [10, 25]:

• Requirement I: Comprehensive (i.e., applicable to a wide variety of situations and implementations).
• Requirement II: Mutually exclusive (i.e., does not measure attributes mea-
sured by other metrics).
• Requirement III: Easy and simple (i.e., implementation of the metrics data
collection is simple and straightforward and is conducted with minimal
resources).
• Requirement IV: Immune to biased interventions by interested parties.
• Requirement V: Reliable (i.e., generates similar results when utilised under
similar environments).
• Requirement VI: Does not need independent data collection.
• Requirement VII: Relevant (i.e., associated to an attribute of substantial
importance).
• Requirement VIII: Valid (i.e., successfully measures the required attribute).

Ten software quality metrics considered quite useful are presented in Sections
14.6.1–14.6.10 [10, 27].

14.6.1 Metric I
This is one of the error-severity metrics and is expressed by

CEas = α1/α2 (14.1)

where
CEas is the average severity of code errors.
α1 is the number of weighted code errors detected.
α2 is the number of code errors detected in the software code through testing and inspections.
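As a sketch, Eq. (14.1) can be evaluated directly once a severity-weighting scheme is chosen; the weights and error counts below are illustrative assumptions, not values from the text:

```python
# Illustrative computation of Metric I (Eq. 14.1). The severity weights and
# error counts are invented assumptions, not values from the text.

def average_code_error_severity(errors_by_severity, weights):
    """CEas = weighted code errors (alpha1) / code errors detected (alpha2)."""
    alpha1 = sum(weights[s] * n for s, n in errors_by_severity.items())
    alpha2 = sum(errors_by_severity.values())
    return alpha1 / alpha2

weights = {"low": 1, "medium": 3, "high": 9}   # assumed weighting scheme
errors = {"low": 20, "medium": 8, "high": 2}   # found in testing/inspections
print(average_code_error_severity(errors, weights))  # 62/30, about 2.07
```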

14.6.2 Metric II
This is one of the error-density metrics and is expressed by

CEd = α3/α2 (14.2)

where
CEd is the code error density.
α3 is the number of code errors detected in the software code through testing and inspections.
α2 is the thousands of lines of code.

14.6.3 Metric III


This is concerned with measuring the effectiveness of the software corrective maintenance and is expressed by

CMe = α4/α5 (14.3)

where
CMe is the corrective maintenance effectiveness.
α4 is the total number of annual working hours invested in corrective maintenance of the software system.
α5 is the total number of software failures detected during a 1-year period of maintenance service.

14.6.4 Metric IV
This metric is concerned with measuring the success of help-desk service (HDS) and is defined by

HDSsf = α6/α7 (14.4)

where
HDSsf is the HDS success factor.
α6 is the number of HDS calls completed on time during a 1-year period.
α7 is the total number of HDS calls during a 1-year period.

14.6.5 Metric V
This metric is concerned with measuring the mean severity of the HDS calls and is expressed by

HDSmsc = α8/α7 (14.5)

where
HDSmsc is the mean severity of HDS calls.
α8 is the number of weighted HDS calls received during a 1-year period.
α7 is the total number of HDS calls during a 1-year period.

14.6.6 Metric VI
This metric is one of the software process timetable metrics and is expressed by

TOf = α9/α10 (14.6)

where
TOf is the timetable observance factor.
α9 is the number of milestones completed on time.
α10 is the total number of milestones.

14.6.7 Metric VII


This metric is one of the software process productivity metrics and is defined by

SDp = α11/α12 (14.7)

where
SDp is the software development productivity.
α11 is the number of working hours invested in the software system development.
α12 is the thousands of lines of code.
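Metrics III, VI, and VII (Eqs. 14.3, 14.6, and 14.7) are plain ratios and can be evaluated the same way; all figures below are invented examples, not data from the text:

```python
# Illustrative evaluation of Metrics III, VI, and VII. All inputs are
# invented example figures, not data from the text.

corrective_hours   = 1200   # alpha4: yearly hours of corrective maintenance
failures_detected  = 60     # alpha5: failures found in one year of service
milestones_on_time = 18     # alpha9
milestones_total   = 24     # alpha10
dev_hours          = 16000  # alpha11: hours invested in development
kloc               = 40     # alpha12: thousands of lines of code

cm_effectiveness = corrective_hours / failures_detected   # Eq. 14.3 -> 20 h/failure
timetable_factor = milestones_on_time / milestones_total  # Eq. 14.6 -> 0.75
dev_productivity = dev_hours / kloc                       # Eq. 14.7 -> 400 h/KLOC
print(cm_effectiveness, timetable_factor, dev_productivity)
```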

14.6.8 Metric VIII


This metric is one of the error-removal effectiveness metrics and is defined by

DERe = α13/(α13 + α14) (14.8)

where
DERe is the development error removal effectiveness.
α13 is the total number of design and code errors detected in the software development process.
α14 is the total number of software failures detected during a 1-year period of maintenance service.
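A minimal evaluation of Eq. (14.8), with invented example counts:

```python
# Illustrative computation of Metric VIII (Eq. 14.8): development error-removal
# effectiveness. The counts are invented examples, not from the text.

def development_error_removal_effectiveness(dev_errors, field_failures):
    """DERe = alpha13 / (alpha13 + alpha14)."""
    return dev_errors / (dev_errors + field_failures)

alpha13 = 180   # design/code errors caught during development
alpha14 = 20    # failures found in the first year of maintenance service
print(development_error_removal_effectiveness(alpha13, alpha14))  # 0.9
```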

14.6.9 Metric IX
This metric is one of the HDS calls-density metrics and is expressed by

HDScd = α15/α16 (14.9)

where
HDScd is the HDS calls density.
α15 is the total number of HDS calls during a 1-year period.
α16 is the thousands of lines of maintained software code.

14.6.10 Metric X
This metric is one of the HDS productivity metrics and is defined by

HDSpf = α17/α18 (14.10)

where
HDSpf is the HDS productivity factor.
α17 is the total number of yearly working hours invested in help-desk servicing of the software system.
α18 is the thousands of lines of maintained software code.
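The four HDS metrics (Eqs. 14.4, 14.5, 14.9, and 14.10) can be evaluated together, as sketched below; all yearly figures are invented example data, not from the text:

```python
# Illustrative computation of the HDS metrics (Eqs. 14.4, 14.5, 14.9, 14.10).
# All yearly figures below are invented example data, not from the text.

calls_total       = 500    # alpha7/alpha15: HDS calls in one year
calls_on_time     = 450    # alpha6: calls completed within target time
weighted_calls    = 1250   # alpha8: severity-weighted call count
kloc_maintained   = 100    # alpha16/alpha18: KLOC of maintained code
hds_working_hours = 2400   # alpha17: yearly hours of help-desk servicing

hds_success  = calls_on_time / calls_total         # Eq. 14.4  -> 0.9
hds_severity = weighted_calls / calls_total        # Eq. 14.5  -> 2.5
hds_density  = calls_total / kloc_maintained       # Eq. 14.9  -> 5.0 calls/KLOC
hds_prod     = hds_working_hours / kloc_maintained # Eq. 14.10 -> 24 h/KLOC
print(hds_success, hds_severity, hds_density, hds_prod)
```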

14.7 SOFTWARE QUALITY ASSURANCE MANAGER'S RESPONSIBILITIES AND A SUCCESSFUL SOFTWARE QUALITY ASSURANCE PROGRAM'S ELEMENTS
There are many responsibilities of a software quality assurance manager. The 12 main
responsibilities are as follows [3, 28]:

• Responsibility I: Preparing a software quality program on an annual basis in regard to goals, manpower, budgets, etc.
• Responsibility II: Establishing the software quality control organisation,
procedures, and practices.
• Responsibility III: Keeping management well informed of matters directly
or indirectly concerning software quality.
• Responsibility IV: Auditing conformance to the software quality policy on
a regular basis.
• Responsibility V: Disseminating new software information to all concerned
individuals and groups.
• Responsibility VI: Training and recruiting personnel familiar with software
quality.
• Responsibility VII: Keeping abreast of current software quality-related
matters.
• Responsibility VIII: Providing appropriate consulting services to others.
• Responsibility IX: Liaising with standardisation and regulatory bodies.
• Responsibility X: Participating in software design-associated reviews.
• Responsibility XI: Developing new concepts and procedures.
• Responsibility XII: Interfacing with customers.

The following elements/procedures are very important for the success of a software quality assurance program [3, 29]:

• Ensure that the quality assurance activity starts at an early stage of the
software development cycle.
• Develop a quality assurance activity and appropriately ensure its
independence.
• Ensure that, prior to the initiation of the testing process, the development
testing is well planned and organised.
• Carry out appropriate analysis of the code in addition to testing it.
• Try to carry out at least some quality assurance activities even in an environment where you are unable to carry out the ideal full set of activities.
• Carefully evaluate the interfaces between any two elements/parts in the
system and appropriately resolve any misunderstandings, ambiguities, and
incompatibilities.
• Carry out an appropriately detailed verification analysis of the design and
requirements.
• Make sure that all documentation is well controlled and cannot be changed without proper controls.
• Always remain sceptical of errors in software received from any developer.
• Keep a careful track of the computer resources needed by the end program.

14.8 SOFTWARE QUALITY-RELATED COST


Software quality-related cost can be classified under the following two classifica-
tions [10, 25]:

• Classification I: Cost of controlling failures. This cost is associated with activities performed for detecting and preventing software errors, in order to reduce them to an acceptable level. Two subcategories of this cost (i.e., cost of controlling failures) are as follows [10, 25]:
• Subcategory I: Prevention costs. These costs are associated with activities such as developing a software quality infrastructure, improving and updating that infrastructure, and conducting the regular activities needed for its operation.
• Subcategory II: Appraisal costs. These costs are concerned with activi-
ties pertaining to the detection of software errors in specific software
systems/projects. Typical components of appraisal costs are the cost of
reviews, cost of software testing, and cost of assuring quality of exter-
nal participants (e.g., subcontractors).
• Classification II: Cost of the failure of control. This cost is concerned with
the cost of failures that occurred because of failure to detect and prevent
software errors. Two subcategories of this cost (i.e., cost of the failure of
control) are as follows [10, 25]:

• Subcategory I: Internal failure costs. These costs are associated with cor-
recting errors found through design reviews, software tests, and accep-
tance tests, prior to the installation of the software at customer sites.
• Subcategory II: External failure costs. These costs are associated with
correcting failures detected by customers/maintenance teams after the
installation of the software system at customer sites.
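As a sketch, the four subcategories roll up into the two classifications and a total cost of software quality; the dollar figures below are invented example data:

```python
# Illustrative roll-up of the four software-quality cost subcategories into
# the two classifications. Dollar figures are invented example data.

costs = {
    "prevention": 40_000,        # control: quality infrastructure, training
    "appraisal": 60_000,         # control: reviews, testing, external QA
    "internal_failure": 30_000,  # failure of control: fixes before installation
    "external_failure": 70_000,  # failure of control: fixes after installation
}

cost_of_control = costs["prevention"] + costs["appraisal"]
cost_of_failure = costs["internal_failure"] + costs["external_failure"]
total_quality_cost = cost_of_control + cost_of_failure
print(cost_of_control, cost_of_failure, total_quality_cost)  # 100000 100000 200000
```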

14.9 SOFTWARE QUALITY ASSURANCE STANDARDS AND BENEFITS
Over the years, many software quality assurance-associated standards have been
developed by various organisations for use in developing various types of software
products. Some of the main reasons for having several software quality assurance-
associated standards are as follows [3]:

• Great variety in the technological procedures, approaches, and methods being utilised.
• Great diversity in the management procedures, approaches, and methods
being used.
• Great diversity in the types of software products being maintained.
• Great variety in the types of software products being developed.

The principal objective of any software quality assurance standard is to produce cost-effective and good-quality products. Thus, top-level management and customers expect that all the software quality assurance personnel and management are
fully aware of and clearly comprehend the importance of software quality assurance
standards.
Some of the published standards that directly or indirectly concern software qual-
ity assurance are presented below [30, 31].

• NASA STD 8739.8: Software assurance standard, prepared by the National Aeronautics and Space Administration (NASA). This standard specifies the software assurance-associated requirements for software developed or acquired by NASA.
• ISO 9126: Software quality characteristics, prepared by the International
Organization for Standardization (ISO). This is an international standard
for the evaluation of software and is divided into four parts: quality in use
metrics, external metrics, internal metrics, and quality model.
• MIL-HDBK-334: Evaluation of contractor’s software quality assurance pro-
gram, prepared by the U.S. Department of Defense. This document is used
by the government procurement agency for evaluating the final software
quality assurance program in a situation when MIL-S-52779 is applicable
to a contract under consideration.
• IEEE-Std-730: IEEE standard for software quality assurance plans, prepared by the Institute of Electrical and Electronics Engineers (IEEE). This standard basically applies to the development and maintenance of critical software.
• AQAP-14: This document is the North Atlantic Treaty Organization (NATO)
equivalent of the preceding document (i.e., MIL-HDBK-334). This document
is used for evaluating the software quality control of NATO contractors.

Additional information on software quality assurance-related standards is available in Refs. [3, 30].
There are many benefits of software quality assurance. Some of these benefits
are as follows [32]:

• Is quite useful in enhancing the management's visibility into the software development process through reviews and audits.
• Reduces project-associated risks because of better requirements traceabil-
ity and thorough testing.
• Ensures that fulfilment of contractual-related requirements of deliverable
items is reviewed by an independent body.
• Centralises the records related to quality assurance.
• Is quite useful in enforcing software standards.
• Centralises the development and maintenance of software-related approaches.

14.10 PROBLEMS
1. Define the following four terms:
• Software
• Software quality
• Software quality assurance
• Software quality control
2. Write an essay on software quality.
3. What are the software quality factors? List at least ten of them.
4. Discuss at least two quality methods that can be used during the software
development process.
5. Discuss quality-related measures during the SDLC.
6. List at least six requirements that must be satisfied by software metrics for
their successful applicability.
7. Define at least four software quality metrics.
8. List at least nine main responsibilities of a software quality assurance manager.
9. What are the four subcategories of the software quality cost? Describe each
of these subcategories.
10. What are the main benefits of software quality assurance?

REFERENCES
1. Keene, S.J., Software Reliability Concepts, Annual Reliability and Maintainability
Symposium Tutorial Notes, 1992, pp. 1–21.
2. Dunn, R., Ullman, R., Quality Assurance for Computer Software, McGraw-Hill Book
Company, New York, 1982.

3. Dhillon, B.S., Reliability in Computer System Design, Ablex Publishing Corporation, Norwood, New Jersey, 1987.
4. Mendis, K.S., A Software Quality Assurance Program for the 80s, Proceedings of the
Annual Conference of the American Society for Quality Control, 1980, pp. 379–388.
5. IEEE-STD-610.12-1990, IEEE Standard Glossary of Software Engineering Terminology, Institute of Electrical and Electronics Engineers (IEEE), New York, 1991.
6. Ralston, A., Reilly, E.D., Eds., Encyclopaedia of Computer Science, Van Nostrand
Reinhold Company, New York, 1993.
7. Schulmeyer, G.G., Software Quality Assurance: Coming to Terms, in Handbook of
Software Quality Assurance, edited by Schulmeyer, G.G., McManus, J.E., Prentice
Hall, Inc., Upper Saddle River, New Jersey, 1999, pp. 1–27.
8. McCall, J., Richards, P., Walters, G., Factors in Software Quality, NTIS Report
No. AD-A049-014, 015, 055, November 1977. Available from the National Technical
Information Service (NTIS), Springfield, Virginia, USA.
9. Evans, M.W., Marciniak, J.J., Software Quality Assurance and Management, John
Wiley and Sons, New York, 1987.
10. Galin, D., Software Quality Assurance, Pearson Education Ltd., Harlow, Essex, U.K.,
2004.
11. Kan, S.H., Metrics and Models in Software Quality Engineering, Addison-Wesley,
Reading, MA, 1995.
12. Kanji, G.K., Asher, M., 100 Methods for Total Quality Management, Sage, London,
1996.
13. Mears, P., Quality Improvement Tools and Techniques, McGraw-Hill, New York, 1995.
14. Ishikawa, K., Guide to Quality Control, Asian Productivity Organization, Tokyo, 1976.
15. Grady, R.B., Caswell, D.L., Software Metrics: Establishing a Company-Wide Program,
Prentice Hall, Englewood Cliffs, New Jersey, 1986.
16. Daskalantonakis, M.K., A Practical View of Software Measurement and Implementation
Experiences with Motorola, IEEE Transactions on Software Engineering, SE-18, 1992,
pp. 998–1010.
17. Gong, B., Yen, D.C., Chou, D.C., A Manager’s Guide to Total Quality Software Design,
Industrial Management and Data Systems, Vol. 98, No. 3, 1998, pp. 100–107.
18. Gupta, Y.P., Directions of Structured Approaches in System Development, Industrial
Management and Data Analysis, Vol. 88, No. 7, 1988, pp. 11–18.
19. Zahedi, F., Quality Information Systems, Boyd and Fraser, Inc., Danvers, Massachusetts,
1995.
20. Salomone, T.A., Concurrent Engineering, Marcel Dekker, New York, 1995.
21. Rosenblatt, A., Watson, G.F., Concurrent Engineering, IEEE Spectrum, Vol. 28, No. 7,
1991, pp. 22–23.
22. Dhillon, B.S., Engineering and Technology Management: Tools and Applications,
Artech House, Boston, Massachusetts, 2002.
23. Fagan, M.E., Advances in Software Inspection, IEEE Transactions on Software
Engineering, Vol. 12, No. 7, 1986, pp. 744–751.
24. Graham, D.R., Testing and Quality Assurance: The Future, Information and Software
Technology, Vol. 34, No. 10, 1992, pp. 694–697.
25. Dhillon, B.S., Applied Reliability and Quality: Fundamentals, Methods, Procedures,
Springer-Verlag, London, 2007.
26. Schulmeyer, G.G., McManus, J.I., Total Quality Management for Software, Van
Nostrand Reinhold, New York, 1992.
27. Schulmeyer, G.G., Software Quality Assurance Metrics, in Handbook of Software Quality Assurance, edited by Schulmeyer, G.G., McManus, J.I., Prentice Hall, Upper Saddle
River, New Jersey, 1999, pp. 403–443.

28. Tice, G.D., Management Policy and Practices for Quality Software, Proceedings of the
Annual American Society for Quality Control Conference, 1983, pp. 369–372.
29. Rubey, R.J., Planning for Software Reliability, Proceedings of the Annual Reliability
and Maintainability Symposium, 1977, pp. 495–499.
30. Fisher, M.J., Software Quality Assurance Standards: The Coming Revolution, Journal
of Systems and Software, Vol. 2, 1981, pp. 357–362.
31. Dunn, R., Ullman, R., Quality Assurance for Computer Software, McGraw-Hill,
New York, 1982.
32. Fischer, K.F., A Program for Software Quality Assurance, Proceedings of the Annual
Conference of the American Society for Quality Control, 1978, pp. 333–340.