
Foundation Skills in

Integrated Product Development (FSIPD)

Student Handbook

FSIPD i
Every effort has been made to trace the owners of copyright material included in this document. NASSCOM
would be grateful for any omissions brought to their notice for acknowledgement in future editions of the
book.

NASSCOM

First published in 2013

All rights reserved. No part of this document or any related material provided may be circulated, quoted, or re-
produced for distribution without prior written approval from NASSCOM subject to statutory exception and to
the provision of relevant collective licensing agreements.

TABLE OF CONTENTS

Foreword

Introduction to the Program

Acknowledgements

Module 1: Fundamentals of Product Development

Module 2: Requirements and System Design

Module 3: Design and Testing

Module 4: Sustenance Engineering and End-of-Life (EOL) Support

Module 5: Business Dynamics-Engineering Services Industry

FOREWORD
The IT-BPM industry in India has been undergoing constant evolution. The year 2013 is an important year for the
Indian IT-BPM industry as global markets struggle to emerge from economic instability and environmental
challenges. The situation, though challenging, also presents new opportunities for the Indian IT-BPM
industry. One of the key imperatives for the industry is to continuously seek and develop the right talent to
drive its growth.

India has a large talent base that can be skilled to take up jobs in the industry. This can be achieved by
reducing the skill gap that exists between industry requirements and academic outcomes. Industry, for its part,
has been training people to meet its requirements, but orientation towards skill development also needs to be
addressed at the college and school levels.

In order to meet the growing demand for skilled people in this sector, the IT-ITeS Sector Skills Council NASSCOM
(SSC NASSCOM), through NASSCOM member companies, has facilitated the development of the Foundation
Skills in Integrated Product Development (FSIPD) program. The program, delivered through courseware
developed by member companies, aims to empower students to achieve this objective.

The program has been developed under the aegis of the Engineering Talent Council, comprising companies such
as Alcatel Lucent, Aricent, EMC, Geometric Limited, HCL Technologies, Huawei, iGate, Infosys Technologies
Limited, KPIT Technologies Limited, Microsoft Corporation India (Private) Limited, Quest Global, Robert Bosch
Engineering & Business Solutions Limited, SAP Labs India, Sasken, Synapse, Tata Consultancy Services, Tata
Technologies and UTC Aerospace Systems. The key contributors to the development of this program are Tata
Consultancy Services, Tata Technologies and NIT Silchar. The curriculum for the program has been fine-tuned
to help students enhance their skills in the subject.

We acknowledge with sincere gratitude the contribution of these organizations in putting together the
training material. Last but not least, I would like to congratulate the Sector Skills Council Secretariat for
facilitating the development of this program.

We encourage universities and colleges to take up this program for their students, and wish them all the very
best in their endeavour.

Som Mittal
President
NASSCOM

INTRODUCTION TO THE PROGRAM
In order to help enhance the skills of students and make them industry-ready, NASSCOM has facilitated the
design and development of a foundation skills course for students titled Foundation Skills in Integrated
Product Development (FSIPD). The course has been prepared as part of the larger Engineering Proficiency
Program (EPP).
The course has been designed by experts from organizations including TCS, Tata Technologies and NIT Silchar.

Objective
The objective of the course is to train students in basic foundation skills in the subject, to help enhance their
employability and make them industry-ready.

About the Course

The course will be interactive and will involve experiential learning. Students will be expected to supplement
their classroom sessions with self-paced study to enhance learning from the course.
The skills acquired through this course will not only help students prepare for employment at this stage,
but also orient them towards life-long learning.

The course will encompass the following modules:


Fundamentals of Product Development
Requirements and System Design
Design and Testing
Sustenance Engineering and End-of-Life (EOL) Support
Business Dynamics-Engineering Services Industry

Eligibility
Engineering students from 6th semester onwards across all streams are eligible for the course. To enhance
learning, we suggest an optimum batch size of up to 30 students.

Course Duration
The course has been designed to be conducted over 50 hours, including classroom training and self-paced
learning by students. The program currently covers FSIPD; FSIPD Mechanical Tools, Software Tools and
Electronics Tools will be covered in due course.

Disclaimer: No part of this document or any related material provided may be circulated, quoted or reproduced
for distribution without prior written approval from NASSCOM.

ACKNOWLEDGEMENTS
NASSCOM would like to thank its member companies in the Engineering Talent Council for believing in its
vision of enhancing the employability of the available engineering student pool by developing and facilitating
the implementation of courses of educational relevance. The aim is to address two key requirements: bridging
the generic industry-academia skill gaps and future-proofing the talent for the engineering sector.

NASSCOM recognizes that this is an initiative of great importance for all the stakeholders concerned - the
industry, academia and the students. The tremendous work and ceaseless support offered by members of
this working group in strategizing and designing the training material for the FSIPD program is commendable.

The development of Foundation Skills in Integrated Product Development (FSIPD) is aimed at empowering
students with the skills demanded by the engineering industry at the entry level. It is part of the Engineering
Proficiency Program (EPP) being developed by the team.

We would like to particularly thank Tata Consultancy Services, Tata Technologies and NIT Silchar for providing
a focused effort towards development of the program.

NASSCOM recognizes the contribution from Mr. Senkathir Selvan Suriaprakasam, Mr Arockiam Daniel,
Mr Veerasekaran and Mr Deb Kumar Ghosh from TCS, Prof. Nishikant Deshpande & his team from NIT Silchar
and other members of the Engineering Talent Council who stitched this course together.

Last but not least, NASSCOM would also like to thank the leadership of these member companies,
especially Mr. Samir Yajnik, President Sales & COO Asia Pacific, Tata Technologies, for orchestrating the FSIPD
program.

Dr. Sandhya Chintala


EXECUTIVE DIRECTOR - Sector Skills Council NASSCOM
VICE PRESIDENT- NASSCOM

Module 1
Fundamentals of Product Development

Fundamentals of product development
A product is anything offered by a producer or supplier to a consumer/customer in exchange for something of
value, typically money. A product may be a good or a service, e.g. a pen or a bus journey. Product development
can be simply defined as the creation of products and services with new or different characteristics that offer
new or added benefits to the customer. It may involve modifying an existing product or service (or its
presentation), or designing an entirely new product or service that satisfies a newly defined customer need or
market demand. In the fields of engineering and business, new product development (NPD) is the process of
bringing a product that is completely new to the market.

The NPD process involves market research and market analysis, followed by idea generation, product design,
detailed engineering, and finally the launch of the new product in the market. The growth potential of any
company depends heavily on product development; it thus plays a vital role in a company's future.

How, then, should one approach the launch of a new product? The first step is to strategically analyse all the
factors that could influence demand for the product or service, for example customer needs, competitor
activity and market stability. The organisation's potential to support the development and launch of an
innovative concept must then be analysed. Investment in the development and prototyping of approved
innovative concepts follows, and preparation for the commercialization of the product is the final stage.

The success of any product development strategy depends on timing, planning, and realistic expectations. To
achieve effective product development results, one has to successfully address the pressure to bring
innovative new products to market faster and, more importantly, more cost-effectively.

A number of factors, such as political, environmental and social aspects, affect the decisions to be taken in
product development, and their effects therefore have to be studied.

Objectives
After studying this unit, you should be able to:
Explain the effects of various trends on product decision
Understand PESTLE analysis
Get an overview of various types of products and services
Explore different product development methodologies
Define product life cycle
Describe product development planning and management

1.1 Types of various trends affecting product decision

1.1.1 Global Trend Analysis

Global issues are those that have worldwide significance. The world economy has undergone radical changes
during the last quarter of a century. Some of these changes are:
Faster communication, which has shrunk geographical and cultural distances
More efficient transportation
Major advances in technology

These changes have resulted in a more complex marketing environment that has changed consumers' needs
and the types of products produced. Global competition is intense and has an impact on domestic markets.

The following issues have an impact on how an organization designs and produces products:
Social
Technical
Political
Economic
Environmental

These issues are explained briefly in the following sections.

Social Trends:
Social factors and cross-cultural communication play a vital role in international and global markets.
Social trends include the following features (figure 1.1):
Demographic
Behavioural
Psychographic
Geographic

Figure 1.1. Various social trends: Demographic (what?), Geographic (where?), Behavioural (how?),
Psychographic (who?)

Demographic features:
A demographic environment is a set of demographic factors such as gender or ethnicity. Companies use
demographic environments to identify target markets for specific products or services. This practice has both
advantages and disadvantages, and marketers have to take both sides of the coin into account when deciding
what strategy to apply. Demographics are the quantifiable statistics of a given population; the term also
refers to the study of quantifiable subsets within a given population which characterize that population at a
specific point in time. These types of data are used widely in public opinion polling and marketing. Commonly
examined demographics include
gender
age
ethnicity
knowledge of languages
disabilities
mobility
home ownership
employment status and
Location.

Demographic trends describe historical changes in demographics in a population over time (for example,
the average age of a population may increase or decrease over time). Both the distribution and the trend of
values within a demographic variable are of interest. Demographic data is essential for understanding the
population of a region and the culture of its people.

Focus
When a company looks at a demographic environment, it focuses its attention on the people who are most
likely to buy a product. This is good from a marketing standpoint because the company does not waste money
trying to sell to people who have no interest in the product.

Branding and Strategy


Demography provides very specific information about different populations. Once a company has this data,
the company can develop well-defined strategies about how to reach each population -- that is, it tells
companies exactly how to market and develop their brands so people in the demographic environment will
respond. For instance, if people in the demographic environment tend to be busy, young workers, then a
company might promote the quick use and convenience available with the product.

Trending and Comparison


When companies examine demographic environments, they usually do so under the same lenses, such as age
or gender. By collecting demographic data over extended periods of time and comparing information from
different points, companies can identify trends within the population. This lets them forecast what might
happen with sales in the future and make some decisions about upcoming production or offered services.
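The kind of trend identification described above can be sketched in a few lines of code. The example below is not from the handbook: the yearly figures and the variable tracked (average customer age) are invented for illustration. It fits a least-squares line to yearly observations and extrapolates it forward, which is the essence of forecasting from a demographic trend:

```python
# Illustrative sketch of demographic trend analysis.
# All figures below are invented for demonstration purposes only.

def linear_trend(years, values):
    """Least-squares slope and intercept of values against years."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    var = sum((x - mean_x) ** 2 for x in years)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical average customer age observed over five years
years = [2009, 2010, 2011, 2012, 2013]
avg_age = [34.0, 33.1, 32.5, 31.8, 31.2]

slope, intercept = linear_trend(years, avg_age)
print(f"trend: {slope:+.2f} years of age per calendar year")

# A negative slope suggests the customer base is getting younger,
# which might, for instance, shift marketing toward convenience.
forecast_2015 = slope * 2015 + intercept
print(f"forecast average age in 2015: {forecast_2015:.1f}")
```

In practice a company would apply the same comparison to many demographic variables at once, but the principle is the same: collect comparable data points over time, estimate the direction of change, and use it to inform production and service decisions.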

Assumption and Culture


Perhaps the largest problem with a demographic environment in terms of marketing is that even though
marketers use accurate data to make predictions about what will happen with consumers, there is no
guarantee that what the company predicts actually will come to pass. In other words, much of marketing with
demographic data is based on assumptions. Additionally, those assumptions are based largely on the cultural
norms surrounding the company. Demographic information has little meaning unless marketers examine it
with this in mind, as culture has such a large influence on what those in the demographic environment do.

Change
Populations are never constant. People migrate from place to place, and people pass away and are born.
Subsequently, marketers cannot simply collect demographic data one time. They have to collect the
information constantly in order to have a realistic picture of what is happening at any given point. This
requires a great deal of effort and means a constant expense to a business.

Psychographic features
Psychographics comes into play to better analyse and classify target buyers by psychological attributes such as
aspirations, interests, attitudes, opinions, lifestyle and behaviour. Demographics provide information on who
typically buys or will buy a particular product or service based on tangible characteristics; psychographics
provides more insight into who is most likely to be motivated to buy.

Combining the demographic and psychographic views provides much improved targeting and effectiveness for
marketing and sales. From a marketing perspective, demographics define what buyers commonly need
whereas psychographics define what buyers want. Psychographics identifies aspirational behaviors that are
much more powerful drivers than physical demographics.
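To make this combination concrete, here is a minimal sketch in which a demographic filter (who *could* buy) and a psychographic filter (who is *motivated* to buy) together narrow a customer list to a target segment. The customer records, field names and thresholds are all invented for illustration:

```python
# Minimal sketch: combining demographic and psychographic views to
# select a target segment. All data below is invented for illustration.

customers = [
    {"name": "A", "age": 27, "city": "Pune",    "lifestyle": "fitness"},
    {"name": "B", "age": 45, "city": "Pune",    "lifestyle": "travel"},
    {"name": "C", "age": 24, "city": "Chennai", "lifestyle": "fitness"},
    {"name": "D", "age": 30, "city": "Pune",    "lifestyle": "fitness"},
]

def demographic_fit(c):
    """Tangible characteristics: defines who could buy."""
    return 21 <= c["age"] <= 35 and c["city"] == "Pune"

def psychographic_fit(c):
    """Aspirations and lifestyle: defines who wants to buy."""
    return c["lifestyle"] == "fitness"

# Targeting improves when both views agree.
segment = [c["name"] for c in customers
           if demographic_fit(c) and psychographic_fit(c)]
print(segment)  # prints ['A', 'D']
```

The demographic filter alone would still include uninterested buyers, and the psychographic filter alone would include people outside the serviceable market; intersecting the two is what "much improved targeting" means in practice.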

Technological Trends

Technology:
Technology is the making, modification, usage, and knowledge of tools, machines, techniques, crafts,
systems, and methods of organization, in order to solve a problem, improve a pre-existing solution to a
problem, achieve a goal, handle an applied input/output relation or perform a specific function. It can also
refer to the collection of such tools, including machinery, modifications, arrangements and procedures.
Technologies significantly affect human as well as other animal species' ability to control and adapt to their
natural environments. The term can either be applied generally or to specific areas: examples include
construction technology, medical technology, and information technology.

The human species' use of technology began with the conversion of natural resources into simple tools. The
prehistoric discovery of the ability to control fire increased the available sources of food, and the invention of
the wheel helped humans travel within, and gain control over, their environment.

Recent technological developments, including the printing press, the telephone, and the Internet, have
lessened physical barriers to communication and allowed humans to interact freely on a global scale.
However, not all technology has been used for peaceful purposes; the development of weapons of ever-
increasing destructive power has progressed throughout history, from clubs to nuclear weapons.

Technology has affected society and its surroundings in a number of ways. In many societies, technology has
helped develop more advanced economies (including today's global economy) and has allowed the rise of a
leisure class. Many technological processes produce unwanted by-products, known as pollution, and deplete
natural resources, to the detriment of Earth's environment. Various implementations of technology influence
the values of a society and new technology often raises new ethical questions. Examples include the rise of
the notion of efficiency in terms of human productivity, a term originally applied only to machines, and the
challenge of traditional norms.

Technology is often a consequence of science and engineering although technology as a human activity
precedes the two fields. For example, science might study the flow of electrons in electrical conductors, by
using already-existing tools and knowledge. This new-found knowledge may then be used by engineers to
create new tools and machines, such as semiconductors, computers, and other forms of advanced technology.
In this sense, scientists and engineers may both be considered technologists; the three fields are often
considered as one for the purposes of research and reference.

Tools:
Innovation continued through the Middle Ages, with inventions such as silk, the horse collar and the horseshoe
appearing in the first few hundred years after the fall of the Roman Empire. Medieval technology saw simple
machines (such as the lever, the screw, and the pulley) being combined to form more complicated tools, such
as the wheelbarrow, windmills and clocks. The Renaissance brought forth many further innovations,
including the printing press (which facilitated the greater communication of knowledge), and technology
became increasingly associated with science, beginning a cycle of mutual advancement. The advancements in
technology in this era allowed a more steady supply of food, followed by the wider availability of consumer
goods.

Starting in the United Kingdom in the 18th century, the Industrial Revolution was a period of great
technological discovery, particularly in the areas of agriculture, manufacturing, mining, metallurgy and
transport, driven by the discovery of steam power. Technology later took another step with the harnessing of
electricity to create such innovations as the electric motor, light bulb and countless others. Scientific
advancement and the discovery of new concepts later allowed for powered flight, and advancements in
medicine, chemistry, physics and engineering. The rise in technology has led to the construction of
skyscrapers and large cities whose inhabitants rely on automobiles or other powered transit for
transportation.

Communication was also greatly improved with the invention of the telegraph, telephone, radio and
television. The late 19th and early 20th centuries saw a revolution in transportation with the invention of the
steam-powered ship, train, airplane, and automobile. The 20th century brought a host of innovations. In
physics, the discovery of nuclear fission led to both nuclear weapons and nuclear power. Computers were
also invented and later miniaturized using transistors and integrated circuits. The technology behind them
came to be called information technology, and these advancements subsequently led to the creation of the
Internet, which ushered in the current Information Age.

Humans have also been able to explore space with satellites (later used for telecommunication) and in
manned missions going all the way to the moon. In medicine, this era brought innovations such as open-heart
surgery and later stem cell therapy along with new medications and treatments. Complex manufacturing and
construction techniques and organizations are needed to construct and maintain these new technologies, and
entire industries have arisen to support and develop succeeding generations of increasingly more complex
tools. Modern technology increasingly relies on training and education - their designers, builders, maintainers,
and users often require sophisticated general and specific training. Moreover, these technologies have
become so complex that entire fields have been created to support them, including engineering, medicine,
and computer science, and other fields have been made more complex, such as construction, transportation
and architecture.

The invention of new technologies and applications has improved ease of access and availability. Technological
factors are broadly divided into two areas:
Manufacture
Infrastructure

An organization can gain market share and attain a strong competitive advantage by exploiting new
technological opportunities or altering its production. Such activities include:
Automation
Improved quality of parts and end products
Incentives
Significant cost savings

The application of new technologies has had the following effects:
Car model life has diminished from 30 years to less than 5 years
New automobiles are now developed within 24 months
The market life of a mobile phone model is less than a month

Political/Policy trends
It is always advisable to keep track of potential policy changes in any government, because even where the
political situation is relatively stable, there may be changes in policy at the highest level with serious
implications. Such shifts in government priorities can result in new initiatives, including changes in:
Employment laws
Consumer Protection laws
Environmental regulations
Taxation regulations
Health and Safety Requirements
Trade restrictions or reforms

IP Trends:
Intellectual property (IP) is a legal concept which refers to creations of the mind for which exclusive rights are
recognized. Under intellectual property law, owners are granted certain exclusive rights to a variety of
intangible assets, such as musical, literary, and artistic works; discoveries and inventions; and words, phrases,
symbols, and designs.

Types of IP rights:
Common types of intellectual property rights include copyright, trademarks, patents, industrial design rights,
trade dress, and in some jurisdictions trade secrets. Although many of the legal principles governing
intellectual property rights have evolved over centuries, it was not until the 19th century that the term
intellectual property began to be used, and not until the late 20th century that it became commonplace in the
majority of the world. For example, the British Statute of Anne (1710) and the Statute of Monopolies (1624)
are now seen as the origins of copyright and patent law respectively.

IP is divided into two categories:


Industrial property, which includes inventions (patents), trademarks, industrial designs, and geographic
indications of source; and
Copyright, which includes literary and artistic works such as novels, poems and plays, films, musical
works, artistic works such as drawings, paintings, photographs and sculptures, and architectural designs.

Registration of intellectual property:


Registration of intellectual property in the USA is done via the United States Patent and Trademark Office
(USPTO). Registration forms can be obtained from the USPTO website. For registration of copyrights in the
USA, the Copyright Office in the Library of Congress must be contacted.

It costs roughly $15,000 - 50,000 to obtain and hold a US patent, although fees vary depending on the type of
application, and it takes almost 25 months to receive a patent from the date of application. Patents can
nevertheless be very valuable: for example, in 2001 a global IT company spent only $600 million on R&D but
generated $1.9 billion in revenue through royalty payments for its patents.

Intellectual property, especially patents, has acquired considerable significance in the modern era. In order to
maintain a consistent rate of development, maximum protection of IP is essential. Human intellect is a prime
and major resource for economic development, and its exploitation extends beyond geographical and political
boundaries. The General Agreement on Tariffs and Trade (GATT) gave shape to such thinking in the form of the
World Trade Organisation (WTO), and an agreement in the realm of IP, viz. TRIPS (Trade-Related Aspects of
Intellectual Property Rights), aims at harmonizing IP protection and enforcement standards in member states.

Political scenario and stability in India:
Political stability helps in making economic decisions and reducing the risk of imbalance in the economy. In
May 2004, elections brought the United Progressive Alliance (UPA) into power. Growth, stability and equity
are mutually reinforcing objectives. The quest of the UPA Government is to eliminate poverty by giving every
citizen an opportunity to be educated, to learn a skill, and to be gainfully employed. The economic strategy of
the UPA is composed of four main elements:

maintaining macroeconomic balances;
improving the incentives operating upon firms;
enhancing physical infrastructure; and
a range of initiatives aimed at empowering millions of poor households to participate in the growing
prosperity.

The major concerns remain commitment to the national interest, reduction of the interference of unlawful
elements in politics, public accountability and growth-oriented government policies. Under the leadership of
Dr. Manmohan Singh the focus of the government is appropriate and should not be a cause of distress.
Political stability generally has a positive effect on economic growth, though other factors can at times make
the effect negative. In India many governments have been formed over the last 20 years; the country is
developing and growing fast, and the economy is strongly affected whenever the government changes. India
is the world's largest democracy. The prime minister is the head of government of the nation, while the
president is the formal head of state and holds substantial reserve powers, placing him or her in
approximately the same position as the British monarch. Executive power is exercised by the government.
Federal legislative power is vested in both the government of India and the two chambers of the Parliament
of India, and the judiciary is independent of both the executive and the legislature.

Political uncertainty is an investor's nightmare. It disturbs the flow of foreign direct investment into both the
private sector and government-owned public sector units, and that surely affects economic growth. Political
stability is not necessarily an essential prerequisite for good economic growth; in actual practice it can be
argued the other way around: it is good economic growth that leads to political stability.

India's Growth
Since Independence, India has moved from the moderate growth path of its first three decades (1950 to 1980)
to a higher growth trajectory since the 1980s. Over the last two and a half decades, India has emerged as one
of the fastest growing economies of the world, averaging about 6 percent growth per annum, and the
country's ranking in terms of the size of its economy, especially in Purchasing Power Parity (PPP) terms, has
improved. In the last three years we have averaged a growth rate of 8 percent. Apart from registering
impressive growth over the last two and a half decades, India's growth process has been stable: studies
indicate that the yearly variation in growth in India has been one of the lowest. During the period we have
faced only one crisis, in 1991. The crisis was followed by a credible macroeconomic structural and stabilization
program encompassing trade, industry, foreign investment, exchange rates, public finance and the financial
sector. Evidence of this stable economic condition is the successful avoidance of any adverse contagion impact
from shocks such as the East Asian crisis and the Russian crisis during 1997-98, the sanction-like situation in
the post-Pokhran scenario, and the border conflict during May-June 1999.

The performance of the Indian economy during the current fiscal year has exceeded expectations. Initial
growth projections for the period April 2004 to March 2005 were around 6.8%. Expectations were then pared
by a percentage point due to low rainfall from July 2004, and global price shocks in oil, steel and coal added to
the apprehension, particularly about inflation. However, shaking off these fears, the economy has grown by a
robust 6.9%. There are two aspects to the "emergence of India." First, there are signs of vigorous growth in
manufacturing. High growth rates in exports have been extended beyond the now-familiar services story to
skill-intensive sectors like automobiles and drugs. Manufacturing growth accelerated every month after May
2004 to reach double-digit levels in September and October, and merchandise export growth in the first 10
months of 2004-05 was 25.6%. For three quarters running, revenue growth in the corporate sector has been
above 20% and net profit growth has been around 30%. Second, there is a pronounced pickup in investment.
From 2001-02, the investment rate in India, low by East Asian standards, rose by 3.7 percentage points to
26.3% of GDP in 2003-04.

Central and State Governments


The central government exercises its broad administrative powers in the name of the President, whose duties
are largely ceremonial. The president and vice president are elected indirectly for 5-year terms by a special
electoral college, and the vice president assumes the office of president in case of the death or resignation of
the incumbent president. Under the constitution, real national executive power is centered in the Council of
Ministers, led by the prime minister of India. The President appoints the Prime Minister, who is designated by
legislators of the political party or coalition commanding a parliamentary majority, and then appoints
subordinate ministers on the advice of the Prime Minister. In reality, the President has no discretion on the
question of whom to appoint as Prime Minister except when no political party or coalition of parties gains a
majority in the Lok Sabha; once the Prime Minister has been appointed, the President has no discretion on
any other matter whatsoever, including the appointment of ministers. All Central Government decisions are,
however, formally taken in the name of the President.

Political stability and Economic Growth:


The politicians should realize that in the last decade or so, the scene in the country has undergone a sea
change:

India is a young country, where the average age is less than 26 years.
The literacy rate is continuously rising.
The Primary Health Care services are improving.
Female life expectancy rate and infantile survival rate are improving.
There is a growing awareness of the need to let market forces decide on their role in the development of
infrastructure projects.
Power distribution has shifted from a centralized command structure to one where even a leader at the
local level has an opportunity to address local aspirations at the national level.

Privatization and Disinvestment:


Vajpayee had a vision of the 21st-century information age: he privatized the Internet, reformed the flawed
telecom policy, opened radio broadcasting in 40 cities and allowed up-linking facilities to satellite channels.
Congress has yet to realize the impact of the global market and address issues such as taxes and subsidies, so
that the effects of globalization do not come as a jolt to the common man in the street.

Mr. Narasimha Rao's government approach to globalization lacked this humane approach. There was progress
on other incremental reforms: it cut the diesel subsidy, de-licensed petroleum products and oil refining, set up
a power regulatory authority, and threw open transmission to the private sector. Moreover, it surprised many
by squashing the irrational swadeshi forces within its own party.

Impact in India:
Political stability affects almost every factor of the Indian economy. India opened up its economy in the early
nineties following a major crisis, led by a foreign exchange crunch, that dragged the economy close to
defaulting on loans. The response was a slew of domestic and external sector policy measures, prompted
partly by the immediate needs and partly by the demands of the multilateral organisations. The new policy
regime radically pushed forward in favour of a more open and market-oriented economy. Major measures
initiated as a part of the liberalisation and globalisation strategy in the early nineties included scrapping of
the industrial licensing regime, reduction in the number of areas reserved for the public sector, amendment of
the monopolies and restrictive trade practices act, start of the privatisation programme, reduction in tariff
rates and change-over to market-determined exchange rates.

FSIPD 9
Economic Trends
Markets today are extremely dynamic. This has been a boon for start-ups but a bane for innovation; companies
are measured by their quarterly profits; global markets are interconnected; and the future of an organization
is often decided by the stock market.

Official economic indicators, most of which are readily available, include:

GDP (Gross Domestic Product)
GNP (Gross National Product)

The economic environment consists of factors that affect consumer purchasing power and spending.
Designers need to consider buying power as well as the people they are designing for. Total buying power
depends on current income, prices, savings and credit. When the economy is more confident, people will
accept a design that is less of a need and more of a want. In the 1990s there was a surge in demand for
cut-price items, and hence a massive growth in stores such as Go-Lo, the Reject Shop, etc.

Another economic issue is the cost of manufacturing. In Australia, manufacturing costs are often increased by
the cost of wages. Many Australian companies produce their products offshore, usually in Southeast Asia, to
take advantage of low wages that make products much cheaper.

Market:
A market is one of the many varieties of systems, institutions, procedures, social relations and infrastructures
whereby parties engage in exchange. While parties may exchange goods and services by barter, most markets
rely on sellers offering their goods or services (including labor) in exchange for money from buyers. It can be
said that a market is the process by which the prices of goods and services are established. For a market to be
competitive there must be more than a single buyer or seller. It has been suggested that two people may
trade, but it takes at least three persons to have a market, so that there is competition in at least one of its
two sides. However, competitive markets rely on much larger numbers of both buyers and sellers.

A market with single seller and multiple buyers is a monopoly. A market with a single buyer and multiple
sellers is a monopsony. These are the extremes of imperfect competition. Markets vary in form, scale (volume
and geographic reach), location, and types of participants, as well as the types of goods and services traded.

Examples include:
Physical retail markets, such as local farmers' markets (which are usually held in town squares or parking
lots on an ongoing or occasional basis), shopping centers, market restaurants, and shopping malls
(Non-physical) internet markets (see electronic commerce)
Ad hoc auction markets
Markets for intermediate goods used in production of other goods and services
Labor markets
International currency and commodity markets
Stock markets, for the exchange of shares in corporations
Artificial markets created by regulation to exchange rights for derivatives that have been designed to
ameliorate externalities, such as pollution permits (see carbon trading)
Illegal markets such as the market for illicit drugs, arms or pirated products

In mainstream economics, the concept of a market is any structure that allows buyers and sellers to exchange
any type of goods, services and information. The exchange of goods or services for money is a transaction.
Market participants consist of all the buyers and sellers of a good who influence its price. This influence is a
major study of economics and has given rise to several theories and models concerning the basic market
forces of supply and demand. There are two roles in markets, buyers and sellers. The market facilitates trade
and enables the distribution and allocation of resources in a society. Markets allow any tradable item to be

evaluated and priced. A market emerges more or less spontaneously or may be constructed deliberately by
human interaction in order to enable the exchange of rights (cf. ownership) of services and goods.

Economy:
An economy or economic system consists of the production, distribution or trade, and consumption of limited
goods and services by different agents in a given geographical location. The economic agents can be
individuals, businesses, organizations, or governments. Transactions occur when two parties agree to the
value or price of the transacted good or service, commonly expressed in a certain currency.

In the past, economic activity was theorized to be bounded by natural resources, labor, and capital. This view
ignores the value of technology (automation, accelerator of process, reduction of cost functions), and
creativity (new products, services, processes, new markets, expands markets, diversification of markets, niche
markets, increases revenue functions), especially that which produces intellectual property.

A given economy is the result of a set of processes that involves its culture, values, education, technological
evolution, history, social organization, political structure and legal systems, as well as its geography, natural
resource endowment, and ecology, as main factors. These factors give context, content, and set the
conditions and parameters in which an economy functions. Some cultures create more productive economies
and function better than others, creating higher value, or GDP.

A market-based economy is where goods and services are produced without obstruction or interference, and
exchanged according to demand and supply between participants (economic agents) by barter or a medium of
exchange with a credit or debit value accepted within the network, such as a unit of currency and at some free
market or market clearing price. Capital and labor can move freely to any area of emerging shortage, signaled
by rising price, and thus dynamically and automatically relieve any such threat. Market based economies
require transparency on information, such as true prices, to work, and may include various kinds of immaterial
production, such as affective labor that describes work carried out that is intended to produce or modify
emotional experiences in people, but does not have a tangible, physical product as a result.

A command-based economy is one in which a central political agent commands what is produced and how it is
sold and distributed. Shortages are common problems with a command-based economy, as there is no
mechanism to manage the information (prices) about the system's natural supply and demand dynamics.

GDP (Gross Domestic product):


Gross domestic product (GDP) is the market value of all officially recognized final goods and services produced
within a country in a given period of time. GDP per capita is often considered an indicator of a country's
standard of living.

GDP per capita is not a measure of personal income. Under economic theory, GDP per capita exactly equals the
gross domestic income (GDI) per capita. GDP is related to national accounts, a subject in macroeconomics.
GDP is not to be confused with gross national product (GNP), which allocates production based on
ownership. GDP was first developed by Simon Kuznets for a US Congress report in 1934. In this report,
Kuznets warned against its use as a measure of welfare. After the Bretton Woods conference in 1944, GDP
became the main tool for measuring a country's economy.

GDP can be determined in three ways, all of which should, in principle, give the same result. They are
Product (or output) approach
Income approach and
Expenditure approach.
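As an illustration of the third of these, the expenditure approach sums consumption, investment, government spending and net exports (GDP = C + I + G + (X - M)). The sketch below is a minimal example using hypothetical figures and a function name of our own choosing, not real national accounts data.

```python
# Hedged sketch: GDP via the expenditure approach.
# All values are hypothetical, in billions of a notional currency.

def gdp_expenditure(consumption, investment, government, exports, imports):
    """GDP = C + I + G + (X - M), the standard expenditure identity."""
    return consumption + investment + government + (exports - imports)

# A hypothetical economy with a trade deficit (imports exceed exports):
gdp = gdp_expenditure(consumption=1200.0, investment=400.0,
                      government=300.0, exports=250.0, imports=350.0)
print(gdp)  # 1800.0
```

In principle, the product and income approaches applied to the same economy would arrive at this same total.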

Income:
Income is the consumption and savings opportunity gained by an entity within a specified timeframe, and is
generally expressed in monetary terms. For households and individuals, "income is the sum of all the wages,
salaries, profits, interest payments, rents and other forms of earnings received... in a given period of time."

In the field of public economics, the term may refer to the accumulation of both monetary and non-monetary
consumption ability, with the former (monetary) being used as a proxy for total income. Income per capita has
been increasing steadily in almost every country. Many factors contribute to people having a higher income,
such as education, globalisation and favourable political circumstances such as economic freedom and peace.
Increased income also tends to lead to people choosing to work fewer hours. Developed countries (those with
a "developed economy") tend to have higher incomes, whereas developing countries tend to have lower
incomes.

Income inequality refers to the extent to which income is distributed in an uneven manner. Inequality within a
society can be measured by various methods, including the Lorenz curve and the Gini coefficient. Economists
generally agree that certain amounts of inequality are necessary and desirable, but that excessive inequality
leads to efficiency problems and social injustice. National income, measured by statistics such as the Net
National Income (NNI), measures the total income of individuals, corporations, and government in the
economy.
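The Gini coefficient mentioned above can be illustrated with a short sketch. This is a minimal implementation of the standard mean-absolute-difference form of the coefficient, not a method taken from this handbook, and the income lists are invented.

```python
# Illustrative sketch: Gini coefficient from a list of incomes, using
# G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean).

def gini(incomes):
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of absolute differences over every ordered pair of incomes
    total_diff = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return total_diff / (2 * n * n * mean)

print(gini([100, 100, 100, 100]))          # 0.0  (perfect equality)
print(round(gini([0, 0, 0, 400]), 3))      # 0.75 (one person holds everything)
```

A value of 0 indicates perfect equality; higher values indicate a more uneven distribution.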

Target Cost:
Target costing is a pricing method used by firms. It is defined as "a cost management tool for reducing the
overall cost of a product over its entire life-cycle with the help of production, engineering, research and
design". A target cost is the maximum amount of cost that can be incurred on a product and with it the firm
can still earn the required profit margin from that product at a particular selling price. In the traditional cost-
plus pricing method materials, labor and overhead costs are measured and a desired profit is added to
determine the selling price. Target costing involves setting a target cost by subtracting a desired profit margin
from a competitive market price. Target costing is a disciplined process for determining and achieving a
full-stream cost at which a proposed product with specified functionality, performance, and quality must be
produced in order to generate the desired profitability at the product's anticipated selling price over a
specified period of time in the future.

This definition encompasses the principal concepts: products should be based on an accurate assessment of
the wants and needs of customers in different market segments, and cost targets should be what result after
a sustainable profit margin is subtracted from what customers are willing to pay at the time of product
introduction and afterwards. These concepts are supported by the four basic steps of Target Costing:
Define the Product
Set the Price and Cost Targets
Achieve the Targets
Maintain Competitive Costs.

Japanese companies have developed target costing as a response to the problem of controlling and reducing
costs over the product life cycle.
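The contrast between cost-plus pricing and target costing described above reduces to simple arithmetic, as the following sketch shows. The function names and figures are hypothetical, chosen for illustration only.

```python
# Hedged sketch contrasting the two pricing approaches described above.

def cost_plus_price(total_cost, desired_profit):
    """Traditional approach: the selling price is derived from cost."""
    return total_cost + desired_profit

def target_cost(market_price, desired_profit):
    """Target costing: the allowable cost is derived from the market price."""
    return market_price - desired_profit

# Hypothetical product: the competitive market price is 500 and the firm
# requires a profit of 75 per unit.
allowable = target_cost(market_price=500.0, desired_profit=75.0)
print(allowable)  # 425.0 -- engineering must design the product down to this cost
```

The direction of the calculation is the whole point: cost-plus starts from cost and derives a price, while target costing starts from the market price and derives the maximum cost the firm can afford to incur.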

Objectives of Target Costing


The fundamental objective of target costing is very straightforward. It is to enable management to
manage the business to be profitable in a very competitive marketplace.
In effect, target costing is a proactive cost planning, cost management, and cost reduction practice
whereby costs are planned and managed out of a product and business early in the design and
development cycle, rather than during the latter stages of product development and production.

TCO (Total Cost of Ownership):
Total cost of ownership (TCO) is a financial estimate intended to help buyers and owners determine the direct
and indirect costs of a product or system. It is a management accounting concept that can be used in full cost
accounting or even ecological economics, where it includes social costs. For manufacturing, where TCO is
typically compared with the cost of doing business overseas, it goes beyond the initial manufacturing cycle
time and cost to make parts. TCO includes a variety of cost-of-doing-business items, for example shipping and
re-shipping, and opportunity costs, while it also considers incentives developed for an alternative approach.
Incentives and other variables include tax credits, common language, expedited delivery, and
customer-oriented supplier visits.
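A minimal sketch of a TCO estimate follows, assuming a simple model in which recurring costs accrue annually and incentives (e.g. tax credits) are subtracted. The cost categories and figures are hypothetical.

```python
# Hedged sketch: TCO as purchase price plus recurring direct and indirect
# costs over the ownership period, less any incentives.

def total_cost_of_ownership(purchase_price, annual_costs, years, incentives=0.0):
    """TCO = purchase price + recurring costs over the period - incentives."""
    recurring = sum(annual_costs.values()) * years
    return purchase_price + recurring - incentives

tco = total_cost_of_ownership(
    purchase_price=10000.0,
    annual_costs={"shipping": 300.0, "maintenance": 500.0, "duties": 200.0},
    years=5,
    incentives=1500.0,  # e.g. a hypothetical tax credit
)
print(tco)  # 13500.0
```

Even in this toy example, the recurring items add 50% to the purchase price over five years, which is why TCO rather than unit price drives make-versus-buy and offshore-versus-onshore comparisons.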

Environmental Trends
The natural environment has become a major issue since the 1960s. Air and water pollution, massive waste
disposal problems, concern about the depletion of the ozone layer, extinction of species and the greenhouse
effect are issues that are constantly being discussed by politicians, environmental groups and individuals.

There are four environmental trends that have long-term implications on designing and producing:
shortage of raw materials both renewable and non-renewable resources
increasing energy costs
increasing levels of pollution in the environment caused by the build-up of substances that do not
decompose or only decompose slowly
increasing government intervention in natural resource management

In general, compliance means conforming to a rule, such as a specification, policy, standard or law.
Environmental Compliance means conforming to environmental laws, regulations, standards and other
requirements. In recent years, environmental concerns have led to a significant increase in the number and
scope of compliance imperatives across all global regulatory environments. Being closely related,
environmental concerns and compliance activities are increasingly being integrated and aligned to some
extent in order to avoid conflicts, wasteful overlaps and gaps.

Some of the environmental regulations are


Clean Water Act (CWA)
Resource Conservation and Recovery Act (RCRA)
Emergency Planning and Community Right-to-Know Act (EPCRA)
Oil Pollution Act
Toxic Substances Control Act (TSCA)
National Environmental Policy Act of 1969 (NEPA)

A good environment is a constitutional right of the Indian Citizens. Environmental Protection has been given
the constitutional status. Directive Principles of State Policy states that, it is the duty of the state to 'protect
and improve the environment and to safeguard the forests and wildlife of the country'. It imposes
Fundamental duty on every citizen 'to protect and improve the natural environment including forests, lakes,
rivers and wildlife'.

In India, the Ministry of Environment and Forests (MoEF) is the apex administrative body for:
regulating and ensuring environmental protection;
formulating the environmental policy framework in the country;
undertaking conservation & survey of flora, fauna, forests and wildlife; and Planning, promotion,
co-ordination and overseeing the implementation of environmental and forestry programmes.

The Ministry is also the Nodal agency in the country for the United Nations Environment Programme (UNEP).
The organizational structure of the Ministry covers number of Divisions, Directorate, Board, Subordinate
Offices, Autonomous Institutions, and Public Sector Undertakings to assist it in achieving all these objectives.

Besides, the responsibility for prevention and control of industrial pollution is primarily executed by the
Central Pollution Control Board (CPCB) at the Central Level, which is a statutory authority, attached to the
MoEF. The State Departments of Environment and State Pollution Control Boards are the designated
agencies to perform this function at the State Level. Central Government has enacted several laws for
Environmental Protection:-
The Environment (Protection) Act, 1986, is the umbrella legislation which authorizes the Central
Government to protect and improve environmental quality, control and reduce pollution from all sources,
and prohibit or restrict the setting and /or operation of any industrial facility on environmental grounds.
According to the Act, the term "environment" includes water, air and land and the inter- relationship
which exists among and between water, air and land, and human beings, other living creatures, plants,
micro-organism and property. Under the Act, the Central Government shall have the power to take all
such measures as it deems necessary or expedient for the purpose of protecting and improving the
quality of environment and preventing, controlling and abating environmental pollution.
Acts relating to Water Pollution are comprehensive in their coverage, applying to streams, inland
waters, subterranean waters, and seas or tidal waters. These acts also provide for a permit system or
'consent' procedure to prevent and control water pollution. They generally prohibit disposal of polluting
matter in streams, wells and sewers or on land in excess of the standards established by the state
boards.
Acts relating to Air Pollution are aimed at prevention, control and abatement of air pollution.
Acts relating to Forest Conservation provide for the conservation of forests and for matters connected
therewith or ancillary or incidental thereto.
Acts relating to Wildlife Protection provide for the protection of wild animals, birds and plants and for
matters connected therewith or ancillary or incidental thereto with a view to ensuring the ecological and
environmental security of the country.
Acts relating to Biological Diversity provide for conservation of biological diversity, sustainable use of
its components as well as fair and equitable sharing of the benefits arising out of the use of biological
resources and knowledge associated with it.
Acts relating to Public Liability Insurance provide for public liability insurance (immediate relief) to the
persons affected by accidents occurring while handling any hazardous substances.
Rules relating to Noise pollution, aim at controlling noise levels in public places from various sources
like industrial activity, construction activity, generator sets, loud speakers, public address systems, music
systems, vehicular horns and other mechanical devices having deleterious effects on human health and
the psychological well-being of the people.
Rules relating to Management of Hazardous Substances aim to control the generation, collection,
treatment, import, storage, and handling of hazardous waste. The term "hazardous substances" includes
flammables, explosives, heavy metals such as lead, arsenic and mercury, nuclear and petroleum fuel by-
products, dangerous microorganisms and scores of synthetic chemical compounds like DDT and dioxins.

The Central Pollution Control Board (CPCB) has developed National Standards for Effluents and Emission
under the statutory powers of the Water (Prevention and Control of Pollution) Act, 1974 and the Air
(Prevention and Control of Pollution) Act, 1981. These standards have been approved and notified by the
Government of India, Ministry of Environment & Forests, under Section 25 of the Environmental (Protection)
Act, 1986. Besides these, standards have been developed for ambient air quality, ambient noise, automobile
emissions, and fuel quality specifications for petrol and diesel. Guidelines have also been developed
separately for hospital waste management.

Also, an Environmental Information System (ENVIS) has been established as a plan programme and as a
comprehensive network in environmental information collection, collation, storage, retrieval and
dissemination to varying users. The focus of ENVIS since inception has been on providing this environmental
information to decision makers, policy planners, scientists and engineers, research workers, etc. all over the
country. ENVIS has developed itself with a network of participating institutions/organisations. A large
number of nodes, known as ENVIS Centres, have been established in this network to cover the broad subject
areas of environment with the focal point at the Ministry of Environment and Forest. These Centres have

been set up in the areas of pollution control, toxic chemicals, central and offshore ecology, environmentally
sound and appropriate technology, biodegradation of wastes and environment management, etc. The tool
used for the identification of the various external factors affecting product development is known as the
PESTLE analysis, which is discussed in the subsequent sections.

1.1.2 PESTLE Analysis

Today's organizations are functioning in an environment that changes more rapidly than ever before. The
method of analysing these changes and modifying the ways in which the organization reacts to them is known
as business strategy.

"Strategy is the direction and scope of an organization over the long term, which achieves advantage in a
changing environment through its configuration of resources and competences" (Johnson et al., 2009).

The roles of a manager in any type of organization are as follows:

Making decisions at the strategic level
Contributing expertise to the discussion of strategic concerns
Commenting on pilot schemes, presentations, reports or statistics

A good understanding of the appropriate strategic techniques for decision making can be gained through
Meetings
Pilot Schemes
Presentations
Reports and
Statistics

The process of strategic decision making (figure 1.2) includes the following steps:
a. Analysis of the organization's external environment
b. Assessment of the organization's internal capabilities and its response to external forces
c. Assisting in the definition of the organization's strategy
d. Aiding in the implementation of the organization's strategy

Figure 1.2 Strategic Planning Process (Analyse → Assess → Assist → Aid)

Tools of Strategic Planning

Figure 1.3 Tools of strategic planning (Boston box, Porter's 5 forces, SWOT, PESTEL, Ansoff)

Figure 1.3 shows the five widely used tools for business analysis that fit into the strategic planning process.
Among these tools, the most widely used, PESTLE, is discussed in detail in the forthcoming sections.

PESTLE Analysis
External factors within the organization's environment that have an impact on its operations should be
identified. A popular tool for identifying these changes is the PESTLE analysis. This is an updated form of the
PEST analysis, sometimes known as STEP. It is a strategic planning technique that provides a useful
framework for analyzing the environmental pressures on a team or an organization.

PESTLE analysis (figure 1.4) is used to consider Political, Economic, Social, Technological, Legal and
Environmental issues.

Figure 1.4 PESTEL Analysis (Political, Economical, Social, Technological, Legal, Environmental)

The main aim of the PESTEL technique is to identify as many factors as possible and their impact on the
organization. The process of identifying the external forces involves screening the various disciplines in an
organization, and these must be researched and analyzed as inputs for the PESTEL analysis.

Requirement of PESTEL Analysis

PESTEL analysis is required in an organization when making the following decisions:

Launching a new product or service
Considering a new route to market
Working as a part of a strategic project team
Entering a new region or country

Variations of PESTEL analysis


The priority of the six factors in the PESTEL analysis changes according to the type of organization. For
example, organizations which sell products to consumers are affected most by social factors, whereas a global
contractor tends to be affected most by political factors. An organization has to consider economic factors as
its first priority if it has borrowed heavily.

There are several variations of the PESTEL analysis, with more or fewer than six factors. The most common
variations are shown below in figure 1.5.

Figure 1.5 Variations in PESTEL (ETPS, STEP, STEPE, STEEPLED, PESTLIED, STEEPLE, PEST)

S - Social; T - Technological; E - Economic; E - Environmental; P - Political; L - Legal;
E - Educational; D - Demographics; I - International.

The factors in PESTLE also vary depending on the type of business. The factors you include in your list will
depend on the nature and size of your business: social factors matter more to consumer businesses or a B2B
business close to the consumer end of the supply chain, whereas political factors might be more relevant to a
global arms dealer, and environmental factors more particular to an aerosol propellant manufacturer.

The future impact of external factors matters more to the organization than their impact in the past. A
PESTLE analysis should therefore be conducted in a way that looks ahead to the future impact of those
external factors.

PESTLE analysis is particularly valuable to inward-looking groups or organizations, which tend to focus on
internal pressures rather than the external pressures that can have an adverse effect on the organization.
The technique is useful for both large and small group activity.

Aims of PESTEL analysis
PESTLE analysis for new product development (NPD) aims at the following:
To identify and summarize the influences on the development of a new product
To provide a way of auditing the influences that have impacted the product in the past and may do so in future
To generate a lot of material about influences and an initial assessment of the major factors that affect
product development
To assess the differential impact of the factors on their own subject
To prioritize influences in terms of their specific impacts and
To indicate which factors can combine to greater effect and which might cancel each other out.

Combination of PESTEL analysis


PESTLE analysis is a simple technique which can be used in a fairly sophisticated way when combined with
Risk Analysis, SWOT Analysis, or an Urgency/Importance Grid, and supported by expert knowledge about the
organization and its external factors. It may be possible to identify a number of structural drivers of change,
which are forces likely to affect the structure of an industry, sector or market. The combined effect of the
identified factors is more important than the individual effects.

Process of PESTEL analysis


PESTLE is shorthand for a list of macro-economic factors that:
already affect your business; or
may, at some time in the future, affect your business.

The analysis phase consists of a 2-step process:

Evaluating the impact of each factor on the organization.
Planning the actions you may wish to take to:
minimize any threats; and
maximize any opportunities.

The traditional use of PESTLE in any organization consists of the following steps:

a. Listing the external PESTLE factors for the organization: brainstorming and expert knowledge of the
organization and/or the world outside the organization are needed.
b. Identification of the implications of each PESTLE factor for the organization.
c. Decision about the importance of the implications of the external factors.
d. Rating of the potential impact on the organization, e.g. high to low, and the likelihood of it happening,
e.g. low to high.
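Step (d) above, rating each factor's impact and likelihood, can be sketched as follows. The factor names, the 1-5 scales and the impact x likelihood priority score are illustrative assumptions of this sketch, not prescribed by the handbook.

```python
# Hedged sketch: rating PESTLE factors by impact and likelihood, then
# sorting so the highest-priority factors come first. All data is invented.

factors = [
    # (factor, category, impact 1-5, likelihood 1-5)
    ("New environmental regulation", "Legal", 4, 5),
    ("Interest-rate rise", "Economic", 5, 3),
    ("Demographic shift to younger buyers", "Social", 3, 4),
    ("Change of government", "Political", 5, 2),
]

# A simple priority score: impact x likelihood (a common heuristic,
# not the only possible weighting).
ranked = sorted(factors, key=lambda f: f[2] * f[3], reverse=True)

for name, category, impact, likelihood in ranked:
    print(f"{impact * likelihood:>2}  {category:<10} {name}")
```

Sorting by a combined score is one way to decide which implications deserve planning effort first; an organization could equally plot the same scores on a two-by-two impact/likelihood grid.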
Figure 1.6 Process of PESTLE analysis: Make a list (list the factors that are already affecting, or are likely to
affect, the business) → Think about it (identify whether each item is an opportunity or a threat) → Make a
plan (decide how to eradicate threats and use opportunities) → Do it


Figure 1.6 shows the detailed process of PESTLE analysis, which can be carried out in any type of
organization or business.

Various Factors of the PESTEL analysis
PESTEL is a part of the external analysis during a strategic analysis or market research, and gives an overview
of the different macro-environmental factors that the company has to take into consideration. It is a
useful strategic tool for understanding
market growth or decline,
business position,
potential of the organization and
direction for operations.

As already mentioned, there are six factors that form the framework for the PESTLE analysis. These factors
are as follows:
Political
Economical
Social
Technological
Legal and
Environmental

The above factors should be ranked or rated based on their impacts on the organization. The assessment
of the factors is based on the following:
a. Impact over time (short, medium and long-term),
b. Impact by type (positive or negative effects) and
c. Impact by dynamics, i.e. whether the significance/importance of the implication is increasing, decreasing or
remaining unchanged.

Figure 1.7 Classification of factors assessment (factor rating by impact over time: short, medium or long
term; impact by type: positive or negative effects; impact by dynamics: increasing or decreasing)

The classification of the factors assessment in the PESTEL analysis is shown in figure 1.7.

PESTEL analysis of the macro-environment

Many factors in the macro-environment will affect the decisions of the managers of any organization. Some
of them are as follows:
Tax changes,
New laws,
Trade barriers,
Demographic change and
Government policy changes.

Categorization of the above factors can be done by using the PESTEL model. This classification distinguishes
between:

Political factors:
The stability and structure of a country's government give a basis for interpreting future changes in the
region's political environment. Policy at the local or federal level can differ dramatically. These factors refer
to government policy, such as the degree of intervention in the economy. The analysis of these factors gives
rise to the following questions:
What goods and services does a government want to provide?
To what extent does it believe in subsidizing firms?
What are its priorities in terms of business support?

Political decisions can impact on many vital areas for business such as the education of the workforce, the
health of the nation and the quality of the infrastructure of the economy such as the road and rail system.
Some of these Political factors include
Bureaucracy
Corruption
Environmental Law
Freedom of the Press
Government Type
Government Stability
Labour Law
Political Change

The influences of political factors on some of the issues are as follows:

Mining Ban:
Sand Mining Ban (figure 1.8):

The construction industry is passing through a crisis following the non-availability of sand and the
skyrocketing price of raw materials. Nearly one and a half crore people, including migrant labourers, depend
on the construction and allied industries. The ban on sand mining and the unscientific manner in which sand
is distributed are causing huge problems for the people involved in this industry. There are restrictions on
bringing sand from other states, and the government is now planning to tender the right to bring sand from
other states, which might result in major players gaining complete control over the industry and small
players being sidelined. The small-time building contractors are compelled to launch an agitation if the
government fails to meet their demands, in an effort to save the industry.

Though sand is available in many dams in the state, the authorities are not taking any action to mine and sell
it; the government has also been blamed for not taking measures to control the skyrocketing prices of cement
and steel. The government needs to withdraw the ban on small quarries to save the construction industry.
Moreover, the government should take steps to introduce site insurance at private building construction
sites, and there should also be a licensing system for contractors.

Figure 1.8 Sand Mining

Land Acquisition Bill:


The Land Acquisition Act, 1894 is a law in India and Pakistan that allows the government to acquire private
land in those countries. Land acquisition literally means the acquiring of land for some public purpose by a
government or government agency, as authorized by law, from individual landowners after paying a
government-fixed compensation for the losses incurred by landowners in surrendering their land
to the concerned government agency. In India, a new Bill, the Land Acquisition, Rehabilitation and
Resettlement Bill, was passed by Parliament in 2013 to repeal this Act.

The authorities and agencies involved are as follows:


Union Government
State Government
Public authorities/agencies like DDA, NOIDA, and CIDCO
Companies like Reliance, Tata (for SEZs)

Economic factors:
Economic indicators such as GDP, GNP, interest rates and consumer sentiment help business people
understand the risks and opportunities within a region.
These include interest rates, taxation changes, economic growth, inflation and exchange rates. For example:
A rise in the exchange rate makes exporting more difficult
Inflation may provoke higher wage demands from employees and raise costs
Demand for a firm's products is boosted by higher national income growth

Some of the economic factors include:

Business cycles
GNP (Gross National Product) trends
GDP (Gross Domestic Product)
Interest rates
Inflation
Unemployment
Disposable income
Globalization
Government-private sector relationships

A few examples are explained below

GDP Growth Rate:


Economic growth is the increase in the market value of the goods and services produced by an economy over
time. It is conventionally measured as the percent rate of increase in real gross domestic product, or real GDP.
Growth is usually calculated in real terms, i.e., in inflation-adjusted terms, to eliminate the distorting effect
of inflation on the prices of goods produced. In economics, "economic growth" or "economic growth theory"
typically refers to growth of potential output, i.e., production at "full employment".

As an area of study, economic growth is generally distinguished from development economics. The former is
primarily the study of how countries can advance their economies. The latter is the study of the economic
aspects of the development process in low income countries. Since economic growth is measured as the
annual percent change of gross domestic product (GDP), it has all the advantages and drawbacks of that
measure.

Economic growth is measured as a percentage change in the Gross Domestic Product (GDP) or Gross National
Product (GNP). These two measures, which are calculated slightly differently, total the amounts paid for the
goods and services that a country produced. As an example of measuring economic growth, a country that
creates $9,000,000,000 in goods and services in 2010 and then creates $9,090,000,000 in 2011, has a
nominal economic growth rate of 1% for 2011. To compare per capita economic growth among countries, the
total sales of the respective countries may be quoted in a single currency. This requires converting the value of
the currencies of various countries into a selected currency, for example, U.S. dollars. One way to do this
conversion is to rely on exchange rates among currencies: for example, how many Mexican pesos buy a single
U.S. dollar? Another approach is to use the purchasing power parity method, which is based on how
much consumers must pay for the same "basket of goods" in each country.

Inflation or deflation can make it difficult to measure economic growth. If GDP, for example, goes up in a
country by 1% in a year, was this due solely to rising prices (inflation), or because more goods and services
were actually produced? To express real growth rather than changes in prices for the same goods, statistics
on economic growth are often adjusted for inflation or deflation.
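The growth-rate arithmetic above can be sketched in a few lines of Python. The function names and the 1% inflation figure in the usage example are illustrative assumptions, not taken from the text:

```python
def nominal_growth(gdp_prev, gdp_curr):
    """Percent change in nominal GDP between two periods."""
    return (gdp_curr - gdp_prev) / gdp_prev * 100

def real_growth(nominal_pct, inflation_pct):
    """Inflation-adjusted growth, using the ratio of growth factors
    rather than a simple subtraction."""
    return ((1 + nominal_pct / 100) / (1 + inflation_pct / 100) - 1) * 100

# The example from the text: $9,000,000,000 in 2010, $9,090,000,000 in 2011.
g = nominal_growth(9_000_000_000, 9_090_000_000)
print(round(g, 2))                    # 1.0 (percent)

# If prices also rose 1% over the same year (assumed figure),
# real growth is roughly zero.
print(round(real_growth(g, 1.0), 2))  # 0.0
```

This mirrors the point made in the text: a 1% rise in nominal GDP accompanied by 1% inflation represents essentially no real growth.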

Recession:
In economics, a recession is a business cycle contraction; it is a general slowdown in economic activity.
Macroeconomic indicators such as GDP (Gross Domestic Product), employment, investment spending,
capacity utilization, household income, business profits, and inflation fall, while bankruptcies and the
unemployment rate rise.

Recessions generally occur when there is a widespread drop in spending (an adverse demand shock). This may
be triggered by various events, such as a financial crisis, an external trade shock, an adverse supply shock or
the bursting of an economic bubble. Governments usually respond to recessions by adopting expansionary
macroeconomic policies, such as increasing money supply, increasing government spending and decreasing
taxation.

In the United States, the Business Cycle Dating Committee of the National Bureau of Economic Research
(NBER) is generally seen as the authority for dating US recessions. The NBER defines an economic recession
as: "a significant decline in economic activity spread across the economy, lasting more than a few months,
normally visible in real GDP, real income, employment, industrial production, and wholesale-retail sales."

Social factors:
Social factors often reflect cultural dimensions. In cultures with high power distance, top management draws
strict lines between professional positions and responsibilities within a company; cultures with flatter
hierarchies are considered more democratic. Individualism indicates that members make decisions
independently and value their independence. Masculinity and femininity compare a culture's emphasis on the
quantity versus the quality of life. Long-term orientation reveals a culture's focus on the distant future, rather
than the short-term orientation of stressing the importance of the immediate present and past.

Changes in social trends can impact the demand for a firm's products and the availability and
willingness of individuals to work. Attitudes towards health, career and environmental issues should be
considered. For example:

In the UK, the population has been ageing. This has increased costs for firms who are committed to
pension payments for their employees, because their staff are living longer. Some firms have started to
recruit older employees to tap into this growing labour pool.

The ageing population also has an impact on demand. For example, demand for sheltered accommodation
and medicines has increased, whereas demand for toys is falling.

Some of the social factors include

Population demographics
Income distribution
Social mobility
Lifestyle changes
Attitudes to work and leisure
Consumerism
Levels of education and training

Some of the examples are described in the following sections

Social media:
Social media refers to the means of interactions among people in which they create, share, and/or exchange
information and ideas in virtual communities and networks. Andreas Kaplan and Michael Haenlein define
social media as "a group of Internet-based applications that build on the ideological and technological
foundations of Web 2.0, and that allow the creation and exchange of user-generated content". Furthermore,
social media depends on mobile and web-based technologies to create highly interactive platforms through
which individuals and communities share, co-create, discuss, and modify user-generated content. It
introduces substantial and pervasive changes to communication between organizations, communities, and
individuals.

Social media differs from traditional/industrial media in many aspects, such as quality, reach,
frequency, usability, immediacy, and permanence. There are many effects that stem from internet usage.

Classification of social media


Social-media technologies take on many different forms including magazines, Internet forums, weblogs,
social blogs, micro blogging, wikis, social networks, podcasts, photographs or pictures, video, rating and social
bookmarking. Technologies include blogging, picture-sharing, vlogs, wall-posting, music-sharing,
crowdsourcing and voice over IP, to name a few. Social network aggregation can integrate many of the platforms in
use. By applying a set of theories in the field of media research (social presence, media richness) and social
processes (self-presentation, self-disclosure), Kaplan and Haenlein created a classification scheme in their
Business Horizons (2010) article, with six different types of social media:
collaborative projects (for example, Wikipedia)
blogs and micro blogs (for example, Twitter)
content communities (for example, YouTube and Dailymotion)
social networking sites (for example, Facebook)
virtual game worlds (for example, World of Warcraft)
virtual social worlds (for example, Second Life)

Women's Empowerment:
India attained freedom from British rule on 15th August 1947. India was declared a sovereign Democratic
Republic on 26th January 1950. On that date the Constitution of India came into force. All citizens of India are
guaranteed social, economic and political justice, equality of status and opportunities before law by the
Constitution. Fundamental freedom of expression, belief, faith, worship, vocation, association and action are
guaranteed by the Indian Constitution to all citizens, subject to law and public morality.

The Constitution of India not only grants equality to women, but also empowers the State to adopt measures
of positive discrimination in favor of women for removing the cumulative socio-economic, educational and
political disadvantages faced by them. There has been a progressive increase in the plan outlays over the last
six decades of planned development to meet the needs of women and children. The outlay of Rs. 4 crores in
the First Plan (1951-56) has increased to Rs. 7,810.42 crores in the Ninth Five Year Plan, and Rs. 13,780
crores in the Tenth Five Year Plan. There has been a shift from welfare oriented approach in the First Five
Year Plan to development and empowerment of women in the consecutive Five Year Plans.

With the advent of industrialization and modernization, women have assumed greater responsibility, both at
home and in the world of work. This is reflected in the increasing work participation rate of women which was
19.7% in 1981 and rose to 25.7% in 2001. However, this is still low compared to male work participation rate,
which was 52.6% in 1981 and 51.9% in 2001. The number of women in the organized sector was 4.95
million on 31st March 2001, of whom 2.86 million were in the public sector and 2.09 million were in the
private sector. The number rose to 5.120 million on 31.03.2006, and of these women, 3.003 million were in
the public sector and 2.118 million were in the private sector.

Support Measures for Working Women:


The Government of India has undertaken several initiatives to provide support to working women. Some of
these initiatives are:
Rajiv Gandhi National Creche Scheme for the Children of Working Mothers
Working Women's Hostels with Day Care Centres
Swawlamban, erstwhile Setting up of Employment and Income Generating Training cum Production
Units for Women (NORAD) transferred to the States with effect from 01.04.2006
Support to Training and Employment Programme for Women (STEP)
Swayamsidha
Priyadarshini, Women's Empowerment and Livelihood Programme in the Mid-Gangetic Plains
Rashtriya Mahila Kosh (RMK)

Legislation for Working Women:


Several legislations have been enacted since Independence for the welfare of workers and women workers.
These are:
The Equal Remuneration Act, 1976
The Minimum Wages Act, 1948
The Mines Act, 1952
The Factories Act, 1948 (Amended in 1949, 1950 and 1954)
The Beedi and Cigar Workers (Condition of Employment) Act, 1966
The Contract Labour (Regulation and Abolition) Act, 1970
The Employees' State Insurance Act, 1948 (with rules up to 1984)
The Maternity Benefit Act, 1961 (Amended in 1995)
Supreme Court Order regarding Sexual Harassment of Women at Work Place and Other Institutions, 1999
The Employment Guarantee Act, 2004
The Domestic Workers (Registration, Social Security and Welfare) Act, 2008
The Unorganized Sector Workers Social Security Bill, 2007 (Under consideration of Parliament)

Figure 1.9 depicts women working in call centres.

Figure 1.9 Women's Empowerment


Technological factors:
The level of technological advancement in a region can positively or negatively affect the opportunities
available for a business. Consumers react to new technologies in different ways. The product diffusion curve,
which segments technology consumers by their risk tolerance, is one tool that can be used to determine the
likelihood of a product being adopted by the mainstream population. It divides consumers into five
groups: innovators, early adopters, early majority, late majority, and laggards. New technologies create new
products and processes: MP3 players, computer games, online gambling and high-definition TVs were all
created by technological advances. Online shopping, bar coding and computer-aided design are all
improvements enabled by better technology.
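As an illustration of the product diffusion curve, the sketch below maps a consumer's adoption-time percentile to one of the five segments. The segment shares follow Rogers' commonly cited figures (2.5/13.5/34/34/16%), which are an assumption for illustration; the text itself names only the five groups:

```python
# Assumed adopter-category shares (percent of the population), after Rogers.
SEGMENTS = [
    ("innovators", 2.5),
    ("early adopters", 13.5),
    ("early majority", 34.0),
    ("late majority", 34.0),
    ("laggards", 16.0),
]

def adopter_category(percentile):
    """Map an adoption-time percentile (0 = earliest adopter, 100 = last)
    to a product-diffusion segment by walking the cumulative shares."""
    cumulative = 0.0
    for name, share in SEGMENTS:
        cumulative += share
        if percentile <= cumulative:
            return name
    return SEGMENTS[-1][0]

print(adopter_category(1))    # innovators
print(adopter_category(40))   # early majority
print(adopter_category(95))   # laggards
```

A firm might use such a segmentation to decide, for example, whether early marketing should target risk-tolerant innovators or wait for the early majority.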

Technology can reduce costs, improve quality and lead to innovation. These developments can benefit
consumers as well as the organizations providing the products.

Some of the technological factors include


New discoveries
ICT developments
Speed of technology transfer
Rates of obsolescence
Research and Development
Patents and licenses

Explanations of some technological factors are as follows

Exploration of Mars:
The exploration of Mars has taken place over hundreds of years, beginning in earnest with the invention and
development of the telescope during the 1600s. Increasingly detailed views of the planet from Earth inspired
speculation about its environment and possible life, even intelligent civilizations, that might be found there.
Probes sent from Earth beginning in the late 20th century have yielded a dramatic increase in
knowledge about the Martian system, focused primarily on understanding its geology and possible
habitability.
Engineering interplanetary journeys is very complicated, so the exploration of Mars has experienced a high
failure rate, especially in earlier attempts. Roughly two-thirds of all spacecraft destined for Mars failed before
completing their missions, and some failed before their observations could begin. However,
missions have also met with unexpected levels of success, such as the twin Mars Exploration Rovers
operating for years beyond their original mission specifications. Since 6 August 2012, there have been two
scientific rovers on the surface of Mars beaming signals back to Earth (Opportunity and Curiosity of the Mars
Science Laboratory mission), and three orbiters currently surveying the planet: Mars Odyssey, Mars Express,
and Mars Reconnaissance Orbiter. To date, no sample return missions have been attempted for Mars, and one
attempted return mission for Mars' moon Phobos (Fobos-Grunt) has failed.

Driverless Car (figure 1.10):


The Google driverless car is a project by Google that involves developing technology for autonomous cars. The
software powering Google's cars is called Google Chauffeur. Lettering on the side of each car identifies it as a
"self-driving car." The project is currently being led by Google engineer Sebastian Thrun, director of the
Stanford Artificial Intelligence Laboratory and co-inventor of Google Street View. Thrun's team at Stanford
created the robotic vehicle Stanley which won the 2005 DARPA Grand Challenge and its US$2 million prize
from the United States Department of Defense. The team developing the system consisted of 15 engineers
working for Google, including Chris Urmson, Mike Montemerlo, and Anthony Levandowski who had worked on
the DARPA Grand and Urban Challenges.

The U.S. state of Nevada passed a law on June 29, 2011 permitting the operation of autonomous cars in
Nevada. Google had been lobbying for robotic car laws. The Nevada law went into effect on March 1, 2012,
and the Nevada Department of Motor Vehicles issued the first license for an autonomous car in May 2012.
The license was issued to a Toyota Prius modified with Google's experimental driverless technology. As of
April 2012, Florida became the second state to allow the testing of autonomous cars on public roads.
California became the third state to legalize the use of self-driven cars for testing purposes as of September
2012 when Governor Jerry Brown signed the bill into law at Google HQ in Mountain View.

Figure 1.10 Driverless car

Environmental factors:
Environmental analysis involves aggregating and analysing weather patterns and climate cycles.
Environments vary drastically in different areas of the globe, depending on the ecosystem of the region.
A rainy season can disrupt the transportation systems active in a region. Sometimes roadways and
train lines are restricted in order to minimize damage to vehicles from mudslides, falling rocks or flooding.
Such cyclic weather patterns are more common in developing countries, where transportation infrastructure
has yet to be modernized.

Environmental factors include the weather and climate change. Changes in temperature can impact on many
industries, including farming, tourism and insurance. This external factor is becoming a significant issue for
firms because of major climate changes driven by global warming and because of greater environmental
awareness. The growing desire to protect the environment is having an impact on many
industries such as the travel and transportation industries (for example, more taxes being placed on air travel
and the success of hybrid cars) and the general move towards more environmentally friendly products and
processes is affecting demand patterns and creating business opportunities.

Some of the environmental factors include:
Environmental impact
Environmental legislation
Energy consumption
Waste disposal
Contamination
Ecological consequences
Infrastructure
Cyclic weather

Energy Efficiency:
Efficient energy use, sometimes simply called energy efficiency, is the goal of reducing the amount of energy
required to provide products and services. For example, insulating a home allows a building to use less
heating and cooling energy to achieve and maintain a comfortable temperature. Installing fluorescent lights
or natural skylights reduces the amount of energy required to attain the same level of illumination compared
with using traditional incandescent light bulbs. Compact fluorescent lights use one-third the energy of
incandescent lights and may last 6 to 10 times longer. Improvements in energy efficiency are most often
achieved by adopting a more efficient technology or production process. There are many motivations to
improve energy efficiency. Reducing energy use reduces energy costs and may result in a financial cost saving
to consumers if the energy savings offset any additional costs of implementing an energy efficient
technology. Reducing energy use is also seen as a solution to the problem of reducing carbon dioxide
emissions. According to the International Energy Agency, improved energy efficiency in buildings, industrial
processes and transportation could reduce the world's energy needs in 2050 by one third, and help control
global emissions of greenhouse gases.

Energy efficiency and renewable energy are said to be the twin pillars of sustainable energy policy and are high
priorities in the sustainable energy hierarchy. In many countries energy efficiency is also seen to have a
national security benefit because it can be used to reduce the level of energy imports from foreign countries
and may slow down the rate at which domestic energy resources are depleted.

Recycling:
Recycling is a process to change materials (waste) into new products to prevent waste of potentially useful
materials, reduce the consumption of fresh raw materials, reduce energy usage, reduce air pollution (from
incineration) and water pollution (from land filling) by reducing the need for "conventional" waste disposal,
and lower greenhouse gas emissions as compared to virgin production. Recycling is a key component of
modern waste reduction and is the third component of the "Reduce, Reuse, and Recycle" waste hierarchy.
There are some ISO standards related to recycling such as ISO 15270:2008 for plastics waste and ISO
14001:2004 for environmental management control of recycling practice.

Recyclable materials include many kinds of glass, paper, metal, plastic, textiles, and electronics. Although
similar in effect, the composting or other reuse of biodegradable waste, such as food or garden waste, is not
typically considered recycling. Materials to be recycled are either brought to a collection center or picked up
from the curbside, then sorted, cleaned, and reprocessed into new materials bound for manufacturing.
In the strictest sense, recycling of a material would produce a fresh supply of the same material; for example,
used office paper would be converted into new office paper, and used foamed polystyrene into new polystyrene.
However, this is often difficult or too expensive (compared with producing the same product from raw
materials or other sources), so "recycling" of many products or materials involves their reuse in producing
different materials (e.g., paperboard) instead.

Another form of recycling is the salvage of certain materials from complex products, either due to their
intrinsic value (e.g., lead from car batteries, or gold from computer components), or due to their hazardous
nature (e.g., removal and reuse of mercury from various items). Critics dispute the net economic and
environmental benefits of recycling over its costs, and suggest that proponents of recycling often make
matters worse and suffer from confirmation bias. Specifically, critics argue that the costs and energy used in
collection and transportation detract from (and outweigh) the costs and energy saved in the production
process; also that the jobs produced by the recycling industry can be a poor trade for the jobs lost in logging,
mining, and other industries associated with virgin production; and that materials such as paper pulp can only
be recycled a few times before material degradation prevents further recycling. Proponents of recycling
dispute each of these claims, and the validity of arguments from both sides has led to enduring controversy.

Legal factors:
In order to ensure that all laws and regulations are followed in an organization, it is advisable to consult a
legal representative when doing business. Legal environments change between the district, city, state/province
and national levels. Complexities within certain industries can have a strong influence on the ease of doing
business, complicating administrative, financial, and regulatory processes, among others. These are related to
the legal environment in which firms operate.

In recent years in the UK there have been many significant legal changes that have affected firms' behaviour.
For example,
The introduction of age discrimination and disability discrimination legislation,
An increase in the minimum wage and greater requirements for firms to recycle.

Some of the laws or legal factors followed in an organization are as follows


Antitrust Law
Consumer Law
Discrimination Law
Employment Law
Health and Safety Laws
Industry/Domain specific laws and certifications
Intellectual Property Rights (IPR)

Different categories of law include:


Consumer laws: these are designed to protect customers against unfair practices such as misleading
descriptions of the product
Competition laws: these are aimed at protecting small firms against bullying by larger firms and
ensuring customers are not exploited by firms with monopoly power
Employment laws: these cover areas such as redundancy, dismissal, working hours and minimum
wages. They aim to protect employees against the abuse of power by managers
Health and safety legislation: these laws are aimed at ensuring the workplace is as safe as is reasonably
practical. They cover issues such as training, reporting accidents and the appropriate provision of safety
equipment

Vodafone-Hutchison Tax Case


Vodafone was embroiled in a $2.5 billion tax dispute with the Indian Income Tax Department over its
purchase of Hutchison Essar Telecom services in April 2007. It was being alleged by the Indian Tax authorities
that the transaction involved purchase of assets of an Indian Company, and therefore the transaction or part
thereof was liable to be taxed in India.

Vodafone Group Plc. entered India in 2007 through a subsidiary based in the Netherlands, which acquired
Hutchison Telecommunications International Ltd.'s (HTIL) stake in Hutchison Essar Ltd (HEL), the joint
venture that held and operated telecom licenses in India. This Cayman Islands transaction, along with several
related agreements, gave Vodafone control over 67% of HEL and extinguished Hong Kong-based Hutchison's
rights of control in India, a deal which cost the world's largest telco $11.2 billion at the time.

The crux of the dispute had been whether or not the Indian Income Tax Department has jurisdiction over the
transaction. Vodafone had maintained from the outset that it was not liable to pay tax in India, and that even if
tax were somehow payable, it should be Hutchison that bore the tax liability.

In January 2012, the Indian Supreme Court passed the judgment in favour of Vodafone, saying that the Indian
Income tax department had "no jurisdiction" to levy tax on overseas transaction between companies
incorporated outside India. However, the Indian government thinks otherwise. It believes that if an Indian
company, Hutchison India Ltd., conducts a financial transaction, the government should get its tax out of it.
Therefore, in 2012, India changed its Income Tax Act retrospectively and made sure that any company, in
similar circumstances, is not able to avoid tax by operating out of tax havens like the Cayman Islands or
Liechtenstein. In May 2012, Indian authorities confirmed that they were going to charge Vodafone about
Rs. 20,000 crore (US$3.3 billion) in tax and fines. The second phase of the dispute is about to start.

Patent Litigation between Apple and Samsung:


Apple Inc. v. Samsung Electronics Co., Ltd. was the first of a series of ongoing lawsuits between Apple Inc.
and Samsung Electronics regarding the design of smartphones and tablet computers; between them, the
companies made more than half of smartphones sold worldwide as of July 2012. In the spring of 2011, Apple
began litigating against Samsung in patent infringement suits, while Apple and Motorola Mobility were
already engaged in a patent war on several fronts. Apple's multinational litigation over technology patents
became known as part of the mobile device "smartphone patent wars": extensive litigation in fierce
competition in the global market for consumer mobile communications. By August 2011, Apple and Samsung
were litigating 19 ongoing cases in nine countries; by October, the legal disputes expanded to ten countries.
By July 2012, the two companies were still embroiled in more than 50 lawsuits around the globe, with billions
of dollars in damages claimed between them. While Apple won a ruling in its favor in the U.S., Samsung won
rulings in South Korea, Japan, and the UK. On June 4, 2013, Samsung won a limited ban from the U.S.
International Trade Commission on sales of certain Apple products after the commission found Apple had
violated a Samsung patent, but this was vetoed by U.S. Trade Representative Michael Froman.

Relationship between PESTLE and SWOT


SWOT analysis is another widely used tool for planning. It is used to assess the Strengths, Weaknesses,
Opportunities and Threats to an organization. Strengths and Weaknesses pertain to internal factors whilst
Opportunities and Threats are the product of external factors.

PESTLE is generally used before SWOT to identify external factors. SWOT is an assessment of a business or a
proposition, whether your own or a competitor's; its initial step looks inwards at the organization. PESTLE
assesses a market, including competitors, from the standpoint of a particular proposition or business; it
looks outwards.

PESTLE becomes more useful and relevant as the business, proposition or industry under analysis grows
larger and more complex.
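The hand-off from PESTLE to SWOT described above can be sketched as a simple transformation: each external finding, tagged favourable or adverse, lands in the Opportunities or Threats quadrant of SWOT. The factor entries below are illustrative placeholders, not a real assessment:

```python
# Hypothetical PESTLE findings, each tagged "+" (favourable) or "-" (adverse).
pestle = {
    "Political":     [("stable government policy", "+")],
    "Economic":      [("recession risk", "-")],
    "Social":        [("ageing population", "-")],
    "Technological": [("new ICT platforms", "+")],
    "Legal":         [("stricter labour law", "-")],
    "Environmental": [("demand for green products", "+")],
}

# External PESTLE findings seed only the external half of SWOT;
# Strengths and Weaknesses would come from an internal review.
swot = {"Opportunities": [], "Threats": []}
for factor, findings in pestle.items():
    for finding, impact in findings:
        quadrant = "Opportunities" if impact == "+" else "Threats"
        swot[quadrant].append(f"{factor}: {finding}")

for quadrant, items in swot.items():
    print(quadrant, "->", items)
```

The design choice reflects the text: PESTLE looks outwards and runs first, so its output naturally feeds the two external quadrants of the later SWOT assessment.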

1.2 Introduction to product development methodologies and management

1.2.1 Overview of products and services


A product can be described as an object which the producer or supplier offers to potential customers in
exchange for something (conventionally money, which is exchangeable as a store of value). The product may be
goods or a service: it can be the next breakthrough computer chip, or a new holiday package put together
by a travel agent. In earlier days the exchange system was known as the barter system. In any case, for
exchange to occur there must be adequate demand for the product. Where demand exists, producers get an
opportunity to supply the required product to the market, and potential markets can be developed where
buyers and suppliers do business and build a mutually satisfying relationship. However, when launching any
new product, it must be noted that the product, being new, often carries risk.

In large-scale business, it is practically impossible for the producer and the buyer to come into direct
physical contact. In order to facilitate the exchange of goods, elaborate channels of distribution are
required (this is where the supply chain comes into the picture). The above statements do not apply to all
kinds of business, e.g. service providers, where the seller makes a direct exchange with the buyer.

One should carefully note that the value of a product does not depend on the extent of contact between the
buyer and the producer. The value a product possesses always depends on the extent of consumers'
willingness to exchange; this is why it is said that supply always depends on demand. During the
second half of the 20th century, marketing strategy changed. Marketing techniques reflected the
potential for excess supply in the industrialized economies, where technological advancement had created
scope for productivity gains. The possibility of excess supply reduces the value of a product, because
unconsumed supply becomes worthless. Hence, producers are never certain about the demand for their
product.

Figure 1.11 Product definition

Figure 1.11 shows the relation between buyers, producers and the intermediate processes and techniques.
Product definition comprises the customer, the company and quality function deployment (QFD). This also
includes the definition process, which in turn may include QFD. Quality function deployment is concerned
with customers and thus links customers with companies. The company in turn determines the definition
process for the product.

Defining product by nature of demand


All products begin from a core benefit, and a product can be represented as a set of concentric circles, as
shown in figure 1.12.

Figure 1.12 Concentric circles representing products


Source: Production design and manufacturing

Core benefit: It represents the basic theme of any product. In other words it represents the main service or
benefits which are derived from the consumers use.

Generic product: It is the basic version of a product, i.e. it does not yet have the various features which
distinguish the product and enable consumers to receive the desired benefits.

Expected product: These products contain properties or characteristics which are usually expected and
accepted by buyers.

Augmented product: These products contain additional benefits and services beyond the expected ones.
Competing producers compete with each other on the basis of these additional benefits.

Potential products: These products have undergone all possible augmentations over the course of time and
with increases in demand. The potential product not only meets all the consumer's needs
but also delights the consumer.

Classification of product

For producers it is very important to understand the demand for their products, and to understand the demand it
is essential to classify the products. Products can basically be classified into the following three categories:

Consumer products:
Consumer product refers to any article or component part which is produced or distributed for sale to a
consumer to be used in or around a residence or school, in recreation, or for the personal use, consumption or
enjoyment of a consumer. A consumer product does not include any article which is not customarily
produced or distributed for sale to a consumer for its use, consumption or enjoyment. Hence, a consumer
product can be any tangible commodity that is produced and subsequently consumed by the consumer to
satisfy their needs; these goods are ultimately consumed rather than used in the production of another good.

Examples of such products are weight loss pills, digital cameras, iPods, laptops, smart cell phones, GPS
navigation devices, beauty products, video games, DVD players, and cable television.
Industrial products:
Industrial product refers to any item that is used in manufacturing or industry. These are goods produced
in a factory with the help of machinery and technology, and are usually high-cost products. They are used
for the production of consumer products, for example various equipment and industrial set-ups.

Examples of products in this category vary according to what is being manufactured. Some common
examples are carts or dollies, tapes or adhesives, ladders, lifts, storage lockers, cabinets, scaffolding,
personal protection equipment, office supplies, light fixtures, and tools.

Specialty products:
Specialty goods represent the third category of product classification and are unique in nature. These
products are the unusual and luxurious items available in the market, i.e. products to which buyers are
habituated and for whose purchase they may make special efforts.

Specialty products are purchased with a predetermined pattern in mind, i.e. a customer will only go to
purchase a product of a specific brand. For example, a customer will prefer to visit a particular store
just because the product of his/her liking is available in that very store. Here price is almost never a
determining factor in choosing between the products. Sellers of specialty goods also need not be conveniently
located, because buyers will seek them out, even if it involves considerable effort. Some products may be
considered shopping goods by some buyers, and specialty products by other buyers. Examples of specialty
products are a house, a holiday package, etc.

1.2.2 Types of product development

Product development can be considered as the set of activities which begins with the perception of a market
opportunity and ends with the production, sale and delivery of a product. It is the process of creating a new
product, or refurbishing an old one, to be sold by a producer to its buyers. This is a very wide
concept in which the efficient and effective generation and development of ideas through a process
leads to new products.

Importance of product development


It is big business, worth hundreds of billions of dollars
New products are the answer to an organization's biggest problems
A successful new product does more good for an organization than anything else
It is a great life; it is fun and exciting

Characteristics of successful product development


Product quality
How good is the product resulting from development? Does it satisfy customer needs? Is it robust and reliable? Product quality is reflected in market share.
Product cost
What is the manufacturing cost? It includes capital equipment and tooling.
Development time
How long did the PD effort take?
Development cost
How much was spent on the PD effort?
Development capabilities
Did the team/firm acquire any experience for future projects?

This whole product development process or the concept is summarized in table 1.1 and figure 1.12:

Enhancement (product improvement): Modify features to keep the product competitive in a familiar market; slight changes to address any flaws. EXAMPLE: software updates, dual-tone car interiors, replacing black car bumpers with body-coloured bumpers.

Derivatives of existing product platforms: Extend existing products into a new market; use new modules on an existing platform. EXAMPLE: creative new apps for the iPhone, recording live TV shows for future viewing, cars with parking sensors, navigation systems, rear-view cameras, etc.

New product platform: Major development, but in familiar markets and product categories; create a new product family. EXAMPLE: iPhone 3 to iPhone 4, Maruti 800 to Maruti Alto 800, or Hyundai i10 to Hyundai i10 Grand.

Breakthrough products: Radically different product or technology for a new market; more risk. EXAMPLE: Virgin Galactic space programme, launch of the Nano by Tata Motors.

Table 1.1. Product development processes
Source: TCS

Figure 1.12 Types of product development


Source: TCS

There are various types of product development process, such as new product development, re-engineering,
reverse engineering, design porting and homologation, etc. Further in this chapter, we will study these
product development processes in detail.

New Product Development


Before understanding the new product development process, we need a good understanding of what is
new. A new product could be anything: a new model of car, a reduced version of a branded product, or
even a totally new concept. Various marketing experts have categorised new products as follows:

1. A product which is already available in market but launched by a new producer.


2. A product which is advanced version of existing product and can replace the existing product.

New Product Development Process


New product development consists of the following phases:
1. Market research
2. Idea Generation
3. Idea Screening
4. Concept Development and Testing
5. Business Analysis
6. Product Development
7. Market Testing
8. Commercialization

Market research: Manufacturing companies conduct surveys in order to gather information
about markets and customers, which is a very important component of the product development process. This
survey is a key factor in maintaining competitiveness, providing the information needed to
identify and analyse the market need, market size and competition, i.e. whether the product to be launched
by the company will be accepted by the customers or not, and to identify the present taste of the customers. Once
the need or demand is identified, the product development process starts.

Idea generation: From the name, it might be understood to be simply about generating new ideas, but in actuality it is a bit
different. Basically it is a process of identifying and activating the sources of new ideas, with the ultimate goal
of developing a bank of ideas. Various departments of an organisation, like Research & Development, Design,
Engineering, Sales and Marketing, all contribute to the process of idea generation. Various bodies
outside the organisation, like competitors, customers, distributors and educational institutions, may also
contribute to this cause.

There are various methods of idea generation, such as:

1. Brainstorming: In this method, group discussions of 6 to 10 people are conducted. The group members
   are required to be open and frank while making suggestions. Within a short time period, close to 100 ideas
   are generated. After the generation of ideas, the group starts evaluating all the ideas for practicability.
2. Market Analysis: In this method market research is carried out to identify the needs and demands of
   customers. This method is used by most companies because it helps them meet customers'
   needs in the best possible way. It is now so popular that some companies have
   started outsourcing the job of market research to third parties.
3. Futuristic Studies: In this method forecasting is carried out on the basis of various factors like changes in
   customer lifestyle, social trends, economic trends, etc.
4. Management Think Tank: The top managers of the various departments of an organisation generate ideas for
   new products; these ideas are also based on consumers' needs.
5. Global Study: Many multinational companies transfer ideas from one country to another. This method
   helps implement an idea that succeeded in one country in another country. But this method is not
   always successful; sometimes an idea that succeeds in one country fails in another.

Idea Screening: In this stage, a first assessment of the idea is made with reference to the capability of the company to
make the product. For this purpose, people from different departments are again involved. The reason for
involving all departments is that, in order to make the new idea fit into the company's overall strategy, the idea
should be practicable for all the departments of the company. During this process, an idea is checked for
its potential for profitability, marketability, cost of production, etc.

Concept Development and Testing: After screening, ideas become clearer concepts. Now the concept is tested for
the company's capability and for customer satisfaction. At this stage the company needs to know what additional
facilities are required for production of the new product.

Business Analysis: This is the stage where the major go/no-go decision is taken. This decision is very crucial,
because in the next stage prototype development will consume a lot of money, and it is required to ensure that the
expenditure on prototype development is worthwhile. It requires information from different sources, such as:

Market analysis with details of market share, competition, etc.


Technical, economic and R&D aspects, etc.
Fulfillment of corporate objectives.

At this stage, the new product is again checked for compatibility with existing capabilities and for its
potential for sales, profit, market growth, competitiveness, human welfare, etc.

Product Development and Testing: In this stage, physical prototypes of products are made. Then the following
tasks are carried out:

Assessment of the product with reference to its functional performance.
Implementation of changes required in product specifications and manufacturing configurations.
Assessment of the product for its overall impression on customers.

Basically, there are two approaches to carrying out tests on products:

1. Alpha testing: This testing is carried out at the producer's end.
2. Beta testing: This testing is carried out at the customer's / consumer's end.

Since a lot of investment is required for prototype development and testing, many companies prefer
to use computer-aided tools. Apart from reducing the cost of manufacturing, computer-aided
manufacturing systems also help in making corrections to manufacturing configurations wherever required.

Test marketing: In this stage, a small-scale test of the product is carried out. An additional result of this testing
is a measure of the product's appeal under the combined effect of salesmanship, advertising, sales promotion,
distributor incentives, public relations, etc. With advancements in technology and approach, computer-based
simulation models for test marketing have come into the picture. These models use information
like consumer awareness and repeat purchases, in the form of stored data, to carry out the required test.

Scope for adopting this stage:


If the loss to the company due to failure of the new product is much greater than the cost of the proposed test
marketing procedure, test marketing is preferred.
If the loss due to failure is much greater than the profit due to success of the new product, test
marketing is preferred.
If the investment in a full-fledged commercial launch is much greater than in a test launch, test
marketing is preferred.
If competitors are good at copying the technology from the test products, test marketing is not
preferred.
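These cost comparisons can be expressed as a simple decision rule. The sketch below is only illustrative: the function name, the parameters and the strict greater-than comparisons are assumptions made for the example, not part of any formal model in this handbook.

```python
def should_test_market(loss_if_failure, test_cost,
                       full_launch_cost, test_launch_cost,
                       competitors_copy_easily=False):
    """Illustrative decision rule for whether to test-market a product."""
    # If rivals can easily copy the technology from test products,
    # test marketing is not preferred regardless of the cost figures.
    if competitors_copy_easily:
        return False
    # Test marketing is preferred when the loss from a failed full launch
    # exceeds the cost of the test itself, and a full commercial launch
    # costs much more than a test launch.
    return loss_if_failure > test_cost and full_launch_cost > test_launch_cost

# A risky, expensive launch with a comparatively cheap test favours testing.
print(should_test_market(loss_if_failure=5_000_000, test_cost=200_000,
                         full_launch_cost=8_000_000, test_launch_cost=500_000))  # prints True
```

In practice such a rule would use estimates rather than exact figures, which is why the text frames these criteria as qualitative comparisons rather than thresholds.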

Objectives of test marketing:


Test marketing is carried out to gain a better understanding of:
Consumer needs
Effect of advertising and promotion
Market potential
Worthiness of the product relative to its price.

Process of test marketing:


The process of test marketing can be divided into the following segments:
City selection
Sales representative selection
Test duration selection
Implementation

The above selections should be made carefully on the basis of the type of product and its scope of
application. Wrong selections may lead to incorrect or non-universal feedback. Based on the
feedback, companies have the following three options to choose from:
1. Completely give up on the product
2. Implement more testing
3. Commercialization

Commercialization: This is the final stage of new product development, in which the product is finally
launched in the market. There are four basic concerns: when, where, how and to whom to launch the product. These
decisions are made by considering the following points:

Seasonality of product

Effect on a commercial event
Capability of the new product to replace the existing product.

There are two methods of launching the product:


1. Immediate national launch: In this method, the product is launched over a large area in one go.
   Reduction in launch cost is one of its advantages, but there is a major risk involved: problems may
   arise which were not caught during test marketing.
2. Rolling launch: In this method, the product is initially launched in one or two smaller, well-known areas,
   and the number of such areas is then gradually increased. The risk of facing unforeseen problems is lower
   than in an immediate national launch.

After the launch, it may initially happen that expenditure exceeds sales; but later on, in the case of a
successful launch, the sales revenue should increase. If the sales are not growing significantly, some major
decisions are required, such as revising the strategies or the product itself, or pulling the product temporarily or
permanently from the market.

Re-Engineering

Re-engineering is a field of engineering which deals with the transformation and reconstitution of an existing
product in order to derive improved performance, efficiency and capability from the product at low cost and
maintenance. A re-engineering process is shown in figure 1.13:

Figure1.13. A Re-engineering process

Re-engineering is based on rethinking and the radical redesign of products, to achieve drastic
improvements against up-to-date performance measures such as cost, quality, service and speed.
Thus, it can be inferred that the technique mainly focuses on continuous improvement of the quality and
performance of the products developed. The process of discarding defective or old products and launching
an entirely new product into the market faces huge competition from the products of rival companies, and at
the same time the cost involved in the production processes is high. Hence, re-engineering serves as an
alternative technology for developing the product. This not only saves time in the production processes, but
also reduces the cost of the product, thereby improving the product cost-effectively, so that the company
can launch the product with a suitable price tag that attracts customers without incurring any loss to the
company itself. In turn, this also helps to maintain the image of, and faith in, an organization which has built
a huge reputation among its customers.

An example of re-engineering process is shown in the figure 1.14.

Figure 1.14. Example of reengineering process: Software Life Cycle

The main reasons for re-engineering a product can be sorted out as follows:
1. To fight increasing competition
2. To get products/ services faster to the market
3. To build closer relationships with customers and suppliers and
4. To reverse declines in market share/ profits.

The effects of re-engineering can be listed as:

1. Improvement of internal and external customer satisfaction
2. Improvement of competitiveness

3. Improvement of consistency of delivery
4. Improvement of service
5. Improvement of processes
6. Radical reduction of process cost
7. Reduction of process cycle time
8. Reduction of errors
9. Reduction of bureaucracy

Case study: Re-engineering


Yamaha, with a view to stepping into the scooter market in India, launched its first model in this segment,
the Yamaha Ray, in the year 2012. The model was launched to target female customers, and the great success
of the launch (70,000 scooters sold) was followed by the next re-engineered model of the Ray series, the
Yamaha Ray Z, which was designed to attract the male customers of the country. The technical specifications
and features of the Ray Z are almost identical to those of the Yamaha Ray, delivering the same performance. The
basic difference between the two designs is that the Ray Z has a masculine look, with combinations of red and black,
black and white, and also full black colour, giving it a much sportier appearance compared with the Yamaha Ray.
So the only differences between the Yamaha Ray Z and Yamaha Ray models are in graphics and colours; the
engine and performance remain the same. With the added style, it attracts male customers more. Technical features of
the Yamaha Ray Z are given in table 1.2.

Features of the model Yamaha Ray Z


Engine 2-valve 4-stroke, SOHC
Displacement 113cc
Max power 7 Bhp @ 7500 rpm
Max torque 8.1 Nm @ 5500 rpm
Top speed 85kph
Fuel consumption(city) 40 kmpl
Gears Automatic
Clutch Dry, Centrifugal
Dimensions(length x width x height) 1835.00 mm x 675.00 mm x 1090.00 mm
Weight 104.00 kg
Ground clearance 128.00 mm
Fuel tank 5.00 ltrs
Front suspension Telescopic
Rear suspension Unit Swing
Brakes (front) 130mm drum
Brakes (rear) 130mm drum
Self-start Yes
Indicators Low Fuel Indicator, Low Oil Indicator, Low
Battery Indicator
Head lamp 12V, 35W/35W
Horn Single/Mono
Wheel type Steel Wheels
Wheel size 10 inch 100/90 mm
Colors White, Red and Black
Table 1.2. Technical features of Yamaha Ray Z

Source: www.zigwheels.com

Reverse engineering
Reverse engineering is a process of developing a new product which starts with a physical product
existing in the marketplace, with a vision to redesign it for some observed market defect or intended
evolution. Reverse engineering entails the prediction of what a product should do, followed by modelling,
analysis, dissection, and experimentation on its actual performance. Redesign follows reverse engineering,
where a product is evolved to its next offering in the marketplace.

General reverse engineering and redesign methodology
The figure below shows the general composition of the reverse engineering and redesign methodology. There are
three distinct phases embodying the methodology, i.e. reverse engineering, modeling and analysis, and
redesign.

Figure 1. General reverse engineering and redesign methodology

This approach allows us to present the necessary material on how to understand a product. For example, it
is not likely that any company would order its design team to tear down the company's last product, which
the design team themselves designed a month earlier, just to understand how it works. While re-engineering
and reverse engineering focus on developing a new product based on existing products, there is another
process, based on incorporating new technologies into an existing product to develop a new product.
This process is known as design porting.

Design porting & homologation

Design porting
Design porting, or migration, is a process that is gaining increasing importance in today's design teams.
Design porting typically involves taking a design from a hardware prototype to a production cell-based,
application-specific integrated circuit (ASIC). Design porting could also include migration to a cost-reduction
technology.

Several factors drive design porting. For example, an increase in design complexity results in a longer
simulation-based verification process. This increase in verification time has reduced the practicality of
simulation-only verification and has simultaneously raised the demand for hardware prototyping.
Another factor that drives design porting is the increasing mask costs that make it harder for companies
to enter new markets.

This design porting approach works if designs are architected properly from the start; if not, the
designs can quickly become locked into a single implementation technology or vendor. If an unplanned
porting becomes necessary, significant technical challenges can arise later in the design cycle.

Design-for-reuse (DFR) concepts are well documented in many publications which focus on a single
subsystem in a design and how it is reused in subsequent designs. Designs that will later be ported to
standard-cell architecture require more than just good design-reuse practices.

Design porting finds many applications. For example, integration of a new technology, provided the
existing applications are ensured to migrate seamlessly to new devices (whether enterprise-class ones a
company buys or those that people bring from home); or mobile application development and application
optimization, so as to keep pace with change and combat obsolescence. Porting thus provides uninterrupted
service at the same level of quality and availability as before.

The technology also has numerous benefits along with these positive features:
1. It accelerates technology rollout, for example by preparing applications to run on, and quickly adopting,
   new technology in old appliances
2. It allows information to be moved to a point of customer interaction
3. It helps the customer to experience new technology post-deployment
4. It ensures that the requisite operational results are met and maintained
5. It allows quick migration to new technology without any compromise on application
   performance
6. It allows porting to a new technology while keeping the required applications that are in use
   at peak performance

Once the porting of a new technology into an existing product is completed, the next task is to test the
product against the requisite standards.

Design homologation
Design homologation is the certification provided for a product or specification to indicate that it meets
regulatory standards. In the world of manufacturing, there are companies which have earned
specialization in helping manufacturers achieve regulatory compliance. The services of these
companies include the description and understanding of standards and specifications, support in the audit
and approval of plant facilities, material testing and certification, and transformation of manuals, legal
mandates and other written material.

As an example, design homologation in case of automobile is the process of certifying that a particular
automobile is safe and matches the specified criteria set by the government for all vehicles made or imported
into a country. This practice is accepted worldwide. In India, the process of providing clearance is performed by
the Pune-based Automotive Research Association of India (ARAI) or the Vehicle Research and Development
Establishment (VRDE), Ahmednagar and by the Central Farm Machinery Training and Testing Institute, Budni,
Madhya Pradesh for tractors. The tests essentially ensure that the vehicle matches the requirements of the
Indian market in terms of emission, safety and road-worthiness as per the Central Motor Vehicle Rules.
All original car models running in India, whether Indian made or imported cars, for example TATA Indica or
Ford Mondeo, are homologated.

Need for design porting and homologation


The modern-day scenario is such that each product has to be validated with up-to-date technologies.
Moreover, the ever-increasing competition between companies has forced each company to provide the
latest technology available in the market to its customers, and completely building a new product would
increase manufacturing as well as material costs. Hence, to overcome this problem and to remain
competitive, incorporating recently available or developed technologies into the existing product, while
maintaining the required specifications and regulations, is the most suitable process.

An example of design porting and homologation is the incorporation of the DTS-i engine in the Bajaj Pulsar,
which not only improved the performance and efficiency of the motorcycle but also attracted
customers, while still obtaining certification to the required standards.

While the various product development technologies like re-engineering, reverse engineering and design
porting are used to develop new products, there are also different product development methodologies, like
the Waterfall model, Over-the-Wall design, the Stage-Gate process, Agile methods, etc.

1.2.3 Overview of Product Development methodologies

A product development methodology is the process by which an engineering team will build a given product.
Different companies approach the delivery of product requirements in different ways.
The various methodologies include:

1) Waterfall methodology: The Waterfall methodology is a sequential development process, where progress
flows steadily toward the conclusion (like a waterfall) through the phases of a project. This involves fully
documenting a project in advance, including the user interface, user stories, and all the property variations
and outcomes. This methodology is resistant to change: any change is expensive because most of the time
and effort is spent early on, in the design and analysis phases. This is a major drawback of the
methodology, so the practical outcome may be quite different from the prediction. The various phases in this
methodology (figure 1.15) are:

a) Analysis: First the team determines the requirements and fully understands the problems. The team
attempts to ask all the questions and secure all the answers they need to build the product requirement.
b) Design: A technical solution is developed to the problems set by the product requirements, including
scenarios, layouts and data models. This phase is usually accompanied by documentation for each
requirement, which enables other members of the team to review it for validation.
c) Implementation: After the approval of the design, technical execution is carried out. This is the shortest
phase.
d) Verification: Upon completion of full implementation, inspection needs to be carried out before the
product can be released to customers. The testing team will use the design documents, personas and use
case scenarios delivered by the product manager in order to create their test cases.
e) Maintenance: Eliminating defects and controlling the performance of the product before delivery.

Analysis → Design → Implementation → Verification → Maintenance

Figure 1.15. Various phases of Waterfall Methodology
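The strictly sequential nature of these phases can be sketched in a few lines of code. This is only an illustration of the idea; the class and method names are invented for the example and do not come from this handbook:

```python
PHASES = ["Analysis", "Design", "Implementation", "Verification", "Maintenance"]

class WaterfallProject:
    """A phase may only be completed after every earlier phase is done."""
    def __init__(self):
        self.completed = []

    def complete(self, phase):
        expected = PHASES[len(self.completed)]
        if phase != expected:
            # Enforces the waterfall rule: no skipping ahead, no going back.
            raise ValueError(f"cannot do {phase!r} now; {expected!r} comes first")
        self.completed.append(phase)

project = WaterfallProject()
project.complete("Analysis")
project.complete("Design")
print(project.completed)  # prints ['Analysis', 'Design']
```

Calling `project.complete("Verification")` at this point would raise an error, which mirrors why late changes are so expensive under this methodology: work cannot jump backwards or forwards across phases.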

2) Agile methodology: This is an iterative approach to product development that is performed in a
collaborative environment by self-organizing teams. The methodology produces high-quality software in a
cost-effective and timely manner to meet stakeholders' changing needs. In this method every product release
begins with a list called a backlog, which consists of prioritized requirements, i.e. a list of work to be
done in order of importance. With this methodology the team always adjusts the scope of work to ensure
that the most important work is completed first. The backlog is a dynamic set of requirements that can
change weekly (depending on the length of the iterations). So instead of delivering the entire backlog at the
end of a product release, the work is divided into smaller amounts of delivered requirements, taken
from the backlog in order of importance. These smaller amounts are known as iterations (or sprints).
Iterations have short time frames that last from one to four weeks, depending on the team's experience.

A key element of an iteration is that, unlike the backlog, the priorities regarding which requirements should be
built do not change within the iteration (for example, during the two-week period); the list should only
change from one iteration to another. This methodology accepts that project change is inevitable. The use of
small iterations allows changes to be absorbed quickly without inflicting significant project risk. The backlog
order can adjust as business priorities evolve; with the next iteration, the team can adapt to those priorities.
In the context of a product release, the items that are the most technically difficult (i.e., that hold the larger
risk) tend to be done in early iterations to ensure that the risk can be minimized. This approach to mitigating
risk is a key differentiator from the Waterfall methodology. Instead of adjusting during the development
process, the Waterfall methodology involves planning and researching each task in advance. However, should

items need to change after the investment of this upfront work, revision tends to be resisted by the team and
can also be expensive. The various steps in agile development (figure 1.16) are:

1) Project approval
2) Pre-Iteration Planning
3) Iterations (Iteration Execution, Iteration Planning, Iteration Wrap-up)
4) Post-Iteration Consolidation
5) Release

Figure 1.16. Various steps of Agile Methodology
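The backlog-and-iteration mechanics described above can be sketched as follows. The requirement names, the numeric priority scheme (lower number means more important) and the sprint capacity are all assumptions made for this illustration:

```python
# Backlog: (priority, requirement) pairs; lower number = more important.
backlog = [
    (3, "Export report as PDF"),
    (1, "User login"),
    (2, "Product search"),
    (5, "Dark mode"),
    (4, "Email notifications"),
]

def plan_iteration(backlog, capacity):
    """Pick the highest-priority items that fit into one sprint.

    The chosen items are fixed for the duration of the iteration;
    the remaining backlog can still be re-prioritized between sprints."""
    ordered = sorted(backlog)            # most important first
    return ordered[:capacity], ordered[capacity:]

sprint1, backlog = plan_iteration(backlog, capacity=2)
print([name for _, name in sprint1])     # prints ['User login', 'Product search']
```

Between iterations the remaining backlog can be re-sorted as business priorities change, which is exactly the flexibility this section contrasts with the Waterfall approach.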

3) Over-the-Wall Methodology (figure 1.17): During the industrial revolution, technology became more
complex. The complexity forced employees of companies to specialize in different areas of the product design
process. No longer could one person handle multiple responsibilities like the design, manufacture and sales of
a product, and thus the era of specialists prevailed. Large companies began to organize departments with
different responsibilities. Some examples of departments and their responsibilities are shown below:

a) Marketing: This department tries to understand the future needs of the customers and keeps the
   organization updated about the current market condition.
b) Research: This department develops the technology to meet the needs identified by marketing.
c) Design: This department uses the technology developed by research to design products to meet the
   needs of the customer.
d) Manufacturing: This department develops the methods to manufacture the products designed by the
   design department.
e) Sales: This department develops plans and executes the plans to sell the products to the customer.

In the over-the-wall methodology, each department works on a product until they have completed their tasks and
then they hand off the project to the next department. Not only was this serial process very slow, but it also
caused many problems when the communication between departments broke down. The breakdown of
communications led to projects being thrown back over the walls that divided the departments for rework.

Some typical reasons for reverse flow (figure 1.18) are listed below:
1. Marketing department specifies a need that research cannot develop a technology to meet.
2. Research department develops a technology that is too expensive or clumsy to use in a product.
3. Design department creates a design that is very difficult and expensive to manufacture.
4. After many changes to meet the demands of each department, manufacturing department produces a
product that does not solve the customer's problem or is too expensive.

Figure 1.17. The Over-the-Wall Design methodology

Figure 1.18. The Over-the-Wall Design methodology with reverse flow

4) V-model: The V model (figure 1.19(b)) is a modified version of the Waterfall method (figure 1.19(a)). It
was put forth by Paul E. Brook in 1986. Unlike the Waterfall method, this one was not designed on a linear
axis; instead the stages turn back upwards after the coding phase is done so that it makes a V shape, and
hence the name V Model. In other words, in a V-model there are extra validation stages after the normal
stages of the Waterfall model have been completed (figure 1.19).

The V-model includes these three levels:

a) The Life Cycle Process Model (Procedure): It determines "What has to be done?" This level determines
what activities are to be performed, what should be the results of these activities, and what should be the
content of these results.
b) Allocation of Methods (Method): It determines "How is it done?" At this level it is determined what
methods are to be used to carry out the activities.
c) Functional Tool Requirements (Tool Requirement): It determines "What is used to do it?" At this level
the characteristics of the tools used to perform the activities are determined.

At all levels the standards are organized according to the area of functionality. These areas of functionality
are called sub models. There are four sub models:

1. Project Management: This sub-model initiates, plans, monitors and controls the project. It also passes
information to the other sub-models. It comprises 14 activities:
Project Initialization, Placement/Procurement, Contractor Management, Detailed Planning, Cost/Benefit
Analysis, Phase Review, Risk Management, Project Control, Information Service/Reporting,
Training/Instruction, Supplying Resources, Allocation of Work Orders, Staff Training, Project Completion.

Figure 1.19(a). The Waterfall Model (Requirement, Specification, Architectural Design, Detailed Design, Code)

Figure 1.19(b). The V model

Figure 1.19. V-Model: an extension of the Waterfall Model

2. System Development: This sub-model controls the activities of developing the system. It comprises 9
activities: System requirement analysis, system design, Software (SW)/Hardware (HW) requirement analysis,
preliminary SW design, detailed SW design, SW Implementation, SW Integration, System Integration,
transition to utilization.

3. Quality Assurance: This sub-model specifies the quality requirements and informs the other sub-models
about them. It determines the test cases and criteria to assure that the products and the processes comply with
the standards. It comprises 5 activities: QA initialization, assessment preparation, process assessment of
activities, product assessment, QA reporting.

4. Configuration Management: This sub-model controls the generated products. It ensures that a product
can be identified at any time. It comprises 4 main activities:
CM Planning, product and configuration management, change management, CM services.
The various steps in the V-model are shown in figure 1.19. These are:

1. Requirement Analysis: In this very first step of the verification process the project and its function are
decided. So a lot of brainstorming and documentation is done to reveal the requirements to produce that
program or product. During this stage the employees are not going to discuss how it is going to be built; it
is going to be a generalized discussion and a user requirement document is put forward. This document
will carry information regarding the function of the system, performance, security, data, interface etc.
This document is required by the business analysts to convey the function of the system to the users. So
it will merely be a guideline.

2. System Design: In this phase, the possible design of the product is formulated. It is formulated after
keeping in mind the requirement notes. While following the documents, if there is something that
doesn't fit right in the design, then the user is made aware of it and changes are accordingly planned.
Diagrams and a data dictionary are also produced here.

3. Architecture Design: The architecture design, also known as the computer architecture design or the
software design, should realize the modules and the functionality of the modules which have to be
incorporated.

4. Module Design: In the module design, the architectural design is again broken up into sub-units so that
they can be studied and explained separately. The units are called modules. The modules can be coded
separately by the programmer.

The Validation Phases of the V model


1. Unit Testing
A unit in the programming system is the smallest part which can be tested. In this phase each of these
units is tested.

2. Integration Testing or Interface Testing


In this phase the separate entities or units will be tested together to find out the flaws or errors in the
interfaces.

3. System Testing
After the previous stage of interface testing, in this phase it is checked if the system meets the
requirements that have been specified for this integrated product.

4. Acceptance Testing
In the acceptance test, the integrated product is put against the requirement documents to see if it
fulfills all the requirements.

5. Release Testing
Here the conclusion is drawn as to whether the product or software that has been created is suitable for
the organization.

Advantages of the V Model
The biggest advantage of using the V Model is that unlike the Waterfall model every stage is tested.

Disadvantages of the V Model
It assumes that the requirements do not change.
The design is not authenticated.
The requirements are not verified.
At each stage there is a potential for errors.
The first testing is done after the design of modules, which is very late and costs a lot.

5) Stage-Gate Methodology: It is widely accepted that in order to develop a new product and finally launch it
into the market, a number of activities need to be performed. It has been observed that the phased project
planning process, also known as the phase review process, had a lot of disadvantages. But when the phase-
review process is executed by Cross-Functional Teams (CFTs), it offers a number of benefits such as reducing
risk, easing the task of setting goals toward completing each phase, and improving focus on a particular phase.
A cross functional team is a group of members who can perform a large number of functions like developing,
project managing, testing etc. One such process gaining wide acceptance is known generically as Stage-Gate.
A stage-gate process applies a consistent planning and assessment technique throughout the process. Phase
reviews are conducted at the end of each phase to assess the work carried out in the current phase, approving
the proceeding to the next phase and then planning for the resourcing and execution of the next phase. This
philosophy is used in the development of the phase gates for the whole process (or the Protocol). Phase gates
are classified as either soft or hard, with the soft gates allowing the potential for agreement of results in the
process while ensuring that the key decision points in the process are respected. The various stages in Stage-
Gate (figure 1.20) are:

a) Idea: At the beginning, a thought of action to develop a project is created.
b) Preliminary Investigation: This stage involves determining the need of the customer.
c) Build Business Case: The customer's need is developed into an appropriate design solution.
d) Development: The design, after review, is manufactured into the product.
e) Test and Validate: Inspection and validation of the product is carried out.

Figure 1.20. The Stage-Gate Process
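The stage-gate sequence can be sketched as a simple walk through the stages, where a review at each gate decides whether the project proceeds. This is an illustrative model only; `run_stage_gate` and the decision dictionary are hypothetical:

```python
# Illustrative sketch of a stage-gate walk. Stage names follow the list
# above; the gate decisions would come from a phase review in practice.
STAGES = ["Idea", "Preliminary Investigation", "Build Business Case",
          "Development", "Test and Validate"]

def run_stage_gate(gate_decisions):
    """Walk the stages, stopping at the first gate that does not pass.

    gate_decisions: dict mapping stage name -> True (pass) or False (kill).
    Returns the list of stages that were actually executed.
    """
    completed = []
    for stage in STAGES:
        completed.append(stage)
        if not gate_decisions.get(stage, False):
            break  # project is killed or recycled at this gate
    return completed

# A project killed at the business-case gate never reaches Development:
outcome = run_stage_gate({"Idea": True,
                          "Preliminary Investigation": True,
                          "Build Business Case": False})
```

The point of the sketch is that work on a later stage cannot begin until the gate at the end of the current stage has been passed.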

6) Spiral Methodology: The spiral methodology is a risk-driven process methodology for software projects. The
model was first described by Barry Boehm in 1986 (figure 1.21). Based on the unique risk patterns of a given
project, the spiral model guides a team to adopt elements of one or more process methodologies, such as
waterfall, prototyping, incremental and other approaches. The risk-driven sub-setting of the spiral model
steps allows the model to accommodate any appropriate mixture of a specification-oriented, prototype-
oriented, simulation-oriented, automatic transformation-oriented or any other approach to software
development. Thus it is a methodology where choices based on a project's risks generate an appropriate
process model for the project. The incremental, waterfall, prototyping, and other process models are
therefore special cases of the spiral model that fit the risk patterns of certain projects.

Some Misconceptions regarding the Spiral Methodology:
A number of misconceptions arising from oversimplifications in the original spiral model diagram have been
listed by Boehm. The most dangerous of these misconceptions are:
1) The spiral model is simply a sequence of waterfall increments.
2) All project activities follow a single spiral sequence.
3) Every activity in the diagram must be performed, and in the order shown.

While these misconceptions may fit the risk patterns of a few projects, they are not true for most projects. To
better distinguish them from "hazardous spiral look-alikes", Boehm lists six characteristics common to all
authentic applications of the spiral model.

The Six Invariants:

Authentic applications of the spiral model are driven by cycles that always display six invariant
characteristics. The first of these is that key artifacts are defined concurrently rather than sequentially;
defining them sequentially (as in the waterfall model) is low-risk only when all of the following
assumptions hold:
1. The requirements are knowable in advance of implementation.
2. The requirements have no unresolved, high-risk implications, such as risks due to cost, schedule,
performance, safety, security, user interfaces, organizational impacts, etc.
3. The nature of the requirements will not change very much during development or evolution.
4. The requirements are compatible with all the key system stakeholders' expectations, including users,
customers, developers, maintainers, and investors.
5. The right architecture for implementing the requirements is well understood.
6. There is enough calendar time to proceed sequentially.

In situations where these assumptions do apply, it is a project risk not to specify the requirements and
proceed sequentially. The waterfall model thus becomes a risk-driven special case of the spiral model.

Perform four basic activities in every cycle:


This invariant identifies the four basic activities that must occur in each cycle of the spiral model:
1. Consider the win conditions of all success-critical stakeholders.
2. Identify and evaluate alternative approaches for satisfying the win conditions.
3. Identify and resolve risks that stem from the selected approaches.
4. Obtain approval from all success-critical stakeholders, plus commitment to pursue the next cycle.

Figure 1.21. Spiral Model developed by Boehm, 1988

7) Systems Engineering Methodology: This is an interdisciplinary approach and means to enable the
realization of successful systems. It focuses on defining customer needs and required functionality early in
the development cycle, documenting requirements, then proceeding with design synthesis and system
validation while considering the complete problem including operations, performance, test, manufacturing,
cost, and schedule. Systems engineering encourages the use of modeling and simulation to validate
assumptions or theories on systems and the interactions within them. Use of methods that allow early
detection of possible failure are integrated into the design process. At the same time, decisions made at the
beginning of a project whose consequences are not clearly understood can have enormous implications later
in the life of a system, and it is the task of the modern systems engineer to explore these issues and make
critical decisions. No method guarantees a decision taken today will still be valid when a system goes into
service years or decades after first conceived. However, there are tools and techniques that support the
process of systems engineering. Some examples of these tools are:

System model, Modeling, and Simulation,
System architecture,
Optimization,
System dynamics,
Systems analysis,
Statistical analysis,
Reliability analysis, and
Decision making

Taking an interdisciplinary approach to engineering systems (figure 1.22) is inherently complex since
the behavior of, and interaction among, system components is difficult to predict. Defining and
characterizing such systems and subsystems and the interactions among them is one of the goals of systems
engineering. In doing so, the gap that exists between informal requirements from users, operators, marketing
organizations, and technical specifications is successfully reduced or even bridged.

Figure 1.22. Scope of System Engineering


Which Product Methodology should be used?
This is not always an easy question to answer. Every client and project is different and has specific constraints
that should be analyzed to determine an appropriate methodology or combination of methodologies to
utilize. The best approach is to find the methodology that fits the client's environment and will guarantee
project success. Too rigid a process (or not enough process) will not provide the desired product on time or
within budget. The two principal approaches behind product development methodologies are the DMADV and
DMAIC approaches.

DMADV approach involves the following steps:

Define: Define the project goals and customer (internal & external) deliverables.
Measure: Measure and determine the customer needs and satisfaction.
Analyze: Analyze the process options to meet the customer needs.
Design: Design (in detail) the process to meet the customer needs.
Verify: Verify the design performance and ability to meet the customer needs.

DMAIC approach involves the following steps:

Define: Define the project goals and customer (internal & external) deliverables.
Measure: Measure the process to determine current performance.
Analyze: Analyze and determine the root causes of the defects.
Improve: Improve the process by eliminating defects.
Control: Control future process performance.
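The selection rule between the two approaches, explained in the next paragraph, can be expressed directly in code. A minimal sketch, assuming a single boolean captures whether the current process must be replaced (`choose_approach` and the step lists are illustrative encodings of the text above):

```python
# Step lists for the two Six Sigma-style approaches described above.
DMAIC = ["Define", "Measure", "Analyze", "Improve", "Control"]
DMADV = ["Define", "Measure", "Analyze", "Design", "Verify"]

def choose_approach(process_must_be_replaced):
    """Return the step list for the appropriate approach.

    DMAIC improves an existing process incrementally; DMADV designs a
    replacement process from scratch.
    """
    return DMADV if process_must_be_replaced else DMAIC
```

Note that the two approaches share their first three step names but diverge after Analyze, which is where the improve-versus-redesign decision takes effect.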

Whenever an incremental change to the current process is good enough we use the DMAIC approach, but
when the current process needs to be replaced we use the DMADV approach. These two approaches have
led to the development of many methodologies. Most of the methods are modifications of the Waterfall
methodology. Every methodology has some advantages and limitations. Following are some of the
important points of each methodology to be considered before adopting a particular methodology:

The Waterfall methodology is the basic product development methodology but it is resistant to change
and does not assist in risk mitigation.
The Agile methodology is an iterative process that considers that change is inevitable and prioritizes the
work to be carried out. This makes it an economic and time-efficient process.
The Over-the-wall methodology has become obsolete because of its limitations. Due to the complexity of
the product development work, it was divided among various departments, each of which would specialize
only in a particular phase of the development and would be ignorant about the other phases. It was a slow
process which also led to miscommunication between the various departments handling the development.
The lack of a centralized system to keep each department updated about the others led to this
methodology's downfall.
The V-model's main advantage over the Waterfall methodology lies in the fact that every step of the
Waterfall methodology is tested in this method. But the main disadvantage is that this method is
resistant to change and is expensive for carrying out simple developments because of its repeated testing;
despite the testing, the method may still be error prone. This has led to a decline in the use of this process.
The Stage-gate methodology, unlike the Waterfall methodology, does not plan everything beforehand but
plans and analyses at each stage of product development and is thus more responsive to change; at any
stage, if there is a new development, the methodology adapts accordingly.
The Spiral model combines more than one development methodology to develop the product, with the
choice of elements driven by the project's risks; this builds risk mitigation into the process, though the
model is complex to apply.
The Systems engineering methodology is an interdisciplinary approach towards product development.
Though the interdisciplinary approach is a complex one, it helps in bridging the gaps between the various
sections or departments involved in product development, in stark contrast to the Over-the-wall
methodology. This has given the methodology promising potential.

Software companies prefer the Agile and Stage-Gate methodologies over the Waterfall methodology,
considering their positive response to change and their attempts to mitigate risks.

1.2.4 Product life cycle

The product life-cycle is a series of different stages a product goes through, beginning from its introduction
into the market and ending at its discontinuation and unavailability. These stages are commonly represented
through the sales and profit history of the product itself, although there can be many other variables that
affect the lifespan of a product line. Between the initial growth and concluding maturity stages, the profit
curve usually reaches its peak. During the maturity phase of the life-cycle, sales volumes for an established
product tend to remain steady, or at least do not suffer from major declines, but the rate of profit drops.

In most cases, the trajectory and behavior of the product life-cycle is determined by a set of factors over which
manufacturers and marketers have little control, forcing them to react to changing circumstances in order to
keep their product development strategy viable. These external factors include shifting consumer
requirements, industry-wide technological advances, and an evolving state of competition with a company's
market rivals. The fluctuating patterns of a life-cycle indicate that a different marketing and product
development approach may be needed for each stage of the cycle. Understanding life-cycle concepts can aid
in long-term planning for a new product, as well as raising awareness of the competitive landscape and
estimating the impact that changing conditions can have on profitability.

1) The Life-Cycle Curve

Industrial products usually follow an S-shaped life-cycle curve when sales and profits are plotted over time.
However, certain products, such as high-tech goods and commodities, may follow a different life-cycle
pattern. High-tech products often require longer development times and higher costs, making their growth
stages long and their decline stages short, while commodities, such as steel, tend to have relatively static
demand with sales that do not appreciably decline from an absence of competition. Sales would drop, though,
from an increase in competing products.

Under most life-cycle conditions, profits typically peak before sales do, with profits reaching their peak level
during the early growth stages and sales reaching their peak in the maturity stages. Competition tends to be
lower at the beginning of the life-cycle, but as competing companies start to offer lower prices, newer
services, or more appealing promotions in the maturity phase, the initial product must be made more
attractive. This often results in comparable price drops or increased spending on advertising and promotions,
as well as greater investment in distribution and modifications to the existing product. These initiatives
improve sales, but drive up costs and lower profits.

Figure 1.23. Stages in product life cycle

There are four stages in the product life cycle curve (figure 1.23):
The introductory stage
The growth stage
The Maturity stage
The decline stage

The Introductory Stage


The Introduction stage is probably the most important stage in the product life cycle. In fact, most products
fail in the introduction stage. This is the stage where the product is introduced to the market and to the
consumers or users of the product. If consumers don't know about it, then they will not buy it. There are
two different pricing strategies you can use to introduce your product to consumers: a skimming strategy or a
penetration strategy. If a skimming strategy is used, then prices are set very high initially and then gradually
lowered over time. This is a good strategy to use if there are few competitors for your product. Profits are
high with this strategy but there is also a great deal of risk; if people don't want to pay high prices you may
lose out. The second pricing strategy is a penetration strategy. In this case you set your prices very low at the
beginning and then gradually increase them. This is a good strategy to use if there are a lot of competitors
who control a large portion of the market. Profits are not a concern under this strategy; the most important
thing is to get your product known and worry about making money at a later time.

A company that introduces a product requiring a high degree of learning and expects a relatively low rate of
acceptance can focus on market development strategies to help build consumer appeal. Conversely, products
with a low learning curve and a quick route toward acceptance may need a marketing strategy designed to
offset rival products, as competition at these levels tends to be higher.

The Growth Stage


When an industrial product enters a period of higher sales and profit growth, the marketing plan often shifts
to focus on improvements to the design and any added features or benefits that can expand its market share.
Increasing the efficiency of distribution methods can help improve product availability by reaching more
customers, and some degree of price reductions, particularly for large-scale operations, can be introduced to
make the product more appealing for purchase. Maintaining the higher price set at the introductory stage
increases the risk of competitors entering the market due to the wider profitability margin. Similarly, without
stronger distribution efforts the product may have limited availability, which encourages rival companies to
encroach on market share.

The Maturity Stage


The maturity stage of a life-cycle is characterized by an increase in the number of market competitors and a
corresponding decline in profit growth as a percentage of sales. To compensate for the level of saturation that
occurs during this phase, the product development strategy revolves around entering new markets, often
through exports. It may also be helpful to increase efforts to satisfy existing customers in order to preserve
the customer base. Reducing spending on marketing and production can help maintain profit margins.

The Decline Stage


In the decline stage, the competition for product pricing tends to escalate, while profits and sales generally
decrease. When working with industrial products, marketers sometimes opt to discontinue a product when it
has reached this level or introduce a replacement product that renders the previous version obsolete.
Marketing and production budgets are typically scaled back to save on costs, and resources may be shifted to
newer products under development. Product decline usually proceeds more quickly among industries that rely
on rapidly changing technologies, with newer advances periodically driving existing goods out of the market.

2) S-Curve
S-curves visually depict how a product, service, technology or business progresses and evolves over time. S-
curves can be viewed on an incremental level to map product evolutions and opportunities, or on a macro
scale to describe the evolution of businesses and industries. On a product, service, or technology level, S-
curves are usually connected to market adoption since the beginning of a curve relates to the birth of a new
market opportunity, while the end of the curve represents the death, or obsolescence of the product, service,
or technology in the market. Usually the end of one S-curve marks the emergence of a new S-curve, the one
that displaces it (e.g., video cassette tapes versus DVDs, word processors versus computers, etc.). Some
industries and technologies move along S-curves faster than others. High-tech S-curves tend to cycle more
quickly while certain consumer products move more slowly.
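A common way to model an S-curve numerically is the logistic function, in which adoption starts slowly, accelerates, and then saturates near a market ceiling. A minimal sketch with purely illustrative parameter values (`s_curve` and its defaults are not from this text):

```python
import math

def s_curve(t, ceiling=1.0, midpoint=5.0, steepness=1.0):
    """Cumulative adoption at time t on a logistic S-curve.

    ceiling:   the saturation level the curve approaches,
    midpoint:  the time of fastest growth (half of ceiling reached),
    steepness: how quickly the curve transitions from slow to fast growth.
    """
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# Early, middle and late points on the curve: near zero, at half the
# ceiling, and near saturation respectively.
early, middle, late = s_curve(0), s_curve(5), s_curve(10)
```

Plotting `s_curve` over time reproduces the characteristic shape: the birth of the market at the flat start, rapid growth around the midpoint, and obsolescence setting in as the curve flattens at the top.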

3) Bathtub curve:
Reliability specialists often describe the lifetime of a population of products using a graphical representation
called the bathtub curve. The bathtub curve consists of three periods: an infant mortality period with a
decreasing failure rate followed by a normal life period (also known as "useful life") with a low, relatively
constant failure rate, and concluding with a wear-out period that exhibits an increasing failure rate. This
section provides an overview of how infant mortality, normal life failures and wear-out modes combine to
create the overall product failure distribution. It describes methods to reduce failures at each stage of
product life and shows how burn-in, when appropriate, can significantly reduce the operational failure rate by
screening out infant mortality failures.

Figure 1.24. Bathtub Curve

The bathtub curve, displayed in Figure 1.24, does not depict the failure rate of a single item, but describes the
relative failure rate of an entire population of products over time. Some individual units will fail relatively
early (infant mortality failures), others (we hope most) will last until wear-out, and some will fail during the
relatively long period typically called normal life. Failures during infant mortality are highly undesirable and
are always caused by defects and blunders: material defects, design blunders, errors in assembly, etc. Normal
life failures are normally considered to be random cases of "stress exceeding strength." However, as we'll see,
many failures often considered normal life failures are actually infant mortality failures. Wear-out is a fact of
life due to fatigue or depletion of materials (such as lubrication depletion in bearings). A product's useful life
is limited by its shortest-lived component. A product manufacturer must assure that all specified materials
are adequate to function through the intended product life.

Note that the bathtub curve is typically used as a visual model to illustrate the three key periods of product
failure and not calibrated to depict a graph of the expected behavior for a particular product family. It is rare
to have enough short-term and long-term failure information to actually model a population of products with
a calibrated bathtub curve.

Also note that the actual time periods for these three characteristic failure distributions can vary greatly.
Infant mortality does not mean "products that fail within 90 days" or any other defined time period. Infant
mortality is the time over which the failure rate of a product is decreasing, and may last for years. Conversely,
wear-out will not always happen long after the expected product life. It is a period when the failure rate is
increasing, and has been observed in products after just a few months of use. This, of course, is a disaster
from a warranty standpoint!

We are interested in the characteristics illustrated by the entire bathtub curve. The infant mortality period is a
time when the failure rate is dropping, but is undesirable because a significant number of failures occur in a
short time, causing early customer dissatisfaction and warranty expense. Theoretically, the failures during
normal life occur at random but with a relatively constant rate when measured over a long period of time.
Because these failures may incur warranty expense or create service support costs, we want the bottom of the
bathtub to be as low as possible. And we don't want any wear-out failures to occur during the expected useful
lifetime of the product.
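One common way to model the bathtub curve numerically is as the superposition of three Weibull hazard rates: a decreasing one (shape < 1) for infant mortality, a constant one (shape = 1) for normal life, and an increasing one (shape > 1) for wear-out. The sketch below uses purely illustrative parameter values; the function names are not from any standard library:

```python
def weibull_hazard(t, shape, scale):
    """Instantaneous failure rate of a Weibull distribution at time t > 0.

    shape < 1 gives a decreasing rate, shape = 1 a constant rate, and
    shape > 1 an increasing rate.
    """
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t):
    """Overall population failure rate: the three failure modes combined."""
    infant = weibull_hazard(t, shape=0.5, scale=10.0)    # infant mortality
    random = weibull_hazard(t, shape=1.0, scale=100.0)   # constant, 0.01/unit
    wearout = weibull_hazard(t, shape=4.0, scale=80.0)   # wear-out
    return infant + random + wearout

# The combined rate falls during infant mortality, bottoms out in
# normal life, and rises again as wear-out dominates:
early, mid, late = bathtub_hazard(1), bathtub_hazard(30), bathtub_hazard(100)
```

Evaluating `bathtub_hazard` over a grid of times and plotting it reproduces the characteristic bathtub shape; lowering the infant-mortality term (better screening) or pushing the wear-out scale further out (longer-lived components) flattens and widens the bottom of the tub, which is exactly the goal described above.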

Reverse bathtub curve

Figure 1.25. Reverse Bathtub Curve

Where, Phase 1: Introduction phase; Phase 2: Growth phase; Phase 3: Maturity phase; Phase 4: Decline phase.

The demand for a product in the market over time follows an inverted/reverse bathtub curve (figure 1.25). The
first two phases, i.e. phase 1 and phase 2, show that demand for the product will increase continuously as the
product is new in the market. But after some time its demand will stabilize as a saturation level builds up in
the market place, i.e. phase 3, and then finally the demand for the product will decrease gradually due to the
entry of competitors or better products, i.e. phase 4. The overall time span is referred to as the product life
cycle, and the estimation of the overall time span is important from the viewpoint of economic analysis.

For any product development process, the planning and management of the product to be developed should
be understood. This is covered in the subsequent sections.

1.2.5 Product development planning and Management

Planning and management are an indispensable part of product development. Planning involves thinking about
and organizing the activities required to achieve a desired goal. It involves the creation and maintenance of
a plan. It combines forecasting of developments with the preparation of schemes of how to react to them. A
basic tool for product planning is to follow a set of systematic steps. These steps are intended to estimate
four basic aspects: the what-tasks, the when-schedule, the where-equipment and facilities, and the
how-people, material, facility, and equipment costs. Product development management is the discipline of
planning, organizing, motivating, and controlling resources to achieve specific goals.
Product Development Planning and Management (PDPM)(figure 1.26) is an organizational lifecycle function
within a company dealing with the planning, forecasting, or marketing of a product or products at all stages of
the product lifecycle. It consists of product development and product marketing, which are different (yet
complementary) efforts. The main objectives of PDPM are maximizing sales revenues, market share and
profit margins.

Figure 1.26. Product Development Planning and Management

Product development can be considered as a project. A project is a temporary effort designed to produce a
unique product, service or result, with a defined beginning and end, usually time-constrained and often
constrained by funding or deliverables, undertaken to meet unique goals and objectives, typically to bring
about beneficial change or added value. The temporary nature of projects stands in contrast with business as
usual or operations, which are repetitive, permanent, or semi-permanent functional activities to produce
products or services. In practice, the management of these two systems is often quite different, and as such
requires the development of distinct technical skills and management strategies. PDPM often serves an inter-
disciplinary role, bridging gaps within the company between teams of different expertise, most notably
between engineering-oriented teams and commercially oriented teams.

The various elements or tools of product development planning and management are budgeting, scheduling,
collaboration, risk management, change management and product cost management. The primary challenge
of project management (product development in this case) is to achieve all of the project goals and objectives
while honoring the preconceived constraints or limitations. The primary constraints are scope, time, quality
and budget. The secondary challenge is to optimize the distribution of necessary inputs and integrate them to
meet pre-defined objectives. The third challenge is to adapt to the continuous change of needs in the
market. All these elements have critical roles in the integrated product development process. We shall now
discuss the various elements in detail:

1) Budgeting
A budget is a proposal of activities to be done in the future. It is a managerial tool for planning, programming
and controlling business activities. A budget is a written plan or programme of proposed future activities
(including estimates of sales, expenditure, production, etc.) expressed in quantitative terms.

According to Dickey, a budget is a written plan covering projected activities of a firm for a defined period.

Budgets have the following characteristics:

A budget outlines the projected activities,
the expressions are made in quantitative terms, and in most budgets in financial terms, i.e. rupee
value, and
it relates to a fixed period, say, a day, a month, a year, etc.

Budgeting
Budgeting is the art of budget making. According to the terminology of Cost Accountancy of the Institute of Cost
and Works Accountants, U.K., budgeting refers to the establishment of budgets relating the responsibilities
of executives to the requirements of a policy, and the continuous comparison of actual with budgeted results,
either to secure by individual action the objective of that policy or to provide a basis for its revision.

Budget preparation has the following basic requirements:

A budget committee consisting of representatives from all the concerned departments should be formed.
The budget should distinctly mark the responsibilities of each section of the business.
A budget, as a plan of future action, is based on estimates of sales, costs, expected business conditions,
etc.
The budget should be made flexible so that unavoidable changes may be incorporated whenever
necessary.
To prepare a good budget, it is essential to know the business policies; the budget should be
prepared considering their effect on each department.
During preparation of the budget, all information regarding costs (i.e. manufacturing costs,
margin of profit, etc.) is essential.

Types of budget:
Fixed budget
Flexible budget
Capital expenditure budget
Operating budget

A fixed budget is prepared on the basis of a certain fixed, pre-determined level of activity. It is suitable
for those enterprises whose quantity and quality of production, as well as sales during the budget period, can
be pre-determined with reasonable accuracy. Government budgets are mostly fixed budgets.

A flexible budget reflects the actual behaviour characteristics of the costs. Capital expenditure budgets cover
expenditures whose benefits are deferred over a long period of time, e.g. the purchase of equipment, machinery,
plant, etc. This type of budget is prepared for the purchase of new assets or for the replacement of existing
assets. Its budget period is generally longer than that of an operating budget, usually 3 to 5 years.

Operating budgets are usually prepared on an annual basis and are broken into still shorter time-spans,
say half-yearly, quarterly, monthly, etc. The following are types of operating budgets:

Sales budget
Production budget
Special budget (Material, equipment, Financial budget)
Master budget.
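The continuous comparison of actual with budgeted results described above can be sketched as a simple variance report. The line items and figures below are hypothetical, purely for illustration:

```python
# Hypothetical budget variance report: compares budgeted vs actual
# figures and flags each line item as favourable or adverse.

budget = {"sales": 500000, "materials": 200000, "labour": 120000}
actual = {"sales": 480000, "materials": 210000, "labour": 115000}

def variance_report(budget, actual):
    """Return {item: (variance, 'favourable'|'adverse')}.

    For revenue items, actual above budget is favourable;
    for cost items, actual below budget is favourable.
    """
    revenue_items = {"sales"}
    report = {}
    for item, planned in budget.items():
        diff = actual[item] - planned
        if item in revenue_items:
            status = "favourable" if diff >= 0 else "adverse"
        else:
            status = "favourable" if diff <= 0 else "adverse"
        report[item] = (diff, status)
    return report

for item, (diff, status) in variance_report(budget, actual).items():
    print(f"{item:10s} {diff:+8d}  {status}")
```

A report like this supports the revision step of budgeting: adverse variances point to where individual action or a policy revision is needed.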

2) Collaboration
Collaboration is the act of working with each other to do a task. It is a recursive or repetitive process where
two or more people or organizations work together to realize shared goals. It is an important tool in project
management as it helps to reduce the cost of the product and helps the product to survive in the market.

Internal Collaboration
This collaboration is of paramount importance for a successful new product development project. It is
generally between cross-functional teams (CFTs), manufacturing, supply chain, quality, sales and marketing
within the organization. An example of internal collaboration is a cross-functional team. As described earlier,
a cross-functional team is formed of members with different functional expertise working towards a
common goal. The main benefits of internal collaboration are:

Using cross-functional teams has proved to reduce the cycle time in new product development.
Cross-functional teams eliminate the "throw it over the wall" mentality that passes a product off from
department to department.

External Collaboration
External collaboration involves two or more organizations working together to develop a product. The main
benefits of external collaboration are:

External sources may provide valuable contributions to new product development (NPD), as they provide
access to external knowledge that complements the firm's internal knowledge base.
Product development organizations have tie-ups with certification agencies, regulatory bodies,
industry forums and specialized service providers.

A good example of external collaboration is the collaboration of Maruti Udyog Limited with Suzuki to
compete with the multi-national companies, such as General Motors and Hyundai, entering India in the 1990s.
When Maruti realized that to survive the competition it had to upgrade its quality in design and
performance, it collaborated on the condition that Suzuki would assist it with technology,
resources and design. Thus Maruti became a subsidiary of Suzuki. Both companies benefitted
from this collaboration: Maruti survived the fierce competition of the incoming MNCs, while Suzuki was
able to enter the Indian market along with Maruti.

Another example of external collaboration is offshoring, a type of outsourcing. Offshoring simply
means having outsourced business functions done in another country. Frequently, work is offshored in
order to reduce labor expenses. At other times, the reasons for offshoring are strategic: to enter new markets,
to tap talent currently unavailable domestically, or to overcome regulations that prevent specific activities
domestically.

India has emerged as the dominant player in offshoring, particularly in software work. Three factors came
into play to make this possible. First, in the 1970s the Indian government put in place regulations that
mandated that all foreign ventures have Indian majority ownership. Fearing government takeover, many large
U.S. corporations, such as IBM, departed, leaving India in the position of fending for itself to maintain its
technical infrastructures. This quickly forced the creation of schools to train students in technology.

Next came the global ubiquity of the Internet and massive telecommunications capacity, which enabled
companies to get computer-based work done seemingly anywhere, including India.

Third, as the year 2000 approached, organizations hired service providers to update their legacy program code,
i.e. source code that relates to a no-longer-supported or manufactured operating system or
other computer technology. Much of this work was handled in India, where English was commonly spoken,
where there was a large and highly trained population of software engineers, and where labor costs were
much lower than in developed countries. The year 2000 work proved the merits of an offshore labor force, and
companies have continued tapping the talents and skills (and cost savings) made available by Indian offshore

service providers. Major companies working as off-shoring service providers in India include Tata Consultancy
Services (TCS), Infosys and Wipro.

3) Risk Management
Risk is the potential for realizing some unwanted and negative consequence of an event. According to
International Organization for Standardization (ISO 31000), risk has been defined as the effect of uncertainty
on objectives, whether positive or negative. Risk is part of our individual existence and that of society as a
whole.

Risk management can be defined as the identification, assessment, and prioritization of risks followed by
coordinated and economical application of resources to minimize, monitor, and control the probability and/or
impact of unfortunate events or to maximize the realization of opportunities. Risks can come from
uncertainty in financial markets, threats from project failures (at any phase in design, development,
production, or sustainment life-cycles), legal liabilities, credit risk, accidents, natural causes and disasters as
well as deliberate attack from an adversary, or events of uncertain or unpredictable root cause. Several risk
management standards have been developed, including those of the Project Management Institute, the National
Institute of Standards and Technology, actuarial societies, and ISO. Methods, definitions and goals
vary widely according to whether the risk management method is in the context of project management,
security, engineering, industrial processes, financial portfolios, actuarial assessments, or public health and
safety.

The strategies to manage threats (uncertainties with negative consequences) typically include transferring
the threat to another party, avoiding the threat, reducing the negative effect or probability of the threat, or
even accepting some or all of the potential or actual consequences of a particular threat, and the opposites
for opportunities (uncertain future states with benefits).

Certain aspects of many of the risk management standards have come under criticism for producing no
measurable improvement in risk, even though confidence in estimates and decisions seems to increase.

The process of risk management (figure 1.27) involves the following steps:
Identify, characterize threats.
Assess the vulnerability of critical assets to specific threats.
Determine the risk (i.e. the expected likelihood and consequences of specific types of attacks on specific
assets).
Identify ways to reduce those risks.
Prioritize risk reduction measures based on a strategy.
Implement the techniques or measures.
Review the results after implementation.

Figure 1.27. The process of risk management
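The "determine the risk" and "prioritize risk reduction measures" steps above are often sketched as a likelihood-times-consequence score. The threats and the 1-5 scales below are hypothetical illustrations, not prescribed by any particular standard:

```python
# Hypothetical sketch of risk determination and prioritization:
# each threat is scored as likelihood (1-5) x consequence (1-5),
# and risks are ranked so that mitigation effort is directed to
# the highest scores first.

risks = [
    {"threat": "key supplier failure", "likelihood": 2, "consequence": 5},
    {"threat": "design defect",        "likelihood": 3, "consequence": 4},
    {"threat": "schedule slip",        "likelihood": 4, "consequence": 2},
]

def prioritize(risks):
    """Rank risks by score = likelihood x consequence, highest first."""
    for r in risks:
        r["score"] = r["likelihood"] * r["consequence"]
    return sorted(risks, key=lambda r: r["score"], reverse=True)

for r in prioritize(risks):
    print(f'{r["score"]:2d}  {r["threat"]}')
```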

Principles of risk management
The International Organization for Standardization (ISO) identifies the following principles of risk
management:

Risk management should:


create value: resources expended to mitigate risk should be less than the consequence of inaction, or (as
in value engineering) the gain should exceed the pain
be an integral part of organizational processes
be part of decision making process
explicitly address uncertainty and assumptions
be systematic and structured
be based on the best available information
take human factors into account
be transparent and inclusive
be dynamic, iterative and responsive to change
be capable of continual improvement and enhancement
be continually or periodically re-assessed

Risk Management Techniques


Some of the commonly used risk management techniques are Failure Mode and Effects Analysis (FMEA),
Failure Mode, Effects and Criticality Analysis (FMECA) and Fault Tree Analysis (FTA).

Failure Mode and Effects Analysis (FMEA)

Failure Mode and Effects Analysis (FMEA) was one of the first systematic techniques for failure analysis. It
was developed by reliability engineers in the 1950s to study problems that might arise from malfunctions of
military systems. A FMEA is often the first step of a system reliability study. It involves reviewing as many
components, assemblies, and subsystems as possible to identify potential failure modes, and their causes
and effects. For each component, the failure modes and their resulting effects on the rest of the system are
recorded in a specific FMEA worksheet. There are numerous variations of such worksheets. A FMEA is mainly
a qualitative analysis.
A few different types of FMEA analysis exist, like
Functional FMEA (FFMEA)
Design FMEA (DFMEA)
Process FMEA (PFMEA)
An FMEA is a bottom-up (from component level to system level), inductive, single-point-of-failure
analysis and is a core task in reliability engineering, safety engineering and quality engineering. A successful
FMEA activity helps to identify potential failure modes based on experience with similar products and
processes - or based on common physics of failure logic. It is widely used in development and manufacturing
industries in various phases of the product life cycle. Effects analysis refers to studying the consequences of
those failures on different system levels.
Functional analyses are needed as an input to determine correct failure modes, at all system levels, both for
functional FMEA and for part-piece or component FMEA. An FMEA is used to structure risk mitigation based on
reducing failure effect severity, on lowering the probability of failure, or both. The FMEA is in
principle a fully inductive analysis; however, the failure probability can only be estimated or reduced by
understanding the failure mechanism. Ideally this probability should be lowered to "impossible to occur" by
eliminating the root causes. It is therefore important to include in the FMEA an appropriate depth of
information on the causes of failure, which adds a deductive element to the analysis.
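FMEA worksheets commonly summarize each failure mode with a Risk Priority Number (RPN), the product of severity, occurrence and detection ratings (each typically 1-10). The components and ratings below are hypothetical; a minimal sketch:

```python
# Hypothetical FMEA worksheet rows: each failure mode is rated on
# severity (S), occurrence (O) and detection (D), and the Risk
# Priority Number RPN = S * O * D directs mitigation effort.

worksheet = [
    # (component, failure mode, S, O, D)
    ("seal",   "leak under pressure", 8, 4, 3),
    ("motor",  "overheating",         7, 3, 5),
    ("switch", "contact wear",        4, 6, 2),
]

def rpn_table(rows):
    """Return rows extended with their RPN, sorted highest RPN first."""
    scored = [(comp, mode, s, o, d, s * o * d) for comp, mode, s, o, d in rows]
    return sorted(scored, key=lambda r: r[-1], reverse=True)

for comp, mode, s, o, d, rpn in rpn_table(worksheet):
    print(f"{rpn:4d}  {comp:7s} {mode}")
```

Sorting by RPN mirrors the qualitative intent of FMEA: the failure modes at the top of the table get mitigation attention first.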

Failure Mode, Effects and Criticality Analysis (FMECA)
Failure mode, effects and criticality analysis (FMECA) is an extension of failure mode and effects
analysis (FMEA). FMEA is a bottom-up, inductive analytical method which may be performed at either the
functional or component level. FMECA extends FMEA by including a criticality analysis, which is used to chart
the probability of failure modes against the severity of their consequences. The result highlights failure
modes with relatively high probability and severity of consequences, allowing remedial effort to be directed
where it will produce the greatest value.
Fault tree analysis (FTA)
Fault Tree Analysis (FTA) is a top down (from system level to component level), deductive failure analysis in
which an undesired state of a system is analyzed using Boolean logic to combine a series of lower-level
events. This analysis method is mainly used in the fields of safety engineering and reliability engineering to
understand how systems can fail, to identify the best ways to reduce risk or to determine (or get a feeling for)
event rates of a safety accident or a particular system level (functional) failure. FTA is used in the
aerospace, nuclear power, chemical and process, pharmaceutical, petrochemical and other high-hazard
industries; but is also used in fields as diverse as risk factor identification relating to social service system
failure.
In aerospace, the more general term "System Failure Condition" is used for the top event of the fault tree,
i.e. the undesired state. These conditions are classified by the severity of their effects. The
most severe conditions require the most extensive fault tree analysis. These "System Failure Conditions" and
their classification are often previously determined in the functional hazard analysis.
FTA can be used to:

understand the logic leading to the top (undesired) event.
prioritize the contributors to the top event, creating critical equipment/parts/events lists for
different importance measures.
monitor and control the safety performance of a complex system. For example, FTA can help determine
the safety criteria of a product such as an aircraft: whether the aircraft can continue flying with a
malfunctioning valve, and for how long.
minimize and optimize resources.
assist in designing a system. The FTA can be used as a design tool that helps to create (output / lower
level) requirements.
function as a diagnostic tool to identify and correct causes of the top event. It can help with the creation
of diagnostic manuals or processes.

The main difference between FMEA and FTA is that in FMEA a system is analyzed at the component level
and the consequences are checked at the system level, while in FTA a state is defined at the system level
and its causes are traced at the component level. In other words, in FMEA we induce the undesired state
from the flaws, while in FTA we deduce the flaws from the undesired state. Thus both methods help to
reduce the risk or avoid the undesired state.
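The Boolean gate logic of a fault tree can also be evaluated numerically. Assuming independent basic events (a simplification), an OR gate's failure probability is one minus the product of the complements, and an AND gate's is the plain product. The tree and probabilities below are hypothetical:

```python
# Sketch of fault tree evaluation with Boolean gates, assuming
# independent basic events. An OR gate fails if any input fails;
# an AND gate fails only if all inputs fail.

def p_or(*probs):
    """P(output) of an OR gate: 1 - product of (1 - p_i)."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

def p_and(*probs):
    """P(output) of an AND gate: product of p_i."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Hypothetical tree: top event = pump failure OR (both power feeds fail).
p_pump = 1e-3
p_feed_a = 1e-2
p_feed_b = 1e-2
p_top = p_or(p_pump, p_and(p_feed_a, p_feed_b))
print(f"P(top event) = {p_top:.7f}")
```

Note how redundancy shows up directly: the two power feeds combined through an AND gate contribute far less to the top event than either feed alone would.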

4) Scheduling

Scheduling can be defined as a plan for performing work or achieving an objective, specifying the order and
allotted time for each part. It is an important tool for production processes, where it can have a major impact
on the productivity of a process.

In project management, a schedule consists of a list of a project's terminal elements with intended start and
finish dates. Terminal elements are the lowest elements in a schedule, which are not further subdivided. Those

items are often estimated in terms of resource requirements, budget and duration, linked by dependencies
and scheduled events.

Objectives of Project Scheduling

The main objectives of project scheduling are:

Completing the project as early as possible by determining the earliest start and finish of each activity.
Calculating the likelihood a project will be completed within a certain time period.
Finding the minimum cost schedule needed to complete the project by a certain date.

Project scheduling techniques

The two main scheduling techniques are the Critical Path Method (CPM) and the Program Evaluation and
Review Technique (PERT).

The scheduling phase comprises laying out the activities in precedence order and determining the start and
finish times of the activities, the critical path (on which the activities need special attention), and the
slack and float of the non-critical paths.

The critical path method (CPM) is a tool to analyze a project and determine its duration, based on
identification of the "critical path" through an activity network. Knowledge of the critical path permits
project management to change the project's duration.

The Program Evaluation and Review Technique, commonly abbreviated PERT, is a statistical tool used in
project management that is designed to analyze and represent the tasks involved in completing a given
project. It is commonly used in conjunction with the critical path method (CPM). PERT was developed by
the US Navy for scheduling the research and development work for the Polaris missile program.

The main difference between CPM and PERT lies in the fact that PERT is used for activities which are
subject to a considerable degree of uncertainty; for this reason, the principal feature of PERT is that
its activity time estimates are probabilistic, and PERT is event oriented. CPM activity time estimates are
relatively less uncertain, hence the estimates are of a deterministic nature, and CPM is activity oriented.

Few terms used in PERT and CPM

Activity

An activity is a physically identifiable part of a project which consumes time and resources, obtained by
breaking the work down into smaller work contents. In a network diagram it is represented by an arrow (figure
1.28), with the tail representing the start and the head the end of the activity.

Figure 1.28. Activity

Event

The beginning and end points of an activity are known as events, which can also be termed nodes. An event
is a point in time and does not consume any time. It is represented by a circle (figure 1.29).

Figure 1.29. Event

Path

A path is an unbroken chain of activity arrows which connects the initial event to some other event.

Network

Network is a graphical representation of logically and sequentially connected arrows and nodes, representing
activities and events of a project. These are known as arrow diagrams (figure 1.30).

Figure 1.30. Network diagram

Dummy

A dummy is an activity which shows the dependency of one activity on another. This activity does not
consume any time and is represented by a dotted arrow (figure 1.31).

Figure 1.31. Dummy activity

Time estimates in PERT and CPM

The CPM system of networks omits probabilistic considerations and is based on a single time estimate of the
average time required to execute the activity.

The PERT system takes probabilistic considerations into account and is based on three time estimates
of the performance time of an activity. They are:

1. Optimistic time estimate (to): it is the shortest possible time required to complete the activity, if all goes
well.
2. Pessimistic time estimate (tp): it is the maximum possible time an activity will take if everything goes
bad.
3. Most likely time estimates (tm): it is the time an activity will take if executed under normal conditions.

Hence, the expected time or average time (te) in PERT is given by the following expression:

te = (to + 4 tm + tp) / 6
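Applied in code, the expected-time expression looks as follows. The activity variance ((tp - to) / 6) ** 2 is also standard in PERT and gives a feel for an estimate's uncertainty; the day figures below are hypothetical:

```python
# Expected time for an activity from the PERT three-point estimates:
# te = (to + 4*tm + tp) / 6, and variance = ((tp - to) / 6) ** 2.

def pert_estimate(to, tm, tp):
    """Return (expected time, variance) for one activity."""
    te = (to + 4 * tm + tp) / 6
    var = ((tp - to) / 6) ** 2
    return te, var

# Example: optimistic 2 days, most likely 5 days, pessimistic 14 days.
te, var = pert_estimate(2, 5, 14)
print(f"expected = {te:.2f} days, variance = {var:.2f}")  # expected = 6.00, variance = 4.00
```

Note how the pessimistic estimate pulls the expected time (6 days) above the most likely time (5 days): the weighting reflects the skew of the underlying beta distribution assumed by PERT.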

Critical Path

The path that consumes the maximum amount of time is known as the critical path.

Slack/ Float:

Slack and float both refer to the amount of time by which a particular event or activity can be delayed without
affecting the time schedule of the network. Slack refers to events and is used in PERT. Float refers to
activities and is used in CPM.
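The forward and backward passes that yield the critical path and float can be sketched as follows. The four-activity network is hypothetical, and for brevity the activities are assumed to be listed in precedence order:

```python
# Sketch of CPM: forward and backward passes over a small activity
# network to find earliest/latest start times and the critical path
# (the activities with zero total float).

# activity: (duration, list of predecessors)
activities = {
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
}

def critical_path(activities):
    order = list(activities)  # assumed already in precedence order
    # Forward pass: earliest start (ES) and earliest finish (EF).
    es, ef = {}, {}
    for a in order:
        dur, preds = activities[a]
        es[a] = max((ef[p] for p in preds), default=0)
        ef[a] = es[a] + dur
    project_end = max(ef.values())
    # Backward pass: latest finish (LF) and latest start (LS).
    lf, ls = {}, {}
    for a in reversed(order):
        succs = [s for s in activities if a in activities[s][1]]
        lf[a] = min((ls[s] for s in succs), default=project_end)
        ls[a] = lf[a] - activities[a][0]
    # Critical activities have zero total float (LS - ES == 0).
    return [a for a in order if ls[a] == es[a]], project_end

path, duration = critical_path(activities)
print("critical path:", " -> ".join(path), "| duration:", duration)
```

Here activity C has a float of 2 (it can slip two time units without delaying the project), so the critical path runs A, B, D with a total duration of 12.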

Some important terminologies related to scheduling are:

Milestones

Within the framework of project management, a milestone is an event that receives special attention. It is
often put at the end of a stage to mark the completion of a work package or phase. Milestones can be put
before the end of a phase so that corrective actions can be taken, if problems arise, and the deliverable can be
completed on time.
In addition to signaling the completion of a key deliverable, a milestone may also signify an important
decision or the derivation of a critical piece of information, which outlines or affects the future of a project. In
this sense, a milestone not only signifies distance traveled (key stages in a project) but also
indicates direction of travel since key decisions made at milestones may alter the route through the project
plan. Milestones can add significant value to project scheduling. When combined with a scheduling
methodology such as PERT or CPM, milestones allow project management to determine much more accurately
whether or not the project is on schedule.
Gate Review

Gate review in project management means setting up specific points in the schedule where the project can be
evaluated to ensure things are on track and to determine whether the work should continue. The main
features of gate review are:

Gates can be at multiple points in the overall project management process.
At every gate there must be approval to proceed to the next stage.
Each gate reviews budget, schedule, risk and any other issues.

Launch Dates

The launch date is the date of introduction of the product into the market. Effective project scheduling is
crucial to confidently predict product development durations and launch dates. Achieving the planned launch
date target helps to gain market share.

5) Change management

Change management is an approach for handling the transition of individuals, teams and organizations to
a desired future state. Change can be a time of exciting opportunity for some and a time of loss, disruption
or threat for others. How such responses to change are managed can be the difference between surviving and
thriving in a work or business environment. Change is an inherent characteristic of any organisation and,
like it or not, all organizations, whether in the public or private sector, must change to remain
relevant. Change can originate from external sources through technological advances, social, political or
economic pressures, or it can come from inside the organisation as a management response to a range of
issues such as changing client needs, costs or a human resource or a performance issue. It can affect one
small area or the entire organisation. Nevertheless, all change whether from internal or external sources, large
or small, involves adopting new mindsets, processes, policies, practices and behaviour.

Irrespective of the way the change originates, change management is the process of taking a planned and
structured approach to help align an organisation with the change. In its most simple and effective form,
change management involves working with an organisation's stakeholder groups to help them understand
what the change means for them, helping them make and sustain the transition, and working to overcome
any challenges involved. From a management perspective it involves the organisational and behavioural
adjustments that need to be made to accommodate and sustain change.

In general, the steps involved in change management are (Figure 1.32)

Identifying the need for change.
Planning the change.
Executing the change.
Evaluating the change.

(Diagram: a cycle of Engage, Team, Plan, Execute, Measure and Improve around Change Management)

Figure 1.32. Steps in Change Management

This means that the process is used to ensure that the product change is desired by customers and then used
to plan out the actual process through which such product changes will occur.

Factors Common to Successful Change Management:

Planning: Developing and documenting the objectives to be achieved by the change and the means to
achieve it.

Defined Governance: Establishing appropriate organizational structures, roles and responsibilities for
the change that engage stakeholders and support the change effort.

Committed Leadership: Ongoing commitment at the top and across the organisation to guide
organizational behaviour, and lead by example.

Informed Stakeholders: Encouraging stakeholder participation and commitment to the change, by
employing open and consultative communication approaches to create awareness and understanding of
the change throughout the organisation.

Aligned Workforce: Identifying the human impacts of the change, and developing plans to align the
workforce to support the changing organisation.

The extent to which each of these five factors is exhibited in successful change projects will vary depending
on the nature of the change involved. Clearly, where large whole-of-government change is involved, the
complexities will be increased and each of the factors outlined will require fuller consideration. In the case
of a small, more localized change, the need may be less significant.

6) Product cost management

Product cost management (PCM) is a set of tools or methods used by companies that develop and
manufacture products to ensure that a product meets its targeted profit. There is no single definition of
product cost management or a specifically defined scope of PCM. Sometimes PCM is considered a synonym for
target costing, while at other times it is equated to design to cost. But target costing is considered a pricing
process, while PCM focuses on maximizing the profit or minimizing the cost of the product, irrespective of the
price at which the product is sold to the customer.

Some practitioners of PCM are mostly concerned with the cost of the product up until the point that the
customer takes delivery (manufacturing costs + logistics costs) or the total cost of acquisition. They seek to
launch products that meet profit targets at launch rather than reducing the costs of a product after
production. Other people believe that PCM extends to a total cost of ownership or lifecycle
costing (Manufacturing + Logistics + operational costs + disposal). Depending on the practitioner, PCM may
include any combination of organizational or cultural change, processes, team roles, and tools. Many believe
that PCM must encompass all four aspects to be successful and have shown how the four parts work
together.

Principles of Effective Product Cost Management

The principles of effective product cost management are:

1. Spread the responsibility to all

Employees throughout the business should share responsibility for managing costs. Thus, design experts,
engineers, store managers, sales managers, etc. should all contribute towards managing costs and should see
this as part of their job. All employees should be provided with a basic understanding of costing ideas, such
as fixed and variable costs, relevant costs and so on, to enable them to contribute fully. As
cost-consciousness permeates the business, and non-accounting employees become more involved in costing
issues, the role of the accountants will change. They will often facilitate, rather than initiate, cost
management proposals and will become part of the multi-skilled teams engaged in creatively managing
them.

2. Spread the word and make it a habit

Throughout the business, costs and cost management should become everyday topics for discussion.
Managers should seize every opportunity to raise these topics with employees, as talking about costs can
often lead to ideas being developed and action being taken to manage costs.

3. Develop the cost management locally

Emphasis should be placed on managing costs within specific sites and settings. Managers of departments,
product lines or local offices are more likely to become engaged in managing costs if they are allowed to take
initiatives in areas over which they have control. Local managers tend to have local knowledge not possessed
by managers at head office. They are more likely to be able to spot cost-saving opportunities than are their
more senior colleagues. Business-wide initiatives for cost management which have been developed by senior
management are unlikely to have the same beneficial effect.

4. Benchmark continually

Benchmarking should be a never-ending journey. There should be regular, as well as special-purpose,
reporting of cost information for benchmarking purposes. The costs of competitors may provide a useful basis
for comparison. In addition, costs that may be expected as a result of moving to new technology or work
patterns may be helpful.

5. Focus on managing rather than on instantaneous cost reduction

Conventional management accounting tends to focus on cost reduction, which is, essentially, taking a short-
term perspective on costs. Strategic cost management, however, means that in some situations costs should
be increased in the short term in order to gain a larger reduction in costs in the long run.

Product Cost Management (PCM) Tools

Effective PCM is also enabled by putting the proper tools in the hands of anyone who impacts product cost.
These tools help assess true product costs at a detailed level at any stage, and enable people to act on the
appropriate opportunities to reduce costs. For example:

Product cost estimation systems that can quickly and consistently generate and manage accurate
estimates without requiring specialized manufacturing or cost knowledge.
Reporting systems for documenting and tracking cost management results.
Analytics systems to search large volumes of data and identify cost outliers and trends.
BOM cost tracking systems to roll-up costs at any point in a product's life cycle.
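As an illustration of the last item, a BOM cost roll-up can be sketched as a recursive walk over the product structure. This is only a minimal sketch: the part names, costs and the Part/rolled_up_cost names below are hypothetical, not taken from any particular PCM tool.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    unit_cost: float          # cost of the part itself (material + labour)
    quantity: int = 1         # quantity used in the parent assembly
    children: list = field(default_factory=list)

def rolled_up_cost(part: Part) -> float:
    """Total cost of one unit of `part`, including all sub-assemblies."""
    return part.unit_cost + sum(
        child.quantity * rolled_up_cost(child) for child in part.children
    )

# Hypothetical two-level BOM: a cart assembled from four wheels and a frame
wheel = Part("wheel", unit_cost=5.0, quantity=4)
frame = Part("frame", unit_cost=20.0)
cart = Part("cart", unit_cost=2.0, children=[wheel, frame])

print(rolled_up_cost(cart))  # 2 + 4*5 + 20 = 42.0
```

The same recursion can be run at any point in the product's life cycle, which is what allows a BOM cost tracking system to report cost changes as the design evolves.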

Depending on the scope of work, PCM may include the following processes:

Change management and building a cost/profit-conscious culture
Building cost management into the Product Lifecycle Management processes
DFM - Design for Manufacturing
DFA - Design for Assembly
DTC - Design to Cost
DFP - Design for Procurement
VA/VE - Value Analysis / Value Engineering
DFSS - Design for Six Sigma
Cost targeting

Should Cost / Price
Make Buy
Capital asset justification
Commodity Pricing
Spend analysis

Terminal questions:

1. What is a product? How is a service a type of product? What are the types of products?
2. List down the various factors which affect the product decision. Explain how these factors affect the product decision.
3. What is PESTLE analysis?
4. Conduct a sample PESTLE analysis with day-to-day life products.
5. What are the different kinds of product development processes?
6. What is NPD? Explain.
7. Define design porting and homologation.
8. Explain product life cycle.
9. What is the S curve? What is the difference between the bathtub curve and the reverse bathtub curve?
10. What are the different types of product development methodology? Which methodology is most suitable
for a) automobile, b) laptop, c) tank (defence).
11. What are the roles of various elements of the product development, planning and management in
integrated product development? Explain with an example.
12. Conduct Sample PESTLE Analysis with day-to-day life products like:
i. Laptop
ii. Soap
iii. Shirt
iv. Biscuits
v. Soft Drinks
vi. Mobile
vii. Internet
viii. Email
ix. Bus
x. Spectacle
xi. Bike
xii. Education Courses


Module 2
Requirements and System Design

Requirements and System Design

Manufacturing of products consists of various steps such as designing the product, collecting information about materials that fulfil the desired requirements, and final production.

System or product design method consists of following steps:

Clarification of the task and development of its specifications
Determination of the logical relationships and organization of the function structure
Selection of the best processes for the fulfilment of the functions
Determination of optimum shapes, motions, and materials
Development of the final design

In product development and process optimization, a requirement is a documented physical, functional or technical need that a particular design or process must be able to meet. It is most commonly used in the following fields:

Systems engineering,
Software engineering, or
Enterprise engineering.

All the necessary information about a product and its manufacturing is provided by the design specification. Its use is common among architects, engineers and others where a product has to be specially made to satisfy a unique need. The purpose of a design specification is to provide explicit information about the requirements
for a product and how the product is to be put together. It is used in public contracting for buildings, highways
and other public works.

Any system being designed has to satisfy the requirements defined for it. For this reason, requirement engineering is an important part of engineering study and is discussed in the subsequent sections.

Objectives:

The following sessions lead to:

A detailed explanation of the requirements needed for a product design
An understanding of requirement engineering and management
Construction of a traceability matrix and its analysis
Development of system design and modelling
Optimisation of a system
An introduction to sub-system and interface design

2.1 Requirement Engineering

2.1.1 Definition of Requirement

It is a statement that identifies a necessary attribute, capability, characteristic, or quality of a system for it to
have value and utility to a customer, organization, internal user, or other stakeholder. A specification (often
abbreviated as spec) may refer to an explicit set of requirements to be satisfied by a material, design, product,
or service.

In the classical engineering approach, sets of requirements are used as inputs into the design stages of
product development. Requirements are also an important input into the verification process, since tests
should trace back to specific requirements. Requirements show what elements and functions are necessary
for the particular project.

Requirement - in system/software engineering:

a capability needed by a user to solve a problem or achieve an objective;
a capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification or other formally imposed document;
the set of all requirements that form the basis for subsequent development of the software or software component;
a restriction imposed by a stakeholder.

A requirement is defined as

A condition or capability to which a system must conform

Requirements can be said to relate to two fields:

Product requirements prescribe properties of a system or product.
Process requirements prescribe activities to be performed by the developing organization. For instance, process requirements could specify the methodologies that must be followed, and constraints that the organization must obey.

Product and process requirements are closely linked; a product requirement could be said to specify the
automation required to support a process requirement while a process requirement could be said to specify
the activities required to support a product requirement. For example, a maximum development cost
requirement (a process requirement) may be imposed to help achieve a maximum sales price requirement (a
product requirement); a requirement that the product be maintainable (a product requirement) often is
addressed by imposing requirements to follow particular development styles (e.g., object-oriented
programming), style-guides, or a review/inspection process (process requirements).

Characteristics of requirements

The requirements needed for a product design should have the following characteristics (table 2.1) to make
the design efficient

Unitary (Cohesive): The requirement addresses one and only one thing.

Complete: No missing information; the requirement is fully stated.

Consistent: No contradiction with any other requirement; fully consistent with all authoritative external documentation.

Non-Conjugated (Atomic): The requirement does not contain conjunctions. E.g., "The postal code field must validate American and Canadian postal codes" should be written as two separate requirements: (1) "The postal code field must validate American postal codes" and (2) "The postal code field must validate Canadian postal codes".

Traceable: The requirement satisfies all or part of a business need as stated by stakeholders and authoritatively documented.

Current: The requirement has not been made obsolete by the passage of time.

Unambiguous: The requirement expresses objective facts, not subjective opinions. It is subject to one and only one interpretation. Vague subjects, adjectives, prepositions, verbs and subjective phrases are avoided. Negative statements and compound statements are avoided.

Specify Importance: Many requirements represent a stakeholder-defined characteristic whose absence will result in a major or even fatal deficiency. Others represent features that may be implemented if time and budget permit. The requirement must specify a level of importance.

Verifiable: The implementation of the requirement can be determined through basic possible methods: inspection, demonstration, test (instrumented) or analysis (including validated modelling and simulation).

Table-2.1-Requirement characteristics
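The non-conjugated (atomic) postal-code example in Table 2.1 can be illustrated in code: each of the two split requirements maps to its own verifiable check. This is a sketch only; the function names and regular-expression patterns below are illustrative assumptions, not part of the handbook.

```python
import re

def is_valid_us_zip(code: str) -> bool:
    """Requirement 1: the postal code field must validate American postal codes."""
    # Standard US ZIP: five digits, optionally followed by a four-digit extension.
    return re.fullmatch(r"\d{5}(-\d{4})?", code) is not None

def is_valid_canadian_postal(code: str) -> bool:
    """Requirement 2: the postal code field must validate Canadian postal codes."""
    # Canadian format: letter-digit-letter, optional space, digit-letter-digit.
    return re.fullmatch(r"[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d", code) is not None

print(is_valid_us_zip("90210"))             # True
print(is_valid_canadian_postal("K1A 0B1"))  # True
```

Splitting the conjoined requirement this way also supports the "Verifiable" characteristic: each atomic requirement now has its own independent test.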

Stakeholder:

The stakeholder is defined as someone who is affected by the system that is being developed. The two main
types of stakeholders are

Users - Users are people who will be using the system and
Customers- Customers are the people who request the system and are responsible for approving it.
Usually customers pay for the development of the system.

For example, in the travel agency website, a customer is a travel agency owner, and the users are all the people
who will be using this website through the Internet.

For a Maruti 800 car, the user is the driver and the customer is the owner who buys it. Similarly, for software, the actual end user who works with it is the user and the company's infrastructure/IS team is the customer; for mobile phones, the users are the people who use them and the customer is the owner who buys the phone.

The following people may also be considered as stakeholders:

Anyone participating in the development of the system (business analysts, designers, coders, testers,
project managers, deployment managers, use case designers, graphic designers)
Anyone contributing knowledge to the system (domain experts, authors of documents that were used for
requirements elicitation, owners of the websites to which a link is provided)

Executives (the president of the company that is represented by customers, the director of the IT
department of the company that designs and develops the system)
People involved in maintenance and support (website hosting company, help desk)
Providers of rules and regulations (rules imposed by search engines regarding content of the website,
government rules, state taxation rules)

Internal stakeholders

Internal stakeholders are people who are already within that particular line of business or organization and who already serve it, for example, staff, board members or volunteers.

Employees: Employees and their representative groups are interested in information about the stability
and profitability of their employers. They are also interested in information which enables them to assess
the ability of the enterprise to provide remuneration, retirement benefits and employment opportunities
Investors: The providers of risk capital and their advisers are concerned with the risk inherent in, and
return provided by, their investments. They need information to help them determine whether they
should buy, hold or sell.
Management and those who appointed them: Financial statements also show the results of the
stewardship of management, or the accountability of management for the resources entrusted to it.
Those users who wish to assess the stewardship or accountability of management do so in order that they may make economic decisions; these decisions may include, for example, whether to hold or sell their investment in the enterprise or whether to reappoint or replace the management.

External stakeholders

External stakeholders are stakeholders outside the organisation, but those who have an impact on the
organisation, such as the community or the organisation's clients.

Customers: Customers are one of the most immediate external stakeholders that a company must consider.
For retailers, consumers are customers. Attracting, retaining and generating loyalty from core consumer markets is critical to long-term financial success.
the businesses that buy goods for business use. Trade resellers sell directly to wholesalers or retailers, but
they must also consider end customers as part of their stakeholders. If consumers don't buy manufactured
goods, for instance, nobody in the distribution channel succeeds.

Communities and Governments: Communities and governments are closely tied external stakeholders.
Companies operate within communities, and their activities affect more than just customers. Businesses pay
taxes, but they are also informally expected by residents to operate ethically and with environmental
responsibility. Communities also like to see businesses get involved in events and local charitable giving.
Government entities make decisions that can significantly impact a company's operations. It is important,
therefore, for company managers to maintain good relationships with local officials to anticipate legal or
regulatory changes or community developments that may affect them.

Suppliers and Partners: Suppliers and business partners have become more critical stakeholders in the early
21st century. More often, companies build a number of small, loyal relationships with suppliers and
associates. This enables each business to develop shared goals, visions and strategies. Trade buyers and
sellers can effectively collaborate to deliver the best value to end customers, which is beneficial to each
partner. Additionally, your trade partners expect that you operate ethically to avoid tarnishing the reputation
of companies with whom your business associates. Suppliers are among a trade reseller's external
stakeholders.

Creditors: Businesses commonly use lenders to finance business ventures, building and asset purchases and
supply purchases. Banks often provide loans for major purchases, such as a new building. Suppliers may
provide product inventory on account, which a business then pays down the road. Current creditors basically expect that a business meets its payment deadlines responsibly and consistently. Doing so helps your business maintain good relationships with creditors and also makes you more likely to get quality financing in the future.

Regulatory Agencies: A regulatory agency (also regulatory authority, regulatory body or regulator)
is a public authority or government agency responsible for exercising autonomous authority over some area of
human activity in a regulatory or supervisory capacity. An independent regulatory agency is a regulatory
agency that is independent from other branches or arms of the government. Regulatory agencies deal in the
area of administrative law: regulation or rulemaking (codifying and enforcing rules and regulations and
imposing supervision or oversight for the benefit of the public at large). The existence of independent
regulatory agencies is justified by the complexity of certain regulatory and supervisory tasks that require
expertise, the need for rapid implementation of public authority in certain sectors, and the drawbacks of
political interference. Some independent regulatory agencies perform investigations or audits, and some are
authorized to fine the relevant parties and order certain measures. Regulatory agencies are usually a part of
the executive branch of the government, or they have statutory authority to perform their functions with
oversight from the legislative branch. Their actions are generally open to legal review. Regulatory authorities
are commonly set up to enforce standards and safety, or to oversee use of public goods and regulate
commerce. Examples of regulatory agencies are the Interstate Commerce Commission and U.S. Food and Drug
Administration in the United States, Ofcom in the United Kingdom, and the TRAI (Telecom Regulatory
Authority of India), ARAI (Automotive Research Association of India) in India.

Requirement Pyramid

Figure-2.1 Requirement Pyramid

The parts of the requirement pyramid (figure 2.1) are as follows

Stakeholder need: a requirement from a stakeholder
Feature: a service provided by the system, usually formulated by a business analyst; the purpose of a feature is to fulfil a stakeholder need
Use case: a description of system behavior in terms of sequences of actions
Supplementary requirement: another requirement (usually non-functional) that cannot be captured in use cases
Test case: a specification of test inputs, execution conditions, and expected results
Scenario: a specific sequence of actions; a specific path through a use case

Top level of the pyramid is occupied by the stakeholder needs. On the lower levels are features, use cases, and
supplementary requirements. Quite often, at different levels of these requirements, different levels of detail
are captured. The lower the level, the more detailed the requirement.

For example, a need might be "Data should be persistent". The feature can refine this requirement to "System should use a relational database". On the supplementary specification level, the requirement is even more specific: "System should use Oracle 9i database". The further down, the more detailed the requirement.

One of the best practices of requirements management is to have at least two different levels of requirement
abstraction. For example, the Vision contains high-level requirements (features), and the lower levels in the
pyramid express the requirements at a detailed level. Senior stakeholders (such as vice presidents) do not
have time to read 200 pages of detailed requirements but should be expected to read a 12-page Vision
document.

The main difference between needs and features is in the source of the requirement. Needs come from
stakeholders, and features are formulated by business analysts. The role of test cases is to check if use cases
and supplementary requirements are implemented correctly. Scenarios help derive test cases from use cases and facilitate the design and implementation of specific paths through use cases.
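The levels of the pyramid can be sketched as a simple traceability structure in which each item records the higher-level item it refines, so any test case can be traced back to the stakeholder need it covers. This is a minimal sketch; the class name, identifiers and requirement texts below are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Requirement:
    rid: str
    text: str
    parent: Optional["Requirement"] = None  # the higher-level item this one refines

    def trace(self) -> list:
        """Return the chain of IDs from this item up to the stakeholder need."""
        chain, item = [], self
        while item is not None:
            chain.append(item.rid)
            item = item.parent
        return chain

# Hypothetical chain following the persistence example above
need = Requirement("NEED-1", "Data should be persistent")
feat = Requirement("FEAT-1", "System should use a relational database", need)
supp = Requirement("SUPP-1", "System should use Oracle 9i database", feat)
tc = Requirement("TEST-1", "Verify data survives a system restart", supp)

print(tc.trace())  # ['TEST-1', 'SUPP-1', 'FEAT-1', 'NEED-1']
```

A traceability matrix is essentially the tabulation of such chains for every requirement, which makes it easy to spot needs with no tests and tests with no originating need.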

Parts of Requirement pyramid:

Needs

All detailed requirements would be captured as stakeholder requests. However, in many projects it is easier to
capture all input from the stakeholders in the same type of requirement; stakeholder needs represent all
input from the stakeholders, regardless of granularity. In some projects there may be a need to distinguish
between stakeholder needs describing initial requirements and stakeholder requests that may include
subsequent change requests. Requirements elicitation, also called requirements gathering, is a very
important step. Missing or misinterpreting a requirement at this stage will propagate the problem through
the development lifecycle.
Some of the techniques used to elicit requirements from stakeholders:

Interviews
Questionnaires
Workshops
Storyboards
Role-playing
Brainstorming sessions
Affinity diagrams
Prototyping
Analysis of existing documents
Use cases
Analysis of existing systems

Developing the Vision Document

Information that comes from stakeholders does not always have the attributes of good requirements; it is especially the case that requirements coming from different sources may be conflicting or redundant. During development of the
Vision document, one of the main goals of business analysts is deriving features from stakeholder needs (see
Figure 2.1). Features should have all the attributes of a good requirement. They should be testable, non-
redundant, clear, and so on.
The Vision document should contain essential information about the system being developed. Besides listing
all the features, it should contain a product overview, a user description, a summary of the systems
capabilities, and other information that may be required to understand the systems purpose. It may also list
all stakeholder needs in case they were not captured in separate documents.

Creating Use Cases

Functional requirements are best described in the form of use cases. They are derived from features, as
shown in Figure 2.1. A use case is a description of a system in terms of a sequence of actions. It should yield
an observable result or value for the actor (an actor is someone or something that interacts with the system).
The use cases
Are initiated by an actor.
Model an interaction between an actor and the system.
Describe a sequence of actions.
Capture functional requirements.

Should provide some value to an actor.
Represent a complete and meaningful flow of events.

Supplementary Specification

The supplementary specification captures nonfunctional requirements (usability, reliability, performance, supportability) and some functional requirements that are spread across the system, making them tough to capture in the use cases. These requirements are called supplementary requirements and are derived from features, as shown in Figure 2.1.

Creating Test Cases from Use Cases

As soon as all the requirements are captured, we should design a way to check whether they are properly
implemented in the final product. Test cases will show the testers what steps should be performed to test all
requirements. In this step we will concentrate on creating test cases from use cases. If we did not create
scenarios while generating use cases, we need to define them now. Test cases are at the lowest level of the
pyramid, as shown in Figure 2.1.
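As a sketch of this step, each scenario of a use case becomes one test case with its own inputs, execution conditions and expected results. The cash-withdrawal use case, class and test names below are hypothetical, chosen only to show the mapping from scenario to test.

```python
# Hypothetical system under test for a "withdraw cash" use case.
class ATM:
    def __init__(self, balance: int):
        self.balance = balance

    def withdraw(self, amount: int) -> int:
        # Basic flow: dispense cash when funds are sufficient.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return amount

# Scenario 1 (basic flow): customer withdraws less than the balance.
def test_withdraw_basic_flow():
    atm = ATM(balance=100)
    assert atm.withdraw(40) == 40
    assert atm.balance == 60

# Scenario 2 (alternative flow): a withdrawal exceeding the balance is rejected.
def test_withdraw_insufficient_funds():
    atm = ATM(balance=100)
    try:
        atm.withdraw(150)
        raise AssertionError("expected the withdrawal to be rejected")
    except ValueError:
        pass

test_withdraw_basic_flow()
test_withdraw_insufficient_funds()
```

One test case per scenario keeps the traceability simple: when a scenario changes, exactly one test case has to be revisited.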

Creating Test Cases from the Supplementary Specification

The approach used in the preceding step does not apply to testing supplementary requirements. Because
these requirements are not expressed as a sequence of actions, the concept of scenarios does not apply to
them. An individual approach should be applied to each of the supplementary requirements because
techniques used to test performance requirements are different from techniques used to test usability
requirements. In this step we also design testing infrastructure and platform-related issues.

2.1.2 Types of Requirements

Design Engineers must consider a multitude of technical, economic, social, environmental, and political
constraints when they design products and processes. The above constraints are the requirements needed for
efficient design by an organization. Depending on their format, source, and common characteristics, requirements can be split into different types. The various types of requirements are:

Functional
Performance
Physical
Regulatory
Economical
Behavioral
Technical
Stakeholder
Environmental
Industry specific

Functional Requirement:

They are also called as solution requirements. It contains detailed statements of the behaviour and
information that the solution will need. Examples include formatting text, calculating a number, modulating
a signal. They are also known as capabilities.

For any design problem, the first task is to identify the functional requirements (FRs) of the product, i.e. the
requirements pertaining to what the product will have to do. FRs focus on the operational features of products. Examples of functional requirements are:

The product must display enlarged images.
The robot must orient parts on the ground at any angle.
The part must resist bending.
Excess heat must be dissipated to the ambient air.
The container must hold hot liquids.
Colour of a car for the visual need
Bluetooth connectivity in a mobile phone
Holder on a cup

Steam in: Ti = 500°C; steam out: To = 200°C

Figure-2.2 Example of functional requirement

The function of the system (figure 2.2) is to cool the steam from 500°C to 200°C. The system should be designed in such a way that it satisfies this functional requirement.

Performance Requirements:

The product performance requirements represent the minimum performance requirements for products. It
defines How, When and How Much the product or service needs to perform. They are also called Quality-of-
service or non-functional requirements.

It contains detailed statements of the conditions under which the solution must remain effective, qualities
that the solution must have, or constraints within which it must operate. Examples include: Reliability,
Testability, maintainability, Availability. They are also known as characteristics, constraints.

Performance requirements should have the following characteristics:

Requirements should be quantitative rather than qualitative.
e.g. Overall length of the motorcycle shall be 150 inches or less. Overall width shall not exceed 52 inches. Overall height shall be 85 inches or less.
Requirements should be verifiable.
e.g. The mandrel shall have a hardness of not less than 60 and not more than 65 on the Rockwell C scale.
Performance requirements should describe interfaces in sufficient detail to allow interchangeability with parts of different design.
e.g. Provision shall be made for installation of 24 volt DC power cable access to the equipment. The size of the PADS unit is approximately 26x31x20 inches. The weight of the unit is 317 pounds.
Requirements should be material and process independent.
e.g. All mowers shall be treated with the manufacturer's commercial standard rust-proofing treatment.
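A quantitative requirement of this kind is directly verifiable: the stated limits can be checked mechanically. The sketch below encodes the motorcycle dimension limits quoted above; the function name is an illustrative assumption.

```python
# Limits taken from the example requirement: length <= 150 in,
# width <= 52 in, height <= 85 in.
def meets_dimensional_requirements(length_in: float, width_in: float,
                                   height_in: float) -> bool:
    """Return True when all three stated dimensional limits are satisfied."""
    return length_in <= 150 and width_in <= 52 and height_in <= 85

print(meets_dimensional_requirements(148, 50, 84))  # True
print(meets_dimensional_requirements(151, 50, 84))  # False, length over limit
```

A qualitative requirement such as "the motorcycle shall be compact" admits no such check, which is exactly why the quantitative form is preferred.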

Physical Requirements:

The physical requirements or constraints on the system may include:

Physical size - the size of the product should be compact, but it should satisfy all the needs
Geometries
Power consumption
Physical robustness - the ability of a system to resist change without adapting its initial stable configuration; for example, the ability of a computer system to cope with errors during execution, or the ability of an algorithm to continue to operate despite abnormalities in input or calculations
Overall weight - the car should be light in weight yet strong enough to sustain any type of stress. The strength-to-weight ratio is also an important consideration in aviation product design: an aviation product should have low weight and high performance/efficiency, since the weight of a product is directly proportional to its energy consumption

Regulatory Requirements:

Regulation may refer to a process of monitoring and enforcement of rules established by primary or delegated legislation. It is a written instrument containing rules having the force of law.

Safety/Reliability: Direct or indirect hazards should be eliminated during the usage of products, for the safety of workers or users. Products should carry appropriate warnings for human users.

Regulation:

Creates, limits, or constrains a right
Creates or limits a duty
Allocates a responsibility

Regulation can take many forms:

Legal restrictions,
Contractual obligations
Self-regulation by an industry
Co-regulation
Third-party regulation
Certification e.g. ISO 9001:2000
Accreditation or market regulation
Social regulation e.g. OSHA,FAA,FDA

Note: OSHA - Occupational Safety and Health Administration; FAA - Federal Aviation Administration; FDA - Food and Drug Administration

Economical Requirements

Economic means "pertaining to the production and use of income," and economical means "avoiding waste, being careful of resources." Therefore, economical is using the minimum of time or resources necessary for effectiveness.

Cost: The cost of the product should satisfy the customers' needs. Product costs are calculated by many different departments in a company: cost engineering, industrial engineering, design & manufacturing, etc.

Various parameters for economical requirements of a product are:

Cost of labour
Cost of raw material
Manufacturing cost
Cost of the labour required to deliver a service to a customer.
Selling price of service or product
Maintenance cost

Technical Requirements

Technical Requirements are based on the technology used to make the product or service. A technical
requirement pertains to the technical aspects that your system must fulfil, such as performance and reliability. A technical specification (often abbreviated as spec) is an explicit set of technical requirements
to be satisfied by a product, or service. If a product or service fails to meet one or more of the applicable
specifications, it may be referred to as being out of specification.

Table 2.2 depicts an example of the technical and economic requirements of the different life phases of an industry.

Life phase: Engineering requirements; Economic requirements
Planning & Design: state of technology, company's state of knowledge; planning, designing, licensing costs
Manufacturing: production process required, quantities to be manufactured, assembly requirements; production, assembly, testing costs
Marketing: storage requirements, transportation requirements; storage, packaging costs
Product use: on-site assembly facilities, start-up requirements, geometric requirements; installation, operation and maintenance costs
Disposal: recycling, waste disposal, reuse of product; cost of recycling, scrapping

Table-2.2-Technical & Economic Requirements

Behavioural Requirements

Behavioural requirements explain what has to be done by identifying the necessary behaviour of a system. These are requirements that specify the reactive behaviour of the host. They are expressed in scenarios where, upon some internal or external event, a certain reaction is expected (or prohibited). Based on a set of such requirements, the behaviour of the entity is monitored and a protocol can be modelled.

The usage pattern decides the behaviour. Typical examples could be frequent dropping of a mobile phone on the floor, rapid shutdown of a software application, the speed of a motor at maximum load, or processor speed when multiple processes run on a system. These things should be thought of while designing.

Stakeholder Requirements

Stakeholder requirements represent the views of users, acquirers, and customers regarding the problem (or
opportunity), through a set of requirements for a solution that can provide the services needed by the
stakeholders in a defined environment. They are also called user requirements. They comprise mid-level
statements of the needs of a particular stakeholder or group of stakeholders, and usually describe how
someone wants to interact with the intended solution. They act as a mid-point between the high-level business
requirements and more detailed solution requirements.

The purpose of defining stakeholder requirements is to elicit a set of needs related to a new or changed
mission for an enterprise and to transform these needs into clear, concise, and verifiable stakeholder
requirements. The various stakeholder requirements are as follows

Service or functional
Operational
Interface
Environmental
Utilization characteristics
Human Factors
Logistical
Design and Realization constraints
Process constraints

FSIPD 79
Project constraints
Business constraints

Environmental requirements

The basic environmental requirements of any product (figure 2.3) are

Think green
Buy green
Be green

Environmental requirements typically list all of the major statutes with appropriate records addressing
environmental, health or safety issues by any governmental authority. It includes all present and future
requirements of common law, regulation of the discharge, disposal, remediation, etc. of any Hazardous
Material or any other pollutant, contaminant, etc. Some of the environmental regulations are

Restriction of Hazardous Substances (RoHS)
REACH - Regulation on chemicals and their safe use (EC 1907/2006)

Figure- 2.3. Environmental products

Industry-specific requirements

Industry-specific requirements are requirements that pertain to a specific industry.
Controls, regulations, laws etc. can be industry specific requirements. For example, aviation, nuclear energy
etc. demand higher safety related rules. Nuclear energy sector needs industry-specific regulatory protocols. All
industrial activities are governed by certain legal provisions that come in force from time to time. A few of
them are given below

Factories Act, 1948


Employees Provident Fund & Miscellaneous Provisions Act, 1952
Employees State Insurance Act
Payment of Wages Act, 1936
Minimum Wages Act, 1948
The Indian Partnership Act, 1932
The Income Tax Act, 1961
Pollution Control Act
HIPAA requires the establishment of national standards for electronic health care transactions and
national identifiers for providers, health insurance plans, and employers
The Mines Act, 1952 contains provisions for measures relating to the health, safety and welfare of
workers in the coal, metallic, ferrous and oil mines

Business or internal- company specific requirements

It includes high-level statements of the goals, objectives, or needs of an organisation. They usually describe
opportunities that an organisation wants to realise or problems that it wants solved. For example, a
health and safety executive's requirement may be to develop clear, agreed standards of good management
practice for a range of work-related stressors.

2.1.3 Requirement Engineering

During the development process, the requirements engineer must elicit the stakeholders' requirements,
document the requirements in a suitable manner, validate and verify the requirements, and manage the
requirements over the entire life-cycle of the system.

Requirement engineering is a systematic and disciplined approach to the specification and management of
requirements, with the following goals:

(1) Knowing the relevant requirements, achieving a consensus among the stakeholders about these
requirements, documenting them according to the given standards and managing them systematically.

(2) Understanding and documenting the stakeholders' needs and desires, and specifying and managing the
requirements so as to minimize the risk of delivering a system that does not meet the stakeholders' desires
and needs.

The processes used for RE vary widely depending on the application domain, the people involved and the
organization developing the requirements. They consist of the processes used to discover, analyze and
validate system requirements.

Functions of Requirement Engineering:

Requirement Engineering begins during the communication activity and continues into the modelling activity.
It builds a bridge from the system requirements into new product design and construction. It allows the
requirements engineer to examine

the context of the software work to be performed


the specific needs that design and construction must address
the priorities that guide the order in which work is to be completed
the information, function, and behaviour that will have a profound impact on the resultant design.

Requirements engineering activities:

The process of requirement engineering (figure 2.4) involves activities which vary widely, depending on the
type of system being developed and the specific practices of the organization(s) involved. These may include:
Requirements inception or requirements elicitation
Requirements identification - identifying new requirements
Requirements analysis and negotiation - checking requirements and resolving stakeholder conflicts
Requirements specification (Product Requirements Specification)- documenting the requirements in a
requirements document
System modelling - deriving models of the system, often using a notation such as the Unified Modelling
Language.
Requirements validation - checking that the documented requirements and models are consistent and
meet stakeholder needs
Requirements management - managing changes to the requirements as the system is developed and put
into use.

Figure 2.4-Process of Requirement engineering

Feasibility Studies:

A feasibility study decides whether the proposed system is worthwhile or not. It is a short focused study that
checks

If the system contributes to organizational objectives;
If the system can be engineered using current technology and within budget;
If the system can be integrated with other systems that are used.

Requirement analysis

The field of requirement analysis is composed of requirement inception and requirement elicitation. These
two sub-fields are described as follows:

Requirement Inception:

During inception, a requirements engineer asks a set of questions to establish a basic understanding of the
problem, covering

The people who want a solution
The nature of the solution that is desired
The effectiveness of preliminary communication and collaboration between the customer and the
developer

Through these questions, the requirements engineer needs to

Identify the stakeholders
Recognize multiple viewpoints
Work toward collaboration
Break the ice and initiate the communication

The process identifies the goals, which are the high-level objectives the new product must meet; the system
boundaries, which define the exact problem that needs to be solved; and the stakeholders, who are the
individuals or organizations that stand to gain or lose from the success or failure of the system.

Requirement elicitation:

Eliciting requirements is difficult because of the following problems

Problems of scope in identifying the boundaries of the system or specifying too much technical detail
rather than overall system objectives
Problems of understanding what is wanted, what the problem domain is, and what the computing
environment can handle (Information that is believed to be "obvious" is often omitted)
Problems of volatility because the requirements change over time

Elicitation (figure 2.5) may be accomplished through two activities

Collaborative requirements gathering
Quality function deployment

Figure 2.5-Activities of Elicitation

Collaborative requirement gathering:

The following guidelines are followed for requirement gathering

Conducting meetings attended by engineers, customers, and other interested stakeholders
Establishment of rules for preparation and participation
Suggestion of an agenda that covers all important points but is informal enough to encourage the free
flow of ideas
Control of meeting by a "facilitator" (customer, developer, or outsider)
Usage of a "definition mechanism" such as work sheets, flip charts, wall stickers, electronic bulletin
board, chat room, or some other virtual forum
The goal is to identify the problem, propose elements of the solution, negotiate different approaches,
and specify a preliminary set of solution requirements

Quality Function Deployment:

This is a technique that translates the needs of the customer into technical requirements for a product. It
emphasizes an understanding of what is valuable to the customer and then deploys these values throughout
the engineering process through functions, information, and tasks. It identifies three types of requirements

Normal requirements: These requirements are the objectives and goals stated for a product or system
during meetings with the customer
Expected requirements: These requirements are implicit to the product or system and may be so
fundamental that the customer does not explicitly state them

Exciting requirements: These requirements are for features that go beyond the customer's expectations
and prove to be very satisfying when present

Requirement elaboration:

During elaboration, the engineer takes the information obtained during inception and elicitation and begins to
expand and refine it. Elaboration focuses on developing a refined technical model of software functions,
features, and constraints. It is an analysis modelling task involving

Development of use cases
Identification of domain classes along with their attributes and relationships
Capturing the life of an object with a state machine diagram

The end result is an analysis model that defines the functional, informational, and behavioural domains of
the problem.

Requirement Analysis (Requirement negotiation):

During negotiation, the software engineer reconciles the conflicts between what the customer wants and
what can be achieved within given limited business resources. The following methods are followed in analysis

Ranking of requirements (i.e., prioritized) by the customers, users, and other stakeholders
Identification and analysis of risks associated with each requirement
Rough guesses of development effort are made and used to assess the impact of each requirement on
project cost and delivery time
Using an iterative approach, requirements are eliminated, combined and/or modified so that each party
achieves some measure of satisfaction

Analysis also determines whether the stated requirements are

clear,
complete,
consistent, and
unambiguous,

and resolves any apparent conflicts.
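The iterative ranking described above can be illustrated with a small sketch. This is not a prescribed algorithm: the scoring weights, the priority/risk/effort values, and the sample requirements below are all invented for the example, and in practice ranking is negotiated with stakeholders rather than computed.

```python
# Illustrative sketch of requirement negotiation: rank requirements by
# stakeholder priority, risk, and estimated effort (all values invented).
requirements = [
    # (id, stakeholder_priority 1-10, risk 1-10, effort in person-days)
    ("REQ-001", 9, 3, 10),
    ("REQ-002", 6, 8, 40),
    ("REQ-003", 8, 2, 5),
]

def negotiation_score(priority, risk, effort):
    # Higher priority raises the score; higher risk and effort lower it.
    # The weights (1.0, 0.5, 0.1) are arbitrary choices for illustration.
    return 1.0 * priority - 0.5 * risk - 0.1 * effort

# Sort highest score first; requirements near the bottom are candidates
# for elimination, combination, or modification during negotiation.
ranked = sorted(requirements,
                key=lambda r: negotiation_score(r[1], r[2], r[3]),
                reverse=True)

for req_id, priority, risk, effort in ranked:
    print(req_id, round(negotiation_score(priority, risk, effort), 1))
```

Any similar scoring scheme works; the point is only that each requirement's priority, risk, and cost estimates feed one comparable figure that the parties can argue over.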

Requirement Specification:

A specification is the final work product produced by the requirements engineer. It serves as the foundation
for subsequent engineering activities. It describes the function and performance of a computer-based system
and the constraints that will govern its development. It formalizes the informational, functional, and
behavioural requirements of the proposed software in both a graphical and textual format.

Requirements Spec typically has the following details of identified requirements

Introduction
Overall description
External Interfaces
Functionality
Required Performance
Quality attributes
Design constraints

Requirement Validation:

Validation is not a mechanical process of checking documents. It is an issue of communicating requirements,
as constructed by the design team, back to the stakeholders whose goals those requirements are supposed to
meet, and to all those other stakeholders with whose goals those requirements may conflict. It is an
information feedback link needed to:

give the stakeholders a chance to check early whether the solution proposed will really solve their
problem
stimulate the evolution of customers' understanding (of what is possible) and
therefore act also as a catalyst of the elicitation process

During validation, the work products produced as a result of requirements engineering are assessed for
quality. The specification is examined to ensure that all requirements have been stated unambiguously and
all inconsistencies, omissions, and errors have been detected and corrected. The work products conform to
the standards established for the process, the project, and the product. The formal technical review serves as
the primary requirements validation mechanism. Members include engineers, customers, users, and other
stakeholders.

Requirement Management:

Requirements management is the process of documenting, analyzing, tracing, prioritizing and agreeing on
requirements and then controlling changes and communicating to relevant stakeholders and to maintain
traceability of requirements.

Each requirement is assigned a unique identifier
The requirements are then placed into one or more traceability tables
These tables may be stored in a database that relates features, sources, dependencies, subsystems, and
interfaces to the requirements
A requirements traceability table is also placed at the end of the requirements specification
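A minimal sketch of the identifier and traceability-table ideas above, assuming a simple in-memory structure. The REQ-nnn identifier scheme and all names are illustrative, not prescribed by the text:

```python
# Sketch: assign each requirement a unique identifier and record it in a
# traceability table linking it to its source and subsystem.
import itertools

_counter = itertools.count(1)

def new_requirement(text):
    """Create a requirement with a unique identifier REQ-001, REQ-002, ..."""
    return {"id": f"REQ-{next(_counter):03d}", "text": text}

# Traceability table relating requirement identifiers to other artifacts.
trace_table = {}

def trace(req, source, subsystem):
    trace_table[req["id"]] = {"source": source, "subsystem": subsystem}

r1 = new_requirement("The pen shall write smoothly")
trace(r1, source="customer interview #3", subsystem="ink-delivery")

print(r1["id"], trace_table[r1["id"]]["subsystem"])
```

In a real project the table would live in a requirements-management database rather than a dictionary, but the structure is the same: one row per identifier, columns for the linked artifacts.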

Voice of the Customer:

Voice of the customer (VOC) is a term used to describe the in-depth process of capturing a customer's
expectations, preferences and aversions. Specifically, the Voice of the Customer is a market research
technique that produces a detailed set of customer wants and needs, organized into a hierarchical structure,
and then prioritized in terms of relative importance and satisfaction with current alternatives. Voice of the
Customer studies typically consist of both qualitative and quantitative research steps. They are generally
conducted at the start of any new product, process, or service design initiative in order to better understand
the customer's wants and needs, and as the key input for new product definition. There are many possible
ways to gather the information:

focus groups,
individual interviews,
contextual inquiry,
ethnographic techniques, etc.

But all involve a series of structured in-depth interviews, which focus on the customers' experiences with
current products or alternatives within the category under consideration. Customer needs are then extracted,
organized into a more usable hierarchy, and then prioritized by the customers.

Product development core team should be involved in the process of designing the sample (i.e. the types of
customers to include), generating the questions for the discussion guide, either conducting or observing and
analyzing the interviews, and extracting and processing the needs statements.

Voice of the Customer Initiatives

A Voice of the Customer initiative provides

A detailed understanding of the customer's requirements
A common language for the team going forward
Key input for the setting of appropriate design specifications for the new product or service
A highly useful springboard for product innovation.

Qualities of Desirable Voice of Customer Metrics:

Credibility: It should have a good track record of results, be backed by a scientifically and academically
rigorous methodology, own the trust of management, and have good acceptance from
customers.
Reliability: It should be consistent which can be applied across the customer life cycle.
Precision: It should be precise to provide insight. It should deliver greater accuracy.
Accuracy: It should be accurate and be the representative of customer base. It should have an acceptable
margin of error.
Action ability: It should provide insight to encourage customers to be loyal and to purchase. It should
prioritize improvements according to big impacts.
Ability to Predict: It should project the future expectations of the customers based on their satisfaction.

Types of customer needs:

Direct Needs: Customers have no trouble declaring these needs. Ex: Cost, Good mileage
Latent Needs: Not directly expressed by customers without probing. Ex: Smooth ride, Good exterior and
interior design, and High efficiency
Constant Needs: These needs are essential to the task of the product and always will be. When product is
used, this need is always there. Ex: Less consumption of fuel, Spacious
Niche Needs: Apply only to a smaller market segment within the entire population. Ex: In-built mp3 and
video players, bullet proof glasses, Rear axle camera

Quality Function Deployment (QFD):

To design a product well, a design team needs to know what it is designing, and what the end-users
will expect from it. Quality Function Deployment is a systematic approach to design based on a close
awareness of customer desires, coupled with the integration of corporate functional groups. It consists in
translating customer desires (for example, the ease of writing for a pen) into design characteristics (pen ink
viscosity, pressure on ball-point) for each stage of the product development (Rosenthal, 1992).

Ultimately the goal of QFD is to translate often subjective quality criteria into objective ones that can be
quantified and measured and which can then be used to design and manufacture the product. It is a
complementary method for determining how and where priorities are to be assigned in product development.
The intent is to employ objective procedures in increasing detail throughout the development of the product
(Reilly, 1999).

Quality Function Deployment was developed by Yoji Akao in Japan in 1966. By 1972 the power of the
approach had been well demonstrated at the Mitsubishi Heavy Industries Kobe Shipyard (Sullivan, 1986) and
in 1978 the first book on the subject was published in Japanese and then later translated into English in 1994
(Mizuno and Akao, 1994).

"QFD is a method for developing a design quality aimed at satisfying the consumer and then translating
the consumer's demand into design targets and major quality assurance points to be used throughout
the production phase. ... [QFD] is a way to assure the design quality while the product is still in the
design stage." - Akao (1994)

The 3 main goals in implementing QFD are:

To prioritize spoken and unspoken customer wants and needs.
To translate these needs into technical characteristics and specifications.
To build and deliver a quality product or service by focusing everybody toward customer satisfaction.

Since its introduction, Quality Function Deployment has helped to transform the way many companies:

Plan new products
Design product requirements
Determine process characteristics
Control the manufacturing process
Document already existing product specifications

QFD uses some principles from Concurrent Engineering in that cross-functional teams are involved in all
phases of product development. Each of the four phases in a QFD process uses a matrix to translate customer
requirements from initial planning stages through production control (Becker Associates Inc, 2000).

Each phase, or matrix, represents a more specific aspect of the product's requirements. Relationships
between elements are evaluated for each phase. Only the most important aspects from each phase are
deployed into the next matrix. The four phases of QFD (figure 2.6) are as follows

Phase 1, Product Planning:

Led by the marketing department, Phase 1, or product planning, builds the House of Quality. Many
organizations only get through this phase of a QFD process. Phase 1
documents customer requirements, warranty data, competitive opportunities, product measurements,
competing product measures, and the technical ability of the organization to meet each customer
requirement. Getting good data from the customer in Phase 1 is critical to the success of the entire QFD
process.

Phase 2, Product Design:

This phase 2 is led by the engineering department. Product design requires creativity and innovative team
ideas. Product concepts are created during this phase and part specifications are documented. Parts that are
determined to be most important to meeting customer needs are then deployed into process planning, or
Phase 3.

Phase 3, Process Planning:

Process planning comes next and is led by manufacturing engineering. During process planning,
manufacturing processes are flowcharted and process parameters (or target values) are documented.

Phase 4, Process Control:

And finally, in production planning, performance indicators are created to monitor the production process,
maintenance schedules, and skills training for operators. Also, in this phase decisions are made as to which
process poses the most risk and controls are put in place to prevent failures. The quality assurance
department in concert with manufacturing leads Phase 4.

Figure 2.6-Four phases of QFD (Phase 1: Product Planning - building the house of quality; Phase 2: Product
Design - documentation of part specifications; Phase 3: Process Planning - flow chart of the manufacturing
process; Phase 4: Process Control - monitoring the production process)

House of Quality:

The first phase in the implementation of the Quality Function Deployment process involves putting together
a "House of Quality" (Hauser and Clausing, 1988) such as the one shown below, which is for the development
of a climbing harness (fig. from Lowe & Ridgway, 2001). The following steps are followed to build the house
of quality (figure 2.7)

Step 1: Customer Requirements - "Voice of the Customer"

To determine what market segments will be analyzed during the process and to identify the customers

Step 2: Regulatory Requirements

Documentation of requirements that are dictated by management or regulatory standards

Step 3: Customer Importance Ratings

Rating the importance of each requirement

Step 4: Customer Rating of the Competition

Rating of products or services in relation to competition

Step 5: Technical Descriptors - "Voice of the Engineer"

Measurement and benchmark of products or services against competition

Step 6: Direction of Improvement

Direction of movement for each descriptor

Step 7: Relationship Matrix

Determination of relationship between customer needs and the company's ability to meet those needs

Step 8: Organizational Difficulty

Rate the design attributes in terms of organizational difficulty.

Step 9: Technical Analysis of Competitor Products

Conducts a comparison of competitor technical descriptors

Step 10: Target Values for Technical Descriptors

Establish target values for each technical descriptor

Step 11: Correlation Matrix

It makes the matrix look like a house with a roof. Examine how each of the technical descriptors impacts the
others

Step 12: Absolute Importance

Calculates the absolute importance for each technical descriptor

Figure-2.7 House of quality

Source: Internet
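Step 12 can be made concrete with a small numeric sketch. The pen-related data below are invented for illustration, and the 9/3/1 strong/medium/weak relationship weights are a common QFD convention rather than something this text mandates:

```python
# House of Quality, Step 12 (sketch): absolute importance of each
# technical descriptor = sum over customer requirements of
# (customer importance rating x relationship weight).
# All data invented; 9/3/1 are conventional strong/medium/weak weights.
importance = {"easy to write": 5, "does not leak": 4}

# relationship[customer requirement][technical descriptor]
relationship = {
    "easy to write": {"ink viscosity": 9, "ball pressure": 3},
    "does not leak": {"ink viscosity": 3, "ball pressure": 9},
}

descriptors = ["ink viscosity", "ball pressure"]
absolute_importance = {
    d: sum(importance[c] * relationship[c][d] for c in importance)
    for d in descriptors
}

# ink viscosity: 5*9 + 4*3 = 57;  ball pressure: 5*3 + 4*9 = 51
print(absolute_importance)
```

The descriptor with the largest column total ("ink viscosity" here) is the one whose target value most deserves engineering attention.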

Product Design Specification:

The Product Design Specification (PDS) comprises a quantitative statement of what to design prior to starting
to design it. It is independent of any specific embodiment of your product, so multiple solution concepts are
possible. Split the problem up into smaller categories to make it easier to consider the problem.

Elements of PDS are as follows

Performance
Environment
Service Life
Maintenance and repair
Shipping
Packaging
Quantity
Manufacturing Facility
Size
Weight
Aesthetics, appearance and finish

A design specification provides explicit information about the requirements for a product and how the
product is to be put together. It is the most traditional kind of specification, having been used historically in
public contracting for buildings, highways, and other public works, and represents the kind of thinking in
which architects and engineers have been trained. Its use is called for where a structure or product has to be
specially made to meet a unique need. For example, a design specification must include all necessary
drawings, dimensions, environmental factors, ergonomic factors, aesthetic factors, cost, the maintenance
that will be needed, quality, safety, documentation and description.

Since, along with systems requirements, the ability to establish relation between the various products is also
an essential feature of a product development process, the study of traceability is important.

2.1.4 Traceability:

Definitions of Traceability:

Traceability as a general term is the ability to chronologically interrelate the uniquely identifiable entities in a
way that matters.

The IEEE Standard Glossary of Software Engineering Terminology defines traceability as the degree to which
a relationship can be established between two or more products of the development process, especially
products having a predecessor-successor or master-subordinate relationship to one another.

Requirement Traceability:

The ability to describe and follow the life of a requirement, in both forward and backward directions.
The ability to define, capture and follow the traces left by requirements on other elements of the product
development process and the traces left by those elements on requirements.

Need of requirement traceability:

For rapid evolution and upgrade of the systems with growing complexity
To demonstrate compliance with a contract, specification, or regulation.
To improve the quality of the products, reduce maintenance costs, and facilitate reuse.
To ensure continued alignment between stakeholder requirements and system evolution.
To understand the product under development and its artifacts.
Ability to manage changes effectively.
Maintaining consistency between the product and the environment in which the product is operating.

Types of traceability:

Traceability can be classified as

Vertical traceability:
It identifies the origin of items (for example, customer needs) and follows these items as they evolve
through your project artifacts, typically from requirements to design, the source code that implements
the design, and the tests that validate the requirements.
Horizontal traceability:

It identifies relationships among similar items, such as between requirements or within your architecture.
This enables your team to anticipate potential problems, such as two sub teams implementing the same
requirement in two different components.

Traceability is often maintained bidirectionally: You should be able to trace from your requirements to your
tests and from your tests back to your requirements. There are four types of traceability links that
constitute bi-directional traceability (figure 2.8).

Forward to requirements:
Maps requirement sources/stakeholder needs to the requirements, which can help to directly track down
requirements affected by potential changes in sources or needs. This also ensures that requirements will
enforce all stated needs.
Backward from requirements:
Tracing backward from requirements helps to identify the origin of each requirement and verify that the
system meets the needs of the stakeholders.
Forward from requirements:
Maps requirements onward to the design elements, code and test cases that realize them, so the downstream
impact of a requirement change can be assessed.
Backward to requirements:
Traces design elements, code and test cases back to the requirements that justify them, so unnecessary
work products can be detected.

Figure 2.8-Traceability links
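These link types can be sketched with two simple maps: a forward map from requirements to test cases, and a backward map derived from it, so both directions can be queried. The identifiers are invented for illustration:

```python
# Sketch of bidirectional traceability: a forward map (requirement ->
# test cases) and the derived backward map (test case -> requirements).
forward = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-02"],
}

# Invert the forward links to obtain backward traceability.
backward = {}
for req, tests in forward.items():
    for tc in tests:
        backward.setdefault(tc, []).append(req)

print(backward["TC-02"])  # requirements that this test case traces back to
```

Keeping only the forward links and deriving the backward ones avoids the two maps drifting out of sync as requirements change.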

Procedure of Traceability:

Requirements traceability for a project can be implemented in a systematic and sequential manner. All the
following factors should be considered for efficient traceability

Define all required relationships.
Identify the parts of the product to maintain traceability information.
Choose the type of traceability matrix to use.
Define the tagging conventions that will be used to uniquely identify all requirements.
Identify the key individuals who will supply each type of link information.
Educate the team about the concepts and importance of requirements tracing.
Audit the traceability information periodically to make sure it is being kept current.

Elements of Traceability model:

Stakeholder comprise the types of representatives involved in the system development and maintenance
life cycle, including Customer, Project sponsor, Project manager, Systems analyst, Designer, Programmer,
Tester, Writer, Trainer, and so on.
Object represents the conceptual input and output of the system development process, including
requirement statements, systems elements
Source represents the documents and physical media, such as phone calls or meeting scripts, in which
objects originate. Stakeholders participate in the physical management of sources.

Techniques of traceability:

The following techniques are used for traceability of requirements

Identifiers
Attributes
Table
List
Matrix

Traceability Matrix and Analysis:

A traceability matrix is a document, usually in the form of a table, that correlates any two baselined
documents that require a many-to-many relationship to determine the completeness of the relationship. It is
often used with high-level requirements (these often consist of marketing requirements) and detailed
requirements of the product to the matching parts of high-level design, detailed design, test plan, and test
cases.
The Requirements Traceability Matrix (RTM) is a classical tool that summarizes in a table form the
traceability from original identified stakeholder needs to their associated product requirements and then on
to other work product elements. A traceability matrix (figure 2.9) traces the deliverables by establishing a
thread for each requirement, from the project's initiation to the final implementation.

Figure-2.9- Traceability matrix

Traceability matrix is used to map between requirements and test cases. Its primary goal is to ensure that all
of the requirements identified by your stakeholders have been met and validated. A requirements traceability
matrix may be used to check to see if the current project requirements are being met, and to help in the
creation of a request for proposal, product requirements specification, various deliverable documents, and
project plan tasks.

Construction of traceability matrix:
Common usage is to take the identifier for each of the items of one document and place them in the left
column.
The identifiers for the other document are placed across the top row.
When an item in the left column is related to an item across the top, a mark is placed in the intersecting
cell.
The number of relationships is added up for each row and each column.
This value indicates the mapping of the two items.
Zero values indicate that no relationship exists. It must be determined if a relationship must be made.
Large values imply that the relationship is too complex and should be simplified.
To ease the creation of traceability matrices, it is advisable to add the relationships to the source
documents for both backward traceability and forward traceability. That way, when an item is changed in
one baselined document, it's easy to see what needs to be changed in the other.
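The construction steps above can be sketched directly. The requirement and test-case identifiers are invented; the row and column totals reproduce the zero-value check described in the list:

```python
# Sketch of traceability-matrix construction: one document's identifiers
# down the left column, the other's across the top, a mark where related,
# then row/column totals to flag missing or overly complex relationships.
rows = ["HLR-1", "HLR-2", "HLR-3"]   # high-level requirements
cols = ["TC-A", "TC-B"]              # test cases
links = {("HLR-1", "TC-A"), ("HLR-1", "TC-B"), ("HLR-2", "TC-B")}

matrix = {(r, c): ((r, c) in links) for r in rows for c in cols}

row_totals = {r: sum(matrix[(r, c)] for c in cols) for r in rows}
col_totals = {c: sum(matrix[(r, c)] for r in rows) for c in cols}

# A zero row total flags a requirement with no validating test case.
untested = [r for r, n in row_totals.items() if n == 0]
print(row_totals, untested)
```

Here HLR-3 has a zero total, so either a test case is missing or the requirement does not belong in this matrix; very large totals would instead suggest a relationship worth simplifying.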

The challenges faced by traceability are as follows

Various stakeholders require different information
Tracking and maintaining of large amount of information.
Usage of specialized tools
Heterogeneous artifacts
Time-consuming and expensive to capture relationships manually

The system requirements should not only be identified but also documented and analysed, which is done in
requirement management.

2.1.5 Requirement management:

Requirements management is the process of documenting, analyzing, tracing, prioritizing and agreeing on
requirements and then controlling change and communicating to relevant stakeholders. It is a continuous
process throughout a project. A requirement is a capability to which a project outcome (product or service)
should conform.

It is a systematic approach to eliciting, organizing, and documenting the requirements of the system, and a
process that establishes and maintains agreement between the customer and the project team on the
changing requirements of the system.

Requirements management includes all activities intended to maintain the integrity and accuracy of expected
requirements.

Manage changes to agreed requirements
Manage changes to baseline (increments)
Keep project plans synchronized with requirements
Control versions of individual requirements and versions of requirements documents
Manage relationships between requirements
Managing the dependencies between the requirements document and other documents produced in the
systems engineering process
Track requirements status

A simplified description of the requirements management process contains the following major steps:

1) Establishing a requirements management plan
2) Requirements elicitation
3) Developing the Vision document
4) Creating use cases
5) Supplementary specification
6) Creating test cases from use cases
7) Creating test cases from the supplementary specification
8) System design

Table 2.3 shows the requirement types needed and documents required for each step of the requirement
management process.

Step                                          Requirement Types                       Documents

Requirements elicitation                      Stakeholder needs                       Stakeholder requests
Developing the Vision document                Features                                Vision
Creating use cases                            Use cases, scenarios                    Use case specifications
Supplementary specification                   Supplementary requirements              Supplementary specification
Creating test cases from use cases            Test cases                              Test cases
Creating test cases from the
supplementary specification                   Test cases                              Test cases
System design                                 Class diagrams, interaction diagrams    UML diagrams

Table 2.3 Steps of Requirement management

Cascade system of Requirements:

The cascade is a flow-down of requirements from the Customer needs through to the process level (figure
2.10). The cascade allows us to link requirements from the Voice of the customer, to Performance
Requirements, to System, Sub-system, Sub-assembly, Components and process parameters.

VOC: Voice of Customer

CLR: Customer Level Requirement

PLR: Performance Level Requirement

SLR: System Level Requirement

SSLR: Sub-System Level Requirement

SALR: Sub-Assembly Level Requirement

COMLR: Component Level Requirement

Figure- 2.10 Cascade of Requirements

When the identification and documentation of the requirements of a system is over, the next task is to model
and control the system.

2.2.1 Introduction to System Modelling and Control

System modelling and computer simulation are fields of engineering that an engineer or a scientist uses
to study a system with the help of mathematical models and computers. In simple language, system
modelling and simulation is like an experiment conducted with the help of computers. It has an advantage in
terms of money and labour, i.e. it reduces the labour and money required when we actually perform
experiments with the real system. Hence, systems modelling (or system modelling) is the field that studies the
use of models for conceptualization and construction of systems in business and information technology
development.

Systems

When defining system modelling, the first question that strikes the mind is what we actually mean by
the term system. A system can be defined as an assembly of components, man-made or natural, combined
in such a manner as to form a complex structure or process. Some engineering examples of systems are a
turbine, generator, integrated circuit, automobile, machine tool, robot, aircraft, refrigerator, etc.

Another definition of a system is: any object which has some action to perform and is dependent
on a number of objects, called entities, is a system. Figure 2.11 is a block diagram that shows a college as a
system:

Figure- 2.11 College as a system

Sometimes the system may be affected by the environment. Such a system is known as an exogenous system.
For example, the economic model of a country, which is affected by world economic conditions, is an
exogenous system. If the system is not affected by the environment, it is known as an endogenous system.
For example, a static aeroplane is an endogenous system.

Dynamic/dynamical systems

Dynamics is the study of a system, process, phenomenon or progression which is subject to continuous change.
Usually, the purpose of studying such a system or process is to:

- Understand the nature of the on-going process
- Understand the nature of change
- Calculate the position of a subject undergoing a process

In order to conduct such a study, certain mathematical models are required to be created. Such mathematical
models should be capable of calculating the position of the object at a particular time. Such a mathematical
model, made up of various numbers and variables, which is capable of estimating the future state of a process
depending upon the current state, is called a dynamical system. A very basic example is the below mentioned
fundamental equation of linear motion:

S = ut + (1/2)at²

Where, S is the displacement (position)

u is the initial value of velocity

t is the time

a is the acceleration

By dynamical systems, future positions of an object at different time can be found out; and by these
numerous positions, trajectory of the object can also be found out. By the above equation (mathematical
model/dynamical system), displacement/position (future) can be estimated with respect to initial velocity
(past) and acceleration (present) at any particular time.
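The linear-motion equation above can be read as a tiny dynamical system: given the current state (initial velocity u and acceleration a), it estimates future positions, and evaluating it at several times traces out a trajectory. A minimal sketch, with illustrative values of u and a:

```python
# Sketch of the linear-motion model S = u*t + 0.5*a*t**2 treated as a
# simple dynamical system: from the current state (u, a) it estimates
# the position at a future time t.

def displacement(u, a, t):
    """Position after time t, from initial velocity u and acceleration a."""
    return u * t + 0.5 * a * t ** 2

# Trajectory: evaluate the model at several future times.
trajectory = [displacement(u=2.0, a=1.0, t=t) for t in range(5)]
print(trajectory)  # -> [0.0, 2.5, 6.0, 10.5, 16.0]
```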

It may happen that the nature of the trajectory changes either periodically or randomly. In this way the
nature of the trajectory or process can also be known or understood.

A point to be noted here is that these dynamical systems have a limitation of making estimations of
future positions only for a short time period. For the next time period, another dynamical system will be
required. In order to make the mathematical model capable of calculating the future positions for all possible
times, or say for a longer period of time, integration of dynamical systems over all time periods is required.

System Variables

To every system there are two sets of variables (figure 2.12):

1) Input variables: The input variables are the variables which originate from outside the system and as
such, are not affected by the happenings in the system.
2) Output variables: The output variables are the internal variables that are used to monitor or regulate the
system. They result from the interaction of the system with its environment and are influenced by the
input variables.

Figure- 2.12 System variables

Mathematical Model

Models can be considered as depictions of the functioning of the real world, and in mathematical modelling
the same functioning of the real world is transformed into the language of mathematics. Thus,
mathematical models are simplified representations of these entities in the form of mathematical equations
or computer codes, using the combined laws of physics and the results of experiments conducted.

The laws of physics are used to determine the structure of the model, i.e. linear or non-linear, and the order of
the model. The experiments conducted are used to estimate and validate the parameters of these models. For
an understanding of mathematical modelling, we can take the example of the representation of a dynamic
system with the help of differential equations.

The characteristics of mathematical models are the assumptions about the variables, i.e. the entities that
change, the parameters that do not change, and the relationship between the two, i.e. the functional
relationships.

Advantages

Mathematical modelling has many advantages:

1) Since mathematics is a very precise language, it helps to formulate ideas and identify underlying
assumptions.
2) Mathematics, with well-defined rules for manipulation, is a concise language.
3) Mathematical results proven by scientists and mathematicians are readily available for use in defining
models.
4) The advent of computers has made numerical calculation much easier, and as such the whole
mathematical formulation and modelling process is less tedious and time-consuming.

Since the majority of real-world systems are far too complicated to model entirely, we are forced to
compromise to a large extent in mathematical modelling. As a part of this compromise, the first task is
to identify the vital parts of the system, which will be included in the model, excluding the rest. In
mathematics, the results proven always depend critically on the form of the equations used for
solving the problem; any small change in the structure of the equations may require enormous
changes in the methods. Thus the second compromise concerns the amount of mathematical
manipulation that will be required to find the solutions.

Objectives of mathematical modelling

Mathematical modelling can have a number of objectives:

1) Scientific understanding: The development of scientific understanding can be achieved by
expressing the current knowledge of the system quantitatively. A model comprises a hypothesis of the
system under study, and thus helps in comparison with the available data. Models are also useful in
proposing theories, answering specific questions, and supporting decision making by managers and planners.
2) Clarification: The mathematical formulation of the models helps to clarify the assumptions, variables
and parameters.
3) Simulation: As it is not always possible to obtain experimental results for complex real-world systems,
formulation of mathematical models of those systems makes computer simulation of them convenient.
For example, experiments on the spread of an infectious disease in the human population are often
impossible, or expensive.
4) Prediction/Forecasting: The mathematical model of a system, with properly defined variables and
parameters, can be used to predict any possible threat or error that may hinder the proper running of
the system.
5) Prognostics/Diagnostics: The modelling of systems also helps to identify any faults or deviations
from the specified path of the system, and thereby to take the necessary action before they are experienced by
the real-world system. With the help of the model, an analyst will be able to predict how the faults or
deviations can be rectified or recovered. This is just like a doctor who diagnoses a patient's broken leg
after going through the patient's x-ray and predicts the patient's recovery.
6) Design/Performance Evaluation: Modelling also helps in designing of the systems and in evaluations of
their performances.

Classifications of mathematical models

Mathematical models are classified into the following categories, which helps to immediately capture the
essentials of their structure.

1) Deterministic vs. Stochastic models

The first classification of these models is based on the type of outcome they predict. Deterministic
models ignore random variation, i.e. no parameters in the model are characterized by probability
distributions, and so they always predict the same outcome from a given starting point.

In contrast to the deterministic models, there are stochastic models, which are more statistical in nature.
Stochastic models produce many different results depending on the actual values that the random
variables take, and may predict the distribution of the possible outcomes.

2) Static vs. Dynamic Models

Static models are at an equilibrium or steady state; whereas dynamic models are the models which
change with respect to time.

3) Mechanistic vs. Empirical (Statistical) Models

Mechanistic models make use of a large amount of theoretical information, describing what is
happening at one level in the hierarchy by considering processes at lower levels. These
take into account the mechanisms through which changes occur.

In empirical (statistical) models, the mechanism by which changes occur is not taken into account. Instead,
the model tries to account for the changes associated with different conditions. Such models provide a
quantitative summary of the observed relationships among a set of measured variables.

Deterministic/stochastic and mechanistic/empirical are the extremes of a range of model types; in
between these extremes lies a whole spectrum of model types.

4) Qualitative vs. Quantitative Models

Quantitative models lead to detailed, numerical predictions about the responses, whereas qualitative
models lead to general descriptions of the responses.

Transfer Function

A transfer function is a mathematical relationship between the input and output parameters of a linear time-
invariant system (figure 2.13). The transfer function is independent of the particular input/output signals
applied, and is usually obtained using the Laplace transformation, Fourier transformation, etc.

Figure- 2.13 Transfer function

Examples of systems described by transfer functions are an accelerometer, a digital integrator, etc.
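One way to make the idea concrete is the digital integrator just mentioned: its transfer function corresponds to a simple difference equation that can be run directly. A minimal sketch, assuming a backward-rectangular (Euler) integrator and an illustrative sample period T:

```python
# A minimal sketch relating a transfer function to a difference equation:
# a digital integrator H(z) = T/(1 - z^-1), i.e. y[n] = y[n-1] + T*x[n].
# The sample period T is an assumed illustrative value.

def integrate(samples, T=0.1):
    y, out = 0.0, []
    for x in samples:
        y += T * x               # accumulate the input, scaled by T
        out.append(y)
    return out

# A constant input of 1.0 integrates to a ramp, as expected.
print([round(v, 6) for v in integrate([1.0] * 5)])  # -> [0.1, 0.2, 0.3, 0.4, 0.5]
```

The same input/output relationship holds regardless of the particular input sequence, which is what makes the transfer function a property of the system rather than of any one signal.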



Mixed Systems

Systems are usually made up of several subsystems. If a system is made up of different types of subsystems,
i.e. subsystems from different fields of engineering, it is called a mixed system, e.g. MEMS (micro-electro-
mechanical systems, made up of subsystems based on mechanical and electrical engineering) and
mechatronic systems (made up of subsystems based on mechanical and electronics engineering).

The model of a system has to be optimised so that the resulting designs can be used in real-world
applications. Hence, optimisation occupies an important place in the product development process.

2.2.2 System Optimization

The previous section gave an introduction to system modelling; the present section highlights the
field of system optimisation. Optimisation is the art of obtaining the best possible result under given
circumstances. In every engineering field, whether design, manufacturing, construction, maintenance or any
other, engineers have to take a number of decisions. The ultimate objective of all these tasks is to
derive the best possible outcome, be it minimising the effort required or maximising the benefit
desired. Since the main objective is either to minimise effort or to maximise benefit, it can be
represented mathematically by a function, f(x), known as the objective function. Hence,
the basic objective of any engineering application is to optimise the objective function f(x).

Figure- 2.14 Example of optimisation, curve: objective function, f(x) v/s variable, x

Figure 2.14 shows that x* is the minimum point of the objective function f(x), and the same point is also
the maximum point of the negative of the function, i.e. -f(x). This is so because any point which is a
minimum of a given function is also a maximum of the negative of the same function.
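This equivalence is easy to check numerically. A minimal sketch, using a sample objective f(x) = (x - 2)² + 1 chosen purely for illustration:

```python
# A quick numeric illustration: the minimiser of f(x) is also the
# maximiser of -f(x). The objective f(x) = (x - 2)**2 + 1 is a sample
# function chosen for illustration.

f = lambda x: (x - 2) ** 2 + 1
xs = [i / 100 for i in range(0, 401)]        # grid over [0, 4]

x_min = min(xs, key=f)                       # point minimising f
x_max = max(xs, key=lambda x: -f(x))         # point maximising -f
print(x_min, x_max)  # -> 2.0 2.0
```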

Problem formulation

In industrial design, a simple optimal design is often made by comparing a handful of designs created
using prior knowledge of the problem. First the feasibility of each design solution is investigated, then
the objective (e.g. cost, profit, etc.) of each design is computed, and the best solution is
selected. This method is preferred by those who lack knowledge of optimisation techniques, but
achieving a quality or competitive product is not guaranteed with it.
Optimisation algorithms, on the other hand, are time-consuming and computationally expensive, and as such
should be applied in those cases where the main requirements are quality and competitive
products.

The formulation of an optimisation algorithm consists of a number of steps. The basic objective of the
formulation is to create a mathematical model to optimise the design problem, which can be later used for
solving the problem. The figure 2.15 shows the steps involved in the formulation of an optimisation problem.

Figure- 2.15 Formulation of optimisation procedure flowchart

Historical development of optimisation

The field of optimisation has been contributed to by a number of well-known personalities such as
Newton, Lagrange, Cauchy, Euler and so on. Their contributions and the development of optimisation in
its various fields are listed in table 2.4.

Pre-digitalisation (before digital computers)

S. No.  Contribution

1       Newton and Leibnitz's contributions to calculus made possible the development of differential
        calculus methods of optimisation
2       Bernoulli, Euler, Lagrange and Weierstrass laid the foundations of the calculus of variations,
        dealing with the minimisation of functions
3       Lagrange invented the method of unknown multipliers used in optimisation for constrained problems
4       Cauchy introduced the method of steepest descent for solving unconstrained minimisation problems

Post digital computers

S. No.  Contribution

1       Development of methods of constrained optimisation, paved by the simplex method of Dantzig in 1947
        for linear programming problems and the optimality principle of Bellman in 1957 for dynamic
        programming problems
2       Development of non-linear programming was made possible by the contributions of Kuhn and Tucker in
        1951 on the necessary and sufficient conditions for optimal solutions
3       Zoutendijk and Rosen contributed significantly to the field of non-linear programming during the
        early 1960s
4       Carroll, Fiacco and McCormick enabled the solution of many difficult non-linear programming
        problems using well-known techniques of unconstrained optimisation
5       Duffin, Zener and Peterson introduced geometric programming in the 1960s
6       Pioneering work in integer programming was done by Gomory (an important achievement, as many
        real-world applications fall under this category)
7       Stochastic programming techniques were developed by Dantzig, and by Charnes and Cooper (they
        solved the problems by assuming the design parameters to be independent and normally distributed)
8       Goal programming was originally proposed by Charnes and Cooper in 1961 for linear programming
        problems
9       Von Neumann laid the foundations of game theory, which was later used to solve various
        mathematical, economic and military problems
10      Prof. John Holland of the University of Michigan proposed the concept of genetic algorithms in the
        mid-1960s and published his seminal work in 1975
11      Simulated annealing, genetic algorithms, neural networks, etc. are a few recent additions to the
        field of optimisation

Table- 2.4 History of optimization

Classifications of optimization algorithms

The classification of optimization techniques can be done in various ways, such as constrained or
unconstrained problems, single variable or multivariable problems, linear or non-linear problems, traditional
or non-traditional optimization techniques etc. But for simplicity in this section these problems are classified
as given below:

Single variable optimization algorithms

These are further categorized into two categories:

- Direct methods, which do not use any derivative of the objective function; only the value of the
  objective function is used to guide the search process.
- Gradient based methods, which use the derivatives of the objective function (first/second order) to
  guide the search process.

Multi variable optimization algorithms

These are also categorized into direct methods and gradient based methods.

Constraint optimization algorithms

These may be single variable or multi variable optimization algorithms.

Specialized optimization algorithms

- Integer programming
- Geometric programming

Non-traditional optimization algorithms

These algorithms are basically inspired by the biological world or natural optimization processes. They
include genetic algorithms, neural networks, fuzzy logic, particle swarm optimization, ant colony
optimization, etc. The whole optimization area can be depicted by an optimization tree (figure 2.16).

Figure- 2.16 Optimisation tree

Engineering applications of optimization

Optimization techniques have a wide range of applications in engineering. Some typical examples showing
their scope are given below:

1. Determining the product mix in a factory that makes the best use of machines, labour and raw materials
while maximizing the company's profits.
2. Selecting raw materials so as to produce a finished product at minimum cost.
3. Determining the distribution of products from warehouses to delivery locations that minimizes
shipping costs.
4. Optimum production planning, control and scheduling of a company.
5. Design of aircraft with minimum weight.
6. Design of civil engineering structures such as frames, bridges, dams, etc. at minimum cost.
7. Optimum machining conditions in metal cutting processes to minimize production cost.
8. Design of pumps, turbines and heat transfer equipment for maximum efficiency.
9. Optimum design of electrical machinery such as motors, generators etc. and of electrical networks.
10. Optimum control system designs.

Who does Optimization?

Optimisation, as already mentioned above, has a broad field of applications. Its users include:

- Mathematical programming, not computer programming (calculus-based methods, non-linear programming,
  geometric programming, quadratic programming, etc.)
- Operations research (stochastic techniques, PERT, CPM, etc.)
- Applied optimisation techniques (all branches of engineering)
- Logistics and planning (resource planning, scheduling, inventory control, supply chain management, etc.)
- Economics (various economic policies)
- Physical sciences (statistics, model optimisation, etc.)

This is shown in the figure 2.17.

Figure- 2.17 Optimisation users

Design of experiments

In any business, two of the most important goals are increasing productivity and improving the
quality of the products, and there have been tremendous developments in methods for achieving both.
Costly and time-consuming trial-and-error searches have evolved into the powerful and cost-effective
methods in use in today's world. This is where the theory of design of experiments comes in.

A designed experiment can be described as a sequence of tests performed by changing the input
variables of a system or process with the purpose of recording the effects on the output variables of the
system or process. Design of experiments can be applied both to physical and to computer-simulated models.

The fundamental principles in design of experiments

The fundamental principles in design of experiments are:

Randomization
Replication
Blocking
Orthogonality
Factorial experimentation

Randomization is the method that protects the experiment from any unknown bias which could distort the
results. For example, consider an experiment where we first record all the results given by the baseline
procedure and then all the results given by a new procedure; any instrumental drift over time will then
appear as a difference between the two procedures. To reduce or remove this error, one can run the
experiments in a random order, e.g. baseline, new, new, baseline, new, baseline, etc., so that averaging
gives a better result.

Replication is another method of increasing the accuracy of the experimental results; this is done by
increasing the sample size of the experiment.

Blocking is a method of increasing accuracy by removing the effect of known nuisance factors.

In orthogonality, the factor effects in the experimental results are uncorrelated and varied independently
of each other, and can therefore be easily interpreted. The main results of this method are obtained by
taking differences of averages, and can also be represented graphically using simple plots of suitably
chosen sets of averages.

In factorial experimentation, the effects due to each factor and to combinations of factors are
estimated. Factorial designs are constructed geometrically, and all the factors are varied
simultaneously and orthogonally.
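Two of the principles above, factorial experimentation and randomization, can be sketched in a few lines of Python. The factor names and levels below are hypothetical illustrative values:

```python
# A minimal sketch of two DOE principles: a full-factorial design (every
# combination of factor levels) with a randomised run order. The factors
# and levels are hypothetical illustrative values.

import itertools
import random

factors = {
    "temperature": [150, 200],   # 2 levels
    "pressure":    [1.0, 2.0],   # 2 levels
    "catalyst":    ["A", "B"],   # 2 levels  -> 2 x 2 x 2 = 8 runs
}

# Factorial experimentation: the full set of level combinations.
runs = list(itertools.product(*factors.values()))

# Randomisation: shuffle the run order to guard against drift and bias.
random.seed(42)                  # fixed seed here only for reproducibility
random.shuffle(runs)

print(len(runs))  # -> 8
```

Running the eight combinations in shuffled order ensures that a slow drift in the apparatus cannot be confounded with any one factor.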

Uses

The main uses of design of experiments are:

- Discovering the interactions among different factors
- Screening of the factors
- Establishing and then maintaining quality control
- Optimization of a process
- Design of robust products

Various optimisation techniques

A number of optimisation techniques are available for different applications, according to the required
objectives and design parameters. These are listed below:

- Bracketing methods
  - Exhaustive search method
  - Bounding phase method
- Region elimination methods
  - Interval halving method
  - Fibonacci search method
  - Dichotomous search
- Interpolation methods
  - Quadratic interpolation
  - Cubic interpolation
- Gradient based methods
  - Newton-Raphson method
  - Bisection method
  - Secant method
- Random search methods
  - Random jumping method
  - Random walk method
- Simplex search method
- Hooke-Jeeves pattern search method
- Powell's conjugate method
- Cauchy's steepest descent method
- Conjugate gradient method

Before describing the different optimisation techniques and algorithms, let us clarify some basic terms.

- Local optimum point: A point or solution x* is said to be a local optimum point if no other point in its
  neighbourhood has a better value than x*. In minimisation problems, x* is a local minimum point if no
  other point in its vicinity has a value of the function f(x) smaller than f(x*).**
- Global optimum point: A point x* is said to be the global optimum point if no other point in the entire
  search space of the function has a better value than that given by x*. For a minimisation problem, x* is
  the global minimum point if no other point in the entire search space has a value of the function f(x)
  smaller than f(x*).**
- Inflection point: A point x* is said to be an inflection point if the value of the function f(x) increases
  locally as x increases and decreases locally as x decreases, or vice versa.**
- Stationary point: A point x* is said to be a stationary point if f'(x*) = 0. Such a point may be a
  minimum point, a maximum point, an inflection point, or none of these.

** Consider a point x*. Suppose the first derivative f'(x*) is zero and the first non-zero higher-order
derivative is of order n; then:

- If n is odd, x* is an inflection point.
- If n is even, x* is a local optimum:
  - If that derivative is positive, x* is a local minimum point.
  - If that derivative is negative, x* is a local maximum point.

Bracketing Methods

In the bracketing methods, first the lower and upper bounds of the optimum point are found, and then
another approach is adopted to find the optimum point within those two bounds. There are two such
approaches, namely the exhaustive search method and the bounding phase method.

Exhaustive search method

In the exhaustive search method (figure 2.18), the optimum of the function is bracketed by
calculating the value of the function at a number of equally spaced points. Three consecutive function
values are compared at a time, and based on the outcome of the comparison, the process is terminated or
one of the three points is replaced by a new one. Usually the search begins from the lower
bound of the variable. The following steps are followed in this method:

Figure- 2.18 Exhaustive search method

1) Consider x1 = a and Δx = (b - a)/n (where n is the number of equally spaced points). Set
x2 = x1 + Δx and x3 = x2 + Δx.
2) Find f(x1), f(x2) and f(x3). If f(x1) >= f(x2) <= f(x3), the minimum point lies between x1 and x3;
terminate the process.

Else set x1 = x2, x2 = x3 and x3 = x2 + Δx.

3) Is x3 <= b? If so, go to step 2;

Else no minimum exists in the region (a, b), or the points a and/or b may be the minimum points.
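The steps above can be sketched directly in code. The objective f(x) = x² + 54/x below is a sample unimodal test function (true minimum at x = 3), not taken from the text:

```python
# A minimal sketch of the exhaustive search steps above. The objective
# f(x) = x**2 + 54/x is a sample unimodal function (minimum at x = 3),
# not taken from the text.

def exhaustive_search(f, a, b, n=1000):
    dx = (b - a) / n                     # step 1: n equally spaced points
    x1, x2, x3 = a, a + dx, a + 2 * dx
    while x3 <= b:                       # step 3: stay inside (a, b)
        if f(x1) >= f(x2) <= f(x3):      # step 2: (x1, x3) brackets the minimum
            return (x1, x3)
        x1, x2, x3 = x2, x3, x3 + dx     # otherwise shift the triplet along
    return None                          # no interior minimum in (a, b)

f = lambda x: x ** 2 + 54 / x
lo, hi = exhaustive_search(f, 0.1, 5.0)
print(round(lo, 2), round(hi, 2))  # -> 3.0 3.01
```

The bracket width is 2Δx, so a larger n gives a tighter bracket at the cost of more function evaluations.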

Bounding phase method

The bounding phase method brackets the minimum point of a unimodal** function. The following steps are
involved in the method:

1) Choose an initial guess x(0) and an increment Δ. As this is the first iteration, set the iteration
counter n = 0.
2) Find f(x(0) - |Δ|), f(x(0)) and f(x(0) + |Δ|). If f(x(0) - |Δ|) >= f(x(0)) >= f(x(0) + |Δ|), then Δ is
positive;
Else, if f(x(0) - |Δ|) <= f(x(0)) <= f(x(0) + |Δ|), then Δ is negative;
Else, go to step 1 and choose a new initial guess and increment.
3) Set the next point as x(n+1) = x(n) + 2^n Δ.

4) If f(x(n+1)) < f(x(n)), set n = n + 1 and go to step 3; else the minimum lies in (x(n-1), x(n+1)) and
the process terminates.

Note: If the increment chosen is large, the accuracy of the bracket is low but the process is fast. If the
increment is small, the bracket is more accurate but the process becomes lengthy.
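A minimal sketch of the four steps above; the objective f(x) = x² + 54/x is a sample unimodal function (minimum at x = 3), and x0 and delta are arbitrary illustrative values:

```python
# A minimal sketch of the bounding phase method described above.

def bounding_phase(f, x0, delta):
    # Step 2: fix the sign of the increment (downhill direction).
    if f(x0 - abs(delta)) >= f(x0) >= f(x0 + abs(delta)):
        delta = abs(delta)
    elif f(x0 - abs(delta)) <= f(x0) <= f(x0 + abs(delta)):
        delta = -abs(delta)
    else:
        return (x0 - abs(delta), x0 + abs(delta))   # x0 is already bracketed
    n = 0
    x_older, x_prev = x0, x0 + delta                # step 3: x(1) = x(0) + 2^0 * delta
    x_next = x_prev + 2 * delta                     # x(2) = x(1) + 2^1 * delta
    while f(x_next) < f(x_prev):                    # step 4: keep expanding
        n += 1
        x_older, x_prev = x_prev, x_next
        x_next = x_prev + 2 ** (n + 1) * delta
    return tuple(sorted((x_older, x_next)))         # minimum in (x(n-1), x(n+1))

f = lambda x: x ** 2 + 54 / x
lo, hi = bounding_phase(f, x0=0.6, delta=0.5)
print(lo, hi)  # a bracket containing the minimum at x = 3
```

Note how the doubling step sizes reflect the expanding bracket of step 3; the resulting bracket is coarse and is normally refined afterwards with a region elimination method.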

Region elimination methods:

In this section there are three algorithms based on the principle of region elimination (figure 2.19). To
explain the principle, let us consider two points x1 and x2 respectively, with x1 < x2, lying in the
interval (a, b) as shown in the figure below:

Figure- 2.19 Region elimination method

Now, for a unimodal function being minimised:

- If f(x1) > f(x2), the minimum does not lie in (a, x1); that region can be eliminated.
- If f(x1) < f(x2), the minimum does not lie in (x2, b); that region can be eliminated.
- If f(x1) = f(x2), the minimum does not lie in (a, x1) or in (x2, b); both regions can be eliminated.
Interval halving method

In the interval halving method (figure 2.20), three equidistant points are taken, dividing the search
space into four regions. The fundamental region elimination rule is then applied to eliminate the regions
in which the optimum point cannot lie. Let us take the following example.

Figure- 2.20 Interval halving method

In figure 2.20, three equidistant points x1, xm and x2 divide the region (a, b) into four equal
regions. Now, if f(x1) < f(xm), no minimum point can exist in the region (xm, b), so we eliminate that
region and the interval is reduced to (a, xm). In this fashion we go on eliminating the unpromising
regions, and the search space decreases by 50% of the previous interval at every iteration. The following
steps are involved in the method:

1) Consider a lower bound a and an upper bound b. Choose a small terminating (accuracy) factor ε. Let
xm = (a + b)/2 and Lo = L = b - a. Find f(xm).
2) Compute x1 = a + L/4 and x2 = b - L/4, and find f(x1) and f(x2).
3) If f(x1) < f(xm), set b = xm and xm = x1, and go to step 5; else go to step 4.
4) If f(x2) < f(xm), set a = xm and xm = x2, and go to step 5; else set a = x1 and b = x2, and go to
step 5.
5) Compute L = b - a. If |L| < ε, terminate; else go to step 2.

After n function evaluations, the accuracy ε satisfies (0.5)^(n/2) × (b - a) = ε.
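The five steps above translate almost line for line into code. Again the objective f(x) = x² + 54/x is a sample unimodal function (minimum at x = 3), not taken from the text:

```python
# A minimal sketch of the interval halving steps described above.

def interval_halving(f, a, b, eps=1e-3):
    xm = (a + b) / 2                     # step 1: midpoint
    while (b - a) > eps:                 # step 5: termination test
        L = b - a
        x1, x2 = a + L / 4, b - L / 4    # step 2: quarter points
        if f(x1) < f(xm):                # step 3: eliminate (xm, b)
            b, xm = xm, x1
        elif f(x2) < f(xm):              # step 4: eliminate (a, xm)
            a, xm = xm, x2
        else:                            # keep only the middle half (x1, x2)
            a, b = x1, x2
        # in every case xm remains the midpoint of the current interval
    return xm

f = lambda x: x ** 2 + 54 / x
print(round(interval_halving(f, 1.0, 5.0), 2))  # -> 3.0
```

Each pass halves the interval, matching the 50% reduction per iteration noted above.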

Fibonacci search method

In Fibonacci search method (figure 2.21), the search interval is reduced according to the Fibonacci series. The
Fibonacci series is 1, 1, 2, 3, 5, 8, 13 and so on. The generalised formula to find the series is:

Fi = Fi 1 + Fi 2;

In the Fibonacci search method one of the two initial guess is replaced by a new value while the other one
remains the same. And as such evaluation of only one function is required in this method. The figure 2.21
represents the Fibonacci search points and following this figure are the steps involve in this method.

FSIPD 108
Figure- 2.21 Fibonacci search points

1) Choose a lower bound, a and an upper bound, b. let the desired number of steps to be performed be i.
Compute L = (b - a) and set n = 2 for the first iteration step.
2) Now compute Ln* = ((Fi-n+1)/Fi+1)) X L. Set x1 = a + Ln* and x2 = b Ln*.
3) Find out f(x1) or f(x2), which ever was not computed earlier. Follow the principle of region elimination
method and then set the new values of a and b.
4) Find if n = i or not. If no, set n = n + 1 and go to step 2.

Else terminate.
The accuracy of the process, , is given by:
2

Fi 1

** A unimodal function is a function that has only one maximum or minimum value in the given
interval.
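A minimal Python sketch of the Fibonacci search follows; the number of iterations i, the test function and the interval are illustrative. Note that after the first iteration only one new function evaluation is needed per step, as stated above.

```python
def fibonacci_search(f, a, b, i=25):
    """Minimise a unimodal f on [a, b] using i Fibonacci iterations."""
    # F[j] holds the Fibonacci number F_j, with F_1 = F_2 = 1
    F = [0, 1, 1]
    while len(F) < i + 2:
        F.append(F[-1] + F[-2])
    L = b - a                       # original interval length
    k = 2
    Lk = (F[i - k + 1] / F[i + 1]) * L
    x1, x2 = a + Lk, b - Lk
    f1, f2 = f(x1), f(x2)
    while k < i:
        k += 1
        Lk = (F[i - k + 1] / F[i + 1]) * L
        if f1 > f2:                 # minimum cannot lie in (a, x1)
            a, x1, f1 = x1, x2, f2
            x2 = b - Lk
            f2 = f(x2)
        else:                       # minimum cannot lie in (x2, b)
            b, x2, f2 = x2, x1, f1
            x1 = a + Lk
            f1 = f(x1)
    return (a + b) / 2.0

# illustrative run on f(x) = (x - 2)^2, unimodal on [0, 5]
x_star = fibonacci_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```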

Dichotomous search

In the dichotomous search (figure 2.22), two initial guesses are taken such that they are placed as close as possible to the centre of the interval of uncertainty. Based on the relative values of the objective function at the two points, almost half of the search space is eliminated at every step.

These two initial guesses are given as below:

x1 = (L0/2) − (δ/2)

x2 = (L0/2) + (δ/2)

Figure- 2.22 Dichotomous search

where δ is a small positive number, chosen so that the two guesses give significantly different function values. The length of the new interval is (L0/2) + (δ/2).
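The dichotomous search can be sketched in Python as below; the separation delta, the stopping width eps, the test function and the interval are illustrative choices.

```python
def dichotomous_search(f, a, b, delta=1e-5, eps=1e-4):
    """Minimise a unimodal f on [a, b] with paired guesses delta apart."""
    while (b - a) > eps:
        mid = (a + b) / 2.0
        x1 = mid - delta / 2.0
        x2 = mid + delta / 2.0
        if f(x1) < f(x2):
            b = x2              # the minimum cannot lie to the right of x2
        else:
            a = x1              # the minimum cannot lie to the left of x1
    return (a + b) / 2.0

# illustrative run on f(x) = (x - 2)^2, unimodal on [0, 5]
x_star = dichotomous_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

Each pass shrinks the interval to (previous length)/2 + δ/2, which matches the new-interval formula above.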

Interpolation method

Interpolation methods were originally developed as one-dimensional searches within multivariable optimisation problems, and are generally more efficient than the Fibonacci-type methods. Suppose we consider a function f(X) which is expressible as an explicit function f(λ) of a step length λ along a specified direction, i.e. f(λ) = f(X + λS) for a specified vector S.

For any such one-dimensional minimisation method, the basic objective is to find the non-negative value of λ, i.e. λ*, for which the function f(λ) = f(X + λS) attains a local minimum. For this we have to set

df/dλ = 0          (i)

and solve equation (i) to find λ* in terms of S and X. Since in many practical problems the function f(λ) cannot be expressed explicitly in terms of λ, the interpolation methods can be used to find the value of λ*.

Two of the interpolation methods are described in this section: the quadratic interpolation method and the cubic interpolation method.

Quadratic interpolation method

In the quadratic interpolation method, only the value of the function is used, and as such this method has the advantage of finding the optimum value λ* of the function f(X) without computing the partial derivatives of the function with respect to the variables xi. In this method the length of the minimising step, λ*, is obtained in three stages.

Stage 1: In this stage the vector S is normalised so that a step length of λ = 1 is satisfactory.

Stage 2: In this stage the function f(λ) is approximated by a quadratic function h(λ) = a + bλ + cλ², and the minimum λ̃* of h(λ) is obtained. If this approximate minimum λ̃* is not satisfactorily close to the true value λ*, we go to stage 3.

Stage 3: In this stage the function f(λ) is approximated by a new quadratic function h′(λ) = a′ + b′λ + c′λ², and a new value of λ̃* is obtained. This procedure is continued until a λ̃* is obtained which is sufficiently close to λ*.
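One refit stage of the quadratic interpolation can be illustrated with the standard three-point formula for the minimiser of the fitted quadratic; the sample points and the function f(λ) = (λ − 2)² + 1 are illustrative choices.

```python
def quadratic_fit_min(l1, l2, l3, f1, f2, f3):
    """Minimiser of the quadratic h through (l1,f1), (l2,f2), (l3,f3)."""
    num = (l2**2 - l3**2) * f1 + (l3**2 - l1**2) * f2 + (l1**2 - l2**2) * f3
    den = (l2 - l3) * f1 + (l3 - l1) * f2 + (l1 - l2) * f3
    return 0.5 * num / den

# one stage for f(lam) = (lam - 2)^2 + 1 sampled at lam = 0, 1 and 4
lam_tilde = quadratic_fit_min(0.0, 1.0, 4.0, 5.0, 2.0, 5.0)
```

Because the sample function is itself quadratic, one stage already returns the exact minimiser λ* = 2; for a general f(λ) the stage is repeated with the improved point, as stage 3 describes.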

Cubic interpolation method

In the cubic interpolation method the minimising step length, λ*, is found in four stages with the help of a cubic polynomial h(λ). Here the derivative of the function is used:

f′(λ) = df/dλ = d f(X + λS)/dλ = Sᵀ ∇f(X + λS)

Stage 1: In the first stage the vector S is normalised so that a step size of λ = 1 is sufficient.

Stage 2: In the second stage the bounds on λ* are established.

Stage 3: In the third stage the value of λ̃* is obtained by approximating f(λ) by a cubic polynomial h(λ). If the λ̃* obtained in the third stage does not satisfy the required convergence criteria, the cubic polynomial is refitted in stage 4.

Gradient based methods

These methods use the derivatives of the function and should preferably be used for problems where derivative information is available or can easily be derived. In real-world problems, however, they are less popular because of the nature of the problems or the difficulty of obtaining the derivatives. These methods are also known in some books as direct root methods; here the Newton-Raphson, secant and bisection methods are discussed.

Newton-Raphson method

In the Newton-Raphson method, a linear approximation of the first-order derivative at a point is obtained using a Taylor series expansion. This approximation is equated to zero to find the next iterative value. The formula and the steps are given below:

xn+1 = xn − f′(xn)/f″(xn)

1) Start with an initial guess xn and a small terminating parameter ε. Set n = 0 to start the computation.
2) Find f′(xn) and f″(xn).
3) Calculate xn+1 using the above equation.
4) If |xn+1 − xn| < ε, terminate the process; else set n = n + 1 and go to step 2.

Let εn be the error of the nth iteration; then εn+1 ∝ εn², which shows that the convergence of the method is quadratic in nature.
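The iteration above can be sketched in Python as follows. The sample function f(x) = x² + 54/x, with f′(x) = 2x − 54/x² and f″(x) = 2 + 108/x³ and stationary point x = 3, is purely illustrative.

```python
def newton_raphson(df, d2f, x0, eps=1e-8, max_iter=100):
    """Find a stationary point of f from its first and second derivatives."""
    x = x0
    for _ in range(max_iter):
        x_new = x - df(x) / d2f(x)       # xn+1 = xn - f'(xn)/f''(xn)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# illustrative run: minimise f(x) = x^2 + 54/x, whose minimum is at x = 3
x_star = newton_raphson(lambda x: 2 * x - 54 / x**2,
                        lambda x: 2 + 108 / x**3, 1.0)
```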

Secant method

In the secant method both the magnitude and the sign of the derivative are used to find the new point, and the derivative of the function is assumed to vary linearly between the two boundary points. If the derivatives at the two boundary points have opposite signs, a point with zero derivative lies between them, and it can be found once the derivatives at the boundary points are known. Let the two boundary points be a and b. Then, if m is the point between a and b, its value is given by either of the two formulas (considering f′(a) < 0 and f′(b) > 0):

m = b − f′(b)(b − a)/(f′(b) − f′(a))          (i)

or,

xn+1 = xn − f′(xn)(xn − xn−1)/(f′(xn) − f′(xn−1)),  n ≥ 1          (ii)

1) Take two initial points xn and xn−1. Set n = 1. Choose ε to be a small terminating factor.
2) Find f′(xn) and f′(xn−1). Test whether f′(xn) × f′(xn−1) < 0. If not, go to step 1 and reconsider the initial guesses; else go to step 3.
3) Find xn+1 using equation (ii) above. Find f′(xn+1).
4) If |xn+1 − xn| < ε, terminate; else go to step 5.
5) Set n = n + 1 and go to step 3.

The secant method can fail if at any iteration f′(xn) = f′(xn−1), and hence it may not converge.
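A minimal Python sketch of these steps is given below. It searches for a root of f′ (a stationary point of f); the sample function f(x) = x² + 54/x with f′(x) = 2x − 54/x² is illustrative.

```python
def secant_optimise(df, x0, x1, eps=1e-8, max_iter=100):
    """Locate a root of f' (a stationary point of f) by the secant formula."""
    f0, f1 = df(x0), df(x1)
    assert f0 * f1 < 0, "derivatives at the initial points must differ in sign"
    for _ in range(max_iter):
        if f1 == f0:            # horizontal secant: the method fails here
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < eps:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, df(x2)
    return x1

# illustrative run: f(x) = x^2 + 54/x, f'(x) = 2x - 54/x^2, stationary at x = 3
x_star = secant_optimise(lambda x: 2 * x - 54 / x**2, 2.0, 5.0)
```

The explicit check before the division guards against the failure case noted above, where f′(xn) = f′(xn−1).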

Bisection method
The bisection method (figure 2.23) consists of repeated application of the intermediate value property. Hence, this method is similar to the region elimination method, where after each step one of the regions is eliminated. Consider the figure given below:

Figure- 2.23 Bisection method


In the figure, f(x1) > 0 and hence the root lies between a and x1. The second approximation to the root is x2 = ½(a + x1). Now, if f(x2) < 0, then the root lies between x1 and x2. Therefore, the third approximation to the root is x3 = ½(x1 + x2), and so on.
Since the new interval containing the root is exactly half of the previous interval, the width of the interval is reduced by a factor of ½ at each step. Therefore, at the end of the nth step the width of the interval is (b − a)/2ⁿ, and hence the accuracy is given by:
ε = (b − a)/2ⁿ
Since the error in each step decreases by a factor of ½, the convergence of the method is linear.
The steps involved in the bisection method are:
1) Take the initial guesses x1 and x2 such that f(x1) < 0 and f(x2) > 0. Set n = 1 to be the iteration number. Choose a small number ε to be the desired accuracy.
2) Find f(x1) and f(x2).
3) Find xm by the formula:

xm = (x1 + x2)/2

4) Find f(xm). If |f(xm)| < ε, terminate; else go to step 5.
5) If f(xm) < 0, replace x1 by xm and go to step 3;
If f(xm) > 0, replace x2 by xm and go to step 3.
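In the optimisation setting the bracketed function is the derivative f′, whose root is the stationary point. A Python sketch of the steps, using the illustrative function f(x) = x² + 54/x with f′(x) = 2x − 54/x², follows.

```python
def bisection(df, x1, x2, eps=1e-8):
    """Bisect on the sign of f' to locate a stationary point of f."""
    f1, f2 = df(x1), df(x2)
    assert f1 < 0 < f2, "need f'(x1) < 0 and f'(x2) > 0"
    while True:
        xm = (x1 + x2) / 2.0
        fm = df(xm)
        if abs(fm) < eps or (x2 - x1) < eps:
            return xm
        if fm < 0:
            x1 = xm             # the stationary point lies in (xm, x2)
        else:
            x2 = xm             # the stationary point lies in (x1, xm)

# illustrative run: f(x) = x^2 + 54/x, f'(x) = 2x - 54/x^2, stationary at x = 3
x_star = bisection(lambda x: 2 * x - 54 / x**2, 2.0, 5.0)
```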

Random search method

Random search methods are a class of direct search methods in which random numbers are used to find the minimum point. These methods find their applications in computer-based problems. Two of the random search methods are described in this section: the random jumping method and the random walk method.

Random jumping method

In the random jumping technique, a lower bound li and an upper bound ui are first considered for each of the design variables xi, i = 1, 2, ..., n, so as to generate random values of xi:

li ≤ xi ≤ ui;  i = 1, 2, ..., n          (i)

In the random jumping method we generate sets of n random numbers (R1, R2, ..., Rn), uniformly distributed between 0 and 1. Each set of these numbers is used to find a point, X, inside the hypercube defined by equation (i) above, as:

xi = li + Ri(ui − li),  i = 1, 2, ..., n

After finding the point X, we evaluate the function at X. By generating a large number of random points X and evaluating the function at each of them, we take the smallest value found as the minimum of the function f(X). This is the minimum point.
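A minimal Python sketch of random jumping follows; the sample function, the box bounds, the number of points and the seed are illustrative choices.

```python
import random

def random_jump(f, lower, upper, n_points=5000, seed=1):
    """Evaluate f at uniformly random points in the box and keep the best."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_points):
        # x_i = l_i + R_i (u_i - l_i), with R_i uniform in [0, 1)
        x = [l + rng.random() * (u - l) for l, u in zip(lower, upper)]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# illustrative run on f(x, y) = (x - 1)^2 + (y + 2)^2 over [-5, 5]^2
x_best, f_best = random_jump(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                             [-5.0, -5.0], [5.0, 5.0])
```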

Random walk method

In the random walk method a sequence of improved approximations to the minimum point is generated, each approximation being derived from the preceding one. If Xi is the approximation to the minimum point at the (i−1)th iteration, the new approximation at the ith iteration is found from Xi+1 = Xi + λui. Here, λ is a scalar step length and ui is a unit random vector generated at the ith step.
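The update Xi+1 = Xi + λui can be sketched in Python as follows. Halving λ when no random step improves f is one common way to shrink the step length; that schedule, the trial count and the sample function are illustrative assumptions, not prescribed by the text.

```python
import math
import random

def random_walk(f, x0, lam=1.0, lam_min=1e-4, trials=100, seed=2):
    """Take unit random steps of length lam; halve lam when no step helps."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    while lam > lam_min:
        improved = False
        for _ in range(trials):
            u = [rng.gauss(0.0, 1.0) for _ in x]        # random direction
            norm = math.sqrt(sum(c * c for c in u))
            cand = [xi + lam * ci / norm for xi, ci in zip(x, u)]
            fc = f(cand)
            if fc < fx:
                x, fx = cand, fc
                improved = True
        if not improved:
            lam /= 2.0
    return x, fx

# illustrative run on f(x, y) = (x - 1)^2 + (y + 2)^2
x_best, f_best = random_walk(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                             [0.0, 0.0])
```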

Simplex search method

In the simplex method, for a function of N variables, (N + 1) points are initially used. The initial simplex should be chosen such that the points do not form a zero-volume N-dimensional figure. Thus, for a function of two variables, the initial three points should not lie on a straight line; similarly, for a function of three variables, the initial four simplex points should not lie on the same plane.

In the simplex method (figure 2.24), first the worst point in the simplex is found, and then the centroid of the remaining simplex points is computed. Next, this worst point is reflected about the centroid to obtain a new point xnew. If the functional value at this reflected point is better than the best point, an expansion from the centroid towards the reflected point is performed, controlled by a factor γ. If, instead, the functional value of the new point is worse than the worst point of the simplex, a contraction from the centroid towards the reflected point is performed, controlled by the factor β (a negative step is used). If the functional value at the new point is better than the worst point but worse than the second-to-worst

point, a contraction with a positive step is performed. The new point obtained, in whichever case, replaces the worst point in the simplex and the process continues.

Figure- 2.24 An illustration of the simplex method

The steps involved are:

1) Take γ > 1, β ∈ (0, 1) and a terminating factor ε. Create the initial simplex.
2) Find the worst point, xp, the second-to-worst point, xq, and the best point, xr, by evaluating the given function at the simplex points.
3) Compute the centroid of all points other than the worst:

xc = (1/N) Σi≠p xi

4) Calculate the reflected point, xm = 2xc − xp.

Now set xnew = xm.
If f(xm) < f(xr), set xnew = (1 + γ)xc − γxp (expansion);
Else, if f(xm) ≥ f(xp), set xnew = (1 − β)xc + βxp (contraction);
Else, if f(xq) < f(xm) < f(xp), set xnew = (1 + β)xc − βxp (contraction);
5) Calculate f(xnew) and replace xp by xnew.
6) If {Σi (f(xi) − f(xc))² / (N + 1)}^(1/2) ≤ ε, terminate the process;
Else go to step 2.
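A Python sketch of this reflect/expand/contract logic is given below in the style of the Nelder-Mead algorithm. The shrink-towards-the-best fallback, the factor values and the test function are assumptions added for robustness, not part of the steps above.

```python
def nelder_mead(f, simplex, gamma=2.0, beta=0.5, eps=1e-8, max_iter=1000):
    """Reflect the worst vertex about the centroid; expand with gamma > 1,
    contract with 0 < beta < 1, and shrink toward the best as a fallback."""
    pts = [list(p) for p in simplex]
    n = len(pts[0])
    for _ in range(max_iter):
        pts.sort(key=f)                     # pts[0] best, pts[-1] worst
        fbest, fworst = f(pts[0]), f(pts[-1])
        if abs(fworst - fbest) < eps:
            break
        xc = [sum(p[j] for p in pts[:-1]) / (len(pts) - 1) for j in range(n)]
        xr = [2 * xc[j] - pts[-1][j] for j in range(n)]       # reflection
        fr = f(xr)
        if fr < fbest:                                        # try expansion
            xe = [(1 + gamma) * xc[j] - gamma * pts[-1][j] for j in range(n)]
            pts[-1] = xe if f(xe) < fr else xr
        elif fr < f(pts[-2]):                                 # accept reflection
            pts[-1] = xr
        else:                                                 # contraction
            xk = [(1 - beta) * xc[j] + beta * pts[-1][j] for j in range(n)]
            if f(xk) < fworst:
                pts[-1] = xk
            else:                                             # shrink fallback
                pts = [pts[0]] + [[(p[j] + pts[0][j]) / 2 for j in range(n)]
                                  for p in pts[1:]]
    pts.sort(key=f)
    return pts[0]

# illustrative run on f(x, y) = (x - 1)^2 + (y - 2)^2
best = nelder_mead(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2,
                   [[0.0, 0.0], [2.5, 0.0], [0.0, 2.5]])
```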

Hooke-Jeeves pattern search method

Figure- 2.25 Hooke-Jeeves pattern search method

In pattern search, a set of search directions is first created; they should be such that, starting from any point in the search space, any other point of the space can be reached along these directions. An N-dimensional problem requires N linearly independent search directions.

In the Hooke-Jeeves pattern search method (figure 2.25), both exploratory moves and heuristic pattern moves are used. The steps involved are given below:

The method alternates two kinds of moves: an exploratory search and a pattern move.

1) Choose a starting point x(0), increment factors Δi (i = 1, 2, ..., N), a step reduction factor α > 1 and a terminating factor ε. Set k = 0.
2) Perform an exploratory move:
a) Let the base point be x(k); the variable xi is perturbed by Δi. Set i = 1 and x = x(k).
b) Compute f = f(x), f+ = f(x with xi + Δi) and f− = f(x with xi − Δi).
c) Evaluate fmin = min(f, f+, f−) and set x corresponding to fmin.
d) Is i = N (where N = number of variables)? If not, set i = i + 1 and go to (b); else x is the result.
e) If x ≠ x(k), the move is a success: set x(k+1) = x and go to step 4. Else it is a failure: go to step 3.
3) Is ||Δ|| < ε? If yes, terminate; else set Δi = Δi/α for i = 1, 2, ..., N and go to step 2.
4) Set k = k + 1 and perform the pattern move:

xp(k+1) = x(k) + (x(k) − x(k−1))

5) Perform another exploratory move using xp(k+1) as the base point to obtain the result x(k+1).
6) Is f(x(k+1)) < f(x(k))? If yes, go to step 4; else go to step 3.
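The exploratory/pattern structure above can be sketched in Python as follows; the initial increment, reduction factor and test function are illustrative, and a single increment is used for all variables for brevity.

```python
def hooke_jeeves(f, x0, delta=0.5, alpha=2.0, eps=1e-6):
    """Exploratory moves along each axis, then pattern moves; the
    increment delta is divided by alpha (> 1) after every failure."""
    def explore(base, d):
        x = list(base)
        for i in range(len(x)):
            fx = f(x)
            for step in (d, -d):
                trial = list(x)
                trial[i] += step
                if f(trial) < fx:
                    x = trial
                    break
        return x

    x = list(x0)
    while delta > eps:
        nx = explore(x, delta)
        if f(nx) < f(x):
            while True:                  # repeat pattern moves while they help
                xp = [2 * a - b for a, b in zip(nx, x)]
                x, cand = nx, explore(xp, delta)
                if f(cand) < f(x):
                    nx = cand
                else:
                    break
            x = nx
        else:
            delta /= alpha               # exploratory move failed: reduce step
    return x

# illustrative run on f(x, y) = (x - 1)^2 + (y + 2)^2
x_star = hooke_jeeves(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [0.0, 0.0])
```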

Powell's conjugate method

Before going on to Powell's conjugate method, let's understand what conjugate directions are.

Conjugate directions: Let A = [A] be an n × n symmetric matrix. A set of n vectors (or directions) {Si} is said to be conjugate (more accurately, A-conjugate) if

SiᵀASj = 0 for all i ≠ j, i = 1, 2, ..., n, j = 1, 2, ..., n.

It can be seen that orthogonal directions are a special case of conjugate directions.

Quadratically convergent method: If a minimisation method, using exact arithmetic, can find the minimum point in n steps while minimising a quadratic function in n variables, the method is called a quadratically convergent method.

Consider a quadratic function (figure 2.26) q(x) = A + Bᵀx + (1/2)xᵀCx of two variables (where A is a scalar quantity, B is a vector and C is a 2 × 2 matrix), two arbitrary but distinct points x(1) and x(2), and a direction d. If y(1) is the solution to the problem minimise q(x(1) + λd) and y(2) is the solution to the problem minimise q(x(2) + λd), then the direction (y(2) − y(1)) is conjugate to d, i.e. (y(2) − y(1))ᵀ C d is zero.

Figure- 2.26 Parallel sub-space property

Thus, two arbitrary points x(1) and x(2) and an arbitrary search direction d result in two points y(1) and y(2) respectively. For quadratic functions, the minimum of the function lies on the line joining y(1) and y(2), and the vector (y(2) − y(1)) forms a conjugate direction with the original direction vector d. This is the parallel sub-space property.

If we now assume that the point y(1) is found after unidirectional searches along each of m (< N) conjugate directions from a chosen point x(1), and, in the same manner, the point y(2) is found after unidirectional searches along each of the m conjugate directions from another point x(2), then the vector (y(2) − y(1)) is conjugate to all m search directions. This is the extended parallel sub-space property.

Following are the steps followed in Powells conjugate method:

1) Consider an initial point x(0) and a set of n linearly independent directions, e.g. s(i) = e(i) for i = 1, 2, ..., n.
2) Minimise along the n unidirectional search directions, using the previous minimum point to begin the next search. Begin the search with s(1) and end with s(n). Then perform another unidirectional search along s(1).
3) Form a new conjugate direction d using the extended parallel sub-space property.
4) If ||d|| is small or the search directions are linearly dependent, terminate the process. Else replace s(j) = s(j−1) for all j = n, n−1, ..., 2, set s(1) = d/||d||, and go to step 2.

If the function is quadratic, exactly (n−1) loops through steps 2 to 4 are required. Since in each loop exactly (n+1) unidirectional searches are performed, a total of (n−1) × (n+1) = (n² − 1) unidirectional searches are necessary to find n conjugate directions. After this, a final unidirectional search is required to obtain the minimum point.

Cauchy's steepest descent method

Cauchy's steepest descent method is a gradient-based method. So before starting the steps followed in the method, let us first clarify the concept of a descent direction.

A search direction d(t) (figure 2.27) is said to be a descent direction at a point x(t) if the condition ∇f(x(t)) · d(t) ≤ 0 is satisfied in the vicinity of the point x(t). The direction −∇f(x(t)) is known as the steepest descent direction, as for it the quantity ∇f(x(t)) · d(t) becomes maximally negative.

Figure- 2.27 Steepest descent direction

The gradient ∇f is given by:

∇f(x) = [∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xN]ᵀ          (i)

When the derivatives cannot be obtained analytically, each component can be approximated numerically by the central difference formula:

∂f/∂xi ≈ (f(x + Δxi ei) − f(x − Δxi ei)) / (2Δxi)          (ii)

where ei is the unit vector along the ith coordinate and Δxi is a small increment.

The search direction used in Cauchy's steepest descent method is the negative of the gradient, i.e. s(k) = −∇f(x(k)). The steps involved in the method are:

1) Choose a maximum number of iterations M, an initial approximation x(0) and two termination parameters ε1, ε2, and set k = 0.
2) Compute ∇f(x(k)) at the point x(k).
3) If ||∇f(x(k))|| ≤ ε1, terminate; else if k ≥ M, terminate; else go to step 4.
4) Perform a unidirectional search to find the value of α(k), using ε2, such that f(x(k+1)) = f(x(k) − α(k)∇f(x(k))) is minimum. One criterion for terminating this search is |∇f(x(k+1)) · ∇f(x(k))| ≤ ε2.
5) Is ||x(k+1) − x(k)|| small enough? If yes, terminate; else set k = k + 1 and go to step 2.

Since the direction used in this method is a descent direction, the value of f(x(k+1)) is always smaller than f(x(k)) for suitable positive values of α(k).
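A Python sketch of steepest descent follows. A simple backtracking rule (halving α until f decreases) stands in for the full unidirectional search of step 4; that rule, the tolerances and the test function are illustrative assumptions.

```python
import math

def steepest_descent(f, grad, x0, eps=1e-6, M=500):
    """Move along -grad f(x); the step alpha is found by simple backtracking."""
    x = list(x0)
    for _ in range(M):
        g = grad(x)
        if math.sqrt(sum(gi * gi for gi in g)) <= eps:
            break                            # gradient small enough: stop
        alpha, fx = 1.0, f(x)
        while True:
            cand = [xi - alpha * gi for xi, gi in zip(x, g)]
            if f(cand) < fx or alpha < 1e-12:
                break
            alpha /= 2.0                     # halve the step until f decreases
        x = cand
    return x

# illustrative run on f(x, y) = (x - 1)^2 + 2(y + 2)^2
x_star = steepest_descent(
    lambda p: (p[0] - 1) ** 2 + 2 * (p[1] + 2) ** 2,
    lambda p: [2 * (p[0] - 1), 4 * (p[1] + 2)],
    [0.0, 0.0])
```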

Conjugate gradient method

In the conjugate gradient method, the objective function is considered to be quadratic in nature, and the conjugate search directions can be found using only the first-order derivatives. Fletcher and Reeves showed that s(k) defined by

s(k) = −∇f(x(k)) + (||∇f(x(k))||² / ||∇f(x(k−1))||²) s(k−1)          (i)

is conjugate to all the previous search directions s(i) for i = 1, 2, ..., (k−1). Equation (i) requires only the first-order derivatives at the two points x(k) and x(k−1). The initial search direction s(0) is taken to be the steepest descent direction at the initial guess. The steps are given below:

1) Choose an initial guess x(0) and termination parameters ε1, ε2, ε3.
2) Compute ∇f(x(0)) and set s(0) = −∇f(x(0)).
3) Find λ(0) such that f(x(0) + λ(0)s(0)) is minimum, with termination parameter ε1. Set x(1) = x(0) + λ(0)*s(0) and k = 1. Calculate the gradient ∇f(x(1)).
4) Set s(k) = −∇f(x(k)) + (||∇f(x(k))||² / ||∇f(x(k−1))||²) s(k−1).
5) Find λ(k) such that f(x(k) + λ(k)s(k)) is minimum, with termination parameter ε1. Set x(k+1) = x(k) + λ(k)*s(k).
6) Is ||x(k+1) − x(k)|| / ||x(k)|| ≤ ε2, or ||∇f(x(k+1))|| ≤ ε3? If yes, terminate; else set k = k + 1 and go to step 4.
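The Fletcher-Reeves update in equation (i) can be sketched in Python as below. A crude backtracking search replaces the exact unidirectional minimisation of steps 3 and 5, and the tolerances and test function are illustrative assumptions.

```python
def fletcher_reeves(f, grad, x0, eps=1e-8, max_iter=200):
    """Fletcher-Reeves conjugate gradient with a crude backtracking search."""
    def line_search(x, s, fx):
        alpha = 1.0
        while alpha > 1e-14:
            cand = [xi + alpha * si for xi, si in zip(x, s)]
            if f(cand) < fx:
                return cand
            alpha /= 2.0
        return x

    x = list(x0)
    g = grad(x)
    s = [-gi for gi in g]               # s(0): steepest descent direction
    for _ in range(max_iter):
        gg = sum(gi * gi for gi in g)
        if gg <= eps:
            break
        x = line_search(x, s, f(x))
        g_new = grad(x)
        beta = sum(gi * gi for gi in g_new) / gg    # Fletcher-Reeves ratio
        s = [-gn + beta * si for gn, si in zip(g_new, s)]
        g = g_new
    return x

# illustrative run on f(x, y) = (x - 1)^2 + 2(y + 2)^2
x_star = fletcher_reeves(
    lambda p: (p[0] - 1) ** 2 + 2 * (p[1] + 2) ** 2,
    lambda p: [2 * (p[0] - 1), 4 * (p[1] + 2)],
    [0.0, 0.0])
```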

Every system has to be given certain requisite specifications, which describe the specific requirements for which the system is designed.

2.2.3 Introduction to system specifications

As we have already had an overview of what a system is, let us now see what basic requirements of a system are considered during its design and development. Any system, when designed and developed, has to meet the specific requirements for the purpose it is made for. These are defined by the specifications of the system.

Specifications for a new product are quantitative, measurable criteria that the product should be designed to satisfy. They are the measurable goals for the design team. Specifications, like much design information, should be established early and revisited often. A specification is stated on a dimension that carries units: degrees Fahrenheit, lumens, horsepower and so forth. A quantity that has units we will also call an engineering requirement. In addition to having units, though, a specification needs a target value: a number along the dimensional unit that establishes the required performance.

Product specifications occur at many levels at different points in a development process: targets at the pre-concept phase are different from refined targets at the embodiment phase. Early concept-independent criteria get refined into performance specifications for a selected concept, which in turn get refined into specifications for subsystems, assemblies, parts, features, and so on.

Each specification should be measurable (testable or verifiable) at each stage of the product development process, not just at the end of the process when the product is designed and built. In the end, if it isn't testable and quantifiable, it isn't a specification. The test(s), the means of measuring the performance of the product's system (and subsystems), should always be stated and agreed on up front.

A checklist can be used to identify the specifications. An example of such a checklist is given below:

Table 2.5 Categories for searching and decomposing specifications (Franke, 1975)

Specification category: Description

Geometry: dimensions, space requirements, and so on
Kinematics: type and direction of motion, velocity, and so on
Forces: direction and magnitude, frequency, load imposed by, energy type, efficiency, capacity, temperature, etc.
Material: properties of final product, flow of materials, design for manufacturing
Signals: input and output, display
Safety: protection issues
Ergonomics: comfort issues, human interface issues
Production: factory limitations, tolerances, wastage
Quality control: possibilities for testing
Assembly: set by MFMA or special regulations or needs
Transport: packaging needs
Operation: environmental issues such as noise
Maintenance: servicing intervals, repair
Costs: manufacturing costs, materials costs
Schedules: time constraints

Overall system specification

The overall system specification, when considered from the supplier's point of view, is equivalent to the acquirer's product requirements specification. The overall system specification is the basis for the system development process and is prepared by the supplier in conjunction with the acquirer.

The overall system specification includes both the functional and the non-functional requirements modelled on the system to be developed. It is derived from the requirements specification.

As a first step, a preliminary design of the system is developed and described as a collection of interfaces. The system and the additional enabling systems to be developed are identified and assigned to the requirements. Additional logistic requirements are prepared with the help of the logistics manager. The acceptance criteria and the scope of delivery for the finished overall system are adopted from the requirements specification. The requirements are traced to the requirements specification, the system and the enabling systems, so that all requirements are taken into account.

The preparation of the overall system specification requires knowledge of various disciplines, such as system development, safety and security, ergonomics and logistics, and as such cannot be performed by a single person. Since the requirements are the central core of the specification, the requirements engineer (supplier) is responsible for preparing the overall system specification in cooperation with experts from the various specialities.

The required products, like specification and architecture, will be prepared for every system, sub-system and
segment in the overall system specification.

In addition to the core, i.e. the overall system specification, there are four further specification types: the system specification for system elements; the external unit specification, which specifies units that were not developed within the scope of the project; a hardware and software specification; and an external hardware module specification and an external software module specification for each system element.

Requirements of the system specification may influence the logistic support specification, in the same manner as the logistic requirements may influence the system specification. The system specifications are generally outcomes of the System Requirement Specification (SRS) and the System Design Specification (SDS), which are described in the next sections.

System Requirement Specification (SRS)


A system requirement specification (SRS) describes a complete system, typically a computer application comprising software, hardware and network components. The SRS embodies a detailed summary of the requirements necessary to create the complete system. These requirements are documented in order to define the complete functionality, availability, performance and security needs of a system. The requirements specification is the basis for the architecture, design and implementation that will be built.

Elements of SRS
1) Introduction
2) Overall System Description
a. Product features
b. User classes
c. Operating environment/Constraints
3) System Features
a. Database management system
b. Hardware requirements
c. Software Requirements
d. Functional Requirements
e. Non-functional requirements
i. Safety
ii. Security
4) External Interface requirements
a. User Interfaces
b. Hardware Interfaces
c. Software Interfaces
d. Communication interfaces

All these areas will be described in the subsequent sections and chapters. An example of a system
specification is listed in table 2.6.

Category: Example

Functional specifications: Transmission (2W / 4W, automatic / manual); interiors - instrument cluster, navigation, rear seat entertainment; power mirrors, seat adjustment, etc.
Performance specifications (including cost, weight): Acceleration, peak speed; fuel consumption
Safety / protection specifications: Safety specs (air bags / seat belts / lane departure warning, Remote Keyless Entry etc.)
Reliability specifications: 30,000 kms / 3 years engine warranty
Manufacturability specifications: 80% of components can be sourced locally
Serviceability specifications: Service time for changing the engine oil
Environmental / qualification specifications: NVH specs, hazardous substances specifications
Regulatory specifications: Emissions (e.g. Bharat IV), braking distance

Table-2.6 Example of car specifications

Source: TCS

System Design Specification (SDS)


The system design specification describes how to build the system. It considers the requirements of the tasks that the system will perform and translates these requirements into a hardware and software design. The SDS is likewise defined in the form of documents, which describe the design of the system in a form that can be reviewed when required and approved by the stakeholders, and in enough detail that the component parts of the system can be procured and built. The SDS also describes the hardware and software system components, to guide the maintenance and upgrading of these components.

Elements of SDS

1) Introduction
2) System Components
a. System Architecture
b. Sub systems (Purpose, functionality and interfaces)
c. High Level Design
d. Low level Design
3) Hardware Requirements
4) Software Requirements
a. Development plan
b. Initial data
c. User training
5) Operation and Maintenance plan

All these areas will be described in the subsequent sections and chapters.

System specifications

The system specification describes the functional and non-functional requirements modelled on a system. It is prepared from the requirements derived from the overall system specification or from the specifications of higher-order systems. The specification gives the standards for the designs and the means for decomposing the construction. The system specifications are the first things to be modified if changes to the system are required in the course of its development. The Evaluation

Specification System Element defines the evaluation cases required for demonstrating the requirements of interfaces and specifications.
The system specification mainly defines the requirements modelled on the system element and states the connected interfaces. In addition, it helps to refine and allocate the requirements and interfaces to lower system elements.

The tracing of the requirements ensures that all requirements modelled on the respective elements will be considered while refining the next level of the hierarchy. The system specification is prepared along with the architectural design of the system or a sub-system. The system architectural design is the source for the preparation of the products; it ensures consistency between specification and architecture.

Two examples of specification documentation are given in tables 2.7 and 2.8, which provide the specifications of the Renault Duster and the Sony Xperia C (as provided by the cited web pages).

1. Renault Duster Petrol RxE specifications

ENGINE & TRANSMISSION

Engine Code 1.6 K4M


Displacement (cc) 1598
Engine Type 4 Cyl in-line
Max Power : PS @ rpm 104 PS @ 5850 rpm
Max Torque : Nm @ rpm 145 Nm @ 3750 rpm
Fuel System Multi-point Fuel Injection (MPFI)
Transmission Type 5-speed Manual

DIMENSIONS & CAPACITIES

Overall Length (mm) 4315


Overall Width (mm) 1822
Overall Height - with Roof Rail (mm) 1695
Wheelbase (mm) 2673
Front Track (mm) 1560
Rear Track (mm) 1567
Ground Clearance (mm) 205
Minimum Turning Radius (m) 5.2
Trunk Capacity (Litre) 475 & 1064 (with rear seats folded)
Fuel Tank Capacity 50
Gross Vehicle Weight (kg) 1740

BRAKES

Type Hydraulically Operated Diagonal Split


Dual Circuit Braking
Front Ventilated Disc
Rear Drum

SUSPENSION

Front Independent McPherson Strut with


Coil Springs & Anti-Roll Bar
Rear Torsion Beam Axle with Coil Springs &
Anti-Roll Bar

STEERING

Type Hydraulic Power Assisted

TYRES & WHEELS

Tyres 215/65 R16 Tubeless


Wheels 16 inch Steel

Source: As per the website http://www.carwale.com

Table 2.7 Specifications of Renault Duster

2. Sony Xperia C HSPA+ C2305

GENERAL

2G Network GSM 900 / 1800 / 1900 - SIM 1 & SIM 2


3G Network HSDPA 900 / 2100
SIM Dual SIM
Announced 2013, June
Status Available. Released 2013, July

BODY
Dimensions 141.5 x 74.2 x 8.9 mm (5.57 x 2.92 x 0.35 in)
Weight 153 g (5.40 oz)

DISPLAY

Type TFT capacitive touchscreen, 16M colors


Size 540 x 960 pixels, 5.0 inches (~220 ppi pixel density)
Multitouch Yes, up to 4 fingers
Protection Yes

SOUND

Alert types Vibration; MP3 ringtones


Loudspeaker Yes
3.5mm jack Yes

MEMORY

Card slot microSD, up to 32 GB


Internal 4 GB, 1 GB RAM

DATA

GPRS Up to 85.6 kbps


EDGE Up to 237 kbps
Speed HSDPA, 42.2 Mbps, HSUPA, 11.5 Mbps
WLAN Wi-Fi 802.11 b/g/n, Wi-Fi Direct, Wi-Fi hotspot
Bluetooth Yes, v4.0 with A2DP
USB Yes, microUSB v2.0

CAMERA

Primary 8 MP, 3264 x 2448 pixels, autofocus, LED flash

Features Geo-tagging, touch focus, face and smile detection
HDR
Video Yes, 1080p, video stabilization
Secondary Yes, VGA

FEATURES

OS Android OS, v4.2.2 (Jelly Bean)


Chipset MTK MT6589
CPU Quad-core 1.2 GHz
GPU Power VR SGX544
Sensors Accelerometer, proximity, compass
Messaging SMS (threaded view), MMS, Email, IM, Push Email
Browser HTML5
Radio Stereo FM radio with RDS
GPS Yes, with A-GPS support
Java Yes, via Java MIDP emulator
Colors Black, White, Purple
- SNS integration
- MP4/H.263/H.264 player
- MP3/eAAC+/WAV player
- Document viewer
- Photo viewer/editor
- Voice memo/dial
- Predictive text input

BATTERY
Type Li-Ion 2390 mAh battery
Stand-by Up to 588 h (2G) / Up to 605 h (3G)
Talk time Up to 14 h (2G) / Up to 12 h 25 min (3G)
Music play Up to 111 h

MISC

SAR US 0.54 W/kg (head) 0.36 W/kg (body)


SAR EU 0.52 W/kg (head)

Source: As per as the website http://www.gsmarena.com

Table 2.8 Specifications of Sony Xperia C

System element overview

The system element overview provides a brief analysis of the system element to be realized. It basically describes the various tasks and objectives of the system element and its role within the system.

Interface specification

An interface represents the boundary between a system element and its environment. It describes the exchange of data that takes place at the system boundary and the associated dependencies. Thus, the interface defines the services which are to be provided by the system element. Several interfaces may be provided by a single system element.
The interface description performs the following functions:
It collects all functional requirements modelled on the system element.
It specifies all interfaces and presents them in their environment.
It defines the information required for developing the system element along with the non-functional requirements.

It describes the interfaces to other system elements and the interfaces to the environment, e.g. the man-machine interface or interfaces to enabling systems.

The functional interface description is subdivided into static and dynamic behaviour description.

The static behaviour describes the structure of the interface through which the functionalities of the system element can be used.
The dynamic behaviour determines the sequence of use and the logical dependencies of the transmitted data and signals.

The contents of the interface description vary depending on whether it describes hardware or software components of the system element. Hardware components are specified by electrical and mechanical data, while software components are specified by methods, parameters and information.
Static elements of a hardware interface include, for example, electrical performance parameters (power, voltage, current, frequency, polarity), the mechanical design (type of connector, connector pin assignment, type of cable), or the technical design (function call and parameter list, transmission device, layout of a user interface). The description of the dynamic behaviour covers, for example, the communication protocols and their specification, the synchronization mechanisms, and notes on the use and operation of the interface. For a software interface, the static behaviour is determined by the structure of the calls through which the services of the software element can be used; this description is mainly based on method signatures and definitions of data types. The possible sequences of calls are determined by the dynamic behaviour, which is frequently described using flowcharts (sequence charts, message sequence charts) or state transition diagrams.
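The static/dynamic distinction above can be illustrated in code. The following is a minimal Python sketch (all class and method names are invented for illustration): the abstract class captures the static behaviour of a software interface, i.e. the method signatures and data types, while a simple state check enforces the dynamic behaviour, i.e. the permitted sequence of calls.

```python
from abc import ABC, abstractmethod

# Static behaviour: the method signatures and data types through which
# the services of a (hypothetical) sensor element can be used.
class TemperatureSensor(ABC):
    @abstractmethod
    def connect(self) -> None: ...
    @abstractmethod
    def read_celsius(self) -> float: ...
    @abstractmethod
    def disconnect(self) -> None: ...

# Dynamic behaviour: the permitted call sequence
# (connect -> read_celsius* -> disconnect), enforced with a state flag.
class MockSensor(TemperatureSensor):
    def __init__(self) -> None:
        self._connected = False

    def connect(self) -> None:
        self._connected = True

    def read_celsius(self) -> float:
        if not self._connected:
            raise RuntimeError("protocol violation: connect() must precede read_celsius()")
        return 21.5  # fixed dummy value for the sketch

    def disconnect(self) -> None:
        self._connected = False

sensor = MockSensor()
sensor.connect()
print(sensor.read_celsius())   # 21.5
sensor.disconnect()
```

In a real specification the same information would appear as signature tables and a state transition diagram; the code form merely makes the two kinds of description concrete.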

Non-functional requirements

In addition to the functional requirements, a system element must also fulfil several non-functional requirements, such as quality characteristics like performance, safety and security, availability, and maintainability. Non-functional requirements are specified by stating the values actually required. The non-functional requirements relevant for a system element are derived from the specifications of higher system elements or from the overall system specification.

Hardware Specification

The hardware specification defines all the functional and non-functional requirements modelled on a hardware element (hardware unit, hardware component or hardware process module). The requirements are derived from the specifications of higher system elements or hardware elements. The specification provides standards and tools for designing and decomposing the hardware architecture. If any changes are required in the course of the development of the hardware element, the hardware specification shall be adapted first.
The evaluation cases required for demonstrating the requirements of interfaces and specifications are defined by the evaluation specification of the system element. The hardware specification mainly describes the requirements modelled on the hardware element and also specifies the connected interfaces; in addition, requirements and interfaces are refined and allocated to lower-level hardware elements.
Requirements tracing ensures that all requirements modelled on the respective elements are considered when the next hierarchy level is refined. The hardware specification is prepared along with the architecture design of the hardware units. The hardware architect is responsible for preparing these products and thus for ensuring the consistency between specification and architecture.

Software Specification

The software specification defines all the functional and non-functional requirements modelled on a software element (i.e. software unit, software component or software process module). The requirements are derived from the specifications of higher system elements or software elements. The specification provides standards and tools for designing and decomposing the software architecture. If any changes are required in the course of the development of the software element, the software specification shall be adapted first.
The evaluation cases required for demonstrating the requirements of interfaces and specifications are defined by the evaluation specification of the system element. The software specification mainly describes the requirements modelled on the software element and also specifies the connected interfaces; in addition, requirements and interfaces are refined and allocated to lower-level software elements.

Requirements tracing ensures that all requirements modelled on the respective elements are considered when the next hierarchy level is refined. The software specification is prepared along with the architecture design of the software units. The software architect is responsible for preparing these products and thus for ensuring the consistency between specification and architecture.

External unit specification

An external unit specification is prepared for every potential external unit identified within the scope of the architectural design. The selection of a standard product, a system element available for re-use, or a furnished item is based on this specification. In the case of a subcontract, the external unit specification is used as the requirements document. It also serves as a basis for the tests.
The external unit specification defines all the functional and non-functional requirements modelled on the external unit. The specification is used for market surveys and for evaluating whether a mass-produced product can be used. The sub-contract with the sub-supplier is awarded based on this specification. The external unit specification is prepared by the system architect, supported by the system integrator, who ensures that the finally selected external unit fulfils all requirements regarding its integration into the system.

External hardware module specification

The external hardware module specification defines all the functional and non-functional requirements posed on an external hardware module. As in the previous section, the requirements are derived from the specifications of higher system elements. If any changes are required in the course of further development, the applicable specification shall be adapted first. The evaluation cases required for demonstrating the requirements of interfaces and specifications are defined by the evaluation specification of the system element.
The external hardware module specification mainly describes the requirements modelled on the external hardware module and also specifies the connected interfaces. Requirements tracing ensures that all requirements modelled on the respective elements are considered. The external hardware module specification is prepared in conjunction with the architecture design of the hardware units. The hardware architect, who is also responsible for preparing these products, ensures the consistency between specification and architecture.

External software module specification

As stated earlier for the external hardware module specification, all the functional and non-functional requirements posed on an external software module are defined by the external software module specification. The requirements are derived from the specifications of higher system elements. If any changes are required in the course of further development, the applicable specification shall be adapted first. The evaluation cases required for demonstrating the requirements of interfaces and specifications are defined by the evaluation specification of the system element.
The external software module specification mainly defines the requirements modelled on the external software module and also specifies the connected interfaces. Requirements tracing ensures that all requirements modelled on the respective elements are considered. The external software module specification is prepared in conjunction with the architecture design of the software units. The software architect, who is also responsible for preparing these products, ensures the consistency between specification and architecture.

A system as a whole is further subdivided into sub-systems; the study of such smaller components is therefore important and is covered in the next section.

2.2.4 Sub-System Design:

A subsystem is a small system made up of small components, which acts as a component of a large system.

All types of systems are made up of several sub-systems which are directly or indirectly related to each other, structurally or behaviourally. The processing of a system can be divided into various sub-processes, each corresponding to a sub-system. As the complexity or size of a system increases, the number of sub-systems also increases. Since a system is made up of several sub-systems, designing a system therefore often requires dividing it into a number of subsystems and then designing each subsystem.

General considerations for designing of subsystems:

The subsystem's manufacturing cost should be low.
High reliability of the subsystem is essential.
The subsystem should perform with high accuracy / precision.
The efficiency of the subsystem should be high.
The subsystem should possess high effectiveness.
The subsystem should take less designing and manufacturing time.

The above considerations are critical: if any subsystem has poor characteristics, the overall performance of the system will suffer.

One of the major challenges in subsystem design is how to design in the minimum amount of time and with minimum effort. There are many ways to accomplish this task, such as:

Use of a top-down design approach.
Use of computer-aided techniques.
Partitioning the system sensibly.
Establishing proper relationships among the subsystems.

Top-down design approach: In this design approach (figure 2.28), a system is broken down into smaller components / systems called subsystems, which are in turn broken down into their own subsystems until the base systems are reached. Once all the base systems are identified, the task of sub-system design begins.
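The top-down decomposition described above can be sketched as a simple tree walk. The following is an illustrative Python sketch (the "Car" hierarchy and all names are invented): decomposition stops at subsystems that have no further parts, and collecting those leaves yields the base systems that must then be designed.

```python
# Top-down decomposition sketch: a system is broken into subsystems
# until base (leaf) systems are reached.
class System:
    def __init__(self, name, subsystems=None):
        self.name = name
        self.subsystems = subsystems or []

    def base_systems(self):
        """Recursively collect the leaf systems that must be designed."""
        if not self.subsystems:
            return [self.name]
        leaves = []
        for sub in self.subsystems:
            leaves.extend(sub.base_systems())
        return leaves

# Invented example hierarchy, purely for illustration.
car = System("Car", [
    System("Powertrain", [System("Engine"), System("Transmission")]),
    System("Chassis", [System("Suspension"), System("Brakes")]),
])
print(car.base_systems())  # ['Engine', 'Transmission', 'Suspension', 'Brakes']
```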

Figure 2.28 Top-down approach

Computer-aided design techniques: Computer-aided design is one of the most preferred techniques due to its enormous benefits over manual techniques. Many software packages are available in the market that can be used for design purposes, e.g. AutoCAD, CATIA, Pro/E, Solid Edge.

Partitioning of the system: Subsystems should be partitioned in such a way that, apart from the points mentioned above as general considerations, the resulting subsystems can be separately designed, developed, and delivered, and are interchangeable with other subsystems.

Establishing proper relationships among the subsystems: All subsystems should be related to each other such that each provides the required optimal output to every connected subsystem, so that the system gives optimal overall performance.

When two sub-systems are combined to form a larger system, another component arises at the boundary between the two. This boundary is the interface and is described in section 2.2.5.

2.2.5 Interface Design

Interaction between systems happens in four ways. They are:

Physical/Spatial,
Energy,
Material and
Information or Data.

These four factors are vital for the success of a product. Very good software can fail because of a poor GUI (Graphical User Interface), just as a very good car design can fail because the expected colours are not available. Interface design (figure 2.29) deals with the process of developing a method for two (or more) modules in a system to connect and communicate. These modules can apply to hardware, software or the interface between a user and a machine. Examples of user interfaces include a GUI, a control panel for a nuclear power plant, or even the cockpit of an aircraft. In systems engineering, all the inputs and outputs of a system, subsystem, and its components are listed in an interface control document, often as part of the requirements of the engineering project. The development of a user interface is a field in its own right.

Figure 2.29 Interface Design

Interface design is the arrangement and makeup of how a user interacts with a site. The word interface means a point or surface where two things touch. A web user interface is thus where a person and a website touch: menus, components, forms, and all the other ways you can interact with a website.

Good interface design is about making the experience of using a website easy, effective and intuitive. It's actually much easier to demonstrate bad interface design, because that's when you really notice it. A simple example of interface design is the use of icons. Have you ever looked at an icon and thought "what is that meant to represent?!" That would be bad interface design. Using icons to label and signify different types of content or actions is just one part of interface design. Incidentally, another example of interfaces that you will likely encounter as a web designer is Application Programming Interfaces, or APIs. An API is the set of functions and protocols by which you (or, more precisely, your program) can interact with whatever the API is for. For example, Google Maps provides an API which you can use to create applications or sites that work with Google Maps.

Advantages of good interface design:

Easy to use
Easy to learn
Easy to understand

Characteristics of poor interface design:

Lack of consistency
Too much memorization
No guidance / help
No context sensitivity
Poor responsiveness
Arcane / unfriendly behaviour

System:

A system is a set of interacting or interdependent components forming an integrated whole, or a set of elements (often called components) and relationships which are different from the relationships of the set or its elements to other elements or sets.

Characteristics of Systems:

A system has structure: it contains parts (or components) that are directly or indirectly related to each other.
A system has behaviour: it contains processes that transform inputs into outputs (material, energy or data).
A system has interconnectivity: the parts and processes are connected by structural and/or behavioural relationships.
A system's structure and behaviour may be decomposed via subsystems and sub-processes into elementary parts and process steps.

Elements of a System:

Inputs and outputs
Processor
Control
Environment / surroundings
Feedback
Boundaries and interface

Designer:

The person responsible for building the system based on his or her understanding of users and their tasks,
goals, abilities, and motivations.

User:

He or she uses the system to accomplish tasks and achieve goals.

Interface:

It is the place where two independent systems meet and communicate. It is the presentation, navigation, and interaction of information between a computer system and a user.

The following examples show the various interfaces of different systems.

Web site (URL) <--> Html data file <--> Browser

Figure 2.30 Html data file as interface

The website (URL) and the browser are two independent systems connected by an interface, the Html data file (figure 2.30).

Motion sensor <--> Trigger command <--> Alarm

Figure 2.31 Trigger command as interface

The motion sensor activates the alarm through an interface called the trigger command (figure 2.31).
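The motion-sensor/alarm example can be sketched in code. In this illustrative Python sketch (all names are invented), the two systems remain independent and interact only through a narrow trigger interface, modelled here as a callback: the sensor knows nothing about the alarm except that it can be triggered.

```python
# Two independent systems connected only by a "trigger" interface.
class Alarm:
    def __init__(self):
        self.ringing = False

    def trigger(self):
        """The interface the sensor relies on."""
        self.ringing = True

class MotionSensor:
    def __init__(self, on_motion):
        # on_motion is any callable; the sensor has no other knowledge
        # of the system it is connected to.
        self.on_motion = on_motion

    def detect(self):
        """Simulate motion detection and fire the trigger interface."""
        self.on_motion()

alarm = Alarm()
sensor = MotionSensor(on_motion=alarm.trigger)
sensor.detect()
print(alarm.ringing)  # True
```

Because the coupling is confined to the trigger callable, either side can be replaced (e.g. the alarm by a logger) without changing the other, which is exactly the benefit of a well-defined interface.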

Types of User Interfaces

To work with a system, the users need to be able to control the system and assess the state of the system.

Definition of User Interface

In computer science and human-computer interaction, the user interface (of a computer program) refers to
the graphical, textual and auditory information the program presents to the user. The user employs several
control sequences (such as keystrokes with the computer keyboard, movements of the computer mouse, or
selections with the touch screen) to control the program.

The term "user interface" refers to the methods and devices that are used to accommodate interaction between machines and the human beings who use them. User interfaces can take many forms, but always accomplish two fundamental tasks: communicating information from the product to the user, and communicating information from the user to the product. The term "Graphical User Interface" (GUI) denotes a graphical user interface to a computer. The term came into existence because the first interactive user interfaces to computers were not graphical; they were text- and keyboard-oriented and usually consisted of commands.

There exist several types of user interfaces; here are just two examples:

Command-Line Interface (CLI): The user provides input by typing a command string with the computer keyboard, and the system provides output by printing text on the computer monitor.

Graphical User Interface (GUI): The use of pictures rather than just words to represent the input and output of a program. Input is accepted via devices such as the keyboard and mouse.
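As a minimal illustration of the CLI style described above (the commands here are invented), a program can map typed command strings to printed text responses:

```python
# A tiny command-line interface sketch: the user types a command string,
# the program answers with text. Commands "greet" and "quit" are invented.
def handle(command: str) -> str:
    parts = command.split()
    if not parts:
        return "no command given"
    if parts[0] == "greet":
        name = parts[1] if len(parts) > 1 else "world"
        return f"hello, {name}"
    if parts[0] == "quit":
        return "bye"
    return f"unknown command: {parts[0]}"

print(handle("greet Ada"))   # hello, Ada
print(handle("quit"))        # bye
```

A GUI would present the same services through pictures and pointing devices instead of typed strings; the underlying program logic could remain identical.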

Golden Rules:

Place the user in control
Reduce the user's memory load
Make the interface consistent

Place the User in Control

Define interaction modes in a way that does not force a user into unnecessary or undesired actions. Provide
for flexible interaction. Allow user interaction to be interruptible and undoable. Streamline interaction as skill
levels advance and allow the interaction to be customized. Hide technical internals from the casual user.
Design for direct interaction with objects that appear on the screen.

Reduce the User's Memory Load

Reduce demand on short-term memory. Establish meaningful defaults. Define shortcuts that are intuitive. The visual layout of the interface should be based on a real-world metaphor. Disclose information in a progressive fashion.

Make the Interface Consistent

Allow the user to put the current task into a meaningful context. Maintain consistency across a family of
applications. If past interactive models have created user expectations, do not make changes unless there is a
compelling reason to do so.

User Interface Design Models

User model: a profile of all end users of the system
Design model: a design realization of the user model
Mental model (system perception): the user's mental image of what the interface is
Implementation model: the interface look and feel, coupled with supporting information that describes interface syntax and semantics

User Interface Design Process

Figure 2.32 Interface Design Process

Interface design process (figure 2.32) consists of the following four phases

Gather/analyze user information
Design the UI
Construct the UI
Validate the UI

This process is independent of the hardware and software platform, operating system, and tools used for
product design and development.

Analyse -> Design -> Construct -> Validate (an iterative cycle)

Figure 2.33 Phases of UI process

The phases of the user interface design process (figure 2.33) are discussed below:

Phase-1: Analysis of User interface

Gathering and analyzing activities can be broken down into following five steps:

Determination of user profiles
Performing user task analyses
Gathering user requirements
Analysis of user environments
Matching of requirements to tasks

Phase-2: Design of user interface

UI design usually requires a significant commitment of time and resources.

Define product usability goals and objectives
Develop user scenarios and tasks
Define interface objects and actions
Determine object icons, views, and visual representations
Design object and window menus
Refine visual designs

Phase-3: Construction of the UI

The purpose of prototyping is to quickly and easily visualize design alternatives and concepts, not to build code that is to be used as part of the product. Prototypes may show visualizations of the interface (the high-level concepts), or they may show functional slices of a product, displaying how specific tasks or transactions might be performed with the interface. GUI functional specifications are difficult to use because it is difficult to write about graphical presentation and interaction techniques. It is easier and more effective to show a prototype of the product's interface style and behaviour.

Phase-4: Validation of UI

A usability evaluation is the best way to get a product into the hands of actual users to see if, and how, they use it prior to the product's release. Usability evaluations quantitatively and qualitatively measure user behaviour, performance, and satisfaction. Early usability evaluations include customer walkthroughs of initial designs. As pieces of the product and interface are prototyped and constructed, perform early usability evaluations on common tasks. When the product is nearing completion and all of the pieces are coming together, conduct final system usability evaluations.

Interface Analysis

Interface analysis means understanding

the people (end-users) who will interact with the system through the interface;
the tasks that end-users must perform to do their work;
the content that is presented as part of the interface;
the environment in which these tasks will be conducted.

User Analysis

User analysis consists of the analysis of the following:

Are the users trained professionals, technicians, clerical workers, or manufacturing workers?
What level of formal education does the average user have?
Are the users capable of learning from written materials or from classroom training?
Are users expert typists or keyboard phobic?
What is the age range of the user community?
Will the users be represented predominantly by one gender?
How are users compensated for the work they perform?
Do users work normal office hours, or do they work until the job is done?
Is the software to be an integral part of the work users do, or will it be used only occasionally?
What is the primary spoken language among users?
What are the consequences if a user makes a mistake using the system?
Do users want to know about the technology that sits behind the interface?

Task Analysis and Modelling

Task analysis and modelling answers the following questions

What work will the user perform in specific circumstances?
What tasks and subtasks will be performed as the user does the work?
What specific problem domain objects will the user manipulate as work is performed?
What is the sequence of work tasks (the workflow)?
What is the hierarchy of tasks?

Use-cases define basic interaction

Task elaboration refines interactive tasks

Object elaboration identifies interface objects (classes)

Workflow analysis defines how a work process is completed when several people (and roles) are involved

Interface Design Steps

Using information developed during interface analysis; define interface objects and actions (operations).
Define events (user actions) that will cause the state of the user interface to change. Model this
behaviour.
Depict each interface state as it will actually look to the end-user.
Indicate how the user interprets the state of the system from information provided through the interface.
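The steps above can be sketched as an event-driven state table. In this illustrative Python sketch (the states and events are invented), each user action (event) maps the current interface state to the next one, which is what would then be depicted to the end-user.

```python
# Interface states and the events (user actions) that change them.
# States and events are invented for illustration.
TRANSITIONS = {
    ("login_screen", "submit_valid"):   "dashboard",
    ("login_screen", "submit_invalid"): "login_error",
    ("login_error",  "retry"):          "login_screen",
    ("dashboard",    "logout"):         "login_screen",
}

def next_state(state: str, event: str) -> str:
    """Return the interface state shown to the user after an event."""
    # Unknown events leave the interface state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "login_screen"
state = next_state(state, "submit_invalid")  # -> login_error
state = next_state(state, "retry")           # -> login_screen
state = next_state(state, "submit_valid")    # -> dashboard
print(state)  # dashboard
```

Modelling the interface behaviour this way makes each state's look and the effect of each user action explicit, which is exactly what the design steps above ask for.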

Importance of interface management:

A Complex system has many interfaces. These interfaces should be managed in a systematic way for efficient
operation. The advantages of interface management are as follows

Common interfaces reduce complexity
System architecture drives the types of interfaces to be utilized in the design process
Clear interface identification and definition reduces risk
Most of the problems in systems are at the interfaces
Verification of all interfaces is critical for ensuring compatibility and operation

Human-computer interaction (HCI)

Human-computer interaction (HCI) (figure 2.34) is the study of how humans interact with computer systems. Many disciplines contribute to HCI, including computer science, psychology, ergonomics, engineering, and graphic design. HCI is a broad term that covers all aspects of the way in which people interact with computers. In their daily lives, people are coming into contact with an increasing number of computer-based technologies. Some of these computer systems, such as personal computers, we use directly. We come into contact with other systems less directly; for example, we have all seen cashiers use laser scanners and digital cash registers when we shop. And, as we are all too aware, some systems are easier to use than others. When users interact with a computer system, they do so via a user interface (UI).

Figure 2.34 Human-Computer relation

Basic Interactions

Direct Manipulation of graphical objects:

The now ubiquitous direct manipulation interface, where visible objects on the screen are directly
manipulated with a pointing device, was first demonstrated by Ivan Sutherland in Sketchpad, which was his
1963 MIT PhD thesis. Sketchpad supported the manipulation of objects using a light-pen, including grabbing
objects, moving them, changing size, and using constraints. It contained the seeds of myriad important
interface ideas. The system was built at Lincoln Labs with support from the Air Force and NSF. William
Newman's Reaction Handler, created at Imperial College, London (1966-67) provided direct manipulation of
graphics, and introduced "Light Handles," a form of graphical potentiometer, that was probably the first
"widget." Another early system was AMBIT/G (implemented at MIT's Lincoln Labs, 1968, ARPA funded). It
employed, among other interface techniques, iconic representations, gesture recognition, dynamic menus
with items selected using a pointing device, selection of icons by pointing, and moded and mode-free styles
of interaction. David Canfield Smith coined the term "icons" in his 1975 Stanford PhD thesis on Pygmalion
(funded by ARPA and NIMH) and Smith later popularized icons as one of the chief designers of the Xerox Star.
Many of the interaction techniques popular in direct manipulation interfaces, such as how objects and text
are selected, opened, and manipulated, were researched at Xerox PARC in the 1970's. In particular, the idea of
"WYSIWYG" (what you see is what you get) originated there with systems such as the Bravo text editor and
the Draw drawing program. The concept of direct manipulation interfaces for everyone was envisioned by Alan
Kay of Xerox PARC in a 1977 article about the "Dynabook". The first commercial systems to make extensive use of direct manipulation were the Xerox Star (1981), the Apple Lisa (1982) [51] and the Macintosh (1984) [52].
Ben Shneiderman at the University of Maryland coined the term "Direct Manipulation" in 1982 and identified
the components and gave psychological foundations.

The Mouse:

The mouse was developed at Stanford Research Laboratory (now SRI) in 1965 as part of the NLS project
(funding from ARPA, NASA, and Rome ADC) to be a cheap replacement for light pens, which had been used at
least since 1954. Many of the current uses of the mouse were demonstrated by Doug Engelbart as part of NLS
in a movie created in 1968. The mouse was then made famous as a practical input device by Xerox PARC in
the 1970's. It first appeared commercially as part of the Xerox Star (1981), the Three Rivers Computer
Company's PERQ (1981), the Apple Lisa (1982), and Apple Macintosh (1984).

Windows:

Multiple tiled windows were demonstrated in Engelbart's NLS in 1968 [8]. Early research at Stanford on
systems like COPILOT (1974) and at MIT with the EMACS text editor (1974) also demonstrated tiled windows.
Alan Kay proposed the idea of overlapping windows in his 1969 University of Utah PhD thesis [15] and they
first appeared in 1974 in his Smalltalk system [11] at Xerox PARC, and soon after in the InterLisp system.
Some of the first commercial uses of windows were on Lisp Machines Inc. (LMI) and Symbolics Lisp Machines
(1979), which grew out of MIT AI Lab projects. The Cedar Window Manager from Xerox PARC was the first
major tiled window manager (1981), followed soon by the Andrew window manager [32] by Carnegie Mellon
University's Information Technology Center (1983, funded by IBM). The main commercial systems
popularizing windows were the Xerox Star (1981), the Apple Lisa (1982), and most importantly the Apple
Macintosh (1984). The early versions of the Star and Microsoft Windows were tiled, but eventually they
supported overlapping windows like the Lisa and Macintosh. The X Window System, a current international
standard, was developed at MIT in 1984.

Application Types:

Drawing programs: Much of the current technology was demonstrated in Sutherland's 1963 Sketchpad
system. The use of a mouse for graphics was demonstrated in NLS (1965). In 1968 Ken Pulfer and Grant
Bechthold at the National Research Council of Canada built a mouse out of wood patterned after Engelbart's
and used it with a key-frame animation system to draw all the frames of a movie. A subsequent movie,
"Hunger" in 1971 won a number of awards, and was drawn using a tablet instead of the mouse (funding by
the National Film Board of Canada) [3]. William Newman's Markup (1975) was the first drawing program for
Xerox PARC's Alto, followed shortly by Patrick Baudelaire's Draw which added handling of lines and curves.
The first computer painting program was probably Dick Shoup's "Superpaint" at PARC (1974-75).

Text Editing: In 1962 at the Stanford Research Lab, Engelbart proposed, and later implemented a word
processor with automatic word wrap, search and replace, user-definable macros, scrolling text, and
commands to move, copy, and delete characters, words, or blocks of text. Stanford's TV Edit (1965) was one
of the first CRT-based display editors that was widely used. The Hypertext Editing System from Brown
University had screen editing and formatting of arbitrary-sized strings with a light pen in 1967 (funding from
IBM). NLS demonstrated mouse-based editing in 1968. TECO from MIT was an early screen-editor (1967) and
EMACS developed from it in 1974. Xerox PARC's Bravo was the first WYSIWYG editor-formatter (1974). It was
designed by Butler Lampson and Charles Simonyi who had started working on these concepts around 1970
while at Berkeley. The first commercial WYSIWYG editors were the Star, LisaWrite and then MacWrite.

Spreadsheets: The initial spreadsheet was VisiCalc, which was developed by Frankston and Bricklin (1977-8)
for the Apple II while they were students at MIT and the Harvard Business School. The solver was based on a
dependency-directed backtracking algorithm by Sussman and Stallman at the MIT AI Lab.

Hypertext: The idea for hypertext (where documents are linked to related documents) is credited to Vannevar
Bush's famous MEMEX idea from 1945. Ted Nelson coined the term "hypertext" in 1965. Engelbart's NLS
system at the Stanford Research Laboratories in 1965 made extensive use of linking (funding from ARPA,
NASA, and Rome ADC). The "NLS Journal" was one of the first on-line journals, and it included full linking of
articles (1970). The Hypertext Editing System, jointly designed by Andy van Dam, Ted Nelson, and two
students at Brown University (funding from IBM) was distributed extensively. The University of Vermont's
PROMIS (1976) was the first Hypertext system released to the user community. It was used to link patient
and patient care information at the University of Vermont's medical center. The ZOG project (1977) from CMU
was another early hypertext system, and was funded by ONR and DARPA. Ben Shneiderman's Hyperties was the first system where highlighted items in the text could be clicked on to go to other pages (1983, Univ. of
Maryland). HyperCard from Apple (1988) significantly helped to bring the idea to a wide audience. There have
beenmany other hypertext systems through the years. Tim Berners-Lee used the hypertext idea to create the
World Wide Web in 1990 at the government-funded European Particle Physics Laboratory (CERN). Mosaic,
the first popular hypertext browser for the World-Wide Web was developed at the Univ. of Illinois' National
Center for Supercomputer Applications (NCSA).

FSIPD 136
Computer Aided Design (CAD): The same 1963 IFIPS conference at which Sketchpad was presented also
contained a number of CAD systems, including Doug Ross's Computer-Aided Design Project at MIT in the
Electronic Systems Lab and Coons' work at MIT with Sketchpad. Timothy Johnson's pioneering work on the
interactive 3D CAD system Sketchpad 3 was his 1963 MIT MS thesis (funded by the Air Force). The first
CAD/CAM system in industry was probably General Motors' DAC-1 (about 1963).

Video Games: The first graphical video game was probably Space War by Slug Russel of MIT in 1962 for the
PDP-1, including the first computer joysticks. The early computer Adventure game was created by Will
Crowther at BBN, and Don Woods developed this into a more sophisticated Adventure game at Stanford in
1966. Conway's game of LIFE was implemented on computers at MIT and Stanford in 1970. The first popular
commercial game was Pong (about 1976).

Different types of interaction:

Conversational:
- Command language
- Dialog imposed by the system

Menus, forms:
- The system guides the user
- Dialog controlled by the system

Navigation:
- Nodes, anchors and links
- Risk of getting "lost in hyperspace"

Direct manipulation:
- Physical, direct actions on (representations of) the objects
- Inspires all current first-person interfaces

Four principles of direct manipulation:
- Continuous representation of the objects of interest
- Physical actions rather than complex syntax
- Quick, incremental, reversible operations whose effect on the objects of interest is immediately visible
- Layered approach to facilitate learning

Different types of interaction styles:
- Gesture-based interaction: e.g. pen-based, touch-based, 3D gestures
- Multimodal interaction: combining speech and gesture
- Virtual reality: immersion of the user
- Mixed and augmented reality: augmenting physical objects with computational capabilities
- Tangible interaction: using physical objects for interaction

Web Interface Structure:

A Web interface is a mix of many elements (text, links, and graphics), formatting of these elements, and
other aspects that affect the overall interface quality. Web interface design entails a complex set of activities
for addressing these diverse aspects. To gain insight into Web design practices, Newman and Landay [2000]
conducted an ethnographic study wherein they observed and interviewed eleven professional Web designers.
One important finding was that most designers viewed Web interface design as comprising three
components: information design, navigation design, and graphic design (as depicted in the Venn diagram in
Figure 2.35). Information design focuses on determining an information structure (i.e., identifying and
grouping content items) and developing category labels to reflect the information structure. Navigation
design focuses on developing navigation mechanisms (e.g., navigation bars and links) to facilitate interaction
with the information structure. Finally, graphic design focuses on visual presentation and layout.

Figure 2.35: Overview of Web interface design.

All of these design components affect the overall quality of the Web interface. The Web design literature also
discusses a larger, overarching aspect, experience design [Creative Good 1999; Shedroff 2001], shown as the
outer circle of Figure 2.35.

Experience design encompasses information, navigation and graphic design. However, it also encompasses
other aspects that affect the user experience, such as download time, the presence of graphical ads, popup
windows, etc. Information, navigation, graphic, and experience design can be further refined into the aspects
depicted in Figure 2.36. The figure shows that text, link, and graphic elements are the building blocks of Web
interfaces; all other aspects are based on these. The next level of Figure 2.36 addresses formatting of these
building blocks, while the subsequent level addresses page-level formatting. The top two levels address the
performance of pages and the architecture of sites, including the consistency, breadth, and depth of pages.
The bottom three levels of Figure 2.36 are associated with information, navigation, and graphic design
activities, while the top two levels Page Performance and Site Architecture are associated with experience
design.

Figure 2.36: Aspects associated with Web interface structure.

Terminal questions:

1. What is a requirement? What are the characteristics of requirements?
2. Explain the various types of requirements.
3. What is requirements engineering?
4. Describe the essential parts of the requirements pyramid.
5. Explain the traceability matrix in detail.
6. What is system optimisation? Write down the algorithm of any one optimisation technique.
7. Define: a) system specification, b) subsystem design, c) interface design.

References

1) Booch, Grady, James Rumbaugh, and Ivar Jacobson. UML User Guide, Boston, MA: Addison-Wesley, 1998.
2) [HUL05] Hull, Elizabeth, Kenneth Jackson, and Jeremy Dick. Requirements Engineering, London: Springer,
2005.
3) [LEF03] Leffingwell, Dean, and Don Widrig. Managing Software Requirements: A Use Case Approach,
Second Edition, Boston, MA: Addison-Wesley, 2003.
4) [LUD05] Ludwig Consulting Services, LLC, www.jiludwig.com.
5) [YOU01] Young, Ralph R. Effective Requirements Practices, Boston, MA: Addison-Wesley, 2001.
6) Jiao, Jianxin (Roger), and Chun-Hsien Chen. "Customer Requirement Management in Product
Development: A Review of Research Issues", Concurrent Engineering: Research and Applications, Vol. 14,
No. 3, 2006.
7) Andersson, Fredrik, Krister Sutinen, and Johan Malmqvist. "Product Model for Requirements and Design
Concept Management: Representing Design Alternatives and Rationale", Department of Product and
Production Development, Chalmers University of Technology, SE-412 96 Göteborg, Sweden.
8) Sampaio do Prado Leite, Julio Cesar, and Jorge Horacio Doorn (2004). Perspectives on Software
Requirements. Kluwer Academic Publishers, pp. 91-113. ISBN 1-4020-7625-8.
9) Turbit, Neville. "Requirements Traceability". Retrieved 2007-05-11.
10) Gotel, O.C.Z., and Finkelstein, A.C.W. "An Analysis of the Requirements Traceability Problem", in
Proceedings of ICRE94, 1st International Conference on Requirements Engineering, Colorado Springs, CO,
IEEE CS Press, 1994.
11) Pinheiro, F.A.C., and Goguen, J.A. "An Object-Oriented Tool for Tracing Requirements", IEEE Software,
1996, 13(2), pp. 52-64.
12) Morrison, Scott (2008-01-28). "So Many, Many Words", The Wall Street Journal. Retrieved 2010-04-14:
Attensity Text Analytics Solution for Voice of the Customer Analytics.

13) Kotonya, G., and Sommerville, I. Requirements Engineering: Processes and Techniques. Chichester, UK:
John Wiley & Sons.
14) Internet materials, YouTube, etc.
15) Alford, M. W., and Lawson, J. T. Software Requirements Engineering Methodology (Development). TRW
Defense and Space Systems Group, 1979.
16) Thayer, R.H., and M. Dorfman (eds.), System and Software Requirements Engineering, IEEE Computer
Society Press, Los Alamitos, CA, 1990.
17) Royce, W.W. "Managing the Development of Large Software Systems: Concepts and Techniques", IEEE
Wescon, Los Angeles, CA, pp. 1-9, 1970. Reprinted in ICSE '87, Proceedings of the 9th International
Conference on Software Engineering.
18) Akao, Y., ed. (1990). Quality Function Deployment, Productivity Press, Cambridge, MA. Becker Associates
Inc.
19) Hauser, J. R., and D. Clausing (1988). "The House of Quality", The Harvard Business Review, May-June,
No. 3, pp. 63-73.
20) Lowe, A.J., and Ridgway, K. Quality Function Deployment, University of Sheffield.
21) Brad A. Myers. "A Brief History of Human Computer Interaction Technology", ACM interactions, Vol. 5,
No. 2, March 1998, pp. 44-54.
22) Kalyanmoy Deb. Optimization for Engineering Design: Algorithms and Examples, PHI, New Delhi, 2010.
23) Kevin N. Otto. Product Design: Techniques in Reverse Engineering and New Product Development,
Pearson, New Delhi, 2011.

Module 3
Design and Testing

Product design and Testing

Product design is an integral part of product development because the design has to be updated regularly
according to the needs of the customer. Different customers have different tastes, preferences, and needs,
and the variety of designs on the market appeals to the preferences of particular customer groups. For
example, consider the mobile phone market. A mobile phone today has various features in addition to the
basic one of making and receiving calls: cameras, a music player, gaming, mobile internet, touch screens, 3G
applications, multiple SIMs, and many more. While older generations may prefer a user-friendly mobile with a
simple keypad and simple basic operations, the younger generation will want the more complex ones with a
variety of attractive features. Including all of these features makes a mobile phone expensive. So when a
group of customers goes to buy mobiles, each one of them will choose a phone according to his or her budget,
mindset, and preferences. Some of them may have a preference for the camera, some may like the music
player more, others may like the gaming feature, and some may want the mobile internet option. Many
professionals like to use multiple SIMs to keep office and personal life separate. Some may simply go for
physical attributes such as the colour, size, and shape of the mobile, the durability of the battery, or the
touch-screen feature. Design is therefore an indispensable part of product development. Before designing a
product, various questions come to the mind of the design team: Can they do it? How will they do it? How
much time will it take? What will be the cost? Product design and testing affect the product's quality, cost,
and customer satisfaction.

Objectives:

- To understand industrial design and user interface design, and the interaction between them
- To know about various concept generation techniques and study concept selection techniques
- To study embedded system concept design
- To learn about the detailed design and testing of products (hardware and software)
- To understand the various types of prototypes and the concept of rapid prototyping and manufacturing
- To understand the integration of mechanical, embedded, and software systems, and to know about
  certification and documentation of products

Certain steps are common to the development of most product designs. They are the following:

1) Idea Development
Any product design begins with an idea. The main source of this idea is the need of the customer and a
product design that would satisfy it.

2) Screening
Once an idea is developed, it needs to be evaluated. Often an industry comes up with numerous product
ideas. At this stage the ideas are screened to decide which ones have the greatest chance of succeeding.

3) Preliminary Design and Testing
This is the stage where a preliminary design of the product is made. This preliminary model, called the
prototype, is introduced to the market and the whole situation is analyzed to determine whether the product
design will be a success or not. Depending upon the result of this analysis, the product design may be
modified repeatedly until it meets the needs of the customer.

4) Final Design
This is the final stage, where the final design of the product is made. After the success of the preliminary
model is ensured, the design is finalized for production.

Testing can be defined as a procedure for critical evaluation. It is a means of determining the validity, truth,
or quality of something: a process or product in this case. Testing plays a major part in the design of any
product, leading to revision of the model if any fault is present, upgrading of the existing product, and so on.
Like design, it is an indispensable part of successful product development. In fact, design and testing go
hand in hand. The success of a product design can be ensured only by testing. Design engineers translate
general performance specifications into technical specifications. Preliminary models called prototypes are
built and tested. Changes are made based on test results, and the process of revising, upgrading the model,
and testing continues. For service companies this may require testing the offering on a small scale and
working with customers to refine it; for example, a restaurant chain may launch a new menu item only in a
particular geographical area, and its success or failure over time decides whether it should be introduced in
other places. Product modification can be time consuming, and there may be a desire on the part of the
company to hurry through this phase to rush the product to market. But rushing may lead to even greater
losses if all the constraints are not properly worked out.

3.1 Conceptualization:

Product conceptualization begins with understanding the goal of the product. It converts the product vision
into form and function. Defining the product's purpose in descriptive and quantified terms establishes the
medium for the product vision to make it into the real world. Effective conceptualization combines
understanding of the product's function, the product's environment of use, and the person's skills being
enhanced by the product. Product conceptualization sees the future product in the final phase of production.
How the product will be used, combined with how the product will be produced, are exciting ingredients of
the product conceptualization phase. A team of experts in engineering, manufacturing, and regulatory issues
captures the essence of the customer's vision. The team draws on years of experience to ask the right
questions and develop a complete picture of what the customer envisions the product to be, from function to
performance to packaging.

3.1.1 Industrial Design and User Interface Design:

Industrial Design:

The Industrial Designers Society of America (IDSA) defines industrial design (ID) as "the professional service
of creating and developing concepts and specifications that optimize the function, value and appearance of
products and systems for the mutual benefit of both user and manufacturer."

An Industrial designer is dedicated to the cause of improving the quality of human environment with products
that are practical and aesthetic. He is a systems thinker who finds innovative solutions by correlating
technical and ergonomic aspects with human needs.

Industrial design mainly emphasizes ergonomics, i.e., human comfort, and the aesthetic needs of the
customers, along with the technological and economic aspects.

User Interface Design:

An important aspect of product design is the user interface. To work with a system, users have to be able to
control and evaluate the state of the system. The user interface, in the industrial design field of human
machine interaction, is the space where interaction between humans and machines occurs. The objective of
this interaction is effective operation and control of the machine on the user's end, and feedback from the
machine, which aids the operator in making operational decisions. For example, when driving a car, the driver
uses the steering wheel to control the direction of the vehicle, and the accelerator pedal, brake pedal and
gearstick to control the speed and acceleration of the vehicle. The driver perceives the position of the vehicle
by looking through the windshield and the rear-view mirror, and the exact speed of the vehicle by reading

the speedometer. The user interface of the automobile is on the whole composed of the instruments the
driver can use to accomplish the tasks of driving and maintaining the automobile.

Thus user interface design is the design of a product (machines, appliances, hardware, software, websites,
mobile communication devices, etc.) with the focus on the user's experience and interaction with the product.
The objective of user interface design is to make the user's interaction as simple and efficient as possible in
terms of accomplishing the user's objectives; hence user interface design is also known as user-centered
design. Good user interface design facilitates finishing the task at hand without drawing unnecessary
attention to itself.

Process
User interface design requires a good understanding of user needs. There are several phases and processes in
user interface design, some of which are more in demand than others, depending on the project.

- Functionality requirements gathering: assembling a list of the functions the system must perform to
  accomplish the goals of the project, based mainly on the potential needs of the users.

- User analysis: analysis of the potential users of the system, either through discussion with people who
  work with the users and/or the potential users themselves. Typical questions involve:
  - What would the user want the system to do?
  - How would the system fit in with the user's normal workflow or daily activities?
  - How technically knowledgeable is the user, and what similar systems does the user already use?
  - What interface look & feel styles appeal to the user?

- Information architecture: development of the process and/or information flow of the system (i.e. for
  phone tree systems, this would be an option tree flowchart, and for websites this would be a site flow
  that shows the hierarchy of the pages).

- Prototyping: development of wireframes, either in the form of paper prototypes or simple interactive
  screens. These prototypes are stripped of all look & feel elements and most content in order to
  concentrate on the interface.

- Usability inspection: letting an evaluator inspect a user interface. This is generally considered cheaper
  to implement than usability testing, and can be used early in the development process, since it can be
  used to evaluate prototypes or specifications for the system, which usually can't be tested on users.

- Usability testing: testing of the prototypes on actual users, often using a technique called the
  think-aloud protocol, where you ask the user to talk about their thoughts during the experience.

- Graphic interface design: the actual look-and-feel design of the final graphical user interface (GUI). It
  may be based on the findings developed during usability testing if usability is unpredictable, or based
  on communication objectives and styles that would appeal to the user. This phase is often a
  collaborative effort between a graphic designer and a user interface designer, or handled by one person
  who is proficient in both disciplines.

Integration of Industrial design and User Interface Design:

The integration of industrial design and user interface design is becoming essential these days. Many times
we see products having a decent physical or industrial design but a poor interaction or interface design, for
example a mobile phone with a beautiful, slim, and sleek appearance but a poor user interface, while others
have interfaces that work very well but poor industrial design, e.g. a mobile phone with a good user interface
but a bulky, heavy appearance. The main causes of these failures are: software development by industrial
designers lacking knowledge of interface design, or vice versa; lack of communication between industrial
designers and interface designers, accompanied by minimal front-end planning and an overall lack of focus
on interactive work; and, most importantly, a lack of central design leadership which can handle both
departments properly and be responsible for the whole product development.

For any industrial designers, the rise of the user interface is inevitable, but recognizing a change and
capitalizing on the opportunities are two different things. Industrial designers and consultants have
addressed the growth of user interface in various ways: avoiding it, accepting it and, in some cases,
advantageously shifting towards it. The advantages of designing the user interface are both financial and
functional. For consultants, interface design services can command a higher rate than comparable
stand-alone industrial design projects, and combined services deliver more usable and cohesive products. The
opportunity to increase the product variety also increases with each added feature.

The quantity and complexity of the user interfaces are the key factors affecting integrated user interface
design projects. In the case of a small number of less complex interfaces, the industrial design team is
capable of doing the task. But with an increase in the complexity or the number of interfaces, the difficulty
for the industrial design team also increases, and at a saturation point the industrial design team may fail. In
such conditions, user interface designers are a must. Hence, the industrial design group needs either to hire
or build a team of user interface designers, or to outsource the task to such a team or organization.

The Future of Industrial Designers in User-Interface Design

Some presume that the role of industrial designers is thinning as user-interface design continues to flood
product development. However, the rapid emergence of user interfaces that utilize the physical dynamics of
the human body will depend on a solid understanding of ergonomics, physical human factors, and aesthetics,
and many industrial designers have expertise in interface design. From touch and multi-touch phones and
computers to gestural-interface gaming systems, as the medium becomes less visible and physical, the
connections between people and technology are becoming stronger. Creating effective physical-to-digital
interactions is a unique challenge. It is very difficult to find a one-size-fits-all solution, the reason being the
diversity in the physical capabilities of human beings, which may be due to differences in age, gender, and
physical condition. A human-centered approach, with collaboration between industrial and interface
designers with expertise in human factors and ergonomics, is necessary. In other words, as people and
technology become better integrated than ever before, industrial and interface designers will need to do
likewise.

3.1.2 Introduction to Concept generation Techniques:

A concept is a general idea derived or inferred from specific instances or occurrences. A concept may be
considered an impression which refers to the figure of an object (a product in this case), along with other
representations such as the characteristics or functions of the object, which existed, exists, or might exist in
the human mind as well as in the real world. In the product development process, the very early stage of the
design process, during which an initial idea or specification is generated, is called concept generation. The
process of concept generation consists of the following steps:

Step 1: Clarify the problem: First the problem or the task is identified, i.e., what type of product is to be
developed. The complex problem is broken into simpler sub-problems. Of those sub-problems, the initial
efforts are focused on the most critical ones.

Step 2: Search externally: In order to learn more about the product, we have to investigate externally. We
have to interview the lead users of such products or investigate their buying patterns, check whether any
patents regarding the product already exist, search for any related published literature, compare the
performance of other companies in this product category, find out the flaws and limitations, consult experts
regarding this product category, etc.

Step 3: Search internally: Next, investigation is carried out within the company after the external data has
been obtained. The investigation is carried out in groups or individually. Finally conclusions, inferences, and
ideas are drawn out, various analogies in the investigation are pointed out, and concurrences in ideas are
also taken into consideration.

Step 4: Explore systematically: After the ideas have been developed from the previous investigations,
concept classification trees and concept combination tables are developed. All the concepts are explored
systematically, and the most appealing ones are taken into consideration.

Step 5: Reflect on the results and the process: The results of the most appealing concepts and the whole
process are reviewed, and further research for improving these concepts and the whole process is carried
out.

Concept Generation Techniques:

Various concept generation techniques are:

1) Brainstorming: Brainstorming plays a very important role in concept generation. The various steps before
organizing a brainstorming session are:
- A diversified group of people with knowledge of various fields is formed.
- An environment for creativity and risk taking is created.
- Games and exercises are used to stimulate creative thinking and minimize conceptual blocks or barriers.
- A facilitator is selected to catalyze the process.
- A recorder is used to record or write down ideas as they are presented.
- Provocative actions or stimuli are used if the idea process slows down.
- A shared ideation space, i.e. concurrence in ideas, is used to help in the generation.

Brainstorming Rules: Points to be kept in mind during brainstorming sessions are:
- Ensure that each participant has a chance to express ideas.
- Listen to everyone.
- Do not allow judgments or critical discussion.
- Strive for quantity.
- Let participants build spontaneously on the ideas of others.

Brainstorming Technique: The main process of a brainstorming technique involves the following steps:
1) The participants are allowed to generate ideas prior to the brainstorming meeting.
2) A round-robin method is used where everyone has one chance to introduce an idea. This technique is
called the nominal group technique.
3) The next method is the method 6-3-5 (6 participants, 3 ideas, 5 rotations):
- Each participant has to generate 3 ideas.
- After a pre-defined period of time, each participant has to rotate or pass these ideas to the next
  participant.
- The neighbor has to modify or enhance these ideas, or create 3 more new ideas.
- This process is repeated.
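The rotation bookkeeping of the 6-3-5 method can be sketched in code. This is a minimal illustration only; the participant labels and idea strings below are invented placeholders:

```python
# Sketch of the 6-3-5 brainwriting rotation: 6 participants each write
# 3 ideas on a sheet, then pass the sheet to a neighbour, who adds to it.
# After 5 rotations every sheet has visited every participant.
# Participant labels and idea strings are invented placeholders.

def brainwriting_635(participants, rotations=5, ideas_per_round=3):
    # Each sheet remembers its originator; ideas accumulate on it.
    sheets = [{"origin": p, "ideas": []} for p in participants]
    for rnd in range(rotations + 1):          # initial round + 5 rotations
        for person, sheet in zip(participants, sheets):
            for i in range(ideas_per_round):
                sheet["ideas"].append(f"{person}: idea {rnd}.{i + 1}")
        # Pass every sheet to the next participant for the following round.
        sheets = sheets[-1:] + sheets[:-1]
    return sheets

sheets = brainwriting_635(["P1", "P2", "P3", "P4", "P5", "P6"])
print(len(sheets[0]["ideas"]))                # 6 rounds x 3 ideas = 18 per sheet
print(sum(len(s["ideas"]) for s in sheets))   # 108 contributions in total
```

With 6 participants, the method therefore yields up to 108 idea contributions in six writing rounds, which is why it is valued as a fast, silent alternative to open brainstorming.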

2) Collaborative Sketching: In this process, five participants team up on the incremental development of
ideas.
- No direct communication is permitted between participants.
- Each participant sketches one idea/concept on a sketchpad for solving the problem at hand.
- After some time, each participant passes his/her sketch to the person sitting next to him/her.
- Each participant modifies the sketch received or develops it further in any way he/she chooses. Portions
  of the previous sketch can be erased, but not all of it.
- The rotations continue until the originator, that is, the person who first drew the sketch, gets his/her
  sketch back.

3) IDEO Idea Cards: This technique involves the following steps:
- Diversified group: A group of people with knowledge and interests in diverse fields is formed to talk
  about the product.
- Experience prototype: A preliminary model is created and tested.
- Empathy tools: This tool involves simulating someone else's experiences regarding the product.
- Emotional dimension: Personal histories of the product are collected.
- A day in the life: How a person spends his or her day with the product is investigated.

- Behavioral sampling: The subjects are given pagers and are checked in on randomly throughout the day.
- Extreme user interviews: Interviews of extreme users of the products are carried out.
- Foreign correspondents: Information from other countries is also collected.

4) Functional Decomposition: The various steps in this technique are:
- The overall product function is formulated.
- The overall product function is split into sub-functions.
- Material, energy, and information flows for these sub-functions are investigated and identified.
- The functional solutions of others are also accessed.
- A number of solutions for each sub-function and the auxiliary functions are searched for and identified.
- These solutions are combined to embody physical concepts.
- Expertise and heuristics are used to eliminate infeasible solution combinations.
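The decomposition steps above can be illustrated with a small sketch. The coffee-maker function tree and candidate solutions below are invented for illustration and are not taken from the text:

```python
# A minimal sketch of functional decomposition: the overall function of a
# hypothetical coffee maker (invented example) is split into sub-functions,
# and each sub-function is matched against candidate physical solutions.

function_tree = {
    "brew coffee": {                          # overall product function
        "store water":    ["tank", "cartridge"],
        "heat water":     ["resistive element", "induction coil"],
        "meter coffee":   ["manual scoop", "dosing auger"],
        "mix and filter": ["paper filter", "metal mesh"],
    }
}

def count_combinations(tree):
    """Number of concept combinations obtained by picking one solution per
    sub-function (before infeasible combinations are eliminated)."""
    total = 1
    for solutions in next(iter(tree.values())).values():
        total *= len(solutions)
    return total

print(count_combinations(function_tree))      # 2 x 2 x 2 x 2 = 16
```

Such a combination table grows multiplicatively, which is why the last step, eliminating infeasible combinations through expertise and heuristics, is essential.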

5) Concept Expansion: The various steps in this technique (the SCAMPER checklist) are:
- Substitute
- Combine
- Adapt
- Modify or magnify
- Put to other uses
- Eliminate or minify
- Reverse or rearrange

6) TRIZ/CREAX: (TRIZ is a Russian acronym for the theory of inventive problem solving.)

TRIZ research starts with the assumption that creative innovations are based on some universal principles of
invention. The three primary findings, drawn from a study of some 2 million patents, are:
- Problems and solutions were repeated across industries and sciences.
- Patterns of technical evolution were repeated across industries and sciences.
- Innovations used scientific effects outside the field where they were developed.

TRIZ looks for contradictions and conflicts in concepts. A common example is a table which has high strength
but at the same time is light in weight. TRIZ uses a total of 40 inventive principles to generate concepts.
Some of these are: segmentation, extraction, local quality, asymmetry, combination, universality, nested doll,
counterweight, prior counteraction, prior action, prior cushioning, equipotentiality, the other way round,
spheroidality, dynamics, partial or excessive action, etc.

7) Innovative Workshops: Innovative workshops are also a helpful technique for concept generation.

3.1.3 Concept Screening and Evaluation:

Concept screening and concept evaluation are two methods of concept selection. Concept selection is a
pre-defined methodology for identifying and evaluating new product ideas or product concepts. The first step
in evaluating and identifying viable product concepts is to conduct a thorough investigation of the target
product category. The investigation process gathers information on the industry dynamics and competitive
environment of the target product category. This information is then analysed as part of the concept
selection model.
Concept Screening

Screening is a quick, approximate evaluation aimed at producing a few viable alternatives. The concept
screen can be as simple as a checklist of criteria in the form of questions that fall into two categories:
must-meet and should-meet criteria. Must-meet criteria are questions used to determine the feasibility of
the opportunity. These criteria should be structured as closed-ended questions and are designed to provide
go/no-go decision points. Examples of must-meet criteria include: Does the product reflect positively on the
brand? and Does the product have any health or safety issues? Should-meet criteria are often more specific.
Examples of should-meet criteria include product varieties or flavors, consumer usage, seasonality, and
profit margins.
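The must-meet screen described above amounts to a go/no-go filter, as in this minimal sketch. The criteria strings and concept data are invented for illustration:

```python
# Sketch of must-meet concept screening: each criterion is a closed-ended
# (yes/no) question, and a single "no" is a no-go decision. The criteria
# strings and concept data below are invented for illustration.

MUST_MEET = [
    "reflects positively on the brand",
    "free of health or safety issues",
]

def screen(concepts):
    """Return only the concepts that pass every must-meet criterion."""
    return [c for c in concepts
            if all(c["answers"].get(q, False) for q in MUST_MEET)]

concepts = [
    {"name": "Concept A",
     "answers": {"reflects positively on the brand": True,
                 "free of health or safety issues": True}},
    {"name": "Concept B",
     "answers": {"reflects positively on the brand": True,
                 "free of health or safety issues": False}},
]

print([c["name"] for c in screen(concepts)])  # ['Concept A']
```

Should-meet criteria would then be applied to the survivors, typically with graded rather than yes/no answers, which leads naturally into the scoring stage below.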

Concept Evaluation [Scoring]:

Scoring is a more careful analysis of these relatively few concepts in order to choose the single concept most
likely to lead to product success. It is used when increased resolution will better differentiate among
competing concepts. In this stage, the team weighs the relative importance of the selection criteria and
focuses on more refined comparisons with respect to each criterion. It follows a six step process. The steps
are:

1) Prepare the selection matrix: This method uses a weighted sum of the ratings to determine concept
ranking. Either a particular concept is taken as the overall reference concept, or separate reference
points are used for each criterion considered. For example, consider the product as a pen. Various concepts
related to a pen may be: a ball point pen, a spring-type pen, an ink pen, a gel pen, a ball point pen with a
rubber grip, a use-and-throw ball point pen, etc. The various criteria may be: ease of handling and writing,
durability, weight of the pen, speed of writing, tendency to leak, weather adjustability of ink, reliability,
aesthetic sense, etc.

The use of hierarchical relations is a useful way to break the criteria down further. For example, ease of
handling may be further divided into ease of refilling, ease of capping and uncapping, etc. After the criteria
are entered, importance weights are added to the matrix depending upon the importance of each criterion. For
example, the reliability criterion of a pen is likely to carry a higher weighting than aesthetic sense. Several
different schemes can be used to weight the criteria, such as assigning an importance value from 1 to 5, or
allocating 100 percentage points among them.

2) Rate the concepts: Ratings can be done on scales of 1 to 5 or 1 to 10; the finer the scale, the more time
and effort required. Relative ratings are made against the reference, which is assigned a particular value.
Concepts that perform better than the reference on a criterion receive a higher rating, while those that
perform worse receive a lower one.

3) Rank the concepts: The rank of a particular concept is determined by adding the products of its ratings on
all the criteria and their respective weightings.

4) Combine and improve the concepts: Searches for further changes or combinations that can improve the
concept are carried out.

5) Select one or more concepts: The final selection is not simply a question of choosing the concept that
achieves the highest ranking after the first pass through the process. Rather, the team should explore this
initial evaluation by conducting a sensitivity analysis. Using a computer spreadsheet, the team can vary
weights and ratings to determine their effect on the ranking. Two or more scoring matrices with different
weightings for each criterion can be created to yield the concept ranking for various market segments with
different customer preferences.

6) Reflect on the results and the process: Two questions are useful in improving the process for
subsequent concept selection activities:
In what way (if at all) did the concept selection method assist in team decision making?
How can the method be modified to improve team performance?
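Steps 1 to 3 above amount to a small weighted-sum calculation. The weights and 1-5 ratings below are illustrative assumptions for the pen example, not figures from the text:

```python
# Sketch of a concept-scoring matrix for the pen example (steps 1-3).
# All weights and ratings are illustrative assumptions; in practice a
# reference concept would anchor the rating scale (e.g., at 3).

weights = {  # importance weights summing to 1.0
    "ease of handling": 0.25,
    "durability": 0.20,
    "leakage tendency": 0.30,
    "aesthetics": 0.25,
}

ratings = {  # concept -> rating on each criterion
    "ball point pen": {"ease of handling": 4, "durability": 4,
                       "leakage tendency": 4, "aesthetics": 3},
    "ink pen":        {"ease of handling": 2, "durability": 5,
                       "leakage tendency": 1, "aesthetics": 5},
    "gel pen":        {"ease of handling": 4, "durability": 3,
                       "leakage tendency": 3, "aesthetics": 4},
}

def total_score(concept):
    """Step 3: weighted sum of the concept's ratings over all criteria."""
    return sum(weights[c] * r for c, r in ratings[concept].items())

# Rank the concepts by descending weighted score.
ranked = sorted(ratings, key=total_score, reverse=True)
for concept in ranked:
    print(f"{concept}: {total_score(concept):.2f}")
# ball point pen: 3.75, gel pen: 3.50, ink pen: 3.05
```

Varying the weights in such a spreadsheet-style model is exactly the sensitivity analysis described in step 5.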

3.2 Detailed Design

3.2.1 Component Design and Verification

Design verification can be a pressure point in a development programme. Design verification (DV) is a critical
stage in the development of any device (Figure 3.1). It is the point at which design outputs, that is, the
performance aspects of the product you have designed, are confirmed as meeting the design inputs set out in
the Product Requirements Specification (PRS). It is also the stage at which any residual risks relating to the
device are assessed against pre-agreed acceptance criteria. Sign-off of successful DV is required prior to
proceeding to Process Validation and Design Validation, the latter of which frequently entails clinical trials.

Figure 3.1. Device development waterfall

DV typically requires the involvement of many different stakeholders, all with a vested interest in it being
performed smoothly and successfully (Figure 3.2). There are often serious implications for costs and
timescales of any significant delay due to unmet requirements. Sandwiched between notoriously delay-prone
design proving and component sign-off, and the generally immovable, resource-heavy and expensive
validation activities, DV frequently becomes a significant pressure point in development programmes. Yet,
despite this, organizations do not always prepare for DV activities as effectively as they could. Through
detailed planning and the application of best practice approaches it is possible to avoid many common pitfalls
and reduce the risk of failing to hit verification deadlines.

Figure 3.2. Elements of design verification

The importance of a good product requirements specification

The planning process for DV activities starts early, at the product specification stage, and one of the first
important things to bear in mind is "don't specify what you can't verify". (Another is "don't specify what you
don't need to verify".) Making sure that the specification of product requirements includes some detail on the
verification method is a good way of ensuring the practicality of proving a requirement that is being
considered. At this stage it does not need to extend to details of test methods and protocols, but should
state, for example, whether the trial method will be a laboratory test, design review or user study.

Specifications are generally built around sets of "Must" and "Want" requirements and there are differing
views on whether or not some or all of the "Want" requirements will be covered by the verification. It is also
beneficial at the specification stage to capture the party responsible for verification of each requirement and
determine whether any input is required from others. Verification activities can sometimes be divided
between different departments, offices or organizations and this is the first opportunity to make sure that
nothing will fall through the gaps before things become challenging.

All of this may seem to be a lot of detail over issues that are a long way off, given that the development
process has not yet started. However, ensuring a clear rationale for all specified requirements, including which
of them will be verified, and how and by whom, sets the path towards a structured and well understood
verification programme.

Risk management activities

The PRS provides guidance on a clear route through the development process to DV. An equally important,
but perhaps less obvious strand of activities that must be completed and resolved prior to signing off a
verified design derives from the risk management programme. Risk analysis activities, running from the start
of the project in-line with an agreed risk management plan, will include detailed risk assessments such as
User, Product and Process Failure Modes and Effects Analysis (FMEA). These should be performed during
detailed design, well in advance of the DV stage because they will inevitably highlight the need for a range of
risk mitigation actions. Some of these will be analytical in nature (for example, conducting an additional
tolerance stack to check a clearance), and some may relate to the introduction of manufacturing controls. But
there are also likely to be a number that call for physical testing of devices (for example conducting additional
checks on product robustness) and the need to assess potential use related risks through user studies.
Planning in and executing test and user study programmes will ensure consistently clear evidence for risk
mitigation and avoid the need for more subjective methods such as design review.

To be clear, a thorough risk management process does not make DV easier. It will actually add to the number
of things to do. However, it does help achieve the overriding objective, which is non-negotiable: delivery of a
device that is safe to use. The point is that, through careful and thorough planning, the verification
programme can be designed to accommodate all the risk mitigation activities that are required, which brings
us to the next step.

The design verification plan

In whatever form it takes, and there are many, the DV plan is the cornerstone of the entire process. A good DV
plan will form the bridge between the requirements specification and the full range of verification activities,
describing in detail the "trials" to be conducted, including details of device quantities, whether or not
protocols are required, who is responsible for conducting the trial and whether the devices being tested will
have been pre-conditioned. It will also describe the pre-conditioning activities, such as temperature cycling,
drop, shock, transit and accelerated ageing, with full rationale where appropriate.

The scope of the plan extends across all verification activities, thus it will not only cover laboratory testing.
The nature of user studies will be described, as well as design and project reviews. Some plans will also cover
the specific risk mitigation activities, although this is not always the case. Much of the benefit of the DV plan
actually arises out of the process of writing it. It forces discussions and decisions on critical issues and also
helps to work through the practical details of which devices will go into which test and in what order. This, in
turn, tells you how many devices you will need, when, and what you need to do to them. These are fine
details, but they are things that tend to become bigger issues if tackled at the last minute.

Ensuring supply of approved components and devices

From personal experience, the most common cause of delay to the start of a DV programme is the
availability of signed-off components and devices. Sometimes this is due to ongoing "tweaks" to the design,
or to tooling programmes taking longer than planned, or to having to develop inspection methods (including
jigs and fixtures) late in the process to achieve the required Repeatability and Reproducibility.

For all the advance planning with respect to the verification activities themselves, you cannot do anything
without acceptable devices; hence, just as much attention needs to be paid to achieving these. Other than
allowing realistic timelines for tooling build, de-bug and correction, there are four recommendations:

1) Ensure that component designs and drawings are discussed with manufacturers well in advance. Design
for Manufacturing and Assembly and Process FMEA can be highly effective ways of facilitating this, but
need to allow views to be expressed in an open and honest way.
2) Develop a comprehensive tolerance analysis as soon as the detailed design becomes clear. This aids the
identification of critical dimensions and the tool/component reconciliation process.
3) Agree in advance the process for component sign-off, such as: target process capabilities, sample size
and inspection dimensions.
4) Establish inspection methods for critical dimensions (not only process control dimensions) and prove
them as soon as practicable.

Test methods

By the time you actually start DV testing you should effectively be home and dry, the objective being to only
start testing when you are confident of achieving all requirements. Making sure that the test methods
employed are as required should not be a major hurdle. Many device test methods are standard such as drop,
temperature cycling, and vibration and crush resistance. Therefore, achieving a good test method for product
delivery devices, for example, is largely related to ensuring knowledge of the appropriate standards and
having access to the necessary equipment. Certain risk mitigation tests may be nonstandard in nature and
hence require some more development work, but the main area where test methods pose a risk to DV is when
the product being developed is novel in some aspect of its performance.

Watching for the unexpected

Avoiding common pitfalls, mostly through good planning and organisation, will significantly de-risk
verification programmes. However, the chance of something totally unexpected occurring and causing major
issues must always be acknowledged. But how often are such issues truly unpredictable?

Sometimes the issue that derails programmes during verification has actually been flagged up, discussed or
pondered over during the development process, but for some reason it failed to get fully resolved. Perhaps
there is an "elephant in the room" aspect about it, which indicates that nobody in the room has a clear
responsibility to sort it out. Or maybe it never quite gets to the top of the right people's priority list until it is
too late. Watch out for these issues and make sure they are resolved.

Where unexpected failures do occur, it is critical to resolve them as quickly as possible. To achieve this it is
extremely valuable to have in place a good understanding of why a design works like it does and where its
limits of functionality may be. This knowledge can be built up through a combination of theoretical models
and engineering analyses (for example, of mechanism dynamics, stresses and fluid flows) and of
characterization test programmes such as Taguchi. Building this understanding throughout the development
programme rather than at the end in an emergency situation is far more beneficial and effective.

Despite all the precautions and planning, in some respects it is not surprising that new issues will arise during
DV. It is often, after all, the first time that devices have been tested in quantity or the first time they have
been subjected to some pre-conditioning such as sterilization or accelerated ageing, which has an
unanticipated effect. However, by thinking carefully and thoroughly about all aspects of the DV programme
from the project outset we can do a great deal to minimize the chances of this happening.

Design verification is an essential step in the development of any product. Also referred to as qualification
testing, design verification ensures that the product as designed is the same as the product as intended.
Unfortunately, many design projects do not complete thorough design qualification, resulting in products that
do not meet customer expectations and require costly design modifications.

Project activities in which design verification is useful:

Concept through to Detailed Design
Specification Development
Detailed Design through to Pre-Production
Production

Other tools that are useful in conjunction with design verification:

Requirements Management
Configuration Management
FMEA

Many customers hold the testing of products in the same regard as the actual design. In fact, many
development projects specify design verification testing as a major contract requirement, and customers will
assign their own people to witness testing and ensure that it is completed to satisfaction. Although
potentially costly, the expense of not testing can be far greater; therefore, projects that do not specifically
require testing are wise to include testing as part of the development program. Testing may occur at many
points during the design process, from concept development to post-production. This tool will focus mainly
on prototype testing; however, many of the guidelines provided can be applied to all testing.

Development tests conducted with materials, models or subassemblies are useful for determining the
feasibility of design ideas and gaining insights that further direct the design. The results of these tests
cannot be considered verification tests; however, their use can be crucial.
Prototype testing occurs with items that closely resemble the final product. These tests generally stress
the product up to and beyond specified use conditions and may be destructive. Testing may occur at
many levels: generally, the more complex the product, the more levels of testing are done. For a complex
system, tests might be conducted at the unit level, then the subsystem level, and finally at the system level.
Testing with prototypes allows the correction of deficiencies and subsequent re-testing before large
commitments are made to inventory and production readiness.
Proof testing is another type of design verification testing that employs prototypes. Rather than testing
to specification, proof tests are designed to test the product to failure. For example, if a table is designed
to support a certain amount of weight, prototype testing will be used to ensure that the table will
support the specified weight plus a pre-determined safety factor. Proof testing would continue loading
the table until failure is reached - likely beyond the specified limits. These tests are often used to identify
where eventual failures might occur. This information is often useful for identifying potential warranty
issues and costs.
Acceptance testing is a form of non-destructive testing that occurs with production units. Depending on
the criticality of failures, testing costs and the number of units produced, tests may be conducted on
initial production units and/or random or specified samples (e.g., every 10th unit), or every single unit
produced.
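As a rough illustration of the difference between testing to specification and proof testing, the table example can be sketched with made-up numbers (none of the figures below come from the text):

```python
# Hypothetical figures for the table example: rated for 100 kg with a
# 1.5x safety factor. Verification tests to rated load x safety factor;
# a proof test keeps loading until failure to find the true margin.

rated_load_kg = 100.0
safety_factor = 1.5
verification_load = rated_load_kg * safety_factor  # pass/fail threshold

def proof_margin(failure_load_kg):
    """Margin between the observed failure load and the verification
    load; a positive margin means the design passes with room to spare."""
    return failure_load_kg - verification_load

print(verification_load)     # 150.0 kg must be supported to pass
print(proof_margin(210.0))   # 60.0 kg of margin beyond the test load
```

The proof-test margin is the kind of information used to anticipate where eventual field failures, and hence warranty costs, might arise.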

Application of Design Verification Testing

Verification Methods

There are a number of methods that can be used in verification testing. Some are relatively inexpensive and
quick, such as inspection, while others can be costly and quite involved, such as functional testing. A
description of the most common verification methods follows:

Demonstration: Demonstrations can be conducted in actual or simulated environments. For example, if a
specification for a product requires that it be operable with one hand, likely the simplest method for
verifying this requirement is to have someone actually operate the product with one hand. As a record of
the test, it may be acceptable to simply have the test witnessed or, alternatively, recorded on video. The
cost of a demonstration will vary according to the complexity of the demonstration; however, most are
relatively inexpensive.
Inspection: Inspection is usually used to verify requirements related to physical characteristics. For
example, if a specification requires that the product is a certain color, a certain height or labelled in a
specific manner, inspection would be used to confirm that the requirements have been met. Inspection is
typically one of the less expensive verification methods.
Analysis: Analysis is often used in the design of a product. It can also be used to verify the design, and is
often the preferred method if testing is not feasible or the cost of testing is prohibitive, and risk is
minimal. For example, analysis may be used to support the argument that a product will have a lifecycle
of 25 years.
Similarity: If a design includes features or materials that are similar to those of another product that has
met or exceeded current specifications, an analysis to illustrate this similarity may be used to verify a
requirement. For example, if a specification requires that a product be water resistant and materials that
have been proven to be water resistant in other applications have been chosen, an analysis of similarity
could be used.
Testing: Testing is one of the most expensive verification methods; its cost depends on complexity as well
as equipment and facility requirements. However, sometimes it is the only acceptable means for verifying
aspects of a design. For example, a product may be required to survive transportation over various
terrains (e.g., dirt roads). The most common method for validating this requirement is transportation
testing, where the product is placed in a test bed that moves up and down and vibrates to simulate
worst-case transportation. Although this testing requires relatively expensive and specialized equipment,
it allows the testers to observe the test and is more economical than using a truck to validate by
demonstration.

Selection of Method(s)

Often a number of verification methods may be equally appropriate to verify a requirement. If this is the case,
the cost and the time that is required to complete the verification should be considered. For example, to verify
that a product satisfies a requirement to fit through a standard 30 by 7 doorway, inspection (measure the
height and width of the product) or demonstration (move the product through the doorway) can be used.

Sometimes it is necessary or useful to utilize two or more methods of verification. For example, if a
specification requires that a product be usable by persons from the 1st to the 99th percentile, a
demonstration may be conducted with representatives from each extreme and an analysis completed to
prove accessibility to all other sized persons within the specified range.

Identification of Verification Activities

Initial identification of verification activities should be done concurrently with specification development. For
each specification developed, a method for verifying the specification should be determined. Usually the
method at this stage will include the method(s) to be employed and a general idea on how the test will be
conducted. This forces the designer to make sure that the specification is objective and verifiable, and also
allows the test engineers to get a head start on putting together a detailed test plan and procedures. The one
caution is that this parallel development puts responsibility on the designer to make sure that test
engineering is promptly informed of any changes to specifications which normally is of minimal concern in
integrated team environments. If verification activities are not identified during the preparation of the
specification, the design engineer needs to ensure enough notice is given to test engineering to allow timely
planning and preparation. The communication and identification of required testing between design and test
can occur through various modes and will generally depend on the overall approach to the design project (e.g.,
integrated team versus department based) and established company procedures. With an integrated team
approach, the test engineer may take the product specification and work jointly with designers and other
members of the team to identify and plan tests. If the design approach is department or functionally based,
the design engineers may be required to complete and forward test requests to the test engineering
department, as the test engineers are not intimately involved in the development of the design.

Preparation of Verification Activities

The preparation of verification activities involves:

Determining the best approach to conducting the verifications
Defining measurement methods
Identifying opportunities to combine verification activities (i.e., a single demonstration or test might be used
to verify a number of requirements)
Identifying necessary tools (e.g., equipment, software, etc.) and facilities (e.g., acoustical rooms,
environmental chambers)
Identifying a high-level verification schedule

Once the above items have been addressed, the overall verification plan should be reviewed with the design
team to address any issues before detailed planning occurs. Issues that may arise are insufficient in-house
equipment/facilities or expertise, and problems with schedule.

Many tests often require specialized equipment and facilities that are not available in-house (e.g.,
environmental chambers); therefore, out-of-house facilities that can conduct these tests must be identified.
At this time, estimates for out-of-house testing are usually obtained. These help to determine which test
facility to use or, if costs exceed budget constraints, whether to redefine the verification requirements such
that verification can be conducted in-house. If tests will be subcontracted, this will generally be managed by
test engineering.

Problems with the verification schedule may be due to a number of reasons. The time to complete the
verification may be insufficient; in this case, some trade-offs may be necessary. Time may need to be
increased, or the number or duration of tests decreased. Sometimes a brainstorming session with the
development team may lead to creative solutions. Another problem with schedules may be that certain
verification activities need to take place during certain weather conditions (e.g., snow) while the period for
verification will occur during the summer months. It is usually undesirable to delay a project in the
expectation of weather conditions; therefore, alternative means must be considered.

Detailed Verification Planning and Procedures

Once all of the issues surrounding initial preparation have been resolved, verification procedures can be
developed. Written procedures should be developed for even the simplest of verification activities. This
increases the quality and accuracy of results, and also ensures that repeated tests are conducted in an
identical manner. The size of these procedures will depend on the complexity of the activities to be performed
and therefore can be as short as a few lines or as large as a substantial document. The format for procedures
should be tailored as appropriate and only those items in the outline relevant to an individual verification
activity should be included.

An important consideration to make when developing detailed verification plans and procedures is the order
in which activities are conducted. Verification time can be substantially reduced if all tests requiring a similar
set-up are conducted sequentially. Also, shorter activities can be scheduled to occur while longer activities
that do not require consistent monitoring are in progress. Two final considerations are related to the order in
which activities are conducted. If testing is destructive, it should be conducted in order from least to most
destructive to limit the number of test units required.

Additionally, it is sometimes beneficial to order verification activities such that the outputs of one test can be
used as inputs to subsequent tests.
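The ordering guideline above can be sketched as a simple sort. The activity names and destructiveness ranks below are illustrative assumptions:

```python
# Sketch of the ordering guideline: run verification activities from
# least to most destructive so one batch of test units survives as many
# tests as possible. Activities and ranks are illustrative assumptions.

activities = [
    ("crush to failure", 3),   # fully destructive: run last
    ("visual inspection", 0),  # non-destructive: run first
    ("drop test", 2),
    ("temperature cycling", 1),
]

# Sort by destructiveness rank (second element of each tuple).
ordered = [name for name, rank in sorted(activities, key=lambda a: a[1])]
print(ordered)
# ['visual inspection', 'temperature cycling', 'drop test', 'crush to failure']
```

Sequencing this way limits the number of test units required, since only the final activity consumes them.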

Conducting Verification Activities

Execution of Verification Activities

It is extremely important that the test procedures be followed to the letter when conducting verification
activities. A failure to do so may invalidate results and may have more dire consequences if the customer
believes that it was done intentionally to increase the probability of passing verification or future product
failures lead to legal action. If for some reason it is discovered that procedures require modification, these
changes should be documented and the necessary approvals obtained before continuing with the affected
verification activity. If a verification activity is continued after a modification rather than started over, it
should be noted in the record of results.

Recording of Results

Careful collection and recording of data is extremely important. The customer may contractually require these
records and they may be a prerequisite for obtaining certifications (e.g., Canadian Standards Association).
Attachment B provides an example outline for recording results of verification activities. Depending on the
requirements of the development project, the verification records may be sufficient to report the results. In
other cases, a formal test report may be necessary. All test records and reports should be reviewed and
approved as defined by company procedures. If formal procedures are not in place, the test engineering leads,
the project manager, a customer representative or some other authority as agreed upon can review these
items.

Highlighting Non-Conformance

If a non-conformance (e.g., anomaly or failure) is discovered through verification activities, it is important to
first attempt to verify that the non-conformance is with the product and not due to test equipment or other
extenuating factors. If the non-conformance is product related, then details should be fed back to the
designers as quickly as possible rather than waiting for the completion of a test record or report. In highly
integrated teams, the optimum method for feedback may be to have the designer witness the non-
conformance first-hand. In any case, a non-conformance report should be generated. It is important that the
test engineer maintain these reports to ensure that all non-conformances are adequately addressed.

Verification Costs:

Design verification can occupy as much as 70% of the project development cycle, so every approach that
reduces this time has a considerable influence on the economic success of a product. It is not unusual for a
complex chip to go through multiple tape-outs before release.

Verification is a bottleneck in the design process:

High cost of design debug: on some projects there are twice as many verification engineers as design
engineers, and debug time feeds directly into time-to-market.
High cost of faulty designs (loss of life, product recall). For example, the launch failure in French Guiana on
June 4, 1996 was an $800 million software failure, and the Mars mission that crashed on December 3, 1999
due to an uninitialized variable represented a $4 billion development effort, of which more than 50% went
into system integration and validation.

Types of errors:

Error in specification

Unspecified functionality,
Conflicting requirements,
Unrealized features.

The only solution for this type of error is redundancy, because the specification sits at the top of the
abstraction hierarchy and there is no reference model against which to check it.

Error in implementation

Human error in interpreting design functionality,
Errors remaining from bugs in the synthesis program,
Usage errors of the synthesis program.

Solution 1:

Use a software program to synthesize an implementation directly from the specification, which eliminates
most human errors.
The use of synthesis tools is limited, because many specifications are written in conversational language;
automatic synthesis is not yet possible.
No high-level formal language yet specifies both functionality and timing (or other requirements).
Even if specifications are written in precise mathematical language, little software can produce
implementations that meet all requirements.

Solution 2 (more widely used):

Uncover errors through redundancy:

Implement two or more times using different approaches and compare.

In theory, the more implementations and the more different the approaches, the higher the confidence.
In practice, rarely more than two are used, because of cost and time, and because more errors can be
introduced in each alternative.

To make the two approaches different:

Use different languages as follows


Specification Languages: VHDL, Verilog, System C
Verification Languages: Vera, C/C++, e (no need to be synthesizable)

Sometimes comparison is hard, e.g. compare two networks with arrival packets that may be out of order.

Solution a: Sort the packets in a predefined way.
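Solution a can be sketched as follows. The two "implementations" and the packet format are hypothetical stand-ins for a real design and reference model:

```python
# Sketch of verification by redundancy with out-of-order outputs: two
# independent "implementations" (hypothetical stand-ins for a design and
# a reference model) are run on the same stimulus, and both outputs are
# sorted by a predefined key (the packet id) before being compared.

def impl_a(packets):
    # Reference model: emits packets in arrival order.
    return [(pid, payload.upper()) for pid, payload in packets]

def impl_b(packets):
    # Alternative implementation: happens to emit packets in reverse.
    return [(pid, payload.upper()) for pid, payload in reversed(packets)]

def outputs_match(out1, out2):
    """Canonicalise both outputs by packet id, then compare."""
    by_id = lambda p: p[0]
    return sorted(out1, key=by_id) == sorted(out2, key=by_id)

stimulus = [(2, "ack"), (1, "syn"), (3, "fin")]
print(outputs_match(impl_a(stimulus), impl_b(stimulus)))  # True
```

Without the canonical sort, the direct list comparison would fail even though both implementations process every packet identically.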

Solution b is a double-edged sword:

Verification engineers have to debug more errors (in both the design and the verification language), which
leads to higher cost.
Verification engineers may also have to deal with differences inherent to the languages (e.g., parallelism in
C); be aware of these differences.

Functional Verification:

Functional verification, in electronic design automation, is the task of verifying that the logic design conforms
to specification. In everyday terms, functional verification attempts to answer the question "Does this
proposed design do what is intended?" This is a complex task, and takes the majority of time and effort in
most large electronic system design projects. Functional verification is a part of more encompassing design
verification, which, besides functional verification, considers non-functional aspects like timing, layout and
power.

Functional verification is very difficult because of the sheer volume of possible test cases that exist in even a
simple design. Frequently there are more than 10^80 possible tests to comprehensively verify a design - a
number that is impossible to cover in a lifetime. This effort is equivalent to program verification, which is
NP-hard or even worse, and no solution has been found that works well in all cases. However, it can be
attacked by many methods. None of them are perfect, but each can be helpful in certain circumstances:
by many methods. None of them are perfect, but each can be helpful in certain circumstances:

Logic simulation simulates the logic before it is built.
Simulation acceleration applies special-purpose hardware to the logic simulation problem.
Emulation builds a version of system using programmable logic. This is expensive, and still much slower
than the real hardware, but orders of magnitude faster than simulation. It can be used, for example, to
boot the operating system on a processor.
Formal verification attempts to prove mathematically that certain requirements (also expressed
formally) are met, or that certain undesired behaviors (such as deadlock) cannot occur.
Intelligent verification uses automation to adapt the test bench to changes in the register transfer level
code.
HDL-specific versions of lint, and other heuristics, are used to find common problems.

Simulation based verification (also called 'dynamic verification') is widely used to "simulate" the design, since
this method scales up very easily. Stimulus is provided to exercise each line in the HDL code. A test-bench is
built to functionally verify the design by providing meaningful scenarios to check that given certain input, the
design performs to specification.

A simulation environment is typically composed of several types of components:

The generator generates input vectors that are used to search for anomalies that exist between the intent
(specifications) and the implementation (HDL Code). This type of generator utilizes an NP-complete type
of SAT Solver that can be computationally expensive. Other types of generators include manually created
vectors, Graph-Based generators (GBMs), and proprietary generators. Modern generators create directed
random and random stimuli that are statistically driven to verify random parts of the design. The
randomness is important to achieve a high distribution over the huge space of the available input stimuli.
To this end, users of these generators intentionally under-specify the requirements for the generated
tests. It is the role of the generator to randomly fill this gap. This mechanism allows the generator to
create inputs that reveal bugs not being searched for directly by the user. Generators also bias the stimuli
toward design corner cases to further stress the logic. Biasing and randomness serve different goals and
there are tradeoffs between them, hence different generators have a different mix of these characteristics.
Since the input for the design must be valid (legal) and many targets (such as biasing) should be
maintained, many generators use the Constraint satisfaction problem (CSP) technique to solve the
complex testing requirements. The legality of the design inputs and the biasing arsenal are modeled. The
model-based generators use this model to produce the correct stimuli for the target design.
The drivers translate the stimuli produced by the generator into the actual inputs for the design under
verification. Generators create inputs at a high level of abstraction, namely, as transactions or assembly
language. The drivers convert this input into actual design inputs as defined in the specification of the
design's interface.
The simulator produces the outputs of the design, based on the designs current state (the state of the
flip-flops) and the injected inputs. The simulator has a description of the design net-list. This description
is created by synthesizing the HDL to a low gate level net-list.
The monitor converts the state of the design and its outputs to a transaction abstraction level so it can be
stored in a 'score-boards' database to be checked later on.
The checker validates that the contents of the 'score-boards' are legal. There are cases where the generator
creates expected results, in addition to the inputs. In these cases, the checker must validate that the
actual results match the expected ones.
The arbitration manager manages all the above components together.
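The generator/monitor/scoreboard/checker interplay described above can be sketched in a few lines of Python. This is a minimal illustration, not any specific tool's API: the "design" that doubles its input, the reference model, and the transaction format are all invented for the example.

```python
from collections import deque

class Scoreboard:
    """Holds expected transactions and checks actual DUT outputs in order."""
    def __init__(self):
        self.expected = deque()
        self.errors = []

    def push_expected(self, txn):
        self.expected.append(txn)

    def check_actual(self, txn):
        if not self.expected:
            self.errors.append(("unexpected", txn))
            return
        want = self.expected.popleft()
        if want != txn:
            self.errors.append(("mismatch", want, txn))

def reference_model(x):
    return 2 * x          # the specification: the design doubles its input

def dut(x):
    return x << 1         # the "implementation" under verification

sb = Scoreboard()
for stimulus in [1, 2, 3]:          # generator: input vectors
    sb.push_expected(reference_model(stimulus))
    sb.check_actual(dut(stimulus))  # monitor feeds outputs to the checker

print("errors:", sb.errors)  # an empty list means every output matched
```

The checker never looks inside the DUT; it only compares monitored outputs against the reference model's expectations, which is exactly the role the scoreboard plays in a real simulation environment.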

Different coverage metrics are defined to assess that the design has been adequately exercised. These include
functional coverage (has every functionality of the design been exercised?), statement coverage (has each line
of HDL been exercised?), and branch coverage (has each direction of every branch been exercised?).
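Each of these coverage metrics reduces to "items exercised / items defined". A minimal sketch of functional coverage, with invented bin names standing in for design functionality:

```python
# Functional coverage as "bins exercised / bins defined"; bin names invented.
defined_bins = {"read", "write", "reset", "error"}
hit_bins = set()

def sample(event):
    """Record a covered bin each time the simulation exercises it."""
    if event in defined_bins:
        hit_bins.add(event)

for event in ["read", "write", "read", "reset"]:   # observed simulation events
    sample(event)

coverage = 100.0 * len(hit_bins) / len(defined_bins)
print(f"functional coverage: {coverage:.0f}%")  # 3 of 4 bins hit -> 75%
```

Statement and branch coverage work the same way, with lines of HDL or branch directions as the bins.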

Functional verification tools

Aldec
Avery Design Systems: SimCluster (for parallel logic simulation) and Insight (for formal verification)
Breker Verification Systems, Inc.: Trek (a model-based test generation tool for complex SoCs)
Cadence Design Systems
EVE/ZeBu
Mentor Graphics
Nusym Technology
Obsidian Software
One Spin Solutions
Synopsys

Verification vs. Validation:

Verification and Validation are independent procedures that are used together for checking that a product,
service, or system meets requirements and specifications and that it fulfills its intended purpose. These are
critical components of a quality management system such as ISO 9000. The words "verification" and
"validation" are sometimes preceded with "Independent" (or IV&V), indicating that the verification and
validation is to be performed by a disinterested third party.

It is sometimes said that validation can be expressed by the query "Are you building the right thing?" and
verification by "Are you building it right?"

In practice, the usage of these terms varies. Sometimes they are even used interchangeably.

The PMBOK guide, an IEEE standard, defines them as follows in its 4th edition:[2]

"Validation: The assurance that a product, service, or system meets the needs of the customer and other
identified stakeholders. It often involves acceptance and suitability with external customers. Contrast with
verification."

"Verification: The evaluation of whether or not a product, service, or system complies with a regulation,
requirement, specification, or imposed condition. It is often an internal process. Contrast with validation."

3.2.2 High Level Design/Low Level Design of product

Product design is the process by which an agent creates a specification of a product artifact, intended to
accomplish goals, using a set of primitive components and subject to constraints. Product design may refer to
either "all the activities involved in conceptualizing, framing, implementing, commissioning, and ultimately
modifying complex systems" or "the activity following requirements specification and before developing in a
stylized product engineering process."

Product design usually involves problem solving and planning a solution. This includes both low-level
component and algorithm design and high-level, architecture design.

Product design is the process of implementing solutions to one or more set of problems. One of the important
parts of product design is the product requirements analysis (PRA). It is a part of the product development
process that lists specifications used in product engineering. If the product is "semi-automated" or user
centered, product design may involve user experience design yielding a story board to help determine those
specifications. If the product is completely automated (meaning no user or user interface), a product design

may be as simple as a flow chart or text describing a planned sequence of events. There are also semi-
standard methods like Unified Modeling Language and Fundamental modeling concepts. In either case, some
documentation of the plan is usually the product of the design. Furthermore, a product design may be
platform-independent or platform specific, depending on the availability of the technology used for the
design.

Product design can be considered as creating a solution to a problem in hand with available capabilities. The main difference between product analysis and design is that the output of product analysis consists of smaller problems to solve. Also, the analysis should not be very different even if it is carried out by different team members or groups. The design focuses on the capabilities, and there can be multiple designs for the same problem depending on the environment the solution will be hosted in: an operating system, web pages, mobile devices, or even the new cloud computing paradigm. Sometimes the design depends on the environment it was developed in, whether it is created from reliable frameworks or implemented with suitable design patterns.

When designing a product, two important factors to consider are its security and usability.

High level/Low level product Design:

High-level design provides an overview of an entire system, identifying all its elements at some level of
abstraction. This contrasts with Low level Design which exposes the detailed design of each of these
elements.

Purpose

Preliminary design - In the preliminary stages of product development the need is to size the project and to identify those parts of the project that might be risky or time consuming.

Design overview - As the design proceeds, the need is to provide an overview of how the various sub-systems and components of the system fit together.

Design overview

A high-level design provides an overview of a solution, platform, system, product, service, or process. Such an
overview is important in a multi-project development to make sure that each supporting component design
will be compatible with its neighboring designs and with the big picture.

The highest level solution design should briefly describe all platforms, systems, products, services and
processes that it depends upon and include any important changes that need to be made to them. A high-
level design document will usually include a high-level architecture diagram depicting the components,
interfaces and networks that need to be further specified or developed.

The document may also depict or otherwise refer to work flows and/or data flows between component
systems. In addition, there should be brief consideration of all significant commercial, legal, environmental,
security, safety and technical risks, issues and assumptions.

The idea is to mention every work area briefly, clearly delegating the ownership of more detailed design
activity whilst also encouraging effective collaboration between the various project teams.

Today, most high-level designs require contributions from a number of experts, representing many distinct
professional disciplines. Finally, every type of end-user should be identified in the high-level design and each
contributing design should give due consideration to customer experience.

Software Architecture
The word software architecture describes the high level structures of a software system. It can be defined as
the set of structures needed to reason about the software system, which comprise the software elements,
the relations between them, and the properties of both elements and relations. The term software
architecture also denotes the set of practices used to select, define or design a software architecture. Finally,
the term often denotes the documentation of a system's "software architecture". Documenting software
architecture facilitates communication between stakeholders, captures early decisions about the high-level
design, and allows reuse of design components between projects. It is used to denote three concepts:
High level structure of a software system.
Discipline of creating such a high level structure
Documentation of this high level structure.

Software architecture characteristics


Software architecture exhibits the following characteristics:
Multitude of stakeholders:
Software architecture involves dealing with a broad variety of stakeholders and their variety of concerns and
has a multidisciplinary nature.
Separation of concerns:
The established way for architects to reduce complexity is by separating the concerns of the stake holders
that drive the design.
Quality-driven:
The architecture of a software system is more closely related to its quality attributes such as fault
tolerance, backward compatibility, extensibility, reliability, maintainability, availability, security, usability etc.
Stakeholder concerns often translate into requirements on these quality attributes, which are variously called
non-functional requirements, extra-functional requirements, system quality requirements or constraints.
Recurring styles:
Like building architecture, the software architecture discipline has developed standard ways to address
recurring concerns. These standard ways are called by various names at various levels of abstraction.
Common terms for recurring solutions are architectural style, strategy or tactic, reference
architecture and architectural pattern.
Conceptual integrity:
The architecture of a software system represents an overall vision of what it should do and how it should do
it. This vision should be separated from its implementation.
Software Testing:

Software testing is an investigation conducted to provide stakeholders with information about the quality of
the product or service under test. Software testing can also provide an objective, independent view of the
software to allow the business to appreciate and understand the risks of software implementation. Test
techniques include, but are not limited to the process of executing a program or application with the intent of
finding software bugs (errors or other defects).

Software testing can be stated as the process of validating and verifying that a computer
program/application/product: meets the requirements that guided its design and development, works as
expected, can be implemented with the same characteristics, and satisfies the needs of stakeholders.

Software testing, depending on the testing method employed, can be implemented at any time in the
software development process. Traditionally most of the test effort occurs after the requirements have been

defined and the coding process has been completed, but in the Agile approaches most of the test effort is on-
going. As such, the methodology of the test is governed by the chosen software development methodology.

Testing can never completely identify all the defects within software. Instead, it furnishes a criticism or
comparison that compares the state and behavior of the product against oracles -principles or mechanisms by
which someone might recognize a problem. These oracles may include (but are not limited to) specifications,
contracts, comparable products, past versions of the same product, inferences about intended or expected
purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.

A primary purpose of testing is to detect software failures so that defects may be discovered and corrected.
Testing cannot establish that a product functions properly under all conditions but can only establish that it
does not function properly under specific conditions. The scope of software testing often includes
examination of code as well as execution of that code in various environments and conditions as well as
examining the aspects of code: does it do, what it is supposed to do and do what it needs to do. In the current
culture of software development, a testing organization may be separate from the development team. There
are various roles for testing team members. Information derived from software testing may be used to correct
the process by which software is developed. Every software product has a target audience.

For example, the audience for video game software is completely different from that for banking software. Therefore,
when an organization develops or otherwise invests in a software product, it can assess whether the software
product will be acceptable to its end users, its target audience, its purchasers and other stakeholders.
Software testing is the process of attempting to make this assessment.

Defects and failures

Not all software defects are caused by coding errors. One common source of expensive defects is requirement
gaps, e.g., unrecognized requirements which result in errors of omission by the program designer.
Requirement gaps can often be non-functional requirements such as testability, scalability, maintainability,
usability, performance, and security. Software faults occur through the following processes.

A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If
this defect is executed, in certain situations the system will produce wrong results, causing a failure.[7] Not
all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A
defect can turn into a failure when the environment is changed. Examples of these changes in environment
include the software being run on a new computer hardware platform, alterations in source data, or
interacting with different software. A single defect may result in a wide range of failure symptoms.
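The error → defect → failure chain can be seen in a tiny illustrative function: the defect lies dormant in the code until a particular input executes it, at which point the failure becomes observable.

```python
def average(values):
    # Defect: the programmer's error left len(values) unguarded against
    # an empty list; the fault sits dormant until that input occurs.
    return sum(values) / len(values)

print(average([2, 4, 6]))      # defect present but not executed: no failure

try:
    average([])                # this input executes the defect...
except ZeroDivisionError:
    print("failure observed")  # ...and the system produces a failure
```

Most test runs never pass an empty list, which is why defects like this can survive testing and only fail when the environment or the source data changes.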

Input combinations and preconditions

A very fundamental problem with software testing is that testing under all combinations of inputs and
preconditions (initial state) is not feasible, even with a simple product. This means that the number of
defects in a software product can be very large and defects that occur infrequently are difficult to find in
testing. More significantly, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do), such as usability, scalability, performance, compatibility and reliability, can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.

Software developers can't test everything, but they can use combinatorial test design to identify the
minimum number of tests needed to get the coverage they want. Combinatorial test design enables users to
get greater test coverage with fewer tests. Whether they are looking for speed or test depth, they can use
combinatorial test design methods to build structured variation into their test cases.
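Combinatorial (pairwise) test design can be sketched with a simple greedy selection: cover every pair of parameter values with far fewer tests than the exhaustive cross product. The parameters and values below are invented for illustration, and real tools use more sophisticated algorithms.

```python
from itertools import combinations, product

# Three illustrative parameters; exhaustive testing needs 3 * 2 * 2 = 12 tests.
params = {"os": ["linux", "mac", "win"],
          "browser": ["firefox", "chrome"],
          "network": ["wifi", "lan"]}
names = list(params)

def pairs_of(test):
    """All parameter-value pairs a single test covers."""
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
all_pairs = set().union(*(pairs_of(t) for t in candidates))

# Greedy selection: repeatedly pick the test covering most uncovered pairs.
chosen, uncovered = [], set(all_pairs)
while uncovered:
    best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
    chosen.append(best)
    uncovered -= pairs_of(best)

print(f"{len(chosen)} tests cover all {len(all_pairs)} pairs "
      f"(vs {len(candidates)} exhaustive)")
```

Every pair of values still appears in at least one chosen test, so pair-triggered defects remain detectable while the test count drops well below the exhaustive 12.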

Testing methods

The two approaches to software testing are as follows

Static Testing: Reviews, walkthroughs, or inspections are referred to as static testing. It examines the code without executing the program, and can be omitted in practice. It involves verification.

Dynamic Testing: Execution of programmed code with a given set of test cases is referred to as dynamic testing. Dynamic testing takes place when the program itself is run. It may begin before the program is 100% complete in order to test particular sections of code, applied to discrete functions or modules. It involves validation.

There are two basics of software testing:

Black box testing and
White box testing.

Black box Testing

Black box testing is a testing technique that ignores the internal mechanism of the system and focuses on
the output generated against any input and execution of the system. It is also called functional testing. Black
box testing is often used for validation

White box Testing

White box testing is a testing technique that takes into account the internal mechanism of a system. It is
also called structural testing and glass box testing. White box testing is often used for verification.

Types of testing

There are many types of testing like

Unit Testing
Unit testing is the testing of an individual unit or group of related units. It falls under the class of white box
testing. It is often done by the programmer to test that the unit he/she has implemented is producing
expected output against given input.
Integration Testing
Integration testing is testing in which a group of components are combined to produce output. Also, the
interaction between software and hardware is tested in integration testing if software and hardware
components have any relation. It may fall under both white box testing and black box testing.
Functional Testing
Functional testing is the testing to ensure that the specified functionality required in the system
requirements works. It falls under the class of black box testing
System Testing
System testing is the testing to ensure that by putting the software in different environments (e.g.,
Operating Systems) it still works. System testing is done with full system implementation and environment.
It falls under the class of black box testing.
Stress Testing
Stress testing is the testing to evaluate how system behaves under unfavorable conditions. Testing is
conducted at beyond limits of the specifications. It falls under the class of black box testing.
Performance Testing
Performance testing is the testing to assess the speed and effectiveness of the system and to make sure it is
generating results within a specified time as in performance requirements. It falls under the class of black box
testing
Usability Testing

Usability testing is performed from the perspective of the client, to evaluate how user-friendly the GUI is: how easily can the client learn it? After learning how to use it, how proficiently can the client perform? How pleasing is its design to use? This falls under the class of black box testing.
Acceptance Testing
Acceptance testing is often done by the customer to ensure that the delivered product meets the
requirements and works as the customer expected. It falls under the class of black box testing.
Regression Testing
Regression testing is the testing after modification of a system, component, or a group of related units to
ensure that the modification is working correctly and is not damaging or imposing other modules to produce
unexpected results. It falls under the class of black box testing.
Beta Testing
Beta testing is testing done by end users, a team outside development, or by publicly releasing a pre-release version of the product, which is known as the beta version. The aim of beta testing is to catch unexpected errors. It falls under the class of black box testing.
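Several of the types above - unit testing, the black box/white box distinction, and regression testing - can be illustrated with a minimal sketch using Python's unittest module. The `discount` function and its "previously fixed bug" are invented for the example.

```python
import unittest

def discount(price, percent):
    """Unit under test: apply a percentage discount (illustrative)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

class DiscountTests(unittest.TestCase):
    def test_basic(self):
        # Unit / black box view: check output against a specified input.
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_rejects_bad_percent(self):
        # White box view: exercise the validation branch explicitly.
        with self.assertRaises(ValueError):
            discount(100.0, 120)

    def test_rounding_regression(self):
        # Regression test: pins behavior after a (hypothetical) rounding fix.
        self.assertEqual(discount(10.0, 33), 6.7)

suite = unittest.TestLoader().loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("failures:", len(result.failures))  # prints "failures: 0"
```

The regression test stays in the suite permanently, so any later modification that re-breaks the rounding behavior is caught immediately.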

3.2.3 Hardware Schematic, Component design, Layout and Hardware testing

A design flow is a rough guide for turning a concept into a real, live working system. It links inspiration
(concept) and implementation (working system).

E.g. Design of an air-deployable motion sensor with 10 meter range and 6 month lifetime.

A printed circuit board (PCB) (Figure 3.3) mechanically supports and electrically connects electronic
components using conductive tracks, pads and other features etched from copper sheets laminated onto a
nonconductive substrate. PCBs can be single sided (one copper layer), double sided (two copper layers) or multi-layer. Conductors on different layers are connected with plated-through holes called vias. Advanced PCBs may contain components - capacitors, resistors or active devices - embedded in the substrate.

Printed circuit boards are used in all but the simplest electronic products. Alternatives to PCBs include wire
wrap and point-to-point construction. PCBs are more costly to design but allow automated manufacturing
and assembly. Products are then faster and cheaper to manufacture, and potentially more reliable. A view of a
PCB and its essential parts is shown in the figure

When the board has only copper connections and no embedded components it is more correctly called a
printed wiring board (PWB) or etched wiring board. Although more accurate, the term printed wiring board has
fallen into disuse. A PCB populated with electronic components is called a printed circuit assembly (PCA),
printed circuit board assembly or PCB assembly (PCBA)

Figure 3.3. Printed circuit board

Design of PCB:

Printed circuit board artwork generation was initially a fully manual process done on clear mylar sheets at a
scale of usually 2 or 4 times the desired size. The schematic diagram was first converted into a layout of
components pin pads, then traces were routed to provide the required interconnections. Pre-printed non-
reproducing mylar grids assisted in layout, and rub-on dry transfers of common arrangements of circuit
elements (pads, contact fingers, integrated circuit profiles, and so on) helped standardize the layout. Traces
between devices were made with self-adhesive tape. The finished layout "artwork" was then photographically
reproduced on the resist layers of the blank coated copper-clad boards. Modern practice is less labor intensive
since computers can automatically perform many of the layout steps.

A practical Printed Circuit Board (PCB) design flow is action-oriented and artifact-focused. The general
progression for a commercial printed circuit board design would include:

Schematic capture through an electronic design automation tool.


Card dimensions and template are decided based on the required circuitry and the case of the PCB. Determine the fixed components and heat sinks if required.
Deciding stack layers of the PCB. 1 to 12 layers or more depending on design complexity. Ground plane and
power plane are decided. Signal planes where signals are routed are in top layer as well as internal layers.
Line impedance determination using dielectric layer thickness, routing copper thickness and trace-width.
Trace separation also taken into account in case of differential signals. Microstrip, stripline or dual stripline
can be used to route signals.
Placement of the components. Thermal considerations and geometry are taken into account. Vias and
lands are marked.
Routing the signal traces. For optimal EMI performance high frequency signals are routed in internal layers
between power or ground planes as power planes behave as ground for AC.
Gerber file generation for manufacturing.
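The line impedance determination step above is often estimated with the widely used IPC-2141 surface-microstrip approximation. The sketch below applies that formula; the stackup numbers are illustrative only, and real designs should use the fabricator's dielectric and copper parameters.

```python
import math

def microstrip_z0(h_mm, w_mm, t_mm, er):
    """IPC-2141 approximation for surface microstrip impedance in ohms.
    h: dielectric height, w: trace width, t: copper thickness, er: dielectric
    constant. Valid only for a limited range of w/h ratios."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Illustrative FR-4 stackup: 0.2 mm dielectric, 0.3 mm trace, 35 um copper.
z0 = microstrip_z0(h_mm=0.2, w_mm=0.3, t_mm=0.035, er=4.3)
print(f"Z0 is roughly {z0:.1f} ohm")
```

Widening the trace or thinning the dielectric lowers the impedance, which is why trace width and layer thickness are decided together in the flow above.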

In the design of the PCB artwork, a power plane is the counterpart to the ground plane and behaves as an AC
signal ground, while providing DC voltage for powering circuits mounted on the PCB. In electronic design
automation (EDA) design tools, power planes (and ground planes) are usually drawn automatically as a
negative layer, with clearances or connections to the plane created automatically.

Manufacturers never use the Gerber or Excellon files directly on their equipment, but always read them into
their CAM system. PCBs cannot be manufactured professionally without a CAM system. The PCB CAM system
performs the following functions:
Input of the Gerber data
Verify the data; optionally DFM
Compensate for deviations in the manufacturing processes (e.g. scaling to compensate for distortions
during lamination)
Panelize
Output of the digital tools (layer images, drill files, AOI data, electrical test files, etc.)

Figure 3.4. PCB design flow process

The steps in the PCB Design flow process (figure 3.4) are described as follows

Brain storming: Generation of as many ideas as possible; use the needs as the rough guide; it is not limited by constraints or formal requirements; a diversity of perspectives emerges when brainstorming in a group.

For example, brainstorming energy metering in sensor networks includes measurement of energy in the meter, and the resulting design concepts could be: single-chip battery fuel gauge; high-side sense resistor + signal processing; low-side sense resistor + signal processing; pulse-frequency modulated switching regulator.

Evaluate: Requirements and constraints such as functionality, performance, usability, reliability, maintainability, and budget address the important details the system must satisfy; a correlation matrix is used to sort them out. Identify the best candidates to take forward, using the requirements and constraints as the metric, and get buy-in from stakeholders on decisions. Evaluation also considers time-to-market, familiarity and economics, which includes:

Non-recurring engineering (NRE) costs
Unit cost

If none of the candidates pass, there are two options:

Go back to brainstorming
Adjust the requirements (hard to change needs though)

Design: Translation of a concept into a block diagram; translation of a block diagram into components. There are two basic approaches to design, and a combination of the two approaches is good for complex designs with high-risk subsystems.

Top-down

Start at a high level and recursively decompose
Clearly define subsystem functionality
Clearly define subsystem interfaces

Bottom-up

Start with building blocks and incrementally integrate
Add glue logic between building blocks to create larger subsystems

Capture: Schematic capture turns a block diagram into a detailed design; selection of parts in library or in stock; analysis of parts to see whether the design is under budget; rough floor planning; placement of the parts; connection of the parts; formation of layout guidelines.

Layout: Process of transforming a schematic (netlist) into a set of Gerber and drill files suitable for
manufacturing; It can affect the board size, component placement, and layer selection; Inputs, outputs and
actions of layout process are described in table 3.1.

Inputs: Schematic or netlist; part library

Outputs:
Gerber photoplots (top, bottom, middle layers): copper, soldermask, silkscreen
NC drill files: aperture, X-Y locations
Manufacturing drawings: part names & locations
Pick & place file

Actions:
Create parts
Define board outline
Floor planning
Define layers
Parts placement
Manual routing (ground/supply planes, RF signals, etc.)
Auto-routing (non-critical signals)
Design rule check (DRC)

Table 3.1. Attributes for layout process
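The design rule check (DRC) action in Table 3.1 can be illustrated with a minimal sketch that flags pad pairs violating a clearance rule. The pad names, coordinates and the 0.2 mm limit are invented for the example; real DRC engines check many more rules (trace widths, annular rings, drill sizes, and so on).

```python
import math

# Minimal DRC sketch: flag pad pairs closer than the clearance rule.
MIN_CLEARANCE_MM = 0.2

pads = {  # name -> (x, y, radius) in mm; all values invented
    "U1.1": (0.00, 0.0, 0.5),
    "U1.2": (1.27, 0.0, 0.5),
    "R5.1": (1.90, 0.0, 0.5),
}

def clearance(a, b):
    """Edge-to-edge gap between two circular pads."""
    (x1, y1, r1), (x2, y2, r2) = a, b
    return math.hypot(x2 - x1, y2 - y1) - r1 - r2

violations = []
names = sorted(pads)
for i, n1 in enumerate(names):
    for n2 in names[i + 1:]:
        if clearance(pads[n1], pads[n2]) < MIN_CLEARANCE_MM:
            violations.append((n1, n2))

print("DRC violations:", violations)  # U1.2 and R5.1 sit too close
```

Running such checks before Gerber file generation is what lets layout errors be caught while they are still cheap to fix.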

Schematic diagram:

A drawing which shows all significant components, parts, or tasks (and their interconnections) of a circuit,
device, flow, process, or project by means of standard symbols. Schematic diagrams for a project may also be
used for preparing preliminary cost estimates.

Because different electronic components have different characteristics, it is necessary to distinguish between
them in any circuit diagram. Of course, we could use the block diagram approach, and just identify each
component with words. Unfortunately, this takes up a lot of space and makes the overall diagram harder to
recognize or understand quickly. We need a way to understand electrical diagrams far more quickly and easily.

The answer is to use schematic symbols to represent electronic components, as shown in the figure 3.5. In
this diagram, we show the schematic symbol of a battery as the electrical source, and the symbol of a resistor
as the load. Even without the words and arrows, the symbols define exactly what this circuit is and how it
behaves. The symbol for each electronic component is suggestive of the behavior of that component. Thus,
the battery symbol above consists of multiple individual cells connected in series. By convention, the longer
line represents the positive terminal of each cell. The battery voltage would normally be specified next to the
symbol.

The zig-zag line represents any resistor. In most cases, its resistance is specified next to the symbol just as
the battery voltage would be given. It is easier and faster to read the symbol and the legend "4.7k" next to it,
than to see a box and have to read "4700-ohm resistor" inside it.
One of the problems that can occur with schematic diagrams is too many lines all over the page. It's not a big
deal when there are only two components in the circuit, but think of what the complete diagram for a modern
television receiver or even a radio receiver would look like. We need a way to reduce the number of lines
showing electrical connections

Figure 3.5. Schematic diagram

Study of schematic diagrams:

Understand how the project is supposed to go. Lay the diagram out flat, and go through it from beginning
to end.
Understand what each symbol in the schematic represents. The schematic will not have much meaning if
you cannot properly identify the symbols. If you don't know a symbol, find a key someplace on the
diagram.
It is important for all embedded designers to be able to understand the diagrams and symbols that hardware engineers create and use to describe their hardware designs to the outside world.
These diagrams also contain the information a designer needs in order to successfully communicate the hardware requirements of the software.
Find the starting point, or first step, on the schematic. Follow the lines, arrows and numbers that connect
the symbols to complete each step as illustrated in the diagram

Block diagram:

A block diagram (figure 3.6) is a diagram of a system in which the principal parts or functions are represented by blocks connected by lines that show the relationships of the blocks. They are heavily used in the engineering world in hardware design, electronic design, software design, and process flow diagrams.

The block diagram is typically used for a higher level, less detailed description aimed more at understanding the overall concepts and less at understanding the details of implementation. It contrasts with the schematic diagram and layout diagram used in the electrical engineering world, where the schematic diagram shows the details of each electrical component and the layout diagram shows the details of physical construction.

Because block diagrams are a visual language for describing actions in a complex system, it is possible to formalize them into a specialized programmable logic controller (PLC) programming language.

As an example, a block diagram of a radio is not expected to show each and every wire and dial and switch, but the schematic diagram is. The schematic diagram of a radio does not show the width of each wire in the printed circuit board, but the layout diagram does.

To make an analogy to the map making world, a block diagram is similar to a highway map of an entire nation. The major cities (functions) are listed but the minor county roads and city streets are not. When troubleshooting, this high level map is useful in narrowing down and isolating where a problem or fault is.

Block diagrams rely on the principle of the black box where the contents are hidden from view either to avoid being distracted by the details or because the details are not known. We know what goes in, we know what goes out, but we can't see how the box does its work.

In electrical engineering, a design will often begin as a very high level block diagram, becoming more and more detailed block diagrams as the design progresses, finally ending in block diagrams detailed enough that each individual block can be easily implemented (at which point the block diagram is also a schematic diagram). This is known as top down design.

Figure 3.6. Block diagram

Circuit Diagram

A circuit diagram (figure 3.7) (also known as an electrical diagram, elementary diagram, or electronic
schematic) is a simplified conventional graphical representation of an electrical circuit. A pictorial circuit
diagram uses simple images of components, while a schematic diagram shows the components of the circuit
as simplified standard symbols; both types show the connections between the devices, including power and
signal connections. The arrangement of the components' interconnections on the diagram does not correspond to
their physical locations in the finished device.

Unlike a block diagram or layout diagram, a circuit diagram shows the actual wire connections being used. The
diagram does not show the physical arrangement of components. A drawing meant to depict the physical
arrangement of the wires and the components they connect is called the "artwork", "layout", or
"physical design".

Circuit diagrams are used for the design (circuit design), construction (such as PCB layout), and maintenance
of electrical and electronic equipment. In computer science, circuit diagrams are especially useful when
visualizing different expressions using Boolean algebra.

Figure 3.7. Circuit diagram

Rules for drawing circuit diagrams:

Draw with a pencil unless you use a CAD program.
Represent each component with a simple symbol that includes the pins and pin numbers.
Label each connection (wire) with a unique name, and use a dot at the intersection of connecting wires.
Do not draw a loop or bridge when unconnected wires cross.
Label each component with a unique name. The first letter of the name should indicate the type of
component (e.g., R for resistor, C for capacitor, L for inductor, and so on). ICs are often specified by U,
but this is not a hard rule.
Place above each component (or group of components) a functional description (e.g., above a group of
decoder ICs would be "Address Decoders"; above a group of 7-segment LEDs would be "Output Display").

Label each component with its part number (e.g., 74LS38, MC68HC11). For resistors and capacitors, simply
specify their value and the units.
Put pin LABELS on the inside of each chip block but put pin NUMBERS on the outside of the chip blocks.
Include your name and the date.
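The labeling conventions above can be captured in a machine-readable form. Below is a minimal sketch of a netlist that follows those rules; the component names, part numbers, and net names are hypothetical, chosen only to illustrate the conventions.

```python
# A toy netlist following the labeling rules above: each component has a
# unique name whose first letter gives its type, and each wire (net) has
# a unique name. All identifiers here are hypothetical examples.
components = {
    "R1": {"part": "4.7k resistor", "pins": {"1": "VCC", "2": "NET_LED"}},
    "C1": {"part": "100nF capacitor", "pins": {"1": "VCC", "2": "GND"}},
    "U1": {"part": "74LS38", "pins": {"1": "NET_A", "7": "GND", "14": "VCC"}},
}

TYPE_PREFIXES = {"R": "resistor", "C": "capacitor", "L": "inductor", "U": "IC"}

def check_names(comps):
    """Verify each component name starts with a recognised type prefix."""
    return all(name[0] in TYPE_PREFIXES for name in comps)

print(check_names(components))  # True
```

A CAD package's electrical rule check performs this kind of validation automatically; the point here is only that consistent naming makes schematic data easy to process.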

Hardware Computing:

Computing hardware (figure 3.8) evolved from machines that needed separate manual action to perform each
arithmetic operation, to punched card machines, and then to stored-program computers. The history of
stored-program computers relates first to computer architecture, that is, the organization of the units to
perform input and output, to store data and to operate as an integrated mechanism.

The Z3 by inventor Konrad Zuse from 1941 is regarded as the first working programmable, fully automatic
modern computing machine. Thus, Zuse is often regarded as the inventor of the computer.

Figure 3.8. Hardware computation

Before the development of the general-purpose computer, most calculations were done by humans.
Mechanical tools to help humans with digital calculations were calculators. It was those humans who used
the machines who were then called computers. Aside from written numerals, the first aids to computation
were purely mechanical devices which required the operator to set up the initial values of an elementary
arithmetic operation, then manipulate the device to obtain the result. A sophisticated (and comparatively
recent) example is the slide rule, in which numbers are represented as lengths on a logarithmic scale and
computation is performed by setting a cursor and aligning sliding scales, thus adding those lengths. Numbers
could be represented in a continuous "analog" form, for instance a voltage or some other physical property
was set to be proportional to the number. Analog computers, like those designed and built by Vannevar Bush
before World War II were of this type. Numbers could be represented in the form of digits, automatically
manipulated by a mechanical mechanism. Although this last approach required more complex mechanisms in
many cases, it made for greater precision of results.

In the United States, the development of the computer was underpinned by massive government investment
in the technology for military applications during WWII and then the Cold War. The latter superpower
confrontation made it possible for local manufacturers to transform their machines into commercially viable
products. It was the same story in Europe, where adoption of computers began largely through proactive
steps taken by national governments to stimulate development and deployment of the technology. The
invention of electronic amplifiers made calculating machines much faster than their mechanical or
electromechanical predecessors. Vacuum tube (thermionic valve) amplifiers gave way to solid state
transistors, and then rapidly to integrated circuits which continue to improve, placing millions of electrical
switches (typically transistors) on a single elaborately manufactured piece of semi-conductor the size of a
fingernail. By defeating the tyranny of numbers, integrated circuits made high-speed and low-cost digital
computers a widespread commodity. There is an ongoing effort to make computer hardware faster, cheaper,
and capable of storing more data.

Computing hardware has become a platform for uses other than mere computation, such as process
automation, electronic communications, equipment control, entertainment, education, etc. Each field in turn

has imposed its own requirements on the hardware, which has evolved in response to those requirements,
such as the role of the touch screen to create a more intuitive and natural user interface. As all computers rely
on digital storage, and tend to be limited by the size and speed of memory, the history of computer data
storage is tied to the development of computers. The following activities can also be done with help of
hardware computing

Data Manipulation: time to execute is not critical and not predictable

Word processing
Database management
Spreadsheets
Operating systems
Data movement (X --> Y)
Value testing (if X = Y then ...)

Math calculation: time to execute is critical and predictable

Digital signal processing
Motion control
Engineering simulation
Real-time signal processing
Addition (X = Y + Z)
Multiplication (X = Y x Z)

Hardware Processor:

Hardware processor consists of the following components

Micro controller: A microcontroller (figure 3.9) (sometimes abbreviated µC, uC or MCU) is a small computer on
a single integrated circuit containing a processor core, memory, and programmable input/output peripherals.
Program memory in the form of NOR flash or OTP ROM is also often included on chip, as well as a typically
small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the
microprocessors used in personal computers or other general purpose applications.
Microcontrollers are used in automatically controlled products and devices, such as automobile engine control
systems, implantable medical devices, remote controls, office machines, appliances, power tools, toys and
other embedded systems. By reducing the size and cost compared to a design that uses a separate
microprocessor, memory, and input/output devices, microcontrollers make it economical to digitally control
even more devices and processes. Mixed signal microcontrollers are common, integrating analog components
needed to control non-digital electronic systems. Some microcontrollers may use 4-bit words and operate at
clock rate frequencies as low as 4 kHz, for low power consumption (single-digit milliwatts or microwatts).
They will generally have the ability to retain functionality while waiting for an event such as a button press or
other interrupt; power consumption while sleeping (CPU clock and most peripherals off) may be just
nanowatts, making many of them well suited for long-lasting battery applications. Other microcontrollers may
serve performance-critical roles, where they may need to act more like a digital signal processor (DSP), with
higher clock speeds and power consumption. Example: 8051, 68HC11, PIC

Figure 3.9. An ATmega microcontroller

Microprocessor: A microprocessor (figure 3.10) incorporates the functions of a computer's central processing
unit (CPU) on a single integrated circuit (IC), or at most a few integrated circuits. Microprocessor is a
multipurpose, programmable device that accepts digital data as input, processes it according to instructions
stored in its memory, and provides results as output. It is an example of sequential digital logic, as it has
internal memory. Microprocessors operate on numbers and symbols represented in the binary numeral
system. The advent of low-cost computers on integrated circuits has transformed modern society. General-
purpose microprocessors in personal computers are used for computation, text editing, multimedia display,
and communication over the Internet. Many more microprocessors are part of embedded systems, providing
digital control over myriad objects from appliances to automobiles to cellular phones and industrial process
control. Example: Pentium series, PowerPC, MIPS

Figure 3.10. Intel 4004 microprocessor

Digital Signal Processors (DSP): A digital signal processor (DSP) (figure 3.11) is a specialized microprocessor
with an architecture optimized for the operational needs of digital signal processing. Digital signal processing
algorithms typically require a large number of mathematical operations to be performed quickly and
repeatedly on a series of data samples. Signals (perhaps from audio or video sensors) are constantly
converted from analog to digital, manipulated digitally, and then converted back to analog form. Many DSP
applications have constraints on latency; that is, for the system to work, the DSP operation must be
completed within some fixed time, and deferred (or batch) processing is not viable. Most general-purpose
microprocessors and operating systems can execute DSP algorithms successfully, but are not suitable for use
in portable devices such as mobile phones and PDAs because of power supply and space constraints. A
specialized digital signal processor, however, will tend to provide a lower-cost solution, with better
performance, lower latency, and no requirements for specialized cooling or large batteries.

The architecture of a digital signal processor is optimized specifically for digital signal processing. Most also
support some of the features as an applications processor or microcontroller, since signal processing is rarely
the only task of a system. Example: ADSP-21XX, ADSP-21K
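The large number of mathematical operations "performed quickly and repeatedly on a series of data samples" mentioned above is, at its core, a multiply-accumulate loop. The sketch below shows that loop as a plain FIR (finite impulse response) filter; the coefficients and samples are illustrative, not from any real design.

```python
def fir_filter(samples, coeffs):
    """Convolve a stream of samples with filter coefficients --
    the multiply-accumulate loop a DSP's architecture accelerates."""
    out = []
    history = [0.0] * len(coeffs)
    for x in samples:
        history = [x] + history[:-1]   # shift in the newest sample
        acc = 0.0
        for h, c in zip(history, coeffs):
            acc += h * c               # multiply-accumulate
        out.append(acc)
    return out

# Illustrative 3-tap filter applied to two samples.
print(fir_filter([4.0, 8.0], [0.5, 0.25, 0.25]))  # [2.0, 5.0]
```

A DSP executes each multiply-accumulate in a single cycle with dedicated hardware, which is why the same loop that is slow on a general-purpose core meets the latency constraints described above.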

Figure 3.11. Digital processing system

Field Programmable Gate Array (FPGA): A field-programmable gate array (FPGA) is an integrated circuit
designed to be configured by a customer or a designer after manufacturing, hence "field-programmable". The
FPGA configuration is generally specified using a hardware description language (HDL), similar to that used
for an application-specific integrated circuit (ASIC) (circuit diagrams were previously used to specify the
configuration, as they were for ASICs, but this is increasingly rare).

Contemporary FPGAs have large resources of logic gates and RAM blocks to implement complex digital
computations. As FPGA designs employ very fast I/Os and bidirectional data buses it becomes a challenge to
verify correct timing of valid data within setup time and hold time. Floor planning enables resources
allocation within FPGA to meet these time constraints. FPGAs can be used to implement any logical function
that an ASIC could perform. The ability to update the functionality after shipping, partial re-configuration of a
portion of the design and the low non-recurring engineering costs relative to an ASIC design (notwithstanding
the generally higher unit cost), offer advantages for many applications.

FPGAs contain programmable logic components called "logic blocks", and a hierarchy of reconfigurable
interconnects that allow the blocks to be "wired together", somewhat like many (changeable) logic gates that
can be inter-wired in (many) different configurations. Logic blocks can be configured to perform complex
combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, the logic blocks also
include memory elements, which may be simple flip-flops or more complete blocks of memory.
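One way to see how the same logic block can be configured as AND, XOR, or any other function is to model it as a look-up table (LUT), which is how many FPGA logic blocks are actually built. The sketch below assumes a 2-input LUT indexed as (a << 1) | b; real devices typically use wider LUTs, so this is only an illustration of the idea.

```python
def make_lut2(truth_table):
    """Return a 2-input logic function backed by a 4-entry truth table,
    the way an FPGA logic block is 'programmed' rather than rewired."""
    def f(a, b):
        return truth_table[(a << 1) | b]
    return f

AND = make_lut2([0, 0, 0, 1])  # block configured as AND
XOR = make_lut2([0, 1, 1, 0])  # same block, reconfigured as XOR

print(AND(1, 1), XOR(1, 1))  # 1 0
```

Reconfiguring the FPGA amounts to rewriting the stored truth tables and the interconnect, with no change to the silicon.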

Some FPGAs have analog features in addition to digital functions. The most common analog feature is
programmable slew rate and drive strength on each output pin, allowing the engineer to set slow rates on
lightly loaded pins that would otherwise ring unacceptably, and to set stronger, faster rates on heavily loaded
pins on high-speed channels that would otherwise run too slowly. Another relatively common analog feature
is differential comparators on input pins designed to be connected to differential signaling channels. A few
"mixed signal FPGAs" have integrated peripheral analog-to-digital converters (ADCs) and digital-to-analog
converters (DACs) with analog signal conditioning blocks allowing them to operate as a system-on-a-chip.
Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable
interconnect fabric, and field-programmable analog array (FPAA), which carries analog values on its internal
programmable interconnect fabric.

Hardware Architecture:

Micro-controller Architecture:

A micro-controller incorporates the following:

The CPU core
Memory (both ROM and RAM)
Some parallel digital I/O

Microcontrollers will also combine other devices such as:

A Timer module to allow the microcontroller to perform tasks for certain time periods.

A serial I/O port to allow data to flow between the microcontroller and other devices such as a PC or
another microcontroller.
An ADC to allow the microcontroller to accept analogue input data for processing.

The microcontroller's building blocks

To illustrate the functions and interconnectivity of the building blocks of the microcontroller, we shall
construct the microcontroller block by block:

Memory unit

Memory is part of the microcontroller whose function is to store data. The easiest way to explain it is to
describe it as one big closet with lots of drawers. If we suppose that we marked the drawers in such a way
that they cannot be confused, any of their contents will then be easily accessible. It is enough to know the
designation of the drawer and so its contents will be known to us for sure. Memory components are exactly
like that. For a certain input we get the contents of a certain addressed memory location and that's all. Two
new concepts are brought to us: addressing and memory location. Memory consists of all memory locations,
and addressing is nothing but selecting one of them. This means that we need to select the desired memory
location on one hand, and on the other hand we need to wait for the contents of that location. Besides
reading from a memory location, memory must also provide for writing onto it. This is done by supplying an
additional line called control line.

For example, we will designate this line as R/W (read/write). The control line is used in the following way: if
R/W = 1, reading is done; otherwise, writing is done on the memory location. Memory is the
first element, and we need a few more blocks for the operation of our microcontroller.
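The addressing and R/W control line just described can be simulated in a few lines. The sketch below is a toy model; the memory size is arbitrary, and the encoding follows the text's convention of R/W = 1 for read.

```python
class Memory:
    """Toy memory: an address selects one location (a 'drawer'), and the
    R/W control line decides whether the access is a read (1) or write (0)."""
    def __init__(self, size):
        self.cells = [0] * size

    def access(self, address, rw, data=None):
        if rw == 1:                 # R/W = 1: read the addressed location
            return self.cells[address]
        self.cells[address] = data  # R/W = 0: write into it
        return None

mem = Memory(256)
mem.access(0x10, 0, data=42)   # write 42 to location 0x10
print(mem.access(0x10, 1))     # read it back -> 42
```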

Central Processing Unit

The block that has a built-in capability to multiply, divide, subtract, and move its contents from one
memory location to another is called the "central processing unit" (CPU). Its memory locations are called
registers.
Registers are therefore memory locations whose role is to help with performing various mathematical
operations or any other operations with data, wherever the data can be found. Look at the current situation: we
have two independent entities (memory and CPU) which are not yet interconnected, and thus any exchange of data is
hindered, as is their combined functionality.

For example, we wish to add the contents of two memory locations and return the result again back to
memory; we would need a connection between memory and CPU. Simply stated, we must have some "way"
through which data goes from one block to another.

Bus

This "way" is called a "bus". Physically, it represents a group of 8, 16, or more wires. There are two types of
buses: the address bus and the data bus. The first one consists of as many lines as the amount of memory we wish to
address, and the other one is as wide as the data, in our case 8 bits. The first one serves to
transmit addresses from the CPU to memory, and the second connects all the blocks inside the microcontroller. As far as
functionality, the situation has improved, but a new problem has also appeared: we have a unit that's capable
of working by itself, but which does not have any contact with the outside world, or with us. In order to
remove this deficiency, let's add a block which contains several memory locations whose one end is connected
to the data bus, and the other has connection with the output lines on the microcontroller which can be seen
as pins on the electronic component.
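A useful consequence of the address bus description above: n address lines can select 2^n memory locations, so the bus width fixes the amount of addressable memory. A one-line sketch:

```python
def addressable_locations(address_lines):
    """Each extra address line doubles the number of selectable
    memory locations: n lines -> 2**n addresses."""
    return 2 ** address_lines

print(addressable_locations(8))    # 256
print(addressable_locations(16))   # 65536
```

This is why an 8-bit microcontroller with a 16-line address bus can address 64 KB of memory.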

Input-output unit

Those locations we've just added are called "ports". There are several types of ports: input, output or
bidirectional ports. When working with ports, first of all it is necessary to choose which port we need to work
with, and then to send data to, or take it from the port. When working with it the port acts like a memory
location. Something is simply being written into or read from it, and it could be noticed on the pins of the
microcontroller.

Serial communication

Besides the blocks stated above, we've added to the already existing unit the possibility of communication with the
outside world. However, this way of communicating has its drawbacks. One of the basic drawbacks is the
number of lines which need to be used in order to transfer data. What if it is being transferred to a distance of
several kilometers? The number of lines times number of kilometers doesn't promise the economy of the
project. It leaves us having to reduce the number of lines in such a way that we don't lessen its functionality.
Suppose we are working with three lines only, and that one line is used for sending data, other for receiving,
and the third one is used as a reference line for both the input and the output side. In order for this to work,
we need to set the rules of exchange of data. These rules are called protocol. Protocol is therefore defined in
advance so there wouldn't be any misunderstanding between the sides that are communicating with each
other. For example, if one man is speaking in French, and the other in English, it is highly unlikely that they
will quickly and effectively understand each other. Let's suppose we have the following protocol. The logical
unit "1" is set up on the transmitting line until transfer begins. Once the transfer starts, we lower the
transmission line to logical "0" for a period of time (which we will designate as T), so the receiving side will
know that it is receiving data, and so it will activate its mechanism for reception. Let's go back now to the
transmission side and start putting logic zeros and ones onto the transmitter line in the order from a bit of
the lowest value to a bit of the highest value. Let each bit stay on line for a time period which is equal to T,
and in the end, or after the 8th bit, let us bring the logical unit "1" back on the line which will mark the end of
the transmission of one data. The protocol we've just described is called in professional literature NRZ (Non-
Return to Zero).

As we have separate lines for receiving and sending, it is possible to receive and send data (info.) at the same
time. This is so-called full-duplex mode. The block which enables this way of communication is called a serial
communication block. Unlike parallel transmission, data here moves bit by bit, or in a series of bits, which is
where the term serial communication comes from. After the reception of data we need to read it from the
receiving location and store it in memory as opposed to sending where the process is reversed. Data goes from
memory through the bus to the sending location, and then to the receiving unit according to the protocol.
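The NRZ-style protocol described above (idle "1", a start "0" held for time T, eight data bits from lowest to highest value, then a closing "1") can be sketched as a pair of framing functions. This is a simplification for illustration, not a full UART implementation; timing is ignored and each list entry stands for one bit period T.

```python
def nrz_frame(byte):
    """Frame one data byte per the protocol above: start bit 0,
    eight data bits LSB first, stop bit 1 (each held for time T)."""
    bits = [0]                                   # start bit: line drops to 0
    bits += [(byte >> i) & 1 for i in range(8)]  # lowest-value bit first
    bits += [1]                                  # closing 1 marks the end
    return bits

def nrz_unframe(bits):
    """Receiver side: drop the start and stop bits, reassemble the byte."""
    data = bits[1:9]
    return sum(b << i for i, b in enumerate(data))

frame = nrz_frame(0xA5)
print(frame)                    # [0, 1, 0, 1, 0, 0, 1, 0, 1, 1]
print(hex(nrz_unframe(frame)))  # 0xa5
```

Both sides agree on the protocol in advance, which is exactly the "rules of exchange" the text describes.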

Timer unit

Since we have the serial communication explained, we can receive, send and process data.
However, in order to utilize it in industry we need a few additional blocks. One of those is the timer block
which is significant to us because it can give us information about time, duration, protocol etc. The basic unit
of the timer is a free-run counter which is in fact a register whose numeric value increments by one in even
intervals, so that by taking its value during periods T1 and T2 and on the basis of their difference we can
determine how much time has elapsed. This is a very important part of the microcontroller whose
understanding requires most of our time.
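Computing elapsed time from the free-run counter means subtracting the two readings taken at T1 and T2; since the counter wraps around at its maximum, the subtraction is taken modulo the counter's range. A 16-bit counter is assumed here purely for illustration.

```python
COUNTER_BITS = 16
MODULUS = 1 << COUNTER_BITS   # a 16-bit free-run counter wraps at 65536

def elapsed_ticks(t1, t2):
    """Ticks from reading t1 to reading t2, correct even if the
    counter wrapped around between the two readings."""
    return (t2 - t1) % MODULUS

print(elapsed_ticks(100, 250))    # 150
print(elapsed_ticks(65500, 64))   # 100 (counter wrapped past 65535)
```

The modulo arithmetic is what makes the difference valid across a wraparound, as long as no more than one full counter period elapses between the readings.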

Watchdog

One more thing requiring our attention is the flawless functioning of the microcontroller during its run-time.
Suppose that as a result of some interference (which often does occur in industry) our microcontroller stops
executing the program, or worse, it starts working incorrectly. Of course, when this happens with a computer,
we simply reset it and it will keep working. However, there is no reset button we can push on the
microcontroller and thus solve our problem. To overcome this obstacle, we need to introduce one more block
called watchdog. This block is in fact another free-run counter into which our program needs to write a zero
every time it executes correctly. If the program gets "stuck" and a zero is not written in, the counter will
reach its maximum value and reset the microcontroller by itself. This will result in executing the program again,
and correctly this time around. That is an important element of making every program reliable without human
supervision.
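The watchdog mechanism described above can be sketched as a counter that the program must clear (write a zero to) periodically; if it is not cleared in time, the counter reaches its maximum and a reset is issued. The maximum value below is arbitrary.

```python
class Watchdog:
    """Free-run counter that requests a reset when the program
    fails to clear it before it reaches its maximum value."""
    def __init__(self, maximum):
        self.maximum = maximum
        self.count = 0

    def kick(self):
        self.count = 0       # a correctly running program writes a zero

    def tick(self):
        """One timer tick; returns True when a reset must be issued."""
        self.count += 1
        if self.count >= self.maximum:
            self.count = 0
            return True      # program got stuck: reset the microcontroller
        return False

wd = Watchdog(maximum=3)
wd.tick(); wd.kick(); wd.tick(); wd.tick()  # program kicks in time at first
print(wd.tick())  # True -- three ticks with no kick, so the reset fires
```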

Analog to Digital Converter (ADC)

As the peripheral signals usually are substantially different from the ones that microcontroller can
understand (zero and one), they have to be converted into a pattern which can be comprehended by a
microcontroller. This task is performed by a block for analog to digital conversion or by an ADC. This block is
responsible for converting information about some analog value into a binary number and for forwarding it
to the CPU block so that the CPU block can further process it.
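The ADC's job of turning an analog value into a binary number is a quantization step. A minimal sketch, assuming an 8-bit converter with a 5 V reference (both values are illustrative, not tied to any particular device):

```python
def adc_convert(voltage, vref=5.0, bits=8):
    """Quantize an analog voltage into an n-bit binary code,
    clamped to the converter's 0..vref input range."""
    voltage = min(max(voltage, 0.0), vref)
    levels = (1 << bits) - 1            # 255 codes for an 8-bit converter
    return round(voltage / vref * levels)

print(adc_convert(0.0))    # 0
print(adc_convert(5.0))    # 255
```

The resolution is vref / 255, roughly 20 mV per code here; more bits give finer steps for the CPU to work with.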

Finally, the microcontroller is now completed, and all we need to do now is to assemble it into an electronic
component where it will access the inner blocks through the outside pins. Figure 3.12 shows what a
microcontroller looks like inside.

Figure 3.12. Micro controller Architecture

Embedded Hardware testing:

Before a new application, operating system, computer system or peripheral can be fully supported by OTS
(Office of Technology Services), it first must pass a rigorous testing procedure. Only once it has been
thoroughly tested do we recommend it for campus use.
Without stable hardware any program will fail. The frustration and expense of supporting bad hardware can
drain an organization, delay progress, and frustrate everyone involved.
Commodity hardware changes often, so new evaluations happen periodically each time we purchase systems
and minor re-evaluations happen for revised systems for our clusters, about twice a year. The following
activities are involved in hardware testing

Writing the embedded code.
Preparing a PCB with all necessary components affixed.
Preparing the data necessary to implement the design.
Translating the code into processor/controller-understandable machine language (a sequence of 1s and 0s, i.e. binary).

Embedded hardware without embedded firmware is a dumb device.

3.3 Prototyping

As discussed in the previous chapter, modeling and computer simulations are modes of depicting and
analyzing real-world systems in a simplified manner. With the help of these tools, one is able to represent
the most complex real-world systems using the laws of physics and the equations of mathematics, which in turn
helps to understand the system's behavior and thus predict the future of the system's state and conditions.
But although these models can depict the system's behavior quite well, they may lack accuracy due to the
shortcomings of the assumptions made or a poor understanding of the underlying physics behind the system's
behavior. So, in order to have a better analysis and study of the system's behavior under real conditions, the
concept and application of physical models, or prototypes, comes into the picture.

After going through this section, one will be able to understand the role of prototypes, prototyping
procedures, types of prototypes, and their uses and advantages.

3.3.1 Introduction to prototypes and prototyping

A prototype is a physical representation of a product which is developed to help resolve one or more
issues during product development. A prototype can be considered an early release of a model or
sample representing a product, built to test the concept of the product or process, or a model or
sample which can act as a thing to be replicated or learned from. Prototype is a well-known term in the fields
of mechanical, electrical, electronics, and software engineering. In the modern era of mechanization, the
launch of any product is guided by the prelaunch of prototypes, e.g. automobile prototypes before the
launch of the actual product. Prototypes provide the required specifications for a real working system rather
than analyzing a theoretical model. Some of the definitions available for prototypes are:

a representation of a design, made before the final solution exists. - Moggridge, B. Designing
Interactions. The MIT Press, 2007.
[A]n Experience Prototype is any kind of representation, in any medium, that is designed to understand,
explore or communicate what it might be like to engage with the product, space or system we are
designing - Buchenau, M. and Suri, J.F. Experience prototyping. Proceedings of the 3rd conference on
Designing interactive systems: processes, practices, methods, and techniques, ACM (2000), 424-433.
Prototypes are the means by which designers organically and evolutionarily learn, discover, generate, and
refine designs. - Lim, Y., Stolterman, E., and Tenenberg, J. The anatomy of prototypes: Prototypes as
filters, prototypes as manifestations of design ideas. ACM Trans. Comput.-Hum. Interact. 15, 2 (2008), 1-
27.
A software prototype is a dynamic visual model providing a communication tool for customer and
developer that is far more effective than either narrative prose or static visual models for portraying
functionality. - Pomberger, G., Bischofberger, W.R., Kolb, D., Pree, W., and Schlemm, H. Prototyping-
Oriented Software Development - Concepts and Tools. Structured Programming 12, 1 (1991), 43-60.

The American Heritage Dictionary gives the following definitions for a prototype (TAHDotEL04):
An original type, form, or instance serving as a basis or standard for later stages.
An original, full-scale, and usually working model of a new product or new version of an existing product.
An early, typical example.

Moggridge established prototyping as a core activity in the design process across different domains. This is
shown in the figure 3.13.

Figure 3.13. Design process stages according to Moggridge

Prototyping is the art/method of developing the prototypes. Prototyping can be described as the process of
producing early working versions or the prototypes to predict the nature of the future application system and
also to provide the mode for conducting the experiments.

Difference between model and prototype: A prototype is the full size or 'experimental' model of a device or
process. It is also sometimes considered to be the first or pre-launching complete item which later turns into
a fully commercially produced product or process. A model, by contrast, is an arrangement of the parts that
demonstrates the way they work together where the scale is arbitrary.

Objectives of prototypes

The purpose of the prototypes includes:


To develop a rough vision to predict the nature of the performance of the products.
To develop a concept model with no working features to obtain early feedback from the customers.
To create a rough idea in order to visualise and inspire possible improvements.
To study the product features and models in order to polish difficult features.
To develop functional and semi functional models.
To simulate the product activities.
To create a photographic-quality model for marketing and evaluation of the product in use through video
demonstrations.
To study the appearance or visual feel of the product.
To develop a sample batch of the products in order to check and rectify the manufacturing problems and
process variables.
To produce a small batch of the products in order to receive customer feedback.

Types of prototypes

The prototypes can be classified into six general classes which are used in modern-day product development
processes.

1) Proof-of-concept models
2) Industrial design prototypes
3) Design of experiment (DOE) experimental prototypes
4) Alpha prototypes
5) Beta prototypes
6) Preproduction prototypes

1) Proof-of-concept models

Proof-of-concept models are used to answer specific questions of feasibility about a product. They are usually
fabricated from simple, readily available materials, they focus on a component or sub-system of the product,
and they are constructed post-concept generation, usually during concept selection and product embodiment.
The general question a proof-of-concept model answers is whether the physics of the concept imagined on
paper actually happens, and what any unforeseen physics might be.

2) Industrial design prototypes

Industrial design prototypes demonstrate the look and the feel of the product. Generally, they are initially
constructed out of simple materials such as foam or foam core and seek to demonstrate many options
quickly.

3) Design of experiment (DOE) experimental prototypes

The DOE or design of experiment experimental prototypes are focused physical models from which empirical
data is sought to parameterise, lay out, or shape aspects of the product. The focus with DOE prototypes is
usually on modelling a sub-system of a product while converging to the target performance of that
subsystem. This class of prototypes is fabricated from materials and geometry similar to the actual product,
with the DOE prototype being just similar enough to replicate the real product's physics, but otherwise made
as simply, cheaply, and quickly as possible.
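Since DOE prototypes converge on target performance through designed experiments, a minimal full-factorial design can illustrate the idea. The factors, levels, and "stiffness" response below are invented for illustration; in practice the response would come from measurements on the fabricated prototype.

```python
from itertools import product

# Hypothetical factors for a DOE prototype of, say, a snap-fit clip:
# each factor gets a low and a high level (a 2^3 full-factorial design).
factors = {
    "wall_thickness_mm": [1.0, 2.0],
    "rib_count":         [2, 4],
    "draft_angle_deg":   [0.5, 1.5],
}

def run_trial(wall_thickness_mm, rib_count, draft_angle_deg):
    """Stand-in for a physical measurement on the prototype.
    Here we fake a 'stiffness' response; in a real study this value
    would come from testing the fabricated DOE prototype."""
    return wall_thickness_mm * 3.0 + rib_count * 1.5 - draft_angle_deg

# Enumerate all 2 * 2 * 2 = 8 treatment combinations.
names = list(factors)
trials = [dict(zip(names, levels)) for levels in product(*factors.values())]

# Record the response for each run, then pick the best setting.
results = [(t, run_trial(**t)) for t in trials]
best_setting, best_response = max(results, key=lambda r: r[1])
print(len(trials), best_setting, round(best_response, 2))
```

The same enumerate-measure-select loop applies whatever the subsystem is; only the factors and the measured response change.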

4) Alpha prototypes

To answer the questions regarding overall layout, alpha prototypes are constructed. Alpha prototypes are
fabricated using materials, geometry, and layout that the design team believes will be used for the actual
product. The alpha prototype is the first system construction of the subsystems that are individually proven
in the subsystem DOE prototyping and/ or design. Alpha prototypes also usually include some functional
features for testing and measurement of the product as a system.

5) Beta prototypes

Beta prototypes, on the other hand, are the first full-scale functional prototypes of the product, constructed
from the same materials as the final product. They may not necessarily be fabricated using the same
production processes as the final product, though. Plastic parts on beta prototypes, for example, are typically
CNC machined rather than injection molded.

6) Preproduction prototypes

Preproduction prototypes are the class of physical prototypes/models used to perform a final part
production and assembly assessment using the actual production tooling. Many design firms strive to make
the beta prototype serve as both the preproduction unit and the actual unit, i.e. to have no corrections needed
after the beta prototypes. In any case, small batches of the product are produced as preproduction prototypes
to verify product performance at predicted full-scale capacities.

Prototypes may also be classified in other ways, as mentioned below:

1) Mock-up
2) Rapid Prototype
3) Virtual prototype
4) Feasibility prototype
5) Horizontal prototype
6) Vertical prototype
7) Scenario-based prototype
8) Video prototype

1) Mock-up

Architects use mock-ups or scaled prototypes to provide three-dimensional illustrations of future buildings.
Mock-ups are also useful for interactive system designers, helping them move beyond two-dimensional
images drawn on paper or transparencies. Generally made of cardboard, foam core or other found materials,
mock-ups are physical prototypes of the new system. The mock-up provides a deeper understanding of how
the interaction will work in real-world situations than is possible with sets of screen images.

Mock-ups allow the designer to concentrate on the physical design of the device, such as the position of
buttons or the screen. The designer can also create several mock-ups and compare input or output options,
such as buttons vs. trackballs. Designers and users should run through different scenarios, identifying
potential problems with the interface or generating ideas for new functionality. Mock-ups can also help the
designer envision how an interactive system will be incorporated into a physical space.

2) Rapid prototype

The goal of rapid prototyping is to develop prototypes very quickly, in a fraction of the time it would take to
develop a working system. By shortening the prototype-evaluation cycle, the design team can evaluate more
alternatives and iterate the design several times, improving the likelihood of finding a solution that
successfully meets the user's needs.
How rapid "rapid" is depends on the context of the particular project and the stage in the design process.
Early prototypes, e.g. sketches, can be created in a few minutes. Later in the design cycle, a prototype
produced in less than a week may still be considered rapid if the final system is expected to take months or
years to build. Precision, interactivity and evolution all affect the time it takes to create a prototype. Not
surprisingly, a precise and interactive prototype takes more time to build than an imprecise or fixed one.

The techniques presented in this section are organized from most rapid to least rapid, according to the
representation dimension introduced in section 2. Off-line techniques are generally more rapid than on-line
ones. However, creating successive iterations of an on-line prototype may end up being faster than creating
new off-line prototypes.

3) Virtual prototype

Virtual prototyping (VP) is an innovative and powerful simulation tool for facilitating rapid product
development, and the technique has been studied and implemented in recent years in engineering design.
Quite often the term is used and interpreted in many different ways, which has caused confusion and even
misunderstanding among readers.

Some of the definitions of virtual prototyping are:


Virtual Prototyping (VP) is a relatively new technology which involves the use of Virtual Reality (VR) and
other computer technologies to create digital prototypes. - Gowda, S., Jayaram, S., and Jayaram, U., 1999,
Architectures for Internet-based Collaborative Virtual Prototyping, Proceedings of the 1999 ASME
Design Technical Conference and Computers in Engineering Conference, DETC99/CIE-9040, Las Vegas,
Nevada, September 11-15.
By virtual prototyping, we refer to the process of simulating the user, the product, and their combined
(physical) interaction in software through the different stages of product design, and the quantitative
performance analysis of the product. - Song, P., Krovi, V., Kumar, V., and Mahoney, R., 1999, Design and
Virtual Prototyping of Human-worn Manipulation Devices, Proceedings of the 1999 ASME Design
Technical Conference and Computers in Engineering Conference, DETC99/CIE-9029, Las Vegas, Nevada,
September 11-15.
In the mechanical engineering definition of virtual prototyping (VPME), the idea is to replace physical
mock-ups by software prototypes. This includes also all kinds of geometrical and functional simulations,
whether or not involving humans. - Antonino, G. S. and Zachmann, G., 1998, Integrating Virtual Reality
for Virtual Prototyping, Proceedings of the 1998 ASME Design Technical Conference and Computers in
Engineering Conference, DETC98/CIE-5536, Atlanta, Georgia, September 13-16.

Components of a Virtual Prototype

From figure 3.14, one can see that various interrelated models are built to virtually present, analyse and test a
product. The user interface serves as the integration component that coordinates the behaviour of models
and provides useful information to the system user.
Depending on applications, a virtual prototype may only include a subset of these components.

Figure 3.14. Components of virtual prototypes

4) Feasibility prototype

To assess the feasibility of a solution, all of the pieces must fit together properly. The feasibility, or technical,
prototype proves out some technical assertion which is key to the feasibility of the preferred alternative.
It verifies that critical components of the technical architecture integrate properly and are capable of meeting
the business needs.

The kinds of integration problems examined with a feasibility prototype are too complex to be addressed with
paper analyses and simple reviews of manufacturers' specifications. Physical access to the equipment is
required. Proof-of-concept tests are constructed to validate a conceptual solution.

5) Horizontal prototype

The purpose of a horizontal prototype is to develop one entire layer of the design at the same time. This type
of prototyping is most common with large software development teams, where designers with different skill
sets address different layers of the software architecture. Horizontal prototypes of the user interface are
useful to get an overall picture of the system from the user's perspective and address issues such as
consistency (similar functions are accessible through similar user commands), coverage (all required functions
are supported) and redundancy (the same function is/is not accessible through different user commands).

User interface horizontal prototypes can begin with rapid prototypes and progress through to working code.
Software prototypes can be built with an interface builder, without creating any of the underlying
functionality, making it possible to test how the user will interact with the user interface without worrying
about how the rest of the architecture works. However, some level of scaffolding or simulation of the rest of
the application is often necessary, otherwise the prototype cannot be evaluated properly. As a consequence,
software horizontal prototypes tend to be evolutionary, i.e. they are progressively transformed into the final
system.
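A horizontal prototype of this kind can be sketched in code: the full breadth of user-facing commands exists, while the layer beneath is only scaffolding that returns canned responses. All command and record names here are hypothetical.

```python
# Scaffolding layer: stands in for the real backend so the interface
# can be exercised before any underlying functionality is written.
def backend_search(term):
    """Returns canned data instead of querying a real store."""
    canned = {"widget": ["widget-a", "widget-b"]}
    return canned.get(term, [])

def backend_save(record):
    """Pretends to persist the record; only acknowledges."""
    return True

# Interface layer: every user command is present, so consistency and
# coverage of the command set can be evaluated with real users.
def cmd_search(term):
    hits = backend_search(term)
    return f"{len(hits)} result(s) for '{term}': {', '.join(hits) or '-'}"

def cmd_save(record):
    return "saved" if backend_save(record) else "error"

def cmd_help():
    return "commands: search <term> | save <record> | help"

print(cmd_search("widget"))
print(cmd_save({"id": 1}))
print(cmd_help())
```

Because the scaffolding hides behind the same function signatures a real backend would use, the prototype can later evolve into the final system by replacing the stubs one at a time.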

6) Vertical prototype

The purpose of a vertical prototype is to ensure that the designer can implement the full, working system,
from the user interface layer down to the underlying system layer. Vertical prototypes are often built to
assess the feasibility of a feature described in a horizontal, task-oriented or scenario-based prototype.

Vertical prototypes are generally high-precision software prototypes because their goal is to validate an idea
at the system level. They are often thrown away because they are generally created early in the project, before
the overall architecture has been decided, and they focus on only one design question. For example, a vertical
prototype of a spelling checker for a text editor does not require text editing functions to be implemented and
tested. However, the final version will need to be integrated into the rest of the system, which may involve
considerable architectural or interface changes.

Vertical Prototypes do not normally include:


Edits and controls
Security features
Audit trails
Exception handling
Record locking

The Vertical Prototype is normally developed during the later stages of analysis.
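Continuing the spelling-checker example above, a vertical slice might look like the following sketch: one feature implemented from the "user interface" layer down to the underlying logic, with editing, security, exception handling and the rest omitted. The tiny word list is a stand-in for a real dictionary.

```python
# Lowest layer: the data the feature depends on (illustrative word list).
DICTIONARY = {"the", "quick", "brown", "fox", "jumps"}

def check_word(word):
    """Bottom layer: dictionary lookup for a single word."""
    return word.lower() in DICTIONARY

def check_text(text):
    """Middle layer: tokenize the text and collect misspellings."""
    return [w for w in text.split() if not check_word(w)]

def spellcheck_command(text):
    """'User interface' layer: format a report for the user."""
    bad = check_text(text)
    return "no errors" if not bad else "unknown words: " + ", ".join(bad)

print(spellcheck_command("the quick brown fox jumps"))
print(spellcheck_command("the qwick brown fox"))
```

The slice answers the system-level question (can spell checking work end to end?) without requiring any text-editing functions, exactly as described above.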

7) Scenario-based prototypes

Scenario-based prototypes are similar to task-oriented ones, except that they do not stress individual,
independent tasks, but rather follow a more realistic scenario of how the system would be used in a real-
world setting. Scenarios are stories that describe a sequence of events and how the user reacts. A good
scenario includes both common and unusual situations, and should explore patterns of activity over time.

8) Video prototype

Video prototypes use video to illustrate how users will interact with the new system. They differ from video
brainstorming in that the goal is to refine a single design, not generate new ideas.
Video prototypes may build on paper & pencil prototypes and cardboard mock-ups and can also use existing
software and images of real-world settings.

Prototype design procedure

Having already explored the types of prototypes in the previous section, here we present the basic procedure
followed in prototype design.

Steps in prototype design:
1) Identify the objective(s) of the prototype from the point of view of the customer needs.
2) Determine the functionality that addresses these customer needs.
a. Identify module interfaces, if present.
3) Identify the basic physical principles needed to understand the probable experiments to be performed on
the prototype.
4) Specify the measurement system for the prototype.
a. State whether the measurement system(s) relate directly to customer needs and whether these
system(s) correspond to engineering specifications.
5) Specify whether the prototype will be focused or comprehensive, use scaled or actual geometry, and be
produced from actual materials or not.
6) Decide whether rapid prototyping technology will be adopted to develop the prototype; if so, state which
technology will be appropriate, and if not, specify which other technologies and materials will be
preferred.
7) Sketch alternative prototype concepts; determine cost, appropriate scale, and alternative build plans;
choose a preferred concept; and develop a fabrication process plan.
8) Specify the procedure to test the prototype: the techniques to control the factors responsible for
experimental error, the techniques and sensors to measure the response, the number of tests to be
conducted/replicated, etc. Declare the type of test to be conducted, i.e. destructive or non-destructive,
and the accuracy of measurement desired.
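The eight steps above can be captured as a simple checklist record, so a team can verify that every decision has been made before fabrication begins. The field names and sample values below are illustrative only.

```python
from dataclasses import dataclass, fields

@dataclass
class PrototypePlan:
    objective: str              # step 1: objective from the customer's view
    functionality: str          # step 2: functions and module interfaces
    physical_principles: str    # step 3: physics behind planned experiments
    measurement_system: str     # step 4: what is measured, against which specs
    scope: str                  # step 5: focused/comprehensive, scale, materials
    fabrication_technology: str # step 6: RP technology or alternative
    chosen_concept: str         # step 7: preferred concept and build plan
    test_procedure: str         # step 8: tests, error control, accuracy

def missing_decisions(plan):
    """Return the names of any steps still left blank."""
    return [f.name for f in fields(plan) if not getattr(plan, f.name)]

plan = PrototypePlan(
    objective="verify hinge survives 10k open/close cycles",
    functionality="hinge plus latch interface",
    physical_principles="fatigue under cyclic bending",
    measurement_system="cycle counter, crack inspection",
    scope="focused, full scale, actual material",
    fabrication_technology="FDM in ABS",
    chosen_concept="living hinge, 0.4 mm web",
    test_procedure="",  # step 8 not yet decided
)
print(missing_decisions(plan))
```

Running the check flags the undecided step (here, the test procedure), mirroring how the written procedure forces each decision to be made explicitly.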

These procedures are shown as a block diagram in figure 3.15.

Figure 3.15. Prototype design procedure

Advantages and disadvantages of prototypes

The prototypes developed are useful for a number of different reasons. These advantages are listed in
table 3.2.

Advantages
Early validation of applications with users, clients.
Users can take an active part in the development of a product.
Users are encouraged to share needs and wishes for the final product.
They produce more visible results earlier (good for managers to show-off).

Improved collaboration & communication among developers, analysts, users.
Encourages reflection about the product.
Finds answers to questions about the design.
Many prototypes are very easy to build.
Reduced risk of project failure.

Table 3.2. Advantages of prototypes

There are a number of disadvantages, or rather pitfalls, to prototyping. They are called pitfalls because they
might be avoided with careful planning. These disadvantages/pitfalls are listed in table 3.3.

Pitfalls
Attempt to use prototyping techniques before securing cooperation from all parties involved in the
procedure.
Established management procedures might not involve prototyping.
Reduction in programming discipline.
Pressure to later use the prototype as the real thing (from client or management).
Overpromising or misleading with the prototype (prototyping something that cannot be included with
the available resources).
Trap of overdesign (too much time is spent on the prototype).
Depending how the prototype was designed it might be hard to extend.

Table 3.3. Pitfalls of prototypes

Applications of prototypes

The applications of prototypes are given in the table 3.4 and 3.5.

Sl. No. Categories Applications


1 Communication To take feedback from customers, suppliers, vendors and
management.
2 Demonstration To show to the customers, clients and vendors, the
achievement of the project goal and milestones of the
company.
3 Scheduling/milestone To take the preliminary decisions and to avoid the series of
concept development and embodiment design at the end.
4 Feasibility To check whether the specific ideas developed will work or not
To determine whether these ideas will satisfy customer needs
or not.
To discover the unexpected phenomena.
For this, measurements are taken, recorded and analysed.
5 Parametric modelling To develop a logical model of a product or sub-system
systematically.
For this, experiments are conducted and decisions for
optimizing the design variables are made based on the
results of the experiments.
6 Architectural interfacing To check whether the module interfaces will perform correctly
or not.
To test the compatibility and assembly of the parts.
To study the manufacturing processes and typical part
problems.

Table.3.4. Use of prototypes according to engineering areas

Sl. No. Areas/disciplines Applications
1 Mechanical Engineering Functional proof of the concepts.
Product component layout and interconnects.
Verification of virtual modelling (acoustics, vibration, heat
transfer, stress level, kinematics, etc.)
Machine design (elements and mechanisms)
Fabrication and testing of packaging
Studies of manufacturing processes
Material selection
Tool design
Analysis of assembly and time motion studies.
Housing studies for mechanical deformations, stress/strain,
heat transfer, vibration, mechanical interfaces, etc.
Validation of mechanical assembly and component
drawings.
2 Electrical Engineering Layouts and physical models of printed circuit boards.
Test fixtures for electronic function and control
Electronic function (bread boarding)
Power supply, modulation and control
Assessment of UL ratings
Standard component studies and integration
3 Industrial Design Alternatives for testing aesthetics and artistic impression
(feel), usually embodied in early foam models
Studies of semantic product statement
Arrangement of internal components and its effect on shape
New product concepts
Ergonomic studies
4 Electronic Engineering To build an actual circuit of a theoretical design and to check
if it works.
To provide a physical platform for fixing the model circuit if it
does not work
To fabricate electrically correct circuit boards by the use of
techniques such as wire wrap or using Vero board or
breadboard
5 Software Engineering To produce new objects from existing objects using
object-oriented programming / prototype-based programming

Table- 3.5. Applications of prototypes

3.3.2 Rapid prototyping

As described in the earlier section, prototyping or model making is one of the important steps to finalize a
product design. It helps in conceptualization of a design. Before the start of full production a prototype is
usually fabricated and tested. Manual prototyping by a skilled craftsman has been an age old practice for
many centuries. Rapid Prototyping (RP), a process of layer-by-layer material deposition, started during the
early 1980s. With the enormous growth in Computer Aided Design and Manufacturing (CAD/CAM)
technologies, whereby explicit solid models could define a product and also drive its manufacture by CNC
machining, the technique of rapid prototyping became popular.

This section will throw light on these rapid layering techniques in detail.

Historical development of Rapid Prototyping
The historical developments of rapid prototyping are listed in table 3.6.

Year of inception Technology

1770 Mechanization

1946 First computer

1952 First Numerical Control (NC) machine tool

1960 First commercial laser

1961 First commercial Robot

1963 First interactive graphics system (early version of Computer Aided Design)

1988 First commercial Rapid Prototyping system

Table- 3.6. Development of Rapid Prototyping and related technologies

Rapid Prototyping

The term rapid prototyping (RP) refers to a class of technologies that can automatically construct physical
models from Computer-Aided Design (CAD) data. These "three dimensional printers" allow designers to
quickly create tangible prototypes of their designs, rather than just two-dimensional pictures. Such models
have numerous uses. They make excellent visual aids for communicating ideas with co-workers or customers.
In addition, prototypes can be used for design testing. For example, an aerospace engineer might mount a
model air foil in a wind tunnel to measure lift and drag forces. Designers have always utilized prototypes; RP
allows them to be made faster and less expensively.

In addition to prototypes, RP techniques can also be used to make tooling (referred to as rapid tooling) and
even production-quality parts (rapid manufacturing). For small production runs and complicated objects,
rapid prototyping is often the best manufacturing process available. Of course, "rapid" is a relative term. Most
prototypes require from three to seventy-two hours to build, depending on the size and complexity of the
object. This may seem slow, but it is much faster than the weeks or months required to make a prototype by
traditional means such as machining. These dramatic time savings allow manufacturers to bring products to
market faster and at a lower price.

The Basic Process

Although several rapid prototyping techniques exist, all employ the same basic five-step process. The steps
(figure 3.16) are:
1) Create a CAD model of the design
2) Convert the CAD model to STL format
3) Slice the STL file into thin cross-sectional layers
4) Construct the model one layer atop another
5) Clean and finish the model

Solid modellers, such as Pro/ENGINEER, tend to represent 3-D objects more accurately than wire-frame
modellers such as AutoCAD, and will therefore yield better results. The designer can use a pre-existing CAD
file or may wish to create one expressly for prototyping purposes.
The various CAD packages use a number of different algorithms to represent solid objects. To establish
consistency, the STL (stereolithography, the first RP technique) format has been adopted as the standard of
the rapid prototyping industry.

The second step, therefore, is to convert the CAD file into STL format. This format represents a three-
dimensional surface as an assembly of planar triangles, "like the facets of a cut jewel." The file contains the
coordinates of the vertices and the direction of the outward normal of each triangle. Because STL files use
planar elements, they cannot represent curved surfaces exactly. Increasing the number of triangles improves
the approximation, but at the cost of bigger file size. Large, complicated files require more time to pre-
process and build, so the designer must balance accuracy with manageability to produce a useful STL file.
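The facet structure described above can be sketched directly. This minimal writer emits one ASCII STL facet per triangle, computing the outward normal from the vertices with a cross product; real files simply repeat the facet block for every triangle in the mesh.

```python
def normal(a, b, c):
    """Unit normal of triangle (a, b, c) via the cross product of two edges."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    length = sum(x * x for x in n) ** 0.5
    return [x / length for x in n]

def facet(a, b, c):
    """One ASCII STL facet: normal, then the three vertices."""
    n = normal(a, b, c)
    lines = ["  facet normal %g %g %g" % tuple(n), "    outer loop"]
    lines += ["      vertex %g %g %g" % tuple(v) for v in (a, b, c)]
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

def ascii_stl(triangles, name="part"):
    """Wrap a list of triangles in the solid/endsolid envelope."""
    body = "\n".join(facet(*t) for t in triangles)
    return f"solid {name}\n{body}\nendsolid {name}"

# One triangle in the x-y plane; its outward normal points along +z.
tri = ([0, 0, 0], [1, 0, 0], [0, 1, 0])
print(ascii_stl([tri]))
```

The trade-off mentioned above is visible here: every extra triangle adds a fixed block of text, so a finer approximation of a curved surface directly inflates the file.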

In the third step, a pre-processing program prepares the STL file to be built. Several programs are available,
and most allow the user to adjust the size, location and orientation of the model. Build orientation is
important for several reasons. First, properties of rapid prototypes vary from one coordinate direction to
another. For example, prototypes are usually weaker and less accurate in the z (vertical) direction than in the
x-y plane. In addition, part orientation partially determines the amount of time required in building the
model. Placing the shortest dimension in the z direction reduces the number of layers, thereby shortening
build time.

The preprocessing software slices the STL model into a number of layers from 0.01 mm to 0.7 mm thick,
depending on the build technique. The program may also generate an auxiliary structure to support the model
during the build. Supports are useful for delicate features such as overhangs, internal cavities, and thin-
walled sections.
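The effect of build orientation on layer count follows from a quick calculation: the number of slices is the part's z-extent divided by the layer thickness (rounded up), so placing the shortest dimension along z minimizes the number of layers. The part dimensions and the 0.25 mm layer thickness below are example values within the range just quoted.

```python
from math import ceil

def layer_count(z_height_mm, layer_thickness_mm):
    """Number of slices needed to cover the part height (rounded up)."""
    return ceil(z_height_mm / layer_thickness_mm)

part_dims_mm = (120.0, 40.0, 15.0)   # x, y, z extents of an example part
thickness = 0.25                     # within the 0.01-0.7 mm range above

# Try each dimension as the vertical (z) axis.
for z in part_dims_mm:
    print(f"z = {z:6.1f} mm -> {layer_count(z, thickness)} layers")

# Shortest dimension up gives the fewest layers, hence the shortest build.
best = min(part_dims_mm)
print("fastest orientation:", layer_count(best, thickness), "layers")
```

Since build time grows roughly with the number of layers, the 60-layer orientation here would build far faster than the 480-layer one, which is exactly why pre-processing software lets the user rotate the model.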

Figure 3.16. Rapid prototyping principle

The fourth step is the actual construction of the part. Using one of several techniques (described in the next
section) RP machines build one layer at a time from polymers, paper, or powdered metal. Most machines are
fairly autonomous, needing little human intervention.

The final step is post-processing. This involves removing the prototype from the machine and detaching any
supports. Some photosensitive materials need to be fully cured before use.

Prototypes may also require minor cleaning and surface treatment. Sanding, sealing, and/or painting the
model will improve its appearance and durability.

Rapid prototyping techniques:

Most commercially available rapid prototyping machines use one of six techniques. Here, a few important RP
processes, namely Stereolithography (SL), Selective Laser Sintering (SLS), Three Dimensional Ink-Jet
Printing, Laminated Object Manufacturing (LOM), and Fused Deposition Modeling (FDM), are described.

Stereolithography
Patented in 1986, stereolithography (figure 3.17) started the rapid prototyping revolution. The technique
builds three-dimensional models from liquid photosensitive polymers that solidify when exposed to
ultraviolet light. As shown in the figure below, the model is built upon a platform situated just below the
surface in a vat of liquid epoxy or acrylate resin. A low-power, highly focused UV laser traces out the first
layer, solidifying the model's cross section while leaving excess areas liquid.

Figure 3.17. Stereolithography

Next, an elevator incrementally lowers the platform into the liquid polymer. A sweeper re-coats the solidified
layer with liquid, and the laser traces the second layer atop the first. This process is repeated until the
prototype is complete. Afterwards, the solid part is removed from the vat and rinsed clean of excess liquid.
Supports are broken off and the model is then placed in an ultraviolet oven to complete the curing. Because it
was the first technique, stereolithography is regarded as a benchmark by which other technologies are judged.
Early stereolithography prototypes were fairly brittle and prone to curing-induced warpage and distortion, but
recent modifications have largely corrected these problems.

Selective Laser Sintering

Developed by Carl Deckard for his master's thesis at the University of Texas, selective laser sintering (figure
3.18) was patented in 1989. The technique uses a laser beam to selectively fuse powdered materials, such as
nylon, elastomer, and metal, into a solid object. Parts are built upon a platform, which sits just below the
surface in a bin of the heat-fusible powder. A laser traces the pattern of the first layer,
sintering it together. The platform is lowered by the height of the next layer and powder is reapplied. This
process continues until the part is complete. Excess powder in each layer helps to support the part during the
build.

Figure 3.18. Selective laser sintering

Three Dimensional Ink-Jet Printing

Unlike the above techniques, Ink-Jet Printing (figure 3.19) refers to an entire class of machines that employ
ink-jet technology. The first was 3D Printing (3DP), developed at MIT and licensed to Soligen Corporation,
Extrude Hone, and others. As shown in the picture below, parts are built upon a platform situated in a bin full
of powder material. An ink-jet printing head selectively "prints" binder to fuse the powder together in the
desired areas. Unbound powder remains to support the part. The platform is lowered, more powder added and
leveled, and the process repeated. When finished, the green part (not fully cured) is sintered and then
removed from the unbound powder. Soligen uses 3DP to produce ceramic molds and cores for investment
casting, while Extrude Hone hopes to make powder metal tools and products.

Figure 3.19. 3-D Ink-Jet Printing

Laminated Object Manufacturing (LOM)

Laminated Object Manufacturing (figure 3.20) was developed in 1985 by Hydronetics in Chicago, IL. Helisys,
Inc. is now the primary manufacturer of LOM machines. This method can make use of a large selection of
materials to create the model. However, paper is the most common, forming essentially a wood-like finished
product.
The process starts by coating a support platform with adhesive. Rollers feed a sheet of paper across the
platform and then press and adhere the paper to the platform. A laser then performs a "cookie cutter"
operation. It cuts out the model and any features on that plane. The laser also cross-hatches the material
that is not in the model, allowing for easier removal after the process is done. The paper is left in place but
has in effect been scored. The platform then drops down by one layer, the stamped out scrap paper is rolled
onto the take-up spool and a new length of paper is rolled into position. The sheet is then coated with
adhesive and the next sheet is applied and cut. Eventually, the model is detached from the platform and the
model is broken out of the solid structure.

Materials from paper to composite sheets can be used in this process. Thickness ranges from 0.002 to 0.015
inches. Since there is no material phase change, there is no problem with shrinkage or warping due to internal
stresses. The process gives essentially a laminated wood end product, which is very sturdy.

Figure 3.20. Laminated object manufacturing

Fused Deposition Modeling (FDM)

Fused Deposition Modeling (figure 3.21) was developed in 1988 by S. Scott Crump. This method of Rapid
Prototyping has been described as applying decorative icing to a cake. The process again makes use of
horizontal slices that come from the STL file which the CAD software generates. Each layer is created by a
heated nozzle moving around the build area and depositing molten or semi-molten material onto the
previous layers, building up the model. The heated nozzle accurately heats the material (usually a
thermoplastic) to approximately 1-5 degrees F above the melting point. This allows for rapid cooling and
solidification upon deposition. The material is fed into the nozzle in the form of wire or filament. The slices
range between 0.01 and 0.125 inches thick. The filament or wire is usually about 0.05 inches in diameter.
Many different materials can be used, as long as the nozzle can heat the material to melting.

Figure 3.21. Fused deposition modeling

Applications of Rapid Prototyping

Rapid prototyping is widely used in the automotive, aerospace, medical, and consumer products industries.
Although the possible applications are virtually limitless, nearly all fall into one of the following categories:
prototyping, rapid tooling, or rapid manufacturing.

Prototyping
As its name suggests, the primary use of rapid prototyping is to quickly make prototypes for communication
and testing purposes. Prototypes dramatically improve communication because most people, including
engineers, find three-dimensional objects easier to understand than two-dimensional drawings. Such
improved understanding leads to substantial cost and time savings. As Pratt & Whitney executive Robert P.
DeLisle noted: "We've seen an estimate on a complex product drop by $100,000 because people who had to
figure out the nature of the object from 50 blueprints could now see it." Effective communication is especially
important in this era of concurrent engineering. By exchanging prototypes early in the design stage,
manufacturing can start tooling up for production while the art division starts planning the packaging, all
before the design is finalized.
Prototypes are also useful for testing a design, to see if it performs as desired or needs improvement.
Engineers have always tested prototypes, but RP expands their capabilities. First, it is now easy to perform
iterative testing: build a prototype, test it, redesign, build and test, etc. Such an approach would be far too
time-consuming using traditional prototyping techniques, but it is easy using RP.
In addition to being fast, RP models can do a few things metal prototypes cannot. For example, Porsche used
a transparent stereolithography model of the 911 GTI transmission housing to visually study oil flow. Snecma,
a French turbo-machinery producer, performed photoelastic stress analysis on a SLA model of a fan wheel to
determine stresses in the blades.

Rapid Tooling
A much-anticipated application of rapid prototyping is rapid tooling, the automatic fabrication of production
quality machine tools. Tooling is one of the slowest and most expensive steps in the manufacturing process,
because of the extremely high quality required. Tools often have a complex geometry, yet must be
dimensionally accurate to within a hundredth of a millimeter. In addition, tools must be hard, wear-resistant,
and have very low surface roughness (about 0.5 micrometers root mean square). To meet these requirements,
molds and dies are traditionally made by CNC-machining, electro-discharge machining, or by hand. All are
expensive and time consuming, so manufacturers would like to incorporate rapid prototyping techniques to
speed the process. Peter Hilton, president of Technology Strategy Consulting in Concord, MA, believes that
"tooling costs and development times can be reduced by 75 percent or more" by using rapid tooling and
related technologies.

Rapid Manufacturing

Rapid Manufacturing refers to an additive fabrication technique in which solid objects are manufactured by
the sequential delivery of energy and/or material to specified points in space. Rapid Manufacturing
used for parallel batch production provides a large advantage in speed and cost over the alternative
manufacturing techniques such as plastic injection molding or die casting. Custom parts, replacement parts,
short run production, or series production are included in rapid manufacturing.

If the part is used only in the development process, the technique is still called rapid prototyping; Rapid Manufacturing is its natural extension: the automated production of saleable products directly from CAD data. Currently Rapid Prototyping machines produce only a few final products, but the number will increase as metals and other materials become more widely available. Rapid Manufacturing will never completely replace other manufacturing techniques, especially for large production runs where mass production is more economical.

Advantages of Rapid Manufacturing

1. For short production runs, Rapid Manufacturing is much cheaper, since it does not require tooling.
2. Rapid Manufacturing is also ideal for producing custom parts tailored to the user's exact specifications.

A University of Delaware research project uses a digitized 3-D model of a person's head to construct a
custom-fitted helmet. NASA is experimenting with using Rapid Prototyping machines to produce spacesuit
gloves fitted to each astronaut's hands. From tailored golf club grips to custom dinnerware, the possibilities
are endless.
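The short-run cost advantage noted above can be illustrated with a simple break-even calculation. The sketch below compares a hypothetical tooled process (high fixed tooling cost, low unit cost) with Rapid Manufacturing (no tooling, higher unit cost); all figures are invented for illustration.

```python
def total_cost(fixed_tooling, unit_cost, quantity):
    """Total production cost for a given batch size."""
    return fixed_tooling + unit_cost * quantity

def break_even_quantity(tooling_a, unit_a, tooling_b, unit_b):
    """Batch size at which two processes cost the same.

    Solves tooling_a + unit_a*q == tooling_b + unit_b*q for q.
    """
    return (tooling_b - tooling_a) / (unit_a - unit_b)

# Illustrative (invented) figures: injection molding needs a $20,000
# mold but parts cost $2 each; Rapid Manufacturing needs no tooling
# but parts cost $25 each.
q = break_even_quantity(tooling_a=20_000, unit_a=2.0,
                        tooling_b=0.0, unit_b=25.0)
print(round(q))  # below this quantity, Rapid Manufacturing is cheaper
```

With these invented numbers the crossover is around 870 parts; below that, the absence of tooling cost dominates.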

Applications of Rapid Manufacturing

Another major use of Rapid Manufacturing is for products that simply cannot be made by subtractive
(machining, grinding) or compressive (forging, etc.) processes. This includes objects with complex features,
internal voids, and layered structures.
The industrial applications may include Rapid Manufacturing of large products by layer-based manufacturing from metals, plastics, or composite materials, for example in the military (MPH-Optomec) and aerospace (Boeing) sectors. This may also include rapid manufacturing of small products and Microsystems, such as in medical (Siemens), consumer electronics, diagnostics and sensor technologies (microTEC). At present, Rapid Manufacturing technology is being applied to automotive, motor sports, jewelry, dentistry, orthodontics, medicine and collectibles.

3.4 System integration, testing, certification and documentation


3.4.1 Manufacturing or Purchase and Assembly
The act of choosing between manufacturing a product in-house and purchasing it from an external supplier for assembly is called a make-or-buy decision. In a make-or-buy decision, the two most important factors to consider are cost and availability of production capacity. An enterprise may decide to purchase the product rather than producing it if it is cheaper to buy than to make, or if it does not have sufficient production capacity to produce it in-house. With the phenomenal surge in global outsourcing over the past decades, the make-or-buy decision is one that managers have to struggle with very frequently. For this decision to be taken correctly, the design stage is very important in product development. Most of the product lifecycle costs are committed at the design stage. A product design is not just about good design; it should be possible
to produce by manufacturing as well. Often an otherwise good design is difficult or impossible to produce.
Typically a design engineer will create a model or design and send it to manufacturing for review and invite
feedback. This process is called a design review. If this process is not followed thoroughly, the product may
fail at the manufacturing stage.
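The cost side of the make-or-buy decision described above can be sketched as a small decision rule. The costs, quantities and capacity figures below are hypothetical.

```python
def make_or_buy(make_unit_cost, buy_unit_cost, quantity,
                capacity_available):
    """Return 'buy' if purchasing is cheaper or in-house capacity is
    insufficient for the required quantity; otherwise return 'make'."""
    if quantity > capacity_available:
        return "buy"        # cannot produce the full quantity in-house
    if buy_unit_cost * quantity < make_unit_cost * quantity:
        return "buy"        # cheaper to purchase than to produce
    return "make"

print(make_or_buy(12.0, 10.0, 500, 1000))   # supplier is cheaper
print(make_or_buy(8.0, 10.0, 500, 1000))    # in-house is cheaper
print(make_or_buy(8.0, 10.0, 2000, 1000))   # not enough capacity
```

A real analysis would also weigh quality, lead time and strategic factors, but the two inputs named in the text (cost and capacity) already determine the basic rule.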
Design for manufacturability (also sometimes known as design for manufacturing or DFM) is the general
engineering art of designing products in such a way that they are easy to manufacture. The basic idea exists in
almost all engineering disciplines, but of course the details differ widely depending on the manufacturing
technology. This design practice not only focuses on the design aspect of a part but also on the producibility,
that is the relative ease to manufacture a product, part or assembly. DFM describes the process of designing
or engineering a product in order to facilitate the manufacturing process in order to reduce its manufacturing
costs. DFM will allow potential problems to be fixed in the design phase which is the least expensive place to
address them. The design of the component can have an enormous effect on the cost of manufacturing.
Other factors may affect the manufacturability such as the type of raw material, the form of the raw material,
dimensional tolerances, and secondary processing such as finishing.

Design for assembly (DFA) is a process by which products are designed with ease of assembly in mind. If a
product contains fewer parts it will take less time to assemble, thereby reducing assembly costs. In addition,
if the parts are provided with features which make it easier to grasp, move, orient and insert them, this will
also reduce assembly time and assembly costs. The reduction of the number of parts in an assembly has the
added benefit of generally reducing the total cost of parts in the assembly. This is usually where the major
cost benefits of the application of design for assembly occur.
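Ease of assembly is often quantified with a Boothroyd-Dewhurst style DFA index, which compares an ideal assembly time (roughly 3 seconds per theoretically necessary part) against the actual assembly time. This metric is not described in the text above, and the figures used here are invented.

```python
def dfa_design_efficiency(theoretical_min_parts, total_assembly_time_s,
                          ideal_time_per_part_s=3.0):
    """DFA index: ratio of the ideal assembly time (minimum part count
    times ~3 s per part) to the actual assembly time. Higher is better."""
    return (theoretical_min_parts * ideal_time_per_part_s
            / total_assembly_time_s)

# Invented figures: 6 theoretically necessary parts, 90 s to assemble.
before = dfa_design_efficiency(6, 90.0)
# After part consolidation: 4 parts, 30 s to assemble.
after = dfa_design_efficiency(4, 30.0)
print(before, after)  # the redesign doubles the efficiency index
```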

Differences between DFM and DFA

Design for Assembly (DFA)

It is concerned only with reducing product assembly cost.
It minimizes the number of assembly operations.
The individual parts tend to be more complex in design.

Design for Manufacturing (DFM)

It is concerned with reducing overall part production cost.
It minimizes the complexity of manufacturing operations.
It uses common datum features and primary axes.

Similarities between DFM and DFA:

Both DFM and DFA seek to reduce material, overhead, and labor cost.
They both shorten the product development cycle time.
Both DFM and DFA seek to utilize standards to reduce cost.

Design for Manufacturing and Assembly (DFMA):


Design for Manufacture and Assembly (DFMA) is the combination of two methodologies: Design for
Manufacture (DFM) and Design for Assembly (DFA). DFMA is used as the basis for concurrent
engineering studies to provide guidance to the design team in simplifying the product structure, to reduce
manufacturing and assembly costs, and to quantify improvements. The practice of applying DFMA is to
identify, quantify and eliminate waste or inefficiency in a product design. DFMA is therefore a component
of Lean Manufacturing (minimal wastage of resources). DFMA is also used as a benchmarking tool to study
competitors' products, and as a cost tool to assist in supplier negotiations.

The method of DFMA involves the following steps:
1. A design is proposed.
2. The cost of manufacturing is estimated.
3. The cost of assembly is estimated.
4. A reduction in the cost of components, assembly and supporting product is attempted.
5. The impact of the DFMA decisions on other factors is considered.
6. The cost of manufacture is recomputed. If the cost reduction is good enough, the design is accepted; otherwise the process is repeated from the design phase.
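Under the simplifying assumption that each redesign pass removes cost, the iterative loop in steps 1-6 can be sketched as:

```python
def dfma_iterate(design, estimate_cost, redesign, target_cost,
                 max_iterations=10):
    """Sketch of the DFMA loop: estimate cost, accept the design if the
    reduction is good enough, otherwise redesign and repeat."""
    for _ in range(max_iterations):
        cost = estimate_cost(design)
        if cost <= target_cost:
            return design, cost        # cost reduction is good enough
        design = redesign(design)      # repeat from the design phase
    return design, estimate_cost(design)

# Toy model (invented): cost is $5 per part, and each redesign pass
# consolidates one part out of the assembly.
def estimate(parts):
    return parts * 5.0

def redesign(parts):
    return parts - 1

final_design, final_cost = dfma_iterate(12, estimate, redesign,
                                        target_cost=40.0)
print(final_design, final_cost)
```

Starting from 12 parts, the loop consolidates parts until the $40 target is met.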

Benefits of DFMA:

It reduces the number of parts
It reduces part cost
It reduces assembly time
It reduces product development cycle time
It helps in product simplification and improved quality
There is improved communication between design, manufacturing, purchasing and management

3.4.2 Integration of Mechanical, Embedded and S/W systems

Before proceeding for the integration of these systems, let us understand these systems separately:

Mechanical System: Any physical system which uses power to do work or vice versa and involves force,
motion etc. is called a mechanical system.

Embedded System: An embedded system is a computer system with a dedicated function within a larger
mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a
complete device often including hardware and mechanical parts. By contrast, a general-purpose computer,
such as a personal computer (PC), is designed to be flexible and to meet a wide range of end-user needs.

Software system: Any system made up of software-based components is called a software system.

The integration of mechanical, electronic and software systems in today's products is very common and has become inevitable. Integration of mechanical components, electronics, and information technology is carried out to create new and improved automation systems. To survive in the market, companies have to develop integrated products, as these are far more efficient than conventional products. A light motor vehicle like a city car is a very good example of the integration process. The car has a mechanical body and an engine; the music system, air conditioning system, digital signals, etc. are all embedded systems; and the GPS tracker system used in the car is a software system. Thus the car as a whole is an example of the integration of the three systems.

This type of integration of mechanical, electronics and software systems is known as mechatronics design. Mechatronics is a subset of the electronics industry which comprises the systematic integration of mechanical, electrical, electronic and software components. Mechatronics is the synergistic integration of sensors, actuators, signal conditioning, power electronics, decision and control algorithms, and computer hardware and software to manage complexity, uncertainty, and communication in engineered systems.

Elements of Mechatronics: The main elements of mechatronics are shown in figure 3.22. These are:

1. Mechanical Elements

Mechanical elements refer to the mechanical structure, mechanism, thermo-fluid, and hydraulic aspects of a mechatronics system. Mechanical elements may include static/dynamic characteristics. A mechanical element interacts with its environment purposefully. Mechanical elements require physical power to produce motion, force, heat, etc.

2. Electromechanical elements

Electromechanical elements refer to sensors and actuators.

Sensors

A variety of physical variables can be measured using sensors, e.g., light using a photo-resistor, level and displacement using a potentiometer, direction/tilt using a magnetic sensor, sound using a microphone, stress and pressure using a strain gauge, touch using a micro-switch, temperature using a thermistor, and humidity using a conductivity sensor.

Actuators

DC servomotors, stepper motors, relays, solenoids, speakers, light emitting diodes (LEDs), shape memory alloys, electromagnets, and pumps apply commanded action on the physical process.

IC-based sensors and actuators include digital compasses, potentiometers, etc.

Figure 3.22. The main elements of mechatronics
3. Computer/Information System

Computer elements refer to hardware/software utilized to perform computer-aided dynamic system analysis, optimization, design, and simulation; virtual instrumentation; rapid control prototyping; hardware-in-the-loop simulation; and PC-based data acquisition and control.

4. Electrical/Electronic

Electrical elements refer to electrical components like resistors (R), capacitors (C), inductors (L), transformers, etc., circuits, and analog signals.

Electronic elements refer to analog/digital electronics, transistors, thyristors, opto-isolators, operational amplifiers, power electronics, and signal conditioning.

The electrical/electronic elements are used to interface electromechanical sensors and actuators to the
control interface/computing hardware elements.

5. Control interface/computing hardware elements

Control interface/computing hardware elements refer to analog-to-digital (A2D) converters, digital-to-analog (D2A) converters, digital input/output (I/O), counters, timers, microprocessors, microcontrollers, data acquisition and control (DAC) boards, and digital signal processing (DSP) boards.

Control interface hardware allows analog/digital interfacing communication of sensor signal to the control
computer and communication of control signal from the control computer to the actuator. Control computing
hardware implements a control algorithm, which uses sensor measurements, to compute control actions to
be applied by the actuator.
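A minimal sketch of that idea, assuming a proportional control law and a toy first-order plant (neither taken from this handbook):

```python
def control_step(setpoint, sensor_reading, gain=2.0):
    """One pass of a simple proportional control law: read the sensor,
    compute the error, and return the commanded actuator effort."""
    error = setpoint - sensor_reading
    return gain * error

# Simulate a few sampling periods of a toy first-order plant, where the
# actuator moves the mechanical element a little each period.
position = 0.0
for _ in range(20):
    command = control_step(setpoint=1.0, sensor_reading=position)
    position += 0.1 * command
print(round(position, 3))  # the position converges toward the setpoint
```

Real mechatronic controllers add integral/derivative terms, sampling via A2D/D2A converters, and timing constraints, but the sense-compute-actuate loop is the same.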

The Mechatronic Design process involves the following steps, as shown in figure 3.23:

Recognition of the need.
Concept Design and Functional Specification.
Mathematical Modelling.
Sensor and Actuator Selection.
Control System Design.
Design Optimization.
Life Cycle Optimization.

From design to production the basic mechatronic process involves the following steps:

Defining preliminary cost and performance specifications.
Optimizing the packaging design by using modeling and simulation techniques and tools.
Finalizing the PCB layout through iterations.
Reducing time and cost by using digital prototyping techniques.
Final approval of the design.
Releasing the design for manufacturing.

Figure 3.23. Mechatronic Design Process

Benefits of using software:

Use of software in mechatronics has great benefits. Some of these are:

The solution provides a rich, integrated environment for the development and management of
mechanical, electrical, electronic and embedded software content in a single source of product and
process knowledge.
They can define and capture the complete set of requirements across the vehicle in a single requirements
environment, accessible globally any time of day
They can effectively access and manage the impact of requirements and product changes
They understand the system definition and interfaces across domains and avoid integration issues by
linking requirements, functions, logical and physical interfaces, architecture definition, and product
structure
They can define and manage interfaces and dependencies between the different mechatronics systems
and domains
They can leverage integrations with best-in-class tools for modeling, software, electronics, and
mechanical design, allowing engineers to work within familiar tool sets while capturing the information
in one location for re-use and improved productivity

Major Challenges of Mechatronics:

The main challenges for mechatronics are:

Improving product quality while reducing costs.
Maintaining sustainability.
Reaching the market at a faster rate.
Better designs to dissipate heat generated by electronic components.
Safe disposal of hazardous material generated from electronic component production.

Applications of mechatronics:

In a very short time, owing to the efficient performance of its products, mechatronics has found applications in various fields. Some of these are:

Smart consumer products:
Home security, cameras, microwave ovens, toasters, dish washers, laundry washer-dryers, climate control units, etc.
Medical:
Implant devices, assisted surgery, etc.
Defense:
Unmanned air, ground, and underwater vehicles, smart munitions, jet engines, etc.
Manufacturing:
Robotics, machines, processes, etc.
Automotive:
Climate control, antilock brakes, active suspension, cruise control, air bags, engine management, safety, etc.
Network-centric, distributed systems:
Distributed robotics, tele-robotics, intelligent highways, etc.

3.4.3 Introduction to product verification process and stages - Industry specific (DFMEA, FEA, CFD)

This section discusses the product verification process and its stages, using various tools for different types of analysis. In product verification, various types of failure analysis are applied in order to minimize product failures. Tools such as FMEA, FEA, CFD and FTA (Fault Tree Analysis) are used in product verification; these are described below.

FMEA

Failure Mode and Effects Analysis (FMEA) was one of the first systematic techniques for failure analysis. It
was developed by reliability engineers in the 1950s to study problems that might arise from malfunctions of
military systems. A FMEA is often the first step of a system reliability study. It involves reviewing as many
components, assemblies, and subsystems as possible to identify failure modes, and their causes and effects.
For each component, the failure modes and their resulting effects on the rest of the system are recorded in a
specific FMEA worksheet. There are numerous variations of such worksheets. A FMEA is mainly a qualitative
analysis. A successful FMEA activity helps to identify potential failure modes based on experience with
similar products and processes - or based on common physics of failure logic. It is widely used in development
and manufacturing industries in various phases of the product life cycle. Effects analysis refers to studying
the consequences of those failures on different system levels.
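FMEA worksheets are commonly quantified with a Risk Priority Number (RPN): the product of severity, occurrence and detection ratings, each conventionally scored 1-10. A minimal sketch, with an invented worksheet:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number used to rank failure modes (1-1000)."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings are conventionally 1-10")
    return severity * occurrence * detection

# Hypothetical worksheet rows: (failure mode, severity, occurrence, detection)
worksheet = [
    ("seal leaks under vibration", 8, 4, 3),
    ("connector mis-mated",        6, 2, 2),
    ("firmware watchdog missed",   9, 1, 7),
]
# Rank failure modes so that corrective effort goes to the highest RPN first.
ranked = sorted(worksheet, key=lambda row: rpn(*row[1:]), reverse=True)
for mode, s, o, d in ranked:
    print(f"{rpn(s, o, d):4d}  {mode}")
```

The rating scales and their interpretation are defined by the FMEA standard a team follows; the numbers above are purely illustrative.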

Objective of FMEA:

Identify and prevent safety hazards
Minimize loss of product performance or performance degradation
Improve test and verification plans (in the case of System or Design FMEAs)
Improve Process Control Plans (in the case of Process FMEAs)
Consider changes to the product design or manufacturing process
Identify significant product or process characteristics
Develop Preventive Maintenance plans for in-service machinery and equipment
Develop online diagnostic techniques

The three most common types of FMEAs are:

System FMEA (SFMEA)
Design FMEA (DFMEA)
Process FMEA (PFMEA)

SFMEA is the highest-level analysis of an entire system, made up of various subsystems. The focus is on
system-related deficiencies, including

System safety and system integration
Interfaces between subsystems or with other systems
Interactions between subsystems or with the surrounding environment
Single-point failures (where a single component failure can result in complete failure of the entire system)

DFMEA is at the subsystem level (made up of various components) or the component level. Design FMEA usually assumes the product will be manufactured according to specifications. The focus is on product design-related deficiencies, with emphasis on

Improving the design
Ensuring that product operation is safe and reliable during the useful life of the equipment
Interfaces between adjacent components

FEA

Finite Element Analysis (FEA) was first developed in 1943 by R. Courant, who utilized the Ritz method of
numerical analysis and minimization of variational calculus to obtain approximate solutions to vibration
systems. Shortly thereafter, a paper published in 1956 by M. J. Turner, R. W. Clough, H. C. Martin, and L. J.
Topp established a broader definition of numerical analysis. The paper centered on the "stiffness and
deflection of complex structures".

By the early '70s, FEA was limited to expensive mainframe computers generally owned by the aeronautics, automotive, defense, and nuclear industries. Since the rapid decline in the cost of computers and the phenomenal increase in computing power, FEA has been developed to incredible precision. Present-day supercomputers are now able to produce accurate results for all kinds of parameters.

FEA consists of a computer model of a material or design that is stressed and analyzed for specific results. It is used in new product design and in existing product refinement. A company is able to verify that a proposed design will be able to perform to the client's specifications prior to manufacturing or construction. FEA is also used when modifying an existing product or structure, to qualify it for a new service condition. In case of structural failure, FEA may be used to help determine the design modifications needed to meet the new condition.

There are generally two types of analysis that are used in industry: 2-D modeling, and 3-D modeling. While 2-
D modeling conserves simplicity and allows the analysis to be run on a relatively normal computer, it tends to
yield less accurate results. 3-D modeling, however, produces more accurate results while sacrificing the ability
to run on all but the fastest computers effectively. Within each of these modeling schemes, the programmer
can insert numerous algorithms (functions) which may make the system behave linearly or non-linearly.
Linear systems are far less complex and generally do not take into account plastic deformation. Non-linear
systems do account for plastic deformation, and many also are capable of testing a material all the way to
fracture.

FEA uses a complex system of points called nodes which make a grid called a mesh. This mesh is programmed
to contain the material and structural properties which define how the structure will react to certain loading
conditions. Nodes are assigned at a certain density throughout the material depending on the anticipated
stress levels of a particular area. Regions which will receive large amounts of stress usually have a higher node
density than those which experience little or no stress. Points of interest may consist of: fracture point of
previously tested material, fillets, corners, complex detail, and high stress areas. The mesh acts like a spider
web in that from each node, there extends a mesh element to each of the adjacent nodes. This web of vectors
is what carries the material properties to the object, creating many elements.
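The node/element idea can be made concrete in one dimension: a bar divided into rod elements, each contributing a spring-like stiffness that is assembled into a global matrix and solved for nodal displacements. This is only an illustrative sketch of the method, not production FEA code.

```python
def assemble(n_elements, k):
    """Assemble the global stiffness matrix for a 1D chain of rod
    elements, each acting like a spring of stiffness k between its
    two nodes."""
    n = n_elements + 1
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elements):
        K[e][e] += k;     K[e][e + 1] -= k
        K[e + 1][e] -= k; K[e + 1][e + 1] += k
    return K

def solve(A, b):
    """Plain Gaussian elimination (no pivoting; adequate for this
    symmetric positive-definite system)."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for col in range(n):
        for row in range(col + 1, n):
            factor = A[row][col] / A[col][col]
            for j in range(col, n):
                A[row][j] -= factor * A[col][j]
            b[row] -= factor * b[col]
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = sum(A[row][j] * x[j] for j in range(row + 1, n))
        x[row] = (b[row] - s) / A[row][row]
    return x

# Bar fixed at node 0, axial load F at the free end, 4 equal elements.
k, F = 1000.0, 50.0
K = assemble(4, k)
f = [0.0] * 5
f[-1] = F
# Fixed boundary condition: drop row/column 0 and solve for free nodes.
K_free = [row[1:] for row in K[1:]]
u = [0.0] + solve(K_free, f[1:])
print([round(x, 4) for x in u])  # displacement grows toward the loaded end
```

Real FEA codes do the same assemble-and-solve procedure over 2D/3D meshes with far richer element formulations.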

A wide range of objective functions (variables within the system) are available for minimization or
maximization:

Mass, volume, temperature
Strain energy, stress, strain
Force, displacement, velocity, acceleration

There are multiple loading conditions which may be applied to a system. Some examples are shown below:

Point, pressure, thermal, gravity, and centrifugal static loads
Thermal loads from solution of heat transfer analysis
Enforced displacements
Heat flux and convection
Point, pressure and gravity dynamic loads

Each FEA program may come with an element library, or one is constructed over time. Some sample elements
are:

Rod elements
Beam elements
Plate/Shell/Composite elements
Shear panel
Solid elements
Spring elements
Mass elements
Rigid elements

Many FEA programs are also equipped with the capability to use multiple materials within the structure, such as:

Isotropic, identical throughout
Orthotropic, identical at 90 degrees
General anisotropic, different throughout

Types of Engineering Analysis

Structural analysis consists of linear and non-linear models. Linear models use simple parameters and assume that the material is not plastically deformed. Non-linear models consist of stressing the material past its elastic capabilities. The stresses in the material then vary with the amount of deformation.

Vibrational analysis is used to test a material against random vibrations, shock, and impact. Each of these
incidences may act on the natural vibrational frequency of the material which, in turn, may cause resonance
and subsequent failure.

Fatigue analysis helps designers to predict the life of a material or structure by showing the effects of cyclic
loading on the specimen. Such analysis can show the areas where crack propagation is most likely to occur.
Failure due to fatigue may also show the damage tolerance of the material.

Heat Transfer analysis models the conductivity or thermal fluid dynamics of the material or structure. This may consist of a steady-state or transient transfer. Steady-state transfer refers to constant thermal properties in the material that yield linear heat diffusion.

Results of Finite Element Analysis

FEA has become a solution to the task of predicting failure due to unknown stresses by showing problem areas in a material and allowing designers to see all of the theoretical stresses within. This method of product design and testing is far cheaper than actually building and testing each sample, with the manufacturing costs that would accrue.

CFD

CFD, or Computational Fluid Dynamics, is a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial experimental validation of such software is performed using a wind tunnel, with the final validation coming in full-scale testing, e.g. flight tests.

The fundamental basis of almost all CFD problems is the Navier-Stokes equations, which define any single-phase (gas or liquid, but not both) fluid flow. These equations can be simplified by removing terms describing viscous action to yield the Euler equations. Further simplification, by removing terms describing vorticity, yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows (not transonic or hypersonic) these equations can be linearized to yield the linearized potential equations.
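As a small taste of the numerical methods involved, the sketch below advances the one-dimensional viscous (diffusion) term of the Navier-Stokes equations with an explicit finite-difference scheme. Real CFD solvers discretize the full equations on 2D/3D meshes; the grid and coefficients here are purely illustrative.

```python
def diffuse_1d(u, nu, dx, dt, steps):
    """Explicit finite-difference update for du/dt = nu * d2u/dx2,
    the viscous term of the Navier-Stokes equations in 1D.
    Numerically stable when nu*dt/dx**2 <= 0.5."""
    u = list(u)
    for _ in range(steps):
        new = u[:]                      # end values act as fixed boundaries
        for i in range(1, len(u) - 1):
            new[i] = u[i] + nu * dt / dx**2 * (u[i+1] - 2*u[i] + u[i-1])
        u = new
    return u

# Initial spike of velocity in the middle of the domain.
u0 = [0.0, 0.0, 1.0, 0.0, 0.0]
u = diffuse_1d(u0, nu=0.1, dx=1.0, dt=1.0, steps=5)
print([round(v, 3) for v in u])  # the spike spreads out and flattens
```

The stability limit in the docstring is why explicit CFD schemes take many small time steps; implicit schemes trade that limit for solving a linear system each step.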

FTA

Fault tree analysis (FTA) is a top down, deductive failure analysis in which an undesired state of a system is
analyzed using Boolean logic to combine a series of lower-level events. This analysis method is mainly used in
the fields of safety engineering and reliability engineering to understand how systems can fail, to identify the
best ways to reduce risk or to determine (or get a feeling for) event rates of a safety accident or a particular
system level (functional) failure. FTA is used in the aerospace, nuclear power, chemical and process,
pharmaceutical, petrochemical and other high-hazard industries; but is also used in fields as diverse as risk
factor identification relating to social service system failure.

In aerospace, the more general term "system Failure Condition" is used for the "undesired state" / Top event
of the fault tree. These conditions are classified by the severity of their effects. The most severe conditions
require the most extensive fault tree analysis. These "system Failure Conditions" and their classification are
often previously determined in the functional Hazard analysis.

Why FTA:

To exhaustively identify the causes of a failure
To identify weaknesses in a system
To assess a proposed design for its reliability or safety
To identify effects of human errors
To prioritize contributors to failure
To identify effective upgrades to a system
To quantify the failure probability and contributors
To optimize tests and maintenance

Fault Tree

Negative analytical trees or fault trees are excellent troubleshooting tools. They can be used to prevent or
identify failures prior to their occurrence, but are more frequently used to analyze accidents or as investigative
tools to pinpoint failures. When an accident or failure occurs, the root cause of the negative event can be
identified. Each event is analyzed by asking, "How could this happen?" In answering this question, the
primary causes and how they interact to produce an undesired event are identified. This logic process
continues until all potential causes have been identified. Throughout this process, a tree diagram is used to
record the events as they are identified. Tree branches stop when all events leading to the negative event are
complete. Symbols are used to represent various events and describe relationships:

And gate - represents a condition in which all the events shown below the gate (input gate) must be present
for the event shown above the gate (output event) to occur. This means the output event will occur only if all
of the input events exist simultaneously.

Or gate - represents a situation in which any of the events shown below the gate (input gate) will lead to the
event shown above the gate (output event). The event will occur if only one or any combination of the input
events exists.
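Assuming independent basic events with known probabilities, the two gates map directly onto probability arithmetic: an AND gate multiplies its input probabilities, and an OR gate is the complement of no input occurring. A sketch with invented probabilities:

```python
def and_gate(*probabilities):
    """Output event occurs only if all inputs occur (independent events)."""
    p = 1.0
    for q in probabilities:
        p *= q
    return p

def or_gate(*probabilities):
    """Output event occurs if at least one input occurs (independent events)."""
    p_none = 1.0
    for q in probabilities:
        p_none *= 1.0 - q
    return 1.0 - p_none

# Toy fault tree: top event = (pump fails OR valve sticks) AND alarm fails.
p_top = and_gate(or_gate(0.01, 0.02), 0.05)
print(f"{p_top:.6f}")
```

Quantitative FTA on real systems must also handle dependent events and repeated basic events, which this independence assumption ignores.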

There are five types of event symbols:

1. Rectangle - The rectangle is the main building block for the analytical tree. It represents the negative event
and is located at the top of the tree and can be located throughout the tree to indicate other events capable
of being broken down further. This is the only symbol that will have a logic gate and input events below it.

2. Circle - A circle represents a base event in the tree. These are found on the bottom tiers of the tree and
require no further development or breakdown. There are no gates or events below the base event.

3. Diamond - The diamond identifies an undeveloped terminal event. Such an event is one not fully
developed because of a lack of information or significance. A fault tree branch can end with a diamond. For
example, most projects require personnel, procedures, and hardware. The tree developer may decide to
concentrate on the personnel aspect of the procedure and not the hardware or procedural aspects. In this case
the developer would use diamonds to show procedures and hardware as undeveloped terminal events.

4. Oval - An oval symbol represents a special situation that can only happen if certain circumstances occur.
This is spelled out in the oval symbol. An example of this might be if switches must be thrown in a specific
sequence before an action takes place.

5. Triangle - The triangle signifies a transfer of a fault tree branch to another location within the tree. Where
a triangle connects to the tree with an arrow, everything shown below the connection point transfers to
another area of the tree. This area is identified by a corresponding triangle that is connected to the tree with a
vertical line. Letters, numbers or figures identify one set of transfer symbols from another. To maintain the
simplicity of the analytical tree, the transfer symbol should be used sparingly.

FTA involves the following steps:

1. Define the top event.
2. Know the system.
3. Construct the tree.
4. Validate the tree.
5. Evaluate the tree.
6. Study tradeoffs.
7. Consider alternatives and recommend action.

Define the top event: To define the top event the type of failure to be investigated must be identified. This
could be whatever the end result of an incident may have been, such as a forklift overturning.

Determine all the undesired events in operating a system: Separate this list into groups having common
characteristics. Several FTAs may be necessary to study a system completely. Finally, one event should be
established representing all events within each group. This event becomes the undesired event to study.

Know the system: All available information about the system and its environment should be studied. A job
analysis may prove helpful in determining the necessary information.

Construct the fault tree. This step is perhaps the simplest because only a few symbols are involved and the actual construction is pretty straightforward. Principles of construction: the tree must be constructed using the event symbols listed above. It should be kept simple. Maintain a logical, uniform, and consistent format from tier to tier. Use clear, concise titles when writing in the event symbols. The logic gates used should be restricted to the AND gate and OR gate, with constraint symbols used only when necessary. An example would be the use of the oval constraint symbol to illustrate a necessary order of events that must happen for an event to occur. The transfer triangle should be used sparingly, if at all; the more it is used, the more complicated the tree becomes. The aim is to keep the tree as simple as possible.

Validate the tree. This requires allowing a person knowledgeable in the process to review the tree for
completeness and accuracy.

Evaluate the fault tree. The tree should then be scrutinized for those areas where improvements in the
analysis can be made or where there may be an opportunity to utilize alternative procedures or materials to
decrease the hazard.

Study tradeoffs. In this step, any alternative methods that are implemented should be further evaluated. This allows evaluators to see any problems that may be associated with the new procedure prior to implementation.

Consider alternatives and recommend action: This is the last step in the process, where corrective action or
alternative measures are recommended.
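The gate logic behind the construction and evaluation steps above can be sketched in code. This is a minimal illustrative sketch, not part of the handbook: the event names and probabilities are made up, and it assumes the basic events are statistically independent.

```python
def and_gate(*probs):
    # AND gate: the output event occurs only if ALL inputs occur,
    # so P = product of the input probabilities (independence assumed).
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(*probs):
    # OR gate: the output event occurs if ANY input occurs,
    # so P = 1 - product of (1 - p_i) (independence assumed).
    q = 1.0
    for x in probs:
        q *= (1.0 - x)
    return 1.0 - q

# Hypothetical top event: "forklift overturns" if
# (overload AND uneven ground) OR brake failure.
p_overload, p_uneven, p_brakes = 0.05, 0.10, 0.002  # made-up probabilities
p_top = or_gate(and_gate(p_overload, p_uneven), p_brakes)
print(round(p_top, 5))
```

Evaluating the tree this way makes the tradeoff study concrete: lowering any basic-event probability immediately shows its effect on the top event.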

3.4.4 Introduction to product validation processes and stages Industry specific (Sub-system Testing/
Functional Testing/ Performance Testing/ Compliance Testing)

Introduction -

Product Verification and validation is the process of ensuring that a product design meets requirements.
Verification confirms that products properly reflect the requirements specified for them, ensuring that you
built it right. Validation confirms that the product, as provided, will fulfill its intended use, ensuring that
you built the right thing. Typical inputs are requirements and design representations such as 2D/3D
mechanical CAD drawings and models, electrical schematics, and software code. Typical outputs are a
determination of whether the design component or system met requirements, descriptions of failure modes,
summarized test results, and recommendations for design improvement.
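The "built it right" versus "built the right thing" distinction can be illustrated with a small sketch. The spec fields, product attributes, and rating threshold below are all hypothetical, chosen only to contrast the two checks.

```python
# Hypothetical requirement spec for a product design.
spec = {"max_weight_kg": 1.5, "min_battery_h": 10}

def verify(product, spec):
    # Verification: does the product reflect the written requirements?
    return (product["weight_kg"] <= spec["max_weight_kg"]
            and product["battery_h"] >= spec["min_battery_h"])

def validate(user_ratings, threshold=4.0):
    # Validation: does the product fulfil its intended use for real users?
    # Here approximated by a mean user rating out of 5 (illustrative only).
    return sum(user_ratings) / len(user_ratings) >= threshold

prototype = {"weight_kg": 1.2, "battery_h": 12}
print(verify(prototype, spec))   # "built it right?"
print(validate([5, 4, 4, 5]))    # "built the right thing?"
```

A product can pass verification yet fail validation: it meets every written requirement but still does not satisfy users, which is why both checks are performed.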

There are two main types of product validation i.e. internal and external.

Internal product validation: When the testing / validation is done within the same organisation, it is called
internal product validation. There are three stages of internal product validation:

Alpha testing: In this stage, a small number of prototypes are tested to check the specifications, performance,
etc. Once this test is cleared, beta testing is done.

Beta testing: In this stage, a few products made by the manufacturing units are tested in order to check
the manufacturability of the product with reference to the production capabilities of the plant. On
passing this test, the product is ready for production.

Release: The first batch of the production is tested for its performance, compliance, etc. Many
alpha and beta stage tests are repeated. If the product fails to pass the test, then there are chances that
production of the referred product may stop.

External product validation: When testing is done by the customers or any other external agency, it is called
external product validation. There are three stages of external product validation:

Testing of working models: Prototypes or working models are given to customers and their feedback is
taken back directly to the R&D department.
Free trials: A few beta products are given to specific customers for a short duration of time to get their
feedback. This may bring any possible deficiency up for observation.
Pre-release: Products of the first batch of the production are given to the customers. The feedback
received at this stage is almost similar to that of the real market response.

There are basically five types of tests which are carried out on products for the purpose of validation. These
tests are briefly described in Table 3.7.

Table 3.7. Categories of test

Source: TCS
T

Time line of product validation: Figure 3.24 shows the time line of the product validation process.

Figure 3.24. Product validation time line

Source: TCS

Various phases of this time line are discussed further.

Component testing: In this stage, individual components (e.g. piston, spark plug) are tested (functional,
endurance, abuse testing). This stage of testing is very important, because if a component does not withstand
the requirements, it may result in failure of the complete product.

Subsystem testing: In this stage, subsystems (e.g. gear box) are tested for their specifications. These tests
are basically functional and endurance tests.

System testing: In this step, individual systems (e.g. braking system) are tested for performance, endurance
and function. Sometimes a defect which remains undetected at the subsystem testing stage is detected in
system testing.

Product level testing: In this stage, the complete product is tested for performance and fulfillment of
requirements. The tests at this stage include functional tests, performance tests, field tests, endurance tests,
drop tests, crash tests, abuse tests, etc. Without passing through this stage, no product gets approval for
production.

Compliance testing: In this stage, products are tested to check whether they fulfill the standards set by the
government bodies e.g. environmental and safety regulations.
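The stage ordering in the timeline above can be sketched as a simple gating function. This is an illustrative sketch only; the function and result strings are not from the handbook, but the stage names mirror the timeline.

```python
# Stages in timeline order: a product advances only after passing each one.
STAGES = ["component", "subsystem", "system", "product", "compliance"]

def run_validation(results):
    # Walk the timeline in order and stop at the first failing stage;
    # a missing stage counts as not yet passed.
    for stage in STAGES:
        if not results.get(stage, False):
            return "failed at {} testing".format(stage)
    return "approved for production"

print(run_validation({s: True for s in STAGES}))
print(run_validation({"component": True, "subsystem": False}))
```

The sequencing matters because a defect caught at component level is far cheaper to fix than the same defect surfacing during product-level or compliance testing.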

Examples of product validation:

1. Validation of an automobile: The validation process of an automobile is given in Table 3.8.

Table 3.8. Validation process of automobile

Source: TCS

2. Validation of a mobile: The validation process of a mobile is given in Table 3.9.

Table 3.9. Validation process of mobile

Source: TCS

3.4.5 Product Testing standards and certification Industry specific

Product Testing: Product testing, also called consumer testing or comparative testing, is a process of
measuring the extent to which a product fulfills the claims made by the manufacturer, often in comparison to
other similar products. The theory is that since the advent of mass production manufacturers
produce branded products which they assert and advertise to be identical within some technical standard.
Product testing seeks to ensure that consumers can understand what products will do for them and which
product offers the best value. Product testing is a strategy to increase consumer protection by checking the
claims made during marketing strategies such as advertising, which by their nature are in the interest of the
entity distributing the service and not necessarily in the interest of the consumer.

Types of testing standards:

Functional Test
Performance Test
Efficiency Test
Safety/Security Test (For Software)

Product certification: Product certification or product qualification is the process of certifying that a certain
product has passed performance tests and quality assurance tests, and meets qualification criteria stipulated
in contracts, regulations, or specifications (typically called "certification schemes" in the product certification
industry).

Most product certification bodies (or product certifiers) are accredited to ISO/IEC Guide 65:1996, an
international standard for ensuring competence in those organizations performing product certifications. The
organizations which perform this accreditation are called Accreditation Bodies, and they themselves are
assessed by international peers against the standard. Accreditation bodies which participate in the
International Accreditation Forum (IAF) Multilateral Agreement (MLA) also ensure that these accredited
Product Certifiers meet additional requirements set forth in "IAF GD5:2006 - IAF Guidance on the Application
of ISO/IEC Guide 65:1996".

Examples of some certification schemes include the Safety Equipment Institute for protective headgear, the
U.S. Federal Communications Commission (FCC) Telecommunication Certification Body (TCB) program for
radio communication devices, the U.S. Environmental Protection Agency Energy Star program,
the International Commission on the Rules for the Approval of Electrical Equipment Product Safety
Certification Body Scheme (IECEE CB Scheme), and the Greenguard Environmental Institute Indoor Air Quality
program. Certification schemes are typically written to include both the performance test methods that the
product must be tested to, as well as the criteria which the product must meet to become certified.

Certification process: A product might be verified to comply with a specification or stamped with a
specification number. This does not, by itself, indicate that the item is fit for any particular use. The person or
group of persons who own the certification scheme (i.e., engineers, trade unions, building code writers,
government, industry, etc.) have the responsibility to consider the choice of available specifications, choose
the correct ones, set qualification limits, and enforce compliance with those limits. The end users of the
product have the responsibility to use the item correctly. Products must be used in accordance with their
listing for certification to be effective. Product certification is often required in sensitive industry and
marketplace areas where a failure could have serious consequences, such as negatively affecting the health
and welfare of the people using that product. For example, certification is stringent
in aerospace applications, since the demands for low weight tend to lead to high stress on components,
requiring appropriate metallurgy and accuracy in manufacturing. Other sensitive product area examples
include food, pharmaceuticals, healthcare products, dangerous goods, and products which have RF
emissions such as computers and cellular telephones.

The process for certification of a product is generally summed up in four steps:

Application (including testing of the product)
Evaluation (does the test data indicate that the product meets the qualification criteria?)
Decision (does a second review of the product application concur with the evaluation?)
Surveillance (does the product in the marketplace continue to meet the qualification criteria?)
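The four certification steps can be sketched as a tiny gated workflow. This is an illustrative sketch; the function, enum, and result strings are hypothetical and do not come from any real certification body's process.

```python
from enum import Enum

class Step(Enum):
    APPLICATION = 1
    EVALUATION = 2
    DECISION = 3
    SURVEILLANCE = 4

def certify(test_passed, evaluation_ok, decision_concurs):
    # Each step is a gate; certification is granted only if every gate
    # passes, after which the product stays under market surveillance.
    if not test_passed:
        return (Step.APPLICATION, "rejected")
    if not evaluation_ok:
        return (Step.EVALUATION, "rejected")
    if not decision_concurs:
        return (Step.DECISION, "rejected")
    return (Step.SURVEILLANCE, "certified")

print(certify(True, True, True))
```

Note that surveillance is not a one-time gate: a certified product that later fails surveillance can lose its certification, which the sketch does not model.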

3.4.6 Product Documentation (Compliance Documentation, Catalogue, Brochures, User Manual,
Maintenance Manual, Spare Parts List, Warranty, Disposal Guide, IETMs, Web Tools)

Introduction: Product documentation refers to any type of documentation that describes the handling,
functionality and architecture of a technical product or a product under development or use. The intended
recipient of product technical documentation is both the (capable) end user and the administrator, service
or maintenance technician. In contrast to a mere "cookbook" manual, technical documentation aims
at providing enough information for a user to understand the inner and outer dependencies of the product at
hand. The technical writer's task is to translate the usually highly formalized technical documentation
produced during the development phase into a more readable form, so that it is easy for users to understand,
maintain and use. Product documentation takes the form of technical documentation, catalogues, brochures,
user manuals, maintenance manuals, spare parts lists, information about product warranty, disposal guides, etc.
The details of product documentation are as follows:

Technical documentation: The term 'technical documentation' refers to different documents with product-
related data and information that are used and stored for different purposes. Different purposes mean:
Product definition and specification, design, manufacturing, quality assurance, product liability, product
presentation; description of features, functions and interfaces; intended, safe and correct use; service and
repair of a technical product as well as its safe disposal.

This broader view, in which all documents that are generated during the product life cycle are viewed as part
of the technical documentation, is certainly justified. After all, the aim is to make the technical
know-how and product history available to subsequent users of the information (be they engineers or operators,
patent agents or public prosecutors specializing in product liability).

The focus for service providers in the field of technical documentation is, however, mainly on documents that
are required after the production process by sales people, system integrators, installation staff, operators,
service technicians, waste disposal companies etc. The reasons are simple:

Great demands are placed on the documents in terms of comprehensibility and clarity (with respect to
the specific target group), graphical design, adherence to standards/directives / public laws, linguistic
correctness etc.

The documents are passed on to the public, i.e. are part of the public presentation of the manufacturer

For the design of the documents, relatively little manufacturer-specific knowledge and know-how
(especially no company secrets) is normally required. Instead, a lot of experience with the tools and
target media is required, which becomes particularly apparent in the case of an online publication such as a
help system (WinHelp, HTML Help, JavaHelp or "simply" DHTML help).

Catalogue: A catalogue gives comprehensive details about a product from the manufacturing company or
developing organization.

Brochures: A brochure (also referred to as a pamphlet) is a leaflet. Brochures are advertising pieces mainly
used to introduce a company or organization, and inform about products and/or services to a target audience.
Brochures are distributed by mail, handed out personally or placed in brochure racks. They are usually present
near tourist attractions. The most common types of single-sheet brochures are the bi-fold (a single sheet
printed on both sides and folded into halves) and the tri-fold (the same, but folded into thirds). A bi-fold
brochure results in four panels (two panels on each side), while a tri-fold results in six panels (three panels on
each side). Other folder arrangements are possible: the accordion or "Z-fold" method, the "C-fold" method,
etc. Larger sheets, such as those with detailed maps or expansive photo spreads, are folded into four, five, or
six panels. When two card fascia are affixed to the outer panels of the z-folded brochure, it is commonly
known as a "Z-card". Booklet brochures are made of multiple sheets most often saddle stitched (stapled on
the creased edge) or "perfect bound" like a paperback book, and result in eight panels or more. Brochures are
often printed using four color process on thick glossy paper to give an initial impression of quality. Businesses
may turn out small quantities of brochures on a computer printer or on a digital printer, but offset
printing turns out higher quantities for less cost. Compared with a flyer or a handbill, a brochure usually uses
higher-quality paper, more color, and is folded.

Spare parts list: A spare part, spare, service part, repair part, or replacement part, is an interchangeable
part that is kept in an inventory and used for the repair or replacement of failed parts. Spare parts are an
important feature of logistics management and supply chain management, often comprising dedicated spare
parts management systems.

IETMs: An IETM, or Interactive Electronic Technical Manual, is a portal to manage technical documentation.
IETMs compress volumes of text into CD-ROMs or online pages which may include sound and video, and
allow readers to locate needed information far more rapidly than in paper manuals. IETMs came into
widespread use in the 1990s through huge technical documentation projects for the aircraft and defense industries.
In the late 1970s, the U.S. military services formulated concepts for Interactive Electronic Technical Manuals
(IETMs) to replace Technical Manuals (TMs) presented on paper and microform. Comprehensive research and
development programs were conducted, including the Navy Technical Information Presentation System
(NTIPS) and the Air Force Computer-based Maintenance Aided System (CMAS). In the 1980s, pilot systems
were developed and tested under operational conditions. Significant quantitative payoffs were demonstrated,
with overwhelming field-user preference for IETMs over paper-based TMs. Based on these successes, the Joint
Industry/Government Pageless TM Committee was formed and worked to standardize IETM approaches and
technology. The Tri-Service IETM Working Group developed DoD (Department of Defense) specifications for
the acquisition of IETMs, and the CALS ISG Standards Division reviewed and concurred with these
specifications. IETM authoring and presentation systems have emerged in the commercial marketplace. In
the 1990s, DoD programs acquired IETMs to support weapon systems such as Paladin, Apache,
Comanche, AEGIS, FDS, BSY-2, F-22, JSTARS, and V-22. Commercial applications are underway in the airline,
automotive, and railroad industries. Internationally, IETMs are proposed for the NATO NH-90 helicopter and
the European Fighter Aircraft. Thus, IETMs have progressed from concepts, through pilot development, field
testing, and standardization, to military and commercial implementations. Clearly, IETMs have moved "from
research to reality".

Common IETM Standards

MIL-PRF-87268/9 (U.S. DoD)
Metafile for Interactive Documents (U.S. Navy)
MIL-STD-2361 (U.S. Army)
MIL-PRF-28001C (CALS)
STEP Product Documentation (ISO)
S1000D

Classes of IETMs

Class I: Electronically Indexed Page Images
Class II: Electronic Scrolling Documents
Class III: Linearly Structured IETMs
Class IV: Hierarchically Structured IETMs
Class V: Integrated Database IETMs

User manual: The term itself says that it is about using the product. Because "manual" is usually associated
with a book, the user manual is the book in which the usage of the product is described; the publication
medium is thereby specified. The term "user instructions", on the other hand, is media-independent.

Maintenance manual: A maintenance manual gives the user instructions on how to use the product and how to
maintain it when it is not working properly.

Warranty: A warranty is, in general, a promise or guarantee. It provides assurance by one party to the
other party that specific facts or conditions are true or will happen. This factual guarantee may be enforced
regardless of materiality, which allows for a legal remedy if that promise is not true or followed. Although
warranties are used in many contractual circumstances, they are a common feature in consumer
law for automobiles or real estate purchases. For example, new car sales typically include a factory warranty
which guarantees against the malfunction of the car for a certain time period. In real estate transactions, a
general warranty deed may promise good title to a parcel of land while a limited warranty provides a limited
guarantee of good title.

A warranty may be express or implied, depending on whether the warranty is explicitly provided and the
jurisdiction. Warranties may also state that a particular fact is true at one point in time or that the fact will be
continuing into the future (a "promissory" or continuing warranty). Warranty data consists of claims data and
supplementary data. Claims data are the data collected during the servicing of claims under warranty and
supplementary data are additional data such as production and marketing data. This data can help determine
product reliability and plan for future modifications.
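One elementary use of claims and production data together is a warranty claim rate. This is an illustrative sketch with made-up figures; real warranty analysis would also account for time in service and censoring.

```python
# Hypothetical supplementary (production) and claims data.
units_sold = 12_000
claims_in_warranty = 180

# Claims per unit sold over the warranty period.
claim_rate = claims_in_warranty / units_sold
print("claim rate: {:.2%}".format(claim_rate))
```

Tracking this rate across production batches is one simple way such data feeds back into reliability assessment and planning of future modifications.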

Terminal questions

1. Write short notes on:
a) Technical documentation
b) Catalogue
c) IETMs
d) Warranty
e) CFD
f) FEA
g) FMEA
h) DFMEA
2. What is the importance of integration of user interface design with industrial design?
3. Briefly explain the architecture of microcontroller.
4. State difference between schematic, block and circuit diagram.
5. What are the various concept generating techniques?
6. What are prototypes? Describe the different types of rapid prototyping technologies. What is the
difference between Rapid Prototyping and Rapid Manufacturing?

References

1. Kevin Otto, Kristin Wood, Product Design: Techniques in Reverse Engineering and New Product
Development, Pearson, India, 2001.
2. Björn Hartmann, Gaining Design Insight Through Interaction Prototyping Tools, Ph.D. Dissertation,
Stanford University Computer Science Department, September 2009.
3. Beaudouin-Lafon & Mackay, Prototype Development and Tools.
4. Houde, S., and Hill, C., What Do Prototypes Prototype?, in Handbook of Human-Computer Interaction
(2nd Ed.), M. Helander, T. Landauer, and P. Prabhu (eds.), Elsevier Science B.V., Amsterdam, 1997.
5. M. Stanek, D. Manas, M. Manas, J. Navratil, K. Kyas, V. Senkerik and A. Skrobak, Comparison of
Different Rapid Prototyping Methods, International Journal of Mathematics and Computers in
Simulation, Issue 6, Volume 6, 2012, p. 550-557.
6. Huy Nguyen and Michael Vai, RAPID Prototyping Technology, Lincoln Laboratory Journal, Volume 18,
Number 2, 2010, p. 17-27.
7. Mihaela Iliescu, Kamran Tabeshfar, Adelina Ighigeanu, Gabriel Dobrescu, Importance of Rapid
Prototyping to Product Design, U.P.B. Sci. Bull., Series D, Vol. 71, Issue 2, 2009, p. 118-124.
8. Pulak M. Pandey, Rapid Prototyping Technologies, Applications and Part Deposition Planning.
9. Burgess, John A., Design Assurance for Engineers and Managers, Marcel Dekker, Inc., New York, 1984,
pp. 150-165.
10. MIL-STD-1540D, Product Verification Requirements for Launch, Upper Stage and Space Vehicles, U.S.
Department of Defense, Government Printing Office, Washington, D.C., 1999.
11. "Designing in a Big Way - a practical guide for managers", High Level Designs Ltd.
12. R. S. Khandpur, Printed Circuit Boards: Design, Fabrication, Assembly and Testing, Tata McGraw-Hill,
p. 10, 2005.
13. Cem Kaner, Exploratory Testing, Quality Assurance Institute Worldwide Annual Software Testing
Conference, Florida Institute of Technology, Orlando, FL, November 2006.
14. Jiantao Pan, Software Testing, Carnegie Mellon University.
15. Gelperin, D.; B. Hetzel, "The Growth of Software Testing", CACM 31 (6): 687, 1988.
16. http://www.lia.org
17. Internet materials, YouTube, etc.

Module 4
Sustenance Engineering and
End-of-Life (EOL) Support

Sustenance Engineering and End-of-Life (EOL) Support

4.1 Sustenance

Sustenance is the action of retaining someone or something in life or existence. In engineering terms it can
be considered the action of upholding a product's existence in the market. It involves the optimization of the
product or equipment, procedures, and departmental budgets to achieve better maintainability, reliability,
and availability of the product or equipment. Considering the demand for newer product versions and continuous
customer requests for effective servicing, any company's real challenges arise after a product is successfully
launched. Besides, the existence of multiple competitors in the market adds to the pressure, compounding
the risk associated with the need to change. Resolving this and other issues related to product updates,
enhancements, customizations, servicing, etc. can help boost customer satisfaction levels. However,
tackling all these at a global level requires multiple teams across locations, adding to the time, cost, and
resources involved. Basically, the three most important ways of sustaining a product in the market are
maintenance, repair, and enhancements or upgrades. Therefore, end-of-life support is essential
for any product.

Objectives:
The following sessions will enable you:

To identify the importance of maintenance and repair in product sustenance
To study product enhancements and upgrades and their role in product sustenance
To manage product obsolescence and product configuration
To understand the EOL disposal of products

End-of-life (EOL) Support


EOL is a term used with respect to a product supplied to customers, indicating that the product is at the end
of its useful life and that the vendor will no longer be marketing, selling, or sustaining it. The vendor may also be
limiting or ending support for the product. In the specific case of product sales, the term end-of-sale (EOS)
has also been used. The lifetime after the last production date depends on the product and is related to
the customer's expected product lifetime. Examples of different lifetimes include toys and fast food chain items
(weeks or months), cars (10 years), and mobile phones (3 years).
Product support during EOL varies by product. For hardware with an expected lifetime of 10 years after
production ends, the support includes spare parts, technical support and service. Spare part lifetimes are
price-driven due to increasing production costs: when the parts can no longer be supplied through a high-
volume production site (often closed when series production ends), the cost increases.

In the computing field, this has significance for the production and supportability of software and hardware
products. For example, Microsoft marked Windows 98 for end-of-life on June 30, 2006; Microsoft software
produced after that date, such as Office 2007 (released November 30, 2006), is not supported on Windows 98 or
any prior versions. Depending on the vendor, this may differ from end of service life, which has the added
distinction that a system or software will no longer be supported by the vendor providing support. Many hardware
products are now engineered with end of life in mind. End of life ultimately leads to the concept of disposal:
what is done with the product after its useful life is over. Therefore, maintenance and repair are
required to sustain performance and support the productivity of the system.

4.1.1 Maintenance and Repair:

Maintenance is the activity performed to reduce the unavailability of a product, software or service.
Maintenance is a set of organized activities that are carried out in order to retain a product or software in its
best operational condition, or an acceptable productivity condition, or the operation-ready state, with minimum
cost incurred.

Engineering maintenance is an important sector of the economy. To be able to compete successfully both at
national and international levels, production systems and equipment must perform consistently at very high
levels. Requirements for increased product quality, reduced production time and enhanced operating
performance within a rapidly changing customer demand environment continue to demand a high
maintenance performance. In some cases, maintenance is required to increase operational efficiency and
revenues and customer satisfaction while reducing capital, operating and support costs. This may be the
largest challenge facing production enterprises these days. For this, maintenance strategy is required to be
aligned with the production logistics and also to keep updated with the current best practices.

The main activities of maintenance include repair and replacement. Repair involves reconditioning achieved
by adjusting and correctly setting up the machine as per the initial specification. It may include replacements,
overhauling, adjustments, etc. Therefore, repair can be defined as the set of organized activities conducted to
get equipment working again after it has broken down; it is also known as breakdown or corrective
maintenance. Thus repair is a subset of maintenance. Overhaul can be defined as a comprehensive
inspection and restoration of an item or a piece of equipment to an acceptable level in terms of durability,
time, or usage limit.

On the other hand, replacement involves the change of worn-out parts with new parts.

Adjustment can be defined as a small alteration or movement made to achieve a desired fit, appearance, or
result. It involves periodic alteration of specified variable elements of equipment for the purpose of achieving
optimum system performance.

Evolution of Maintenance

Over the past century, the philosophy behind maintenance has changed drastically. In the pre-World War era,
the idea of maintenance was limited to fixing the machine after it broke down, but it was easy to see the
limitations of this approach. When a machine failed, it often led to a decline in production. Also, if the equipment
needed to be replaced, the cost of replacing it alone was substantial. So the impact on the efficiency of the
production system was very high. Health, safety and environment issues related to malfunctioning equipment were
additional problems. For these obvious reasons, maintenance was considered an added cost to the process,
non-value-adding to the production system.

But over time, industries began to accept maintenance as a positive activity. Today the idea of
maintenance involves activities performed to prevent breakdowns and avoid failures, unnecessary production
loss and safety violations. The equipment is maintained before a breakdown occurs, thus reducing the impact
on the efficiency of the production system. Today industries consider maintenance an indispensable
part of the production system. The main objectives of maintenance are as follows:

Maximizing production or increasing facilities availability at the lowest cost and at the highest quality
and safety standards.
Reducing critical breakdowns and emergency shutdowns.
Optimizing resources utilization.
Reducing downtime that is reducing the time period during which a system fails to perform its primary
function.
Improving spares stock control
Improving equipment efficiency and reducing scrap rate
Minimizing energy usage.
Providing reliable cost and budgetary control.
Identifying and implementing cost reductions.
Enhancing the productive life of capital equipment.
Allowing better planning and scheduling of needed maintenance work.
Minimizing production losses due to equipment failures.
Promoting the health and safety of maintenance personnel.
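Several of the objectives above (reducing downtime, reducing breakdowns) are commonly tracked through the standard availability metric, Availability = MTBF / (MTBF + MTTR). The numbers in this sketch are illustrative.

```python
def availability(mtbf_hours, mttr_hours):
    # Fraction of time the equipment is operational:
    # MTBF = mean time between failures, MTTR = mean time to repair.
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical figures: equipment runs 200 h between failures on average
# and takes 8 h to repair.
print(round(availability(200.0, 8.0), 4))  # 0.9615
```

The formula makes the two maintenance levers explicit: availability improves either by making failures rarer (raising MTBF) or by repairing them faster (lowering MTTR).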
Categories of maintenance

The different categories of maintenance are shown in figure 4.1. The two main categories are unplanned
maintenance and planned maintenance.

Figure 4.1 Categories of Maintenance

Unplanned Maintenance: It is also known as run-to-failure maintenance or reactive maintenance. It is the
repair, replacement, or restoration action performed on a machine or a facility after the occurrence of a
failure, in order to bring the machine or facility to at least its minimum acceptable condition. There are two
types of unplanned maintenance:

1) Emergency Maintenance: It is carried out as fast as possible in order to bring a failed machine or facility to
a safe and operationally efficient condition. Example: a temporary fix applied to the onboard navigation system
of an aircraft until it completes its journey.

2) Breakdown maintenance: It is performed after the occurrence of a failure for which advance
provision has been made in the form of repair methods, spares, materials, labour and equipment. Example: a
proper repair carried out on the onboard navigation system so that it is back in service as per specifications.

Disadvantages of Unplanned Maintenance:

Its activities are expensive in terms of both direct and indirect cost.
Using this type of maintenance, the occurrence of a failure in a component can cause failures in other
components in the same equipment, which leads to low production availability.
Its activities are very difficult to plan and schedule in advance.

Advantages of Unplanned Maintenance

Unplanned maintenance is useful when:

The failure of a component in a system is unpredictable.
The cost of performing run-to-failure maintenance activities is lower than that of performing other
types of maintenance.
The equipment failure priority is too low to include the activities of preventing it within the
planned maintenance budget.

Planned Maintenance: It is also known as proactive maintenance. This type of maintenance is carried out at
predetermined intervals or according to prescribed criteria, and is intended to reduce the probability of failure
or the degradation of functioning and to limit its effects. It is notably targeted at satisfying all the
objectives of maintenance. It can be further classified into four types:

1) Predictive Maintenance: Predictive maintenance is a set of activities that detect changes in the physical
condition of equipment (signs of failure) in order to carry out the appropriate maintenance work to
maximize the service life of the equipment without increasing the risk of failure. It is classified into two kinds
according to the method of detecting the signs of failure:

Condition-based predictive maintenance: It depends on continuous or periodic condition monitoring of
equipment to detect the signs of failure. Example: Using the vehicle brake wear indicator (small pointer)
to determine when to change the brake pads, or on-board diagnostic lights such as battery / engine oil
indicators to determine the time for maintenance.
Statistical-based predictive maintenance: It depends on statistical data from the meticulous recording
of the stoppages of in-plant items and components, in order to develop models for predicting failures.
Example: Using the wear-out rates of tools in similar machines to determine the change frequency.
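The difference between the two kinds can be sketched in code. The following is a minimal illustration of condition-based monitoring, assuming a simple threshold model; the component names, thresholds and readings are made-up examples, not values from any real system.

```python
# Minimal sketch of condition-based predictive maintenance: compare
# periodic sensor readings against wear thresholds and flag components
# that show signs of failure. All values are illustrative assumptions.

WEAR_THRESHOLDS = {
    "brake_pad_mm": 3.0,       # flag when pad thickness drops below 3 mm
    "engine_oil_level": 0.25,  # flag when oil level drops below 25%
}

def check_condition(readings):
    """Return the components whose latest reading breaches its threshold."""
    return [component
            for component, threshold in WEAR_THRESHOLDS.items()
            if readings.get(component, float("inf")) < threshold]

readings = {"brake_pad_mm": 2.4, "engine_oil_level": 0.6}
print(check_condition(readings))  # only the brake pads are below the limit
```

A statistical-based variant would instead fit a failure model to recorded stoppage data and schedule the change at the predicted wear-out time.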

Limitation of Predictive Maintenance

The main drawback of predictive maintenance is that it depends heavily on information and the correct
interpretation of the information.

2) Preventive maintenance: Preventive maintenance is maintenance performed in an attempt to avoid
failures, unnecessary production loss and safety violations. In this type of maintenance, equipment is
maintained before breakdown occurs; in other words, preventive maintenance is committed to the
elimination or prevention of corrective and breakdown maintenance tasks. It has many different variations
and is the subject of various studies to determine the best and most efficient way to maintain equipment. It
is effective in preventing age-related failures of the equipment. For random failure patterns, which amount
to eighty percent of failure patterns, condition monitoring proves to be effective. In most plants, preventive
maintenance is limited to periodic lubrication, adjustments, and other time-driven maintenance tasks. Such
programs are not true preventive programs; in fact, most continue to rely on breakdowns as the principal
motivation for maintenance activities. A comprehensive preventive maintenance program will include
predictive maintenance, time-driven maintenance tasks, and corrective maintenance to provide
comprehensive support for all plant production or manufacturing systems. It will use regular evaluation of
critical plant equipment, machinery, and systems to detect potential problems and immediately schedule
maintenance tasks that will prevent any degradation in operating condition. Preventive maintenance can be
classified into:
Routine Maintenance: This type of maintenance is repetitive and periodic in nature, such as lubrication,
cleaning, and small adjustments. Example: Engine oil change, brake adjustment.
Running Maintenance: This type of maintenance is carried out while the machine or equipment is
running, and represents those activities that are performed before the actual preventive maintenance
activities take place. Example: Cleaning of a machine's non-moving parts.
Opportunity Maintenance: This type of maintenance is carried out when an unplanned opportunity
arises during the period of performing planned maintenance activities on other machines or facilities.
Example: Lubrication of machine parts when another machine in the line breaks down.
Window Maintenance: This type of maintenance is carried out when a machine or equipment is not
required for a definite period of time. Example: Maintenance of a vehicle when its owner is out of
station, or when it is waiting for spare parts.
Shutdown Preventive Maintenance: These maintenance activities are carried out when the
production line is in a total stoppage situation. Example: Tool turret replacement when the production
line is stopped for some other reason.
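The repetitive, time-driven character of routine maintenance can be sketched as a simple scheduler. This is a minimal illustration assuming a day-count clock; the task names and intervals are hypothetical examples, not recommendations.

```python
# Minimal sketch of a time-driven routine maintenance schedule:
# a task becomes due once its interval has elapsed since it was
# last performed. Intervals (in days) are illustrative assumptions.

ROUTINE_TASKS = {
    "engine_oil_change": 180,
    "brake_adjustment": 90,
    "lubrication": 30,
}

def tasks_due(last_done, today):
    """Return the tasks whose interval has elapsed since last service."""
    return [task for task, interval in ROUTINE_TASKS.items()
            if today - last_done[task] >= interval]

# Day numbers on which each task was last performed:
last_done = {"engine_oil_change": 0, "brake_adjustment": 10, "lubrication": 80}
print(tasks_due(last_done, today=100))  # only the brake adjustment is due
```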

Elements of Preventive Maintenance: There are seven main elements of preventive maintenance
(Figure.4.2) as follows:
Inspection: Periodically inspecting materials or items to determine their serviceability by comparing their
physical, electrical, mechanical characteristics etc. to expected standards.
Servicing: Cleaning, lubricating, charging, preservation, etc. of items/equipment periodically to prevent
the occurrence of early failures.
Calibration: It is the process of periodical evaluation and adjustment of the parameters or characteristics
of an item by comparison to a standard; it consists of the comparison of two instruments, one of which is
certified standard with known accuracy, to detect and adjust any irregularity in the accuracy of the
characteristic or parameter of the item which is being compared to the established standard value.
Testing: Periodically testing or checking out to determine serviceability and detect electrical or
mechanical-related degradation.
Alignment: Making changes to an item's specified variable elements for the purpose of achieving
optimum performance.
Adjustment: Periodically adjusting specified variable elements of material for the purpose of achieving
optimum system performance.
Installation: Periodic replacement of limited-life items or the items experiencing time cycle or wear
degradation, to maintain the specified system tolerance.

Purpose of Preventive Maintenance: Some of the main objectives of PM are to:


Enhance capital equipment productive life.
Reduce critical equipment breakdowns.
Allow better planning and scheduling of needed maintenance work.
Minimize production losses due to equipment failures.
Promote health and safety of maintenance personnel.

[Figure 4.2 depicts the seven elements of preventive maintenance (Inspection, Servicing, Calibration,
Testing, Alignment, Adjustment and Installation) arranged around a central hub.]

Figure-4.2 Elements of Preventive Maintenance

Difference between Predictive and Preventive Maintenance

The main difference between preventive maintenance and predictive maintenance is that predictive
maintenance monitors the condition of machines or equipment to determine the actual mean time to
failure, whereas preventive maintenance relies on industrial average-life statistics.

3) Corrective Maintenance: Actions such as repair, replacement, or restore will be carried out after the
occurrence of a failure in order to eliminate the source of this failure or reduce the frequency of its occurrence.

Objectives:

Maximization of the effectiveness of all critical plant systems.
Elimination of breakdowns.
Elimination of unnecessary repairs.
Reduction of deviations from optimum operating conditions.

There are 3 types of corrective maintenance:

Remedial maintenance: A set of activities that are performed to eliminate the source of failure without
interrupting the continuity of the production process. Example: Adjustment of camshaft timing to avoid
engine overheating and stoppage (failure).
Deferred maintenance: A set of corrective maintenance activities that are not immediately initiated
after the occurrence of a failure, but are delayed in such a way that the production process is not affected.
Example: Changing pulley belts whose slackness caused incorrect press force and thus incorrect
stampings.
Shutdown corrective maintenance: A set of corrective maintenance activities that are performed when
the production line is in a total stoppage situation. Example: Correcting the CNC program feed rate, to
avoid unacceptable roughness values on the workpiece, during the production line stoppage.

Difference between preventive maintenance and corrective maintenance

The difference between corrective maintenance and preventive maintenance is that for the corrective
maintenance, the failure should occur before any corrective action is taken.

Difference between unplanned (breakdown) maintenance and corrective maintenance

Corrective maintenance differs from run-to-failure maintenance in that its activities are planned and
regularly carried out to keep plant machines and equipment in optimum operating condition.

Process of Corrective Maintenance:

The way to perform corrective maintenance activities is by conducting four important steps:

1. Fault detection: This step involves finding out any existing flaw, error or problem in the system.
2. Fault isolation: After the fault has been discovered, the flawed or the faulty part is withdrawn from the
system to prevent its impact on the system operation.
3. Fault elimination: In the fault elimination step several actions could be taken such as adjusting,
aligning, calibrating, reworking, removing, replacing or renovation.
4. Verification of fault elimination: After the fault has been removed, the system is checked again to
ensure that the flaw has been eliminated and that no new fault or flaw has arisen.
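The four steps can be illustrated with a small sketch in which a system is modelled as a set of components that are either serviceable or faulty. The model and function names are assumptions for illustration only.

```python
# Minimal sketch of the four-step corrective maintenance process.
# A system is modelled as {component name: serviceable?}; the fault
# model is an illustrative assumption.

def corrective_maintenance(system):
    # 1. Fault detection: find the flawed components in the system.
    faulty = [name for name, serviceable in system.items() if not serviceable]
    for name in faulty:
        # 2. Fault isolation: withdraw the faulty part from the system.
        system.pop(name)
        # 3. Fault elimination: repair/replace the part and reinstall it.
        system[name] = True
    # 4. Verification: check that no fault remains and none has arisen.
    assert all(system.values()), "fault still present after elimination"
    return faulty  # the faults that were detected and eliminated

system = {"pump": True, "valve": False, "sensor": True}
print(corrective_maintenance(system))
```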

4) Improvement Maintenance: Improvement maintenance is a set of activities performed on the product
so that its regular maintenance becomes relatively easy and less expensive, or to decrease the failure rate
when predictive or preventive maintenance methods fail to do so. There are 3 types of improvement
maintenance:

Design-Out Maintenance: Redesign those parts of the equipment which consume high levels of
maintenance effort or spares cost, or which have unacceptably high failure rates. If the maintenance cost
or downtime cost of equipment is too high, then the Design-Out Maintenance strategy can often be
effective. Example: Redesigning a tube tire as a tubeless tire to prevent frequent punctures. The high
maintenance costs may have been caused by a number of factors, including:
o Poor maintenance.
o Operation of equipment outside of its original design specification.
o A poor initial design.
Engineering services maintenance: This type of maintenance includes construction and construction
modification, removal and installation, and rearrangement of facilities. Example: Change of installation
method in heavy machinery to reduce operating loads
Shutdown improvement maintenance: This is a set of improvement maintenance activities that are
performed while the production line is in a complete stoppage situation. Example: Carrying out some
minor modifications in machines like auto cleaning scrubber to reduce burr collection and improved life.

4.1.2 Enhancement

Enhancement can be defined as any product modification or upgrade that increases capabilities or
functionality beyond the original manufacturer's specifications. Enhancement is the action of improving the
performance of the existing attributes of a product or equipment. Companies often introduce products with
enhanced features to compete with the dominant brands in the market. These enhanced features can help
firms differentiate their products, though in different ways. For instance, enhanced features enable a new
product to claim superiority over competitors on the basis of a common ground (e.g., a Xerox printer is 3
times faster than HP's fastest; the mileage of an average Hero Honda bike is 40 km/l while that of a Bajaj is
70 km/l). If a particular product is not enhanced it may become obsolete. Obsolescence is the state of being
which occurs when an object, service, or practice is no longer wanted even though it may still be in good
working order.

The need of enhancements

Enhancements enable a firm to satisfy the changing needs of the targeted market.
Marketers and engineering managers are often tempted to include everything in the baseline product
and go for a "home run" product that satisfies the needs of more than one market segment.
However, the scope of a project that incorporates all capabilities becomes too large for timely market
introduction, and the product cost becomes too high for competitiveness and profitability in most
segments.

Frequency of enhancements

Too frequent product upgrades affect implementation and usage by end customers.
It is best to bundle a set of capabilities and features together and introduce them collectively at an
appropriate time, such as when the previous product baseline is showing signs of market erosion.

Categories of Product Enhancement: There are 2 categories of Product enhancement. Namely Internal and
External.

Internal: When the product is being enhanced by the original equipment manufacturer (OEM) and sold
directly to the customer. Example: iPhone 4 to iPhone 4s (Some added functionalities in 4s), Personalized
impressions in iPhone.

External: When the product is enhanced by anybody outside the original equipment manufacturer
(OEM). This can take place after the product is sold to the customer or can happen as a pre-delivery activity.
Example: Performance exhaust systems fitted to a car or bike, HTC's enhanced Android OS, DC
Design's enhanced cars, etc.

Scopes of Product Enhancement

The various scopes in which product enhancement can be carried out are:

Aesthetics: It involves cosmetic changes to the baseline product. Example: Limited edition cars offered
at product milestones (e.g., 5-year completion in the market); the Enfield Desert Storm is an aesthetically
enhanced standard Bullet motorcycle.
Performance: It involves modifications to some of the systems to increase performance.
Example: Sports-tuned exhaust system for higher torque.
Functional: It involves the addition or modification of some system to provide new functionality. Example:
Rear-view camera system in a car, not offered by the OEM.

Customization: User-driven feature configuration, or catering to the unique needs of a customer through
a one-off product build. There is no design change, just a new configuration. Example: Customer-specified
custom colours offered on Ferrari cars, a Swiss watch with a custom logo, real team jerseys from Adidas.
Personalization: Modification of a product to include a customer's specific need. Example: Photo cake,
photo imprint on a T-shirt, personal etching on an iPhone.

Process of Product Enhancement

The various steps involved in the process of product enhancement are shown in figure 4.3:

Figure-4.3. Process of Product Enhancement

Baseline product launched with optimum features and cost to penetrate target market.
Derivative product with added features and capabilities to the baseline are developed through a series of
incremental innovation steps.
This enhanced product is introduced to the same target market to seek customer upgrades or to attract
new customers.
The old product may be phased out if demand for the enhanced product is high.

Case Study: Enhancement of an Android Smart Phone

Now the problem we have on hand is the enhancement of an Android smart phone. Here, we will consider
the possible enhancements in all five scopes of product enhancement. Table 4.1 summarizes the whole
case study:

Scope: Possible enhancements (Category)

Aesthetics: 1) Optional additional back covers in varied colours (Internal). 2) Mobile pouch / silicone
cover (External).

Performance: Usage of a memory card of higher capacity than standard, which improves the storage
capacity (External).

Functional: Installation of 3rd-party Android applications such as WhatsApp, games and Wikipedia, as
well as applications from the OEM's Android store, thereby extending the phone's functionality; Bluetooth
headset (External & Internal).

Customization: Usage of 3rd-party ring tones, fancy stickers and other accessories (External & Internal).

Personalization: Screen saver with personal photos; assigning voice-dial functionality / calling image for
some contacts; personal etching on the phone (External & Internal).

Table- 4.1 Case study of an Android smart phone

4.2.1 Obsolescence Management:

Obsolescence, as defined in the International Standard IEC 62402:2007, is the transition from availability
from the original manufacturer to unavailability. It is the state of being which occurs when an object, service,
or practice is no longer wanted even though it may still be in good working order. Obsolescence frequently
occurs because a replacement has become available that has, in total, more advantages than the problems
related to repurchasing the replacement. Obsolete refers to something that is already disused, discarded, or
antiquated. Typically, obsolescence is preceded by a gradual decline in popularity. It can also lead to
companies going out of business or stopping the sale/support of a particular product. For example, the Zen
car produced by Maruti Suzuki became obsolete, as was the case with the Chetak scooter produced by Bajaj.
Obsolescence can easily be seen in electronics and software products, like the E-series and N-series
of Nokia mobile phones. Software is continually upgraded and generally no support is provided for older
versions. For example, Microsoft's Windows 98 became obsolete on June 30, 2006. Any software produced
after that date by Microsoft, such as Office 2007 (released November 30, 2006), is not supported on
Windows 98 or any prior version.
Planned obsolescence or built-in obsolescence in industrial design is a policy of planning or designing
a product with a limited useful life, so that it will become obsolete after a certain period of time. Planned
obsolescence has potential benefits for a producer because, to obtain regular use of the product, the
consumer is under pressure to purchase again, whether from the same manufacturer (a replacement part or
a newer model) or from a competitor who might also rely on planned obsolescence. For an industry, planned
obsolescence stimulates demand by encouraging purchasers to buy sooner if they still want a functioning
product. Planned obsolescence is common in many different products including
sunglasses, headphones, shoes, watches, etc. There is a potential backlash from consumers who learn that
the manufacturer invested money to make the product obsolete faster; such consumers might turn to a
producer (if any exists) that offers a more durable alternative. Such products generally come with a tag of
"limited edition" or "special edition".

Obsolescence Management (OM) comprises the coordinated activities to direct and control an organization
with regard to obsolescence. The objective of OM is to ensure that obsolescence is managed as an integral
part of design, development, production and in-service support, in order to minimize the financial and
availability impact throughout the product life cycle. The various steps involved in obsolescence
management are as follows:

1) Identify: In this process the potential elements or triggers of obsolescence are identified. These triggers
can be identified by the signs and symptoms of obsolescence. Some of these signs of obsolescence are:

Notification of a part that will be discontinued in the future.
A system that uses a unique part that can only be produced by a single manufacturer.
Dwindling supply of parts for a system, with no replacements appearing over time.
Planning in a new system design that does not consider future obsolescence problems.
A parts list that contains an end-of-life-cycle part before the system has gone into production.
2) Analyze: The core methodology for analysis is to make direct contact with the supplier of an item.
Direct contact takes the form of phone, e-mail or other communication with a competent supplier
representative. This is essential in the management of commercial off-the-shelf (COTS) products and
assemblies. The main items of concern in a DMSMS (Diminishing Manufacturing Sources and Material
Shortages) analysis are:

Is the item an active product?
Is the item a good seller (does it generate good revenue for the company)?
Is the item slated for obsolescence for any reason (e.g. replaced by a newer version)?

3) Assess: The next step is to identify the criticality of the analysis. Depending upon the obsolescence
situation, the criticality can be classified as low, medium, high or highest, which is ultimately a probability-
impact-cost analysis. If the obsolescence criticality is low, we can go for obsolescence resolution, which is a
reactive process and is undertaken only after a product has gone obsolete; on the other hand, if the
obsolescence criticality is medium or higher, we have to go for obsolescence mitigation, which is a proactive
approach intended to avoid obsolescence.

The criticality can be defined based on the following conditions:

a. Low: Replacement available (same footprint).
b. Medium: Replacement available, but with a different footprint (new layout required).
c. High: No direct replacement available; different functionality. Design modification is required:
o New layout
o Software changes could be required
d. Highest: No direct replacement available; process technology obsolete. Required:
o New component design
o Module redesign
o New layout
o Software changes
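The assessment above can be summed up as a simple lookup from criticality level to the recommended response. The level names follow the text; the function interface and the exact wording of the responses are illustrative assumptions.

```python
# Minimal sketch mapping obsolescence criticality to the response
# suggested in the text: low criticality -> reactive resolution,
# medium and above -> proactive mitigation.

RESPONSES = {
    "low": "resolution (reactive): use the same-footprint replacement",
    "medium": "mitigation (proactive): replacement needs a new layout",
    "high": "mitigation (proactive): design modification, new layout, "
            "possible software changes",
    "highest": "mitigation (proactive): new component design, module "
               "redesign, new layout, software changes",
}

def obsolescence_response(criticality):
    return RESPONSES[criticality.lower()]

print(obsolescence_response("Medium"))
```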

4) Obsolescence mitigation and resolution: The two ways to tackle obsolescence are mitigation and
resolution. While mitigation is the pro-active way of avoiding obsolescence, resolution is the reactive way of
reducing it.

Obsolescence mitigation measures


The strategy followed in the obsolescence management is usually a combination of mitigation measures.
Obsolescence risk can be mitigated by taking actions in three main areas: supply chain, design and planning.

Supply chain
The mitigation measures that can be taken in the supply chain are risk mitigation buy (RMB) and partnering
agreements with suppliers.

a) Risk Mitigation Buy (RMB)


The RMB approach involves purchasing and storing enough of the obsolete items to meet the system's
forecasted lifetime requirements. This requires optimization to determine the number of parts in the RMB
that minimizes life cycle cost. The key cost factors identified are: procurement, inventory, disposal and
penalty costs. The main benefit of this approach is that readiness issues are alleviated and requalification
testing is avoided.
However, several drawbacks have been identified:
The initial cost is high, incurring significant expense to enlarge the stock.
It is difficult to forecast the demand and determine the RMB quantity accurately; therefore, it is common
to have excess-stock or shortage problems.
The approach assumes that the system design will remain static. Any unplanned design refresh may
make the stock obsolete and hence no longer required.
The customer is in a poor negotiating position because of the high dependence on a particular supplier.
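The optimization mentioned above can be sketched by evaluating the expected life-cycle cost of each candidate buy quantity over a set of demand scenarios, using the four cost factors named in the text. This is a toy model under stated assumptions: all unit costs, demand values and probabilities are made up for illustration.

```python
# Minimal sketch of sizing a risk mitigation buy (RMB): pick the
# quantity that minimizes expected cost over demand scenarios.
# Cost factors follow the text: procurement, inventory (holding),
# disposal of excess stock, and penalty for shortage.
# All numbers are illustrative assumptions.

def expected_cost(qty, scenarios, unit=10.0, holding=1.0,
                  disposal=2.0, penalty=50.0):
    total = 0.0
    for demand, probability in scenarios:
        cost = qty * (unit + holding)              # procurement + inventory
        if qty > demand:
            cost += (qty - demand) * disposal      # excess stock disposed of
        else:
            cost += (demand - qty) * penalty       # shortage penalty
        total += probability * cost
    return total

def best_rmb_quantity(scenarios, max_qty=200):
    return min(range(max_qty + 1),
               key=lambda q: expected_cost(q, scenarios))

# (forecast lifetime demand, probability) pairs:
scenarios = [(80, 0.5), (100, 0.3), (120, 0.2)]
print(best_rmb_quantity(scenarios))
```

Because the shortage penalty dominates the other unit costs, the optimum lands above the most likely demand; the forecasting difficulty noted above is exactly the difficulty of getting the scenarios right.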
b) Partnering agreements with suppliers
Nowadays, the defence industry has less control over the supply chain for COTS (Commercial off-the-shelf)
electronic or critical components. These types of components are becoming obsolete at an increasingly fast
pace. Therefore, it is advisable to make partnering agreements with suppliers to ensure the continuous
support and provision of critical components.

Design for obsolescence


The fact that technological products such as electronics, electrical and mechanical products will be affected
by technology obsolescence during their lifetime is unavoidable. One attempt to mitigate this
obsolescence is to address the threat at the design stage. Managing obsolescence by quickly turning over
the product design is impractical, because the product design is fixed for long periods of time, so this must
be done at the beginning of the project. Strategies such as the use of open system architecture, modularity
and increased standardization in the designs will definitely ease the resolution of obsolescence issues that
may arise at the component or LRU (line-replaceable unit) level. The impact of product obsolescence on life
cycle cost and functionality can be drastically reduced by considering the following guidelines:
Managing the processes used to select and manage components, to assure cost-effectiveness, reliability,
safety and functionality.
Developing new approaches to using components manufactured for other industries (incorporating
COTS).
A system such as a defence department's should therefore be ready to make use of electronic
components designed for the commercial market. However, the incorporation of COTS in the system is a
double-edged sword due to their shorter life cycle. The decision may increase the frequency of
obsolescence issues in the system, exacerbating the problem.

An integrated approach involves the following steps:

o Anticipated and synchronized technology insertion route: modular / flexible / open architecture
o Replace the obsolete component through replacement/redesign activities
o Use standard / off-the-shelf parts (COTS)
o Check for material compliance (REACH, RoHS, bio-compatibility)
o Study regular market survey reports and identify risks
o Procure obsolete components
o Use integrated SCM to manage multiple suppliers
o Life-time buy of critical components
o Efficient contract management system
o Industry standards like IPC 1752 or AIAG
o Use integrated PLM and ERP
o Counterfeit management system
o Use of multi-sourced components
At the design stage, it is important to take into account the number of suppliers and manufacturers that are
producing a particular component (implementing a particular technology) before including that component in
the bill of materials (BOM). It is necessary to make sure that the components included in the BOM can be
provided by multiple suppliers to minimize the number of critical components.

Planning
Planning is an effective way of mitigating obsolescence. It implies the development of an OMP, a technology
roadmap and the use of obsolescence monitoring tools.

o Obsolescence management plan (OMP)


It has become common practice for the original equipment manufacturer (OEM) to produce a document
called the OMP (obsolescence management plan) to satisfy the Ministry of Defence (MOD) demand. The
OMP describes the proactive approach to be taken by the OEM to manage, mitigate and resolve
obsolescence issues across the life cycle of the PSS (product-service system). This document provides the
OEM and the customer with a common understanding of the obsolescence risk and allows agreement on
the most suitable obsolescence management strategy.

o Technology road mapping


The use of technology road mapping facilitates the selection of technologies to go ahead with while
considering timeframes. It enables the identification, evaluation and selection of different technology
alternatives. Furthermore, it identifies technology gaps, which can be regarded as the main benefit of this
approach because it helps to make better technology investment decisions. The use of this technique may
help to plan the technology refreshes that the system may require within the in-service phase of the product
life cycle, solving and preventing obsolescence issues.
o Monitoring
Nowadays, there are many commercial tools available that enable monitoring of the BOM. In general, they
match the BOM against huge databases, providing information about the current state of each component,
that is, whether it is already obsolete or not, and a forecast of when it will become obsolete. The forecasting
is based on an algorithm that takes into account several factors such as the type of component and
technology maturity. These algorithms are currently being improved to take into account other factors such
as market trends. The monitoring tools may also provide information about FFF (form, fit and function)
alternatives to replace obsolete components. All this information provides the basis for the planning and
proactive management of obsolescence.
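The matching of a BOM against an obsolescence database can be sketched as follows. The database contents, part numbers, and the three-year risk horizon are made-up assumptions for illustration.

```python
# Minimal sketch of BOM monitoring: look up each part in an
# obsolescence database and classify it as obsolete, at-risk
# (forecast to go obsolete within the horizon), active, or unknown.
# Database contents and the horizon are illustrative assumptions.

STATUS_DB = {
    # part number: (status, forecast year of obsolescence)
    "IC-7400": ("obsolete", 2010),
    "CAP-100u": ("active", 2030),
    "MCU-X1": ("active", 2016),
}

def monitor_bom(bom, current_year, horizon=3):
    report = {}
    for part in bom:
        status, year = STATUS_DB.get(part, ("unknown", None))
        if status == "active" and year - current_year <= horizon:
            status = "at-risk"
        report[part] = status
    return report

print(monitor_bom(["IC-7400", "CAP-100u", "MCU-X1"], current_year=2014))
```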

Obsolescence resolution approaches

When a part becomes obsolete, a resolution approach must be applied immediately to tackle the problem.
It is important to make sure that no pre-existing capabilities are lost with the resolution approach selected.
Several resolution approaches are described in the literature, as follows, but their suitability depends on the
individual case. The different approaches are classified, according to the replacement used, into four
categories:

1. Same component

o Existing stock
It is stock of the obsolete part available within the supply chain that can be allocated to the system. This is
the first resolution approach that should be explored because it is inexpensive, but it is just a short-term
solution. Therefore, a long-term solution should be implemented afterwards.

o Last time buy (LTB)


The LTB is the purchase and storage of a supply of components, as a result of a product discontinuance
notice from a supplier, sufficient to support the product throughout its life cycle or until the next planned
technology refresh. This resolution approach differs from the RMB in that the LTB is triggered by a supplier
announcing a future end of production, whereas the RMB is a risk mitigation option triggered by the user's
risk analysis. The main advantage of this approach is that it extends the time from when the product change
notification is received until a redesign must be performed. This is a common and effective approach, but in
general it is used as a short-term solution until a more permanent solution can be put in place.

o Lifetime buy
One strategy used to combat obsolescence is to buy additional inventory during the production run of a
system or part, in quantities sufficient to cover the expected number of failures. This strategy is known as
a lifetime buy. An example of this is the many 30 and 40-year-old railway locomotives being run by small
operators in the United Kingdom. These operators will often buy more locomotives than they actually require,
and keep a number of them stored as a source of spare parts.
o Authorized aftermarket sources
Occasionally, the obsolete part can be procured from third parties authorized by the OEM once the OEM has
stopped producing it. This is a beneficial solution because it is relatively inexpensive.

o Reclamation (cannibalization)
The reclamation approach, also known as cannibalization, consists in using serviceable parts salvaged from
other unserviceable systems. This approach is especially useful during the last stage of the in-service phase in
legacy systems, but the used part may be just as problem-prone as the one it is replacing.

o Other approaches: grey market and secondary market


The grey market is the trade of new goods through distribution channels which are unauthorised, unofficial
or unintended by the original manufacturer. Some companies rely on the grey market as an alternative to
performing a redesign. However, this is very risky due to the increased probability of purchasing counterfeit
components when using these sources, especially in sectors such as defence and aerospace where
counterfeit components can compromise the safety of people. Besides, testing all the components to
ensure that they are not counterfeit is usually not feasible. Therefore, this is an inadvisable approach. It is
tempting to buy obsolete components in the secondary market using Internet tools such as eBay. However,
several authors agree that this is a chancy solution because the used part may be just as problem-prone as
the one it is replacing. Furthermore, this approach is as prone to counterfeits as the grey market.

2. FFF replacement: This is the abbreviation of form, fit and function replacement. There are two types of
FFF replacement:

o Equivalent
An equivalent is a functionally, parametrically and technically interchangeable replacement without any
additional changes. The main benefit of this approach is that it is inexpensive (as requalification tests are not
required) and frequently a long-term alternative. However, it is difficult to find a replacement with the same
form, fit and function.
o Alternate
An alternate is a part whose performance may be less capable than that specified, for one or more reasons
(e.g. quality or reliability level, tolerance, parametrics, temperature range). Alternate items may perform
fully (in terms of form, fit and function) in place of the obsolete item, but testing is required. Uprating is the
process of assessing the capability of a commercial part to meet the performance and functionality
requirements of the application, taking into account that the part is working outside the manufacturer's
specification range.

3. Emulation
The emulation approach consists in developing parts (or software) with identical form, fit and function to
the obsolete ones being replaced, using state-of-the-art technologies. The emulator can be interface
software that allows legacy software to continue to be used on new hardware where it would not otherwise
work properly. The fact that this solution is frequently based on COTS components with a built-in adapter
can turn it into a short-term solution.

4. Redesign
The redesign alternative involves making a new design for obsolete parts by means of upgrading the system,
with the aims of improving its performance, maintainability and reliability, as well as enabling the use of
newer components. The cost for redesign can include engineering, program management, integration,
qualification and testing. This is considered the most expensive alternative (especially for the military, taking
into account the re-qualification/recertification requirements). Therefore, this long-term solution should be

FSIPD 225
used as a last resort and only when functionality upgrades (technology insertion) become necessary. Redesign
can be classified into:
o Minor Redesign
For example, a minor redesign would represent a change to the layout of a circuit board.
o Major Redesign
For example, a major redesign would be a circuit board replacement.
5. Monitoring:
In order to keep obsolescence in check, the current state of each component used in the product needs to be monitored regularly. There are many commercial tools available that enable monitoring of the BOM. In general, they match the BOM against large databases, providing information about the current state of each component, whether it is already obsolete or not, and a forecast of when it will become obsolete. The main question that arises is what type of surveillance needs to be in place. Depending upon the type of product and its tendency towards obsolescence, the monitored fields may include information about:
1) Design changes
2) Product deep dives
3) Audit findings
4) Post Production Risk Reviews
5) CAPA (Corrective and Preventive Actions)
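The BOM-matching process described above can be sketched as a simple lookup against a parts database. The part names, status fields and dates below are illustrative assumptions, not from any real tool; a commercial monitor would query a large live database instead of a hard-coded dictionary.

```python
from datetime import date

# Illustrative component-status database; a real obsolescence tool would
# query a large commercial parts database electronically.
PARTS_DB = {
    "DDR2-RAM":  {"status": "obsolete", "eol_forecast": date(2012, 1, 1)},
    "DDR3-RAM":  {"status": "active",   "eol_forecast": date(2020, 6, 1)},
    "CTRL-IC-7": {"status": "active",   "eol_forecast": date(2018, 3, 1)},
}

def monitor_bom(bom, today):
    """Match each BOM entry against the database and flag risks."""
    report = []
    for part in bom:
        record = PARTS_DB.get(part)
        if record is None:
            report.append((part, "unknown - manual review needed"))
        elif record["status"] == "obsolete":
            report.append((part, "obsolete - find FFF replacement"))
        elif record["eol_forecast"] <= today:
            report.append((part, "past forecast EOL - verify with supplier"))
        else:
            report.append((part, "available"))
    return report

for part, state in monitor_bom(["DDR2-RAM", "DDR3-RAM", "XYZ"], date(2015, 1, 1)):
    print(part, "->", state)
```

The same loop would run on a schedule, with each "obsolete" or "past forecast EOL" flag feeding one of the resolution approaches above (FFF replacement, emulation or redesign).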
Obsolescence Management Tools:
Typical Features of obsolescence management tools include:
o Constant electronic monitoring of Bill of materials (BOMs).
o Automated real time electronic (living) library of parts availability.
o Identification of critical items.
o Life Cycle modelling.
o Real time component procurement monitoring.
o Procurement problem identification with solution alternatives.
o Automated data retrieval.
o Configuration management.
o Automatic Indenturing capability.
o Parametric part search.
o Access to an electronic marketplace.
o Data sharing.

Obsolescence Management Tools Classification:


Most obsolescence management tools focus on monitoring the BOM and identifying alternative components for the obsolete ones. Some of them can do obsolescence forecasting and costing as well. Furthermore, most of them focus on electronic and electromechanical components, as these are more prone to obsolescence due to the ongoing change in technology. The models can be classified into three categories:
o Component level: models that forecast the next obsolescence event for each independent electronic component.
o Assembly level: tools that manage an assembly (LRU), which is composed of components, determining the optimal time to change its baseline during production and operation due to part-level obsolescence.
o System level: models that address obsolescence for the entire system, taking into account different aspects such as hardware and software integration. These models are able to forecast obsolescence at the system level across the remaining life cycle and optimize the change frequency. The data inputs required for this type of model are not usually available in most databases.

Obsolescence Management Tools Selection:
Essential criteria for selection/comparison:
Trigger adherence - does it take care of all triggers?
Material diversity - does it cover all materials?
Database robustness - how many in-built databases can the tool hold?
Customer base - how many companies in the world are using the tool?
Validation - is the tool validated by an authentic body?
Interface - how user-friendly is the tool?
Availability - how readily available is the tool?
Learning curve - does it require rigorous training?
Platform dependency - is it available on all hardware?
OS dependency - is it available for all operating systems?
Regulation compatibility - does it take care of relevant regulations?
Output/report quality - what outputs/reports does the tool produce?
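A common way to apply selection criteria like those above is a weighted-scoring matrix. The sketch below is a hypothetical illustration: the criterion weights and the 1-5 scores for two fictitious tools are invented for the example, not taken from any real comparison.

```python
# Hypothetical weights for a subset of the selection criteria listed above.
CRITERIA_WEIGHTS = {
    "trigger_adherence": 3,
    "material_diversity": 2,
    "database_robustness": 3,
    "learning_curve": 1,
    "regulation_compatibility": 2,
}

def score_tool(scores):
    """Return the weighted total for one tool's criterion scores (1-5 each)."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

# Invented scores for two candidate tools.
tool_a = {"trigger_adherence": 4, "material_diversity": 3,
          "database_robustness": 5, "learning_curve": 2,
          "regulation_compatibility": 4}
tool_b = {"trigger_adherence": 5, "material_diversity": 2,
          "database_robustness": 3, "learning_curve": 4,
          "regulation_compatibility": 3}

print("Tool A:", score_tool(tool_a))  # 12 + 6 + 15 + 2 + 8 = 43
print("Tool B:", score_tool(tool_b))  # 15 + 4 + 9 + 4 + 6 = 38
```

The tool with the higher weighted total is preferred; the weights encode how much each criterion matters to the particular organisation.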

4.2.2 Configuration Management

A configuration consists of the functional, physical, and interface characteristics of existing or planned
hardware, firmware, software or a combination thereof as set forth in technical documentation and
ultimately achieved in a product. Configuration management provides a mechanism for identifying,
controlling and tracking the versions of each software and/or product item. In many cases earlier versions still
in use must also be maintained and controlled. Configuration management permits the orderly development
of a system, subsystem, or configuration item. A good configuration management program ensures that
designs are traceable to requirements, that change is controlled and documented, that interfaces are defined
and understood, and that there is consistency between the product and its supporting documentation.
Configuration management provides documentation that describes what is supposed to be produced, what is
being produced, what has been produced, and what modifications have been made to what was produced.

The fundamental purpose of Configuration Management (CM) is to establish and maintain the integrity and
control of software products throughout a project's life cycle. This includes products such as performance
requirements, functional and physical attributes, and design and operation information. CM is a discipline
applying both technical and administrative direction for the control of change and integrity of the product
data and documentation. CM involves identifying the configuration of the software (i.e., software work
products) at given points in time, systematically controlling changes to the configuration, and maintaining
the integrity and traceability of the configuration throughout the project's life cycle.

Configuration Management is the foundation of any project. Without it, no matter how talented the staff,
how large the budget, how robust the development and test processes, or how technically superior the
development tools, project discipline will collapse and success will be left to chance. Do Configuration
Management right, or forget about improving your development process. A common example of the application of CM is a configurable car: the interior, the engine capacity, and the exterior paint color can all vary. When the customer selects the car in a web shop, the possible characteristics for the product are displayed automatically. The customer can select the characteristic values that he or she wants; only characteristics that are compatible with the previously selected characteristic values are shown. If the various characteristics of a product affect its price, the displayed price is also recalculated.
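The car-configurator behaviour described above can be sketched in a few lines: after each selection, only characteristic values compatible with the choices so far remain selectable, and the price is recalculated from the selected values. The options, prices and the compatibility rule below are invented for illustration.

```python
BASE_PRICE = 10000
# Invented characteristics with price deltas for each value.
OPTIONS = {
    "engine":   {"1.2L": 0, "2.0L": 1500},
    "paint":    {"white": 0, "metallic": 400},
    "interior": {"fabric": 0, "leather": 900},
}
# Invented rule: leather interior is only offered with the 2.0L engine.
INCOMPATIBLE = {("engine", "1.2L"): [("interior", "leather")]}

def available(characteristic, chosen):
    """Values of `characteristic` compatible with the choices made so far."""
    blocked = set()
    for sel in chosen.items():
        for (char, value) in INCOMPATIBLE.get(sel, []):
            if char == characteristic:
                blocked.add(value)
    return [v for v in OPTIONS[characteristic] if v not in blocked]

def price(chosen):
    """Recalculate the price from the selected characteristic values."""
    return BASE_PRICE + sum(OPTIONS[c][v] for c, v in chosen.items())

chosen = {"engine": "1.2L", "paint": "metallic"}
print(available("interior", chosen))  # ['fabric'] - leather is filtered out
print(price(chosen))                  # 10400
```

The same idea, with far more characteristics and rules, underlies the configuration management of product variants: the valid configurations are defined once, and every customer-facing view is derived from them.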

The benefits of configuration management are as follows:

o More product options for the customer
o Facilitates Just-in-Time production technology
o Better response to customer requirements
o Better product data management
o Delivery of the right product to the customer at the right time
o Competitive advantage over others

4.2.3. EoL Disposal

All kinds of products have a certain life, i.e. there is a certain time period during which a product performs satisfactorily, after which it may fail or its performance falls below requirements. The end of this time period is called the end of life, or EOL. The term is largely used for products still being sold to indicate that the manufacturer/supplier will no longer support the product with spare parts etc. For example, old models of laptops which have already passed their end of life were fitted with DDR1 or DDR2 RAM, which is no longer produced by manufacturers. The DDR3 RAM now being manufactured is fitted in the current generation of laptops, but is not compatible with older laptops.

After sale, there are two modes by which a product attains EOL:

1. Due to continuous innovation in technology, new upgraded products are frequently introduced in the market, tempting the buyer to replace the old product with a new one. In this case, the old product reaches EOL without any failure. People usually do not wish to throw away such products as they are still functional; hence, they can be disposed of by donating them to the needy.
2. Breakdown of products is a very common phenomenon. If the product becomes irreparable or the cost of repair is very high, it cannot be disposed of by donation. In such a condition, a proper mode of disposal should be found.

While selecting the mode of disposal, the effect of the product's material on the environment should be considered and a suitable mode of disposal finalised accordingly.

The most common method of disposal is landfill, but before landfilling it must be ensured that the filler material is biodegradable; otherwise there will be adverse effects on the soil.

Another environment-friendly method of disposal is recycling. Recycling involves reprocessing a material to make it reusable. There are two advantages of recycling: first, there is less damage to the environment; second, the requirement for new material decreases.
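The disposal-mode decision described above (donate if still functional, otherwise recycle or landfill depending on the material) can be sketched as a small decision function. The attribute names and the fallback category are illustrative assumptions, not from the text.

```python
def disposal_mode(functional, recyclable, biodegradable):
    """Pick a disposal mode for an EOL product from its properties."""
    if functional:
        return "donate"            # still works: pass it on to the needy
    if recyclable:
        return "recycle"           # reprocess the material for reuse
    if biodegradable:
        return "landfill"          # only biodegradable material may be landfilled
    return "hazardous - specialist disposal required"

print(disposal_mode(True,  False, False))   # donate
print(disposal_mode(False, True,  False))   # recycle
print(disposal_mode(False, False, True))    # landfill
print(disposal_mode(False, False, False))   # hazardous - specialist disposal required
```

In practice the inputs would come from the product's material data, chosen at design time with the environmental impact in mind, as the following paragraph stresses.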

Therefore, whenever a new product is under development, the product's material should be chosen wisely, keeping the effect of the material on the environment in mind. Usually this factor is not given much importance, but it needs to be taken seriously in order to protect the environment from degradation and intoxication. Governments have taken serious steps to control waste in order to protect the environment. A few government guidelines on the waste electrical and electronic equipment directive are as follows:

Waste electrical and electronic equipment directive (WEEE directive): The WEEE directive is a European directive for waste electrical and electronic equipment. Figure 4.4 shows the symbol of WEEE. This symbol can appear with or without a single black line underneath it. The black line indicates that the goods were introduced after 2005, when the WEEE directive came into force. A symbol without the black line indicates that the goods were introduced into the market between 2002 and 2005.

Figure 4.4. WEEE symbol

The WEEE directive sets targets for the collection, recovery and recycling of electrical goods. It also imposes responsibilities on manufacturers and distributors. These responsibilities include making arrangements which give users a fair chance to return WEEE free of charge.

Categorisation of WEEE:

On the basis of time of introduction into the market:
1. Historic WEEE: Products introduced into the market before 2005 (having the symbol without the black line) come under this category. Responsibility for recycling these WEEE lies with the owner.
2. Non-historic WEEE: Products introduced into the market after 2005 (having the symbol with the black line) come under this category. Responsibility for collecting and recycling these WEEE lies with the manufacturer and distributor.

On the basis of type of product:
1. Large household appliances
2. Small household appliances
3. IT and telecommunications equipment
4. Consumer equipment
5. Lighting equipment
6. Electrical and electronic tools
7. Toys, leisure and sports equipment
8. Medical devices
9. Monitoring and control instruments
10. Automatic dispensers
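The date-based categorisation above maps directly to a small function: goods placed on the market from 2005 onward (symbol with the black line) are non-historic, with the manufacturer/distributor responsible for recycling; earlier goods are historic, with the owner responsible. Treating 2005 itself as non-historic is an assumption here, since the text only distinguishes "before" and "after" 2005.

```python
def weee_category(year_introduced):
    """Return the WEEE category and who bears recycling responsibility."""
    if year_introduced >= 2005:
        return ("non-historic", "manufacturer/distributor")
    return ("historic", "owner")

print(weee_category(2003))  # ('historic', 'owner')
print(weee_category(2007))  # ('non-historic', 'manufacturer/distributor')
```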

Waste Packaging Disposal: Waste packaging disposal directives ensure proper disposal of packaging materials after their use. The objectives of waste packaging disposal are:

o Prevention, reduction and elimination of pollution caused by packaging waste.
o Management of packaging and packaging waste.

There are several regulations for waste packaging disposal, such as the Producer Responsibility Obligations (Packaging Waste) Regulations (GB & NI), the Waste Management (Packaging) Regulations (Eire), the Site Waste Management Plans Regulations, etc., which impose responsibilities and measures on users, manufacturers and distributors for the proper disposal of waste packaging. The main objectives of these kinds of regulations are as follows:

o To ensure that all waste packaging is recycled.
o To ensure that the weight and constituents of the packaging materials of all production batches are recorded and audited against the recommended standards.
o Companies which recycle waste packaging are licensed by various environmental agencies.
o To impose responsibility on users to ensure proper disposal of packaging waste by end users.

Recycling: Recycling (Figure 4.5) is a process by which waste materials are converted into useful products. The importance of recycling can be understood from the following points:

o Landfill sites emit a lot of greenhouse gases and harmful chemicals. Recycling helps to reduce such pollutants.
o Recycling reduces the need for various raw materials, helping to prevent deforestation.
o The energy consumed in making products from raw materials is much higher than in making recycled products. Hence, recycling helps in saving energy.
o If recycling is not practised, continuous landfilling will result in a shortage of land.
o The cost involved in making products from raw materials is much higher than that of making recycled products. Hence, recycling helps in improving the economy.

Common examples of recycling various products such as paper (Figure 4.5), glass (Figure 4.6) and aluminium cans (Figure 4.7) are shown in the following diagrams:

Figure 4.5. Recycling of paper

Figure 4.6. Recycling of glass

Figure 4.7. Recycling of aluminium cans

Waste Reporting: Waste reporting is a process in which a complete analysis of waste is done; the analysis involves measurement, the disposal method of the waste, remedies etc. The steps below lead to completion of waste reporting:

1. Identification of a waste champion: The term waste champion refers to a person who is well experienced in waste management and has the authority to access various sections of the industry. Such a person is identified and, based on the collected data, a strategy is formed.

2. Identification of the types of waste involved in the particular process: In this step, all the waste is analysed to understand its characteristics. Each waste stream is analysed to understand the following points:
what it is
how it is contained
how the collection contract is set up
who carries the waste
where it goes and the quantity
who looks after the paper work and
what audits have been carried out on the disposal contractors and their operations.

3. Measurement of waste: In this step, the quantity and value of all the waste is determined. It is very important to measure the waste, because without quantifying it, its impact on the environment and the economy cannot be estimated.
4. Analysis of the existing disposal mode: In this step, the existing modes of disposal are identified and analysed. A decision is then made as to whether to incorporate any changes; if so, alternative disposal methods are suggested.
5. Setting targets for improvement: If alternative disposal methods were suggested in the previous step, then in this step certain targets are set for the improvement.
6. Identification of benefits: In this step, the possible benefits are evaluated. These benefits are the expected outcomes of the improvements made in the previous steps, and can take the form of input savings, waste cost savings and recovered value.
7. Reporting of the outcome: After going through all the steps, the overall outcome is documented and reported.
8. Repeat the cycle: All the above steps are repeated in a cycle to ensure continuous improvement in waste management.
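The measurement and target-setting steps above (steps 3 to 6) amount to simple aggregation arithmetic, sketched below. The stream names, tonnages and costs are invented for the example.

```python
# Invented waste streams with measured quantity and disposal cost.
streams = [
    {"name": "packaging card", "tonnes": 12.0, "cost_per_tonne": 80.0},
    {"name": "metal offcuts",  "tonnes": 5.5,  "cost_per_tonne": 40.0},
]

def total_disposal_cost(streams):
    """Step 3: quantify the waste by totalling each stream's cost."""
    return sum(s["tonnes"] * s["cost_per_tonne"] for s in streams)

def target_saving(streams, reduction_fraction):
    """Steps 5-6: estimate the saving if every stream is cut by the target fraction."""
    return total_disposal_cost(streams) * reduction_fraction

print(total_disposal_cost(streams))   # 960 + 220 = 1180.0
print(target_saving(streams, 0.10))   # a 10% reduction target saves 118.0
```

The reported outcome (step 7) would then document these figures per stream, and the cycle (step 8) repeats with the next measurement period's data.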

Terminal Questions:
1. What is sustenance?
2. Discuss the various categories of maintenance.
3. What is obsolescence management? Discuss the various steps of obsolescence management.
4. Define configuration management. Explain the benefits of configuration management.
5. Explain the waste management principles.
6. Discuss the 5 enhancements and present the upgraded product from the following product list:
Two Wheeler, Mixer Grinder, Fan, Home Computer, Watch, Smart Phone, ATM Machine, Air Conditioner,
Android App Restaurant Locator, Online Movie ticket booking portal.

References

1. G. Bayraksan, W. Lin, Y. Son, and R. Wysk, eds., Proceedings of the 2007 Industrial Engineering Research Conference.
2. Sandborn P, Myers J. Designing engineering systems for sustainability. In: Misra KB, editor. Handbook of performability engineering. London: Springer; 2008. p. 81-103.
3. Singh P, Sandborn P. Obsolescence driven design refresh planning for sustainment-dominated systems. The Engineering Economist 2006; 51(2):115-139.
4. John A. Scott and David Nisse, Chapter 7: Software Configuration Management, Lawrence Livermore National Laboratory.
5. Susan Dart, Concepts in Configuration Management Systems, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213-3890, USA.
6. B.S. Dhillon, Engineering Maintenance: A Modern Approach.
7. Mohamed Ben-Daya, Salih O. Duffuaa, Abdul Raouf, Jezdimir Knezevic, Daoud Ait-Kadi, Handbook of Maintenance Management and Engineering.
8. Lindley R. Higgins and R. Keith Mobley (editor in chief), Maintenance Engineering Handbook, Sixth Edition.
9. Francisco Javier Romero Rojo, Rajkumar Roy and Essam Shehab, Obsolescence management for long-life contracts: state of the art and future trends, Springer-Verlag London Limited, 2009.
10. www.sto.co.uk
11. www.recycling-guide.org.uk
12. Guidelines for company reporting on waste, Environment and Heritage Service (an agency within the Department of the Environment).
13. www.hse.gov.uk
14. Internet materials, YouTube etc.

Module 5
Business Dynamics
Engineering Services Industry

Business Dynamics-Engineering Services Industry

Business can never be considered static, because every activity in a business enterprise has to change in order to meet the changing demands of its customers or users. Hence, business enterprises have to operate in a dynamic state, changing according to the requirements of the external environment.
In the field of business there are industries which produce products or goods to be delivered to end users, i.e. the customers, and there are also industries which focus on serving users through the services they provide. The subsequent sections therefore throw light on the dynamics of business with respect to the engineering services industry. The sections also describe the essentials of integrated product development processes.

Objectives:

The following sessions of this module give:

o A brief explanation of the growth of the engineering services industry in India and product development in industry
o The interlinking between industry and academia
o An introduction to vertical-specific product development processes
o A brief definition of product development trade-offs
o Intellectual property rights, and security and configuration management
o The importance of security and configuration management

A product can be a physical product or a service provided by a company; thus it becomes necessary to study the engineering services industry.

5.1 The Industry

5.1.1 Overview of Engineering Services Industry

Outsourcing is the process of delegating a company's business process to third parties or external agencies, leveraging benefits ranging from low-cost labor and improved quality to product and service innovation. When outsourcing crosses national boundaries and is managed by companies located in other countries, it takes the form of offshoring. A hotly debated topic with pros and cons, both outsourcing and offshoring have a direct impact on a company's top and bottom line and have become key components of how successful enterprises are run. Given below, as reflected by companies, are the top ten reasons to outsource:

o Lower operational and labor costs are among the primary reasons why companies choose to outsource. When properly executed, outsourcing has a defining impact on a company's revenue recognition and can deliver significant savings.
o Companies also choose to outsource or offshore so that they may continue focusing on their core business processes while delegating mundane, time-consuming processes to external agencies.
o Outsourcing and offshoring also enable companies to tap into and leverage a global knowledge base, gaining access to world-class capabilities.
o Freeing up internal resources that can be put to effective use for other purposes is also one of the primary benefits realized when companies outsource or offshore.
o Often stranded with internal resource crunches, many world-class enterprises outsource to gain access to resources not available internally.
o Outsourcing is often undertaken to save costs and provide a buffer capital fund that can be leveraged in a manner that best profits the company.
o By delegating responsibilities to external agencies, companies can wash their hands of functions that are difficult to manage and control while still realizing their benefits.
o Outsourcing, and especially offshoring, helps companies mitigate risk, and this is also among the primary reasons it is embarked upon.
o Outsourcing also enables companies to realize the benefits of re-engineering.
o Some companies also outsource to help them expand and gain access to new market areas, by taking the point of production or service delivery closer to their end users.

To summarize, companies undertake outsourcing and offshoring for a variety of reasons depending upon their vision and the purpose of the exercise. While this may vary from company to company, the fruits are visible among some of the leading enterprises worldwide, where outsourcing and offshoring have become a core component of day-to-day business strategy.

An Independent Software Vendor (ISV) is a person or company that develops software. The term implies an organization that specializes in software only and is not part of a computer systems or hardware manufacturer. ISVs generally create application software rather than system software such as operating systems and database management systems.

An Independent Hardware Vendor (IHV) is an organization that makes electronic equipment. The term implies a company that specializes in a niche area, such as display adapters or disk controllers, rather than a computer systems manufacturer.

The advantages of outsourcing are as follows:

Swiftness and expertise: Most of the time, tasks are outsourced to vendors who specialize in their field. The outsourced vendors also have specific equipment and technical expertise, often better than that available at the outsourcing organization. Effectively, the tasks can be completed faster and with better quality output.

Concentrating on core process rather than the supporting ones: Outsourcing the supporting processes
gives the organization more time to strengthen their core business process.

Risk-sharing: One of the most crucial factors determining the outcome of a campaign is risk analysis. Outsourcing certain components of the business process helps the organization shift certain responsibilities to the outsourced vendor. Since the outsourced vendor is a specialist, it can plan the risk-mitigating factors better.

Reduced operational and recruitment costs: Outsourcing eliminates the need to hire individuals in-house; hence recruitment and operational costs can be minimized to a great extent. This is one of the prime advantages of offshore outsourcing.

Business process outsourcing (BPO) is a subset of outsourcing that involves the contracting of the
operations and responsibilities of specific business functions (or processes) to a third-party service provider.
Originally, this was associated with manufacturing firms, such as Coca Cola that outsourced large segments
of its supply chain. BPO is typically categorized into back office outsourcing, which includes internal business functions such as human resources or finance and accounting, and front office outsourcing, which includes customer-related services such as contact center services. BPO that is contracted outside a company's country is called offshore outsourcing. BPO that is contracted to a company's neighboring (or nearby) country is called near-shore outsourcing.

Often the business processes are information technology-based, and are referred to as ITES-BPO, where ITES
stands for Information Technology Enabled Service. Knowledge process outsourcing (KPO) and legal process
outsourcing (LPO) are some of the sub-segments of business process outsourcing industry.

Engineering services are those service functions that deal with or relate to core engineering processes. Examples are:

o CAD/CAM (computer-aided design/manufacturing)
o Auto design
o Failure analysis of structural steel

The distinction that needs to be drawn here is between engineering functions and engineering service
functions. An engineering function could be auto engine manufacturing. A related engineering service
function would be designing the engine. It is similar to the distinction between manufacturing and
manufacturing support services.

Engineering process outsourcing (EPO) for the architecture, engineering and construction (AEC) industry is a
resource for the industries of the built environment. The EPO industry supports architecture, engineering and
construction industries worldwide.

Some of the users of EPO are as follows: Architect, Structural Engineer, Mechanical Engineer, Electrical
Engineer, Landscape architects, Interior designer, Environmental Engineer, civil engineering, general
contractor, subcontractor, quantity surveyor, Design-build teams, project management, construction
management, contract management, Facility management, Building Owners and Managers Association,
manufacturers, cities

Indian vendors are strengthening their position as providers of Information Technology Outsourcing (ITO) and Business Process Outsourcing (BPO) to companies all over the world. There is the possibility of a third major services growth area in India's rapidly evolving economy: Engineering Services Outsourcing (ESO).

Engineering services constitute a huge market. Global spending on engineering services is estimated at $750 billion per year, roughly equal to India's GDP (Gross Domestic Product), and is expected to increase to $1 trillion by 2020. Of the total expenditure, only 12 percent goes to offshore markets, shared among India, Canada, China, Mexico and Eastern Europe. This share is estimated to increase by 25 to 30 percent by 2020. India should build the capacities, capabilities, infrastructure and international reputation needed to become the preferred destination for these high-value services.

India faces the following problems which hold back this growth:

o Less than the projected number of engineers with the specialized skills needed to meet potential demand
o India's weak engineering and physical infrastructure, which hampers growth

It will be difficult for India to succeed in its goals without enhancing its manufacturing capabilities, as ESO has close links with the manufacturing sector, as opposed to ITO/BPO.

The story of India's economic growth over the last twenty years is well known and is often explained as a services-driven phenomenon. However, some manufacturing sectors have played an important role in this economic growth, and the automotive sector is prominent among them. The automotive sector's contribution is not only in terms of revenues, profits, taxes and employment, but more importantly in quality processes, efficiency improvements, and product innovation.

ESO requires a serious commitment from India's business and political leadership to achieve even a moderate degree of success and to make India an attractive business destination. India must equip five to seven cities with world-class infrastructure by 2020. ESO cannot be India's boon without developments in education and physical plant. A 2005 survey by Booz Allen and Duke University's Centre for International Business Education and Research (CIBER) found that:

36 percent of the companies surveyed had sent engineering work offshore
31 percent had sent research & development offshore

16 percent had shipped out a portion of their product design

Offshoring of engineering services has so far been done mainly within advanced countries; only 9 percent of the world's engineering spend goes to low-cost countries. Global spending on engineering services is depicted in Table 5.1.

Spend                                   2006          2020

Global Engineering and R&D Services     $850 billion  $1.1 trillion

Offshore Engineering Services           $25 billion   $180-200 billion

India's Share                           $2.5 billion  $30-40 billion

Table 5.1. Global spending on engineering services
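The growth implied by Table 5.1 can be checked with the standard compound annual growth rate formula, CAGR = (end/start)^(1/years) - 1. The sketch below applies it to the table's offshore engineering services row, $25 billion in 2006 growing to $180-200 billion in 2020 (14 years):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by growth from start to end."""
    return (end / start) ** (1.0 / years) - 1.0

# Offshore engineering services, Table 5.1: $25B (2006) to $180-200B (2020).
low = cagr(25.0, 180.0, 14)   # lower bound of the 2020 estimate
high = cagr(25.0, 200.0, 14)  # upper bound
print(f"implied growth: {low:.1%} to {high:.1%} per year")
```

The result, roughly 15-16% per year, shows how aggressive the forecast in the table is relative to typical industry growth rates.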

Examination of global demand for ESO across five sectors - automobile, aerospace, high-tech/telecom, utilities, and construction/industrial - shows that they account for a high percentage of the global engineering spend. The following trends across the different kinds of service offerings for each of the above sectors are examined:

o Product and component design
o Plant design
o Process engineering
o Plant maintenance and operations

This study focuses on core innovation services, looking at demand, supply and evolving competitive dynamics. India's potential depends on the experience of the engineers and vendors and the expertise available in the country. Vendors - the providers of Information Technology Outsourcing (ITO), Business Process Outsourcing (BPO) and Engineering Services Outsourcing (ESO) - have more years of experience than the engineers who work for them, who typically have only a few years of experience; both vary substantially between sectors, as shown in Table 5.2.

                          Experience (years)
Sector                    Engineers   Vendors
High-tech/Telecom         4.6         12
Automobile                5           10
Aerospace                 4.1         5
Utilities                 3           4
Construction/Industrial   5           5

Table 5.2. Experience in various sectors

Challenges of Indian economy:

The most crucial challenge is the cultivation of talent. At present, about 35,000 engineers work in engineering services; this may need to increase to 250,000 by 2020 for India to reach its potential. India does not yet have the trained professionals to handle the work generated by ESO.

Infrastructure is another major challenge. India lags behind other Asian countries in terms of the speed and cost of internet access, road infrastructure, port infrastructure, air infrastructure and telecom infrastructure. India's telecom infrastructure is more adequate than its other infrastructures.

The following six key steps should be taken by stakeholders in India, from within the business community or elsewhere, to make India an attractive, viable destination for engineering services and to capture $40 billion of the world's offshored engineering:

o Build an "Engineered in India" brand name
o Improve domain expertise
o Focus on creation of infrastructure
o Improve the workforce in terms of quality and quantity
o Align government priorities with business development
o Leverage local business and local demand

Technology Business Research (TBR) believes the Engineering Services Outsourcing (ESO) industry is at the
beginning stages of a growth trajectory. We believe ESO vendors, particularly in India, are positioned to
capture a larger share of the market due to the alignment of their current capabilities and increased focus on
the ESO market. HCL, for example, has utilized its product-focused background and specialized knowledge in
IT to become the largest ESO vendor in India.

TBR believes companies reliant on manufacturing and engineering need to anticipate future needs and prepare
for opportunities they may not be able to address on their own. By partnering with an ESO vendor committed to
expanding capabilities, constantly improving its track record, and pushing current limitations to bring about
the next wave of products, opportunities become limitless.

Engineering services are tasks that involve the nonphysical acts of engineering, such as the preparation,
design, and consulting work supporting engineering. One example is the design of a jet engine; the actual
building of the engine, by contrast, is a manufacturing function. Engineering service providers focus solely
on the services side of engineering and rarely carry out the physical engineering processes they establish,
consult on, and/or manage.

An array of sources indicates the global offshored ESO market will increase from less than $100 billion
currently to at least double that amount by 2020, with 15% to 20% of it coming from Indian companies.

The Engineering industry forms the basis of all major industries across the world. Important industries such
as infrastructure, manufacturing, processing, and metallurgical are heavily dependent on the engineering
industry for their growth. Currently, Engineering contributes 12 percent to the global Gross Domestic Product
(GDP). Within Engineering, the global Research & Development industry reported estimated revenues of USD 1
trillion in 2012. The industry is expected to create revenues of USD 1.4 trillion by 2017, demonstrating an
aggressive growth rate. The Services industry is also experiencing a similar thrust and is predicted to generate
revenues of USD 40-45 billion by 2015 and USD 60-70 billion by 2017, at a Compound Annual Growth Rate
(CAGR) of X percent.
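The CAGR figure itself is left unspecified in the source ("X percent"), but for any pair of endpoint revenues it follows from the standard formula CAGR = (end/start)^(1/years) - 1. A minimal sketch, using the midpoints of the ranges above purely as illustrative inputs:

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound Annual Growth Rate as a fraction (0.20 == 20%)."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Illustrative endpoints only (midpoints of the revenue ranges quoted above):
rate = cagr(45.0, 65.0, 2)   # USD 45 billion in 2015 -> USD 65 billion in 2017
print(f"CAGR: {rate:.1%}")   # about 20.2%
```

The same function applies to any of the market-size projections in this section once both endpoints and the number of years are fixed.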

The Indian Engineering industry is the largest among the industrial segments in the country and provides
direct and indirect employment to more than 4 million skilled and non-skilled professionals. With a major
contribution of 3 percent towards the country's GDP, the industry currently has a turnover of INR XX billion.
The industry has demonstrated exceptional growth over the past five years due to major investments and
policy initiatives by the Indian government and by domestic and foreign players. Global industry leaders are
looking at this market as a manufacturing hub, owing to lower prices of raw materials and services, and the
availability of a skilled labour force.

Indian Engineering R&D offshore market:

Figure 5.1. R&D offshore market

Source: At the release of the report on Indian Engineering services, Bangalore, May 20 (India Tech Online
photo). From left: Vikas Saigal (Booz), Som Mittal (NASSCOM President), Krishna Mikkilineni (Honeywell),
Ketan Bakshi (NASSCOM, Eng. Services Chair) and BVR Mohan Reddy (InfoTech)

The National Association of Software and Service Companies (NASSCOM), in association with the management
consulting firm Booz & Co, has released the key findings of the study "Global ER&D: Accelerating Innovation
with Indian Engineering". The study aims at understanding the changes in customer perspectives about ER&D
services sourcing, the growth trends in the Indian service provider landscape and the opportunity by 2020. It
also identifies and prioritizes key verticals, so that the industry can invest systematically in creating a
sustainable ecosystem.

The report says the engineering services landscape in India has evolved significantly over the last four
years, reflecting maturity, diversification and enhanced virtualization to partner with global corporations.
The Indian ER&D market in 2009 is estimated at $8.3 billion with an employee strength of 150,000, reflecting
almost a threefold growth in revenues, employees and number of offshore development centers. This is expected
to grow further, with the global ER&D spend surpassing $1 trillion in 2009 and expected to touch $1.4 trillion
by 2020. The report estimates that India has the potential to capture $40-45 billion in ER&D services by 2020.

The study reviews eleven major verticals in detail: Telecom, Semiconductors, Consumer Electronics, Medical
Devices, Industrial Automation, Computing Systems, Automotive, Aerospace, Construction and Heavy Machinery,
Energy and Infrastructure. It also reviews India's performance across a broad range of services including
embedded software and hardware design services, testing, prototype building, engineering analysis and
modeling, core product development and design services. At the product development level, strong capabilities
exist in India in automotive interiors and exteriors, aero-structures and propulsion in Aerospace, access
networks, core networks and devices in Telecom, and development of small-medium size products in the
Construction/Heavy Machinery vertical.

Major growth triggers have been identified as:

Continued ER&D investment critical for innovation and penetrating new markets.
Increasing use of electronics, Fuel efficiency/ Alternate Fuels and convergence of technologies driving
future ER&D spend
Greater focus on emerging markets resulting from rise of a new consumer segment with varied
requirements.
Increasing sophistication and maturity of the ER&D services industry
Changing customer perceptions wherein India is being viewed as a strategic partner, focused on
innovation rather than just sustenance engineering.

Key highlights:

Global ER&D spend surpassed USD 1 trillion in 2009 and is expected to touch ~ USD 1.4 trillion by 2020.
Automotive, Consumer Electronics and Telecom are the top spenders on ER&D
Indian ER&D services market reflected revenue growth of over 40% in the last three years, with 2009
revenues amounting to USD 8.3 billion and an increase in employee base from 54,000 in 2006 to 150,000
in 2009
Indian ER&D services market expected to reach USD 40-45 billion by 2020, with export revenues at about
USD 35-40 billion and domestic revenues at USD 4-6 billion
Infrastructure, Aerospace and Energy expected to contribute ~80 per cent of the domestic revenue
Potential to emerge as a Frugal engineering hub

Engineering Research and Development (ER&D):

Applied Research: gaining the knowledge necessary for determining the means by which a recognized and
specific need may be met; it includes investigations directed to the discovery of new knowledge having
specific commercial objectives with respect to products, processes or services.

Development: the systematic utilization of the knowledge gained from research toward the production of
useful materials, devices, systems, or methods, including design and development of prototypes and
processes.

Trend: spending on service-sector R&D is rising and has passed spending on manufacturing R&D (26.5% versus
24% in 1998).

Types of R&D collaboration: cooperation between two firms for joint R&D, R&D consortia between competitors,
federal laboratory-industry R&D collaboration, and university laboratory collaboration.

ER&D value chain:

The value chain is a concept from business management, first described by Michael Porter (1985), used as an
analysis tool for strategic planning. It is a chain of activities: a product or service passes through all
activities of the chain in order, gaining some value at each activity. Porter distinguishes between two
categories, primary activities and support activities, each linked to the efficiency-improving or support
activities of an organization. R&D, according to the value chain definition, includes both activities related
to improving the physical product or process and market and consumer research. Costs and value drivers are
identified for each value activity (figure 5.2).

Figure 5.2. Value chain
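Identifying costs and value drivers per activity, as the paragraph above describes, can be illustrated with a short sketch; the activity names and all figures here are hypothetical, chosen only to show the cost/value/margin arithmetic:

```python
# Hypothetical value activities, each with a cost and the value it adds.
activities = [
    ("inbound logistics",   4.0,  6.0),   # (name, cost, value added)
    ("operations",         10.0, 18.0),
    ("outbound logistics",  3.0,  4.0),
    ("marketing & sales",   5.0,  9.0),
    ("service",             2.0,  5.0),
]

total_cost  = sum(cost  for _, cost, _  in activities)
total_value = sum(value for _, _, value in activities)
margin = total_value - total_cost   # Porter's "margin": value created minus cost incurred
print(f"cost={total_cost}, value={total_value}, margin={margin}")
```

Tabulating each activity this way makes it easy to see which links of the chain create the most value relative to their cost.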

ER&D value chain for various industries:

The various activities of engineering research and development are shown in figure 5.3 and table 5.3.

Figure 5.3. ER&D Activities

Automotive: Fundamental Research; New Product Development; Manufacturing and Industrialization; Quality
Engineering; After-market Engineering
Aerospace: Fundamental Research; New Product Development; MRO
Semiconductors: Material Research; New Product Development; Application Engineering; Manufacturing Engineering
Pharmaceuticals and Bio Tech: Chemical Research; Biological Research; Product Trials and Testing; Claims
Testing

Table 5.3. The Engineering Research & Development activities of various industries differ based on the type
of industry and work performed.

Overview of Global R&D:

The market across various verticals is increasingly shifting towards emerging economies, as follows:

Medical Devices

One of the fastest growing vertical segments globally, owing to increased governmental spend on healthcare
and increased instances of lifestyle diseases.

Consumer Electronics

The market is expected to reach over USD 1.3 trillion, dominated by the US and China.

Industrial Automation

The United States and Chinese markets are expected to be the powerhouses driving 9.5 percent growth in the
global industrial automation market. Emerging geographies are expected to grow at an average of 10-15 percent
year-on-year.

Semiconductor

South East Asian countries will dominate the semiconductor industry in future, as Asia accounts for more than
50% of the market. Growing technological advancement has spurred demand.

Independent Software Vendor (ISV)

After the recession in 2009, the software segment is poised to witness higher growth. The markets for both
enterprise and software have witnessed significant growth.

Telecom

The industry is witnessing de-regulation in some of the world's largest telecom markets, like India and
China, fueling further growth prospects for the industry.

Computer Hardware and Storage

U.S. and China are amongst the fastest growing nations in this segment. The market is expected to hit USD
220+ billion by 2016.

Aerospace

Europe and North America are currently the leading markets. By 2030, regions outside Europe and North America
are expected to own about half the commercial aircraft in service.

Automotive

While sales in North America and Asia grew significantly, Europe continued to witness a decrease in
automotive sales owing to the European debt crisis.

This has created an increased focus on R&D in these high-growth markets, with the US and Europe still
dominating R&D spend.

                    Contribution to R&D (%)

Countries            ISV    Semiconductor    Automotive    Telecom    Aerospace
North America        35     23               17            13         9
Europe               54     7                54            20         10
Rest of the world    35     31               -             28         -
Japan                49     8                49            -          -

Table 5.4. Contribution to R&D

Table 5.4 shows the contribution of various countries to engineering research & development (%) in fields
such as semiconductors, automotive, telecom, aerospace and independent software vendors. This dynamism has
been seen across the sector in the commercial vehicles, utility vehicles, cars, two-wheelers and automobile
component industries. According to a study by the Confederation of Indian Industry, quality defect rates in
manufacturing dropped from as high as 12% in 1998 to 100 ppm in 2008; the Indian automotive sector, which was
at the vanguard of the quality movement, can legitimately take credit for this substantial improvement.
Companies across the automotive sector spectrum have won prestigious Deming and JQM awards. The automotive
sector is the most prominent location of product innovation in Indian manufacturing. It accounts for the
second highest aggregate spending by industry on research and development, following only the pharmaceutical
industry.
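The scale of that defect-rate improvement is easier to appreciate once the percentage is converted to parts per million. A quick sketch of the conversion (the 12% and 100 ppm figures are the ones cited above; everything else is just arithmetic):

```python
def percent_to_ppm(pct: float) -> float:
    """Convert a defect rate in percent to defective parts per million."""
    return pct / 100.0 * 1_000_000

ppm_1998 = percent_to_ppm(12)   # the 12% defect rate cited for 1998
ppm_2008 = 100.0                # the 100 ppm level cited for 2008
print(ppm_1998, ppm_1998 / ppm_2008)   # 120000.0 ppm, i.e. a 1200x reduction
```

In other words, the quoted drop from 12% to 100 ppm is a reduction of more than three orders of magnitude.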

The major areas of focus for most R&D organizations across various verticals are energy efficiency and
convergence, among the others listed in table 5.5.

Engineering Sectors      Focus Areas

Automotive               Energy Efficiency, Automotive Electronics
Aerospace                Green Energy, Integrated Avionics, Higher Electronics Content
Consumer Electronics     Connectivity, Device Convergence, Digitization
Medical Devices          Homecare Solutions, Medical Robotics, Wearable Technologies
Telecom                  Mobile-Cloud Convergence, Device & Network Convergence
ISV                      Redesign for Cloud, Mobile Use, Complex Data Analytics, Context Aware,
                         Vertical Market Specialization
Industrial Automation    Sustainability, Industrial Automation 2.0, Safety
Computer Peripherals     Application Optimized Storage, Miniaturization, Energy Efficiency
Semiconductor            Optoelectronics, Sensors/MEMs

Table 5.5. Key technology focus areas

A product has a definite life and passes through different stages during its lifetime; this progression is
termed the product life cycle.

5.1.2 Product Development in Industry versus Academia

5.1.2.1 Product Development Life Cycle

In industry, product lifecycle management (PLM) is the process of managing the entire lifecycle of a product
from its conception, through design and manufacture, to service and disposal. PLM integrates people, data,
processes and business systems and provides a product information backbone for companies and their
extended enterprise.

Benefits of PLM:

Some of the advantages of product life cycle management are as follows

Reduced time to market
Increased full-price sales
Improved product quality and reliability
Reduced prototyping costs
More accurate and timely request for quote generation
Ability to quickly identify potential sales opportunities and revenue contributions
Savings through the re-use of original data
A framework for product optimization
Reduced waste
Savings through the complete integration of engineering workflows
Documentation that can assist in proving compliance for RoHS or Title 21 CFR Part11
Ability to provide contract manufacturers with access to a centralized product record
Seasonal fluctuation management
Improved forecasting to reduce material costs
Maximize supply chain collaboration

Areas of PLM:

Within PLM there are five primary areas:

Systems engineering (SE)
Product and portfolio management (PPM)
Product design (CAD)
Manufacturing process management (MPM)
Product Data Management (PDM)

Systems engineering is focused on meeting all requirements, primarily meeting customer needs, and
coordinating the systems design process by involving all relevant disciplines. An important discipline for
life cycle management is Reliability Engineering, a subset of Systems Engineering. Product and portfolio
management is focused on managing resource allocation and tracking progress vs. plan for new product
development projects that are in process (or in a holding status). Portfolio management is a tool that assists
management in tracking progress on new products and making trade-off decisions when allocating scarce
resources. Product design is the process of creating a new product to be sold by a business to its customers.
Manufacturing process management is a collection of technologies and methods used to define how products
are to be manufactured. Product data management is focused on capturing and maintaining information on
products and/or services through their development and useful life. Change management is an important
part of PDM/PLM.

Introduction to development process:

The core of PLM (product lifecycle management) is in the creation and central management of all product
data and the technology used to access this information and knowledge. PLM as a discipline emerged from
tools such as CAD, CAM and PDM, but can be viewed as the integration of these tools with methods, people
and the processes through all stages of a products life. It is not just about software technology but is also a
business strategy.

For simplicity, the stages described are shown in a traditional sequential engineering workflow. The exact
order of events and tasks will vary according to the product and industry, but the main processes are:

Conceive

Specification
Concept design

Design

Detailed design
Validation and analysis (simulation)
Tool design

Realize

Plan manufacturing
Manufacture
Build/Assemble
Test (quality check)

Service

Sell and deliver
Use
Maintain and support
Dispose

The major key point events are:

Order
Idea
Kickoff
Design freeze
Launch

The reality is, however, more complex: people and departments cannot perform their tasks in isolation, and
one activity cannot simply finish before the next activity starts. Design is an iterative process; designs
often need to be modified due to manufacturing constraints or conflicting requirements. Where a customer
order fits into the time line depends on the industry type and whether the products are, for example, built
to order, engineered to order, or assembled to order.

Phases of product lifecycle and corresponding technologies:

Many software solutions have been developed to organize and integrate the different phases (figure 5.4) of a
product's lifecycle. PLM should not be seen as a single software product but as a collection of software
tools and working methods integrated together to address single stages of the lifecycle, connect different
tasks, or manage the whole process. Some software providers cover the whole PLM range while others cover a
single niche application. Some applications can span many fields of PLM with different modules within the
same data model. An overview of the fields within PLM is covered here. It should be noted, however, that the
simple classifications do not always fit exactly; many areas overlap and many software products cover more
than one area or do not fit easily into one category. It should also not be forgotten that one of the main
goals of PLM is to collect knowledge that can be reused for other projects and to coordinate simultaneous
concurrent development of many products. It is about business processes, people and methods as much as
software application solutions. Although PLM is mainly associated with engineering tasks, it also involves
marketing activities such as product portfolio management (PPM), particularly with regards to new product
development (NPD). There are several life-cycle models in industry to consider, but most are rather similar.
What follows below is one possible life-cycle model; while it emphasizes hardware-oriented products, similar
phases would describe any form of product or service, including non-technical or software-based products:

Phase 1: Conceive

Imagine, specify, plan, and innovate

The first stage is the definition of the product requirements based on customer, company, market and
regulatory bodies' viewpoints. From this specification, the product's major technical parameters can be
defined. In parallel, the initial concept design work is performed, defining the aesthetics of the product
together with its main functional aspects. Many different media are used for these processes, from pencil and
paper to clay models to 3D CAID (computer-aided industrial design) software.

In some concepts, the investment of resources into research or analysis-of-options may be included in the
conception phase, e.g. bringing the technology to a level of maturity sufficient to move to the next phase.
However, life-cycle engineering is iterative: it is always possible that something doesn't work well enough
in some phase, forcing a step back into a prior phase, perhaps all the way back to conception or research.
There are many examples to draw from.

Phase 2: Design

Describe, define, develop, test, analyze and validate

This is where the detailed design and development of the product's form starts, progressing to prototype
testing, through pilot release, to full product launch. It can also involve redesign and ramp-up for
improvement to existing products, as well as planned obsolescence. The main tool used for design and
development is CAD.
This can be simple 2D drawing / drafting or 3D parametric feature based solid/surface modeling. Such
software includes technology such as Hybrid Modeling, Reverse Engineering, KBE (knowledge-based
engineering), NDT (Nondestructive testing), and Assembly construction. This step covers many engineering
disciplines including: mechanical, electrical, electronic, software (embedded), and domain-specific, such as
architectural, aerospace, automotive, ... Along with the actual creation of geometry there is the analysis of
the components and product assemblies. Simulation, validation and optimization tasks are carried out using
CAE (computer-aided engineering) software, either integrated in the CAD package or stand-alone. These are
used to perform tasks such as stress analysis, FEA (finite element analysis), kinematics, computational
fluid dynamics (CFD), and mechanical event simulation (MES). CAQ (computer-aided quality) is used for tasks

such as dimensional tolerance (engineering) analysis. Another task performed at this stage is the sourcing of
bought-out components, possibly with the aid of procurement systems.
w the aid off procurement systems.

Phase 3: Realize

Manufacture, make, build, procure, produce, sell and deliver

Once the design of the product's components is complete, the method of manufacturing is defined. This
includes CAD tasks such as tool design and the creation of CNC machining instructions for the product's
parts, as well as tools to manufacture those parts, using integrated or separate CAM (computer-aided
manufacturing) software. This will also involve analysis tools for process simulation for operations such as
casting, molding, and die press forming. Once the manufacturing method has been identified, CPM comes into
play. This involves CAPE (computer-aided production engineering) or CAP/CAPP (production planning) tools for
carrying out factory, plant and facility layout and production simulation: for example, press-line simulation
and industrial ergonomics, as well as tool selection management. Once components are manufactured, their
geometrical form and size can be checked against the original CAD data with the use of computer-aided
inspection equipment and software. Parallel to the engineering tasks, sales product configuration and
marketing documentation work take place. This could include transferring engineering data (geometry and
part list data) to a web-based sales configurator and other desktop publishing systems.

Phase 4: Service

Use, operate, maintain, support, sustain, phase-out, retire, recycle and dispose

The final phase of the lifecycle involves managing in-service information: providing customers and service
engineers with support information for repair and maintenance, as well as waste management/recycling
information. This involves using tools such as Maintenance, Repair and Operations Management (MRO) software.
There is an end-of-life to every product. Whether it be disposal or destruction of material objects or
information, this needs to be considered, since it may not be free from ramifications.

Conceive | Design | Realise | Service

Figure 5.4. All phases: product lifecycle

All phases: product lifecycle (figure 5.5)

Communicate, manage and collaborate

None of the above phases can be seen in isolation. In reality a project does not run sequentially or in
isolation of other product development projects. Information flows between different people and systems. A
major part of PLM is the co-ordination and management of product definition data. This includes managing
engineering changes and release status of components; configuration of product variations; document
management; planning project resources and timescale; and risk assessment.

For these tasks, graphical, text and metadata such as product bills of materials (BOMs) need to be managed.
At the engineering department level this is the domain of PDM (product data management) software; at the
corporate level, EDM (enterprise data management) software. These two definitions tend to blur, but it is
typical to see two or more data management systems within an organization. These systems are also linked to
other corporate systems such as SCM, CRM, and ERP. Associated with these systems are project management
systems for project/program planning. This central role is covered by numerous collaborative product
development tools which run throughout the whole lifecycle and across organizations. This requires many
technology tools in the areas of conferencing, data sharing and data translation. One growing field is
product visualization, which includes technologies such as DMU (digital mock-up), immersive virtual digital
prototyping (virtual reality), and photo-realistic imaging.
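As a small illustration of why structured BOM data matters, here is a sketch of a recursive cost roll-up over a hypothetical BOM; the part names, costs and quantities are invented for the example, and real PDM systems store far richer records:

```python
# Hypothetical BOM nodes: each part has its own cost, an optional quantity,
# and optional child parts.
def bom_cost(part: dict) -> float:
    """Roll up total cost: the part's own cost plus each child's rolled-up
    cost multiplied by its quantity (default 1)."""
    return part["cost"] + sum(
        bom_cost(child) * child.get("qty", 1)
        for child in part.get("children", [])
    )

engine = {
    "name": "jet engine", "cost": 1000.0,
    "children": [
        {"name": "fan blade", "cost": 50.0, "qty": 20},
        {"name": "combustor", "cost": 400.0},
    ],
}
print(bom_cost(engine))  # 1000 + 20*50 + 400 = 2400.0
```

The same tree walk underlies many PDM operations, such as where-used queries and change impact analysis, which is why keeping the BOM as structured data rather than flat documents pays off.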

Figure 5.5. Product development cycle

Industry and academia are inter-related, and both benefit mutually: academia provides the human resources,
and industry is where they are absorbed.

5.1.2.2 Industry-Academia Interaction:

A productive interface between academia and industry (figure 5.6), in the present times of knowledge
economy, is a critical requirement. The industry academia interface is all about knowledge transfer and
experience/technology transfer.

Universities and industry, which, for long have been operating in separate domains, are rapidly inching closer
to each other to create synergies. The constantly changing management paradigms, in response to growing
complexity of the business environment today have necessitated these two to come closer.

Indian industry, after liberalization, has become marginally more aware of the vital linkage between the
education system and business and corporate productivity. Even with this awareness, its engagement with
academia is more ritualistic than real. Indian industry is myopically disengaged, if not wholly divorced,
from Indian academia.

As much as management institutes aim to provide well-groomed manpower to industry, the latter needs to
involve itself in the affairs of the former to improve the quality of that manpower. There exists a
principal-agent relationship between institute and industry. In fact, the input of one is critical for the
other.

Internships, an example of successful cooperation between industry and academics, are designed to help
students develop vocational self-concept, acquire job relevant skills and provide informed career decision
making ability.

The various challenges currently facing Academia-Industry collaborations are awareness, identification,
evaluation, protection and commercialization of ideas.

In human resource management parlance, an expression that has interested many of late is the industry-
academia interface. A concept that has been doing the rounds of boardrooms, premier educational institutes
and even state bodies, this could be another public-private success story. The end result: a secured future
for aspirants, less time and capital invested in grooming freshers, and financial backing for the partner
academic institutes.

As institutes committed primarily to creation and growth of technological knowledge, the IITs have an
important role to play in the industrial sector of the country's economy.

The Critical Areas

The domains in which interaction is theoretically possible are:

Industry support to basic research for knowledge creation
Industry participation in technology development involving some exploratory work
Academic intervention in solving specific industry problems
Laboratory utilization by industry
Continuing education programme

Of the above, interaction at the level of industry support to basic research is virtually nonexistent, whereas at
the level of industry participation in technology development, some interaction, particularly with large public
sector enterprises, has been witnessed. Industrial problem solving constitutes, by and large, a successful
initiative, though not actualized to its full potential, since interaction in this domain is largely contingent
upon the presence of a strong industrial base in the region. One might, however, add that such a constraint is
of little consequence if the interaction is in areas where the Institute has recognized expertise. Laboratory
utilization by industry for developmental purposes and for material and product testing has seen relatively
greater success. Continuing education programme has been a time tested platform for interaction, with
participation from industry gradually on the increase.

Science -> Academia -> Technology -> Industry -> Product/Application

Figure 5.6. Industry vs. academia

Industry Needs and Expectations

Industry's enduring interest lies in targeted development. Large scale industry has the resources to invest
in new technology development initiatives, but it often tends to rely on bought-out technologies, generally
from overseas. Academic intervention may be required in minor technological innovation/modification aimed at
technology absorption/implementation. In the case of medium and small scale industry, the needs are primarily
oriented towards problem solving, with support required in the areas of design, process improvement, and
plant and machinery performance. This industry segment may also need academic intervention in reverse
engineering, where the product exists and what is sought to be developed is a process to yield it. There may
be some appreciation, specifically in the case of medium scale industry, of the need for parallel exploration
of a new product line triggering a focused developmental activity, which might be carried out in-house or in
collaboration with academia. Small scale industries dealing with specific products, or ancillary units which
act as feeders to medium or large scale industry, do not generally seem to have development-driven needs. In
this case, problem solving may simply amount to product testing and production enhancement in terms of
quantity and quality.

In its interaction with the academia, industry's expected time frames are immediate, and investment is
directed towards efforts that promise result oriented solutions. The costing frames are typically guided by a
reluctance to invest in technology R&D which has either long term or unclear outcomes.

Academia Aspirations

For academicians, the primary focus of interest is invariably a problem that throws up an intellectual
challenge. Technology development initiatives which involve understanding/ exploration of a
concept/phenomenon and alternative methodologies, etc., related to process and design improvement could
be of considerable interest. Academic environments value the autonomy of the individual researcher and
there is a strong preference for working towards creation of knowledge in specialized domains. Typically,
academic interest in the multidimensionality of a problem leads to a tendency to explore a variety of options
to arrive at a solution. Such activity consumes both time and effort and the result may often be inimical to
what the industry would regard as a wholesome solution. Time frames at institutes like the IITs are governed
also by research guidance and teaching priorities of the academic community. Globally, it is funding
considerations that orient academicians towards sponsored R&D activities, enabling them, thereby, to
sustain their broader research interests. It is not clear whether such compulsions are at work in the context of
the IITs.

The Mismatch

The gap between industry's needs and the academic community's aspirations appears to be considerably
large.
For academia, technology development amounts to conceptualization and execution coupled with
validation at the laboratory level.
For industry, the interest lies in translating the laboratory validated concept into a commercial proposition,
where the most important considerations are those of economic viability.
The industrial R&D in the country should actually be focused on this phase of technology development
where laboratory models are scaled up and converted into commercially viable products/processes.
Evolving a laboratory-proven idea into an implementable technology is a kind of effort which the academic
community does not appear to be fully geared towards, at least at present.

Improvement of Industry-Academia interaction:

The following guidelines should be followed:

For the Academic Community,

Bring the real world into the classroom or take the classroom into the real world
Require international studies
Explore new research opportunities
Stay connected to industry
Influence other academic communities

For the Industry Community

Offer more of the work opportunities that students and professors seek
Build deeper relationships with students
Redistribute the funding

For Academia and Industry together

Expand the collaboration
Halt the impending identity crisis
Expand the diversity of the design community
Modify academic rewards structures to encourage collaboration
Seek creative synergies

An important parameter of success for any B-school is its ability to offer corporate interface for its students,
which enhances their practical knowledge to face the corporate world.

Objective of partnership:

Major source of research funding for academia.
Industry gains valuable insight from key opinion leaders.
Complementary capabilities and skill sets.
Industry trends and practices.
Designing the course curriculum and other value-added programs based on industry requirements.
Source of external projects sponsored by companies.
Bringing in consultancy projects.
Creating employable, industry-ready students.
Improvements in curricula, faculty, infrastructure and pedagogy in line with the industry's demand for skilled professionals.

Different gateways:

The different gateways for industry-academia interaction are as follows:

Concept of an Industry-Institute Partnership Cell: a dedicated effort to institutionalize the initiatives.
Guest lectures by experienced industry professionals.
Industrial visits.
Deputing faculty to industry to work during the lean period.
Organizing workshops/seminars periodically and inviting corporate people to deliver lectures and interact.
Joint Faculty Development Programmes (FDPs).
Panel discussions.
CEO interactions.
Corporate excellence award functions.

Industry-Academia Linkage:

Figure 5.7. Industry-academia linkages

The interaction of academia and industry is shown in figure 5.7.

Initiatives of Industry-Academia interface:

Companies like Pantaloon Retail (part of the Future Group) started this interface as an innovation, some
(especially IT companies) as the need of the hour, and some (aligning with ITI and government-run institutes)
as a social endeavor. Several of these courses have been conceptualized by industry associations like
Nasscom, with the support of member companies. The programs, though varied in terms of partners, thought
and duration, are meant to hone professional skills and eventually help the company and the booming
economy.

ICICI UDAAN.
Infosys Campus Connect.
L&T InfoTech Sparsh.
TCS AIP.
MOU between NASSCOM & UGC.
3D printing from MIT researchers, USA.
Aakash tablet from IIT researchers, India.

The subsequent sections describe industry-oriented product development processes, which are important for any kind of industry.

5.2 The IPD Essentials

5.2.1. Vertical specific product development process:

In chapter 1, the generalised new product development (NPD) process was discussed. A vertical-specific product development process is the process of developing a product for a specific vertical or industry, e.g. automotive, aerospace, steel or heavy industry. In this topic, we will learn about vertical-specific product development processes through some examples. It is recommended that each vertical-specific process be compared with the generic new product development process for better understanding.

5.2.1.1 Vertical specific product development process in automotive industry: The product development steps are as follows:

1. Idea generation: It is a process of identifying and activating the sources of new ideas and developing a bank of ideas. These sources include internal sources (all the departments of the organisation working on the product development) as well as customers, competitors, distributors and suppliers.
2. Idea screening: In this step, ideas are assessed by taking market size, product (automobile) size, development time and cost, manufacturing costs, rate of return, etc. into consideration. After the assessment, some of the ideas are selected for further processing.
3. Concept development and testing: In this step, the selected ideas are taken together to develop proper concepts; in this way several concepts are formed. These concepts are then tested with a group of customers and against the organisation's own capabilities, and the best concept is chosen for further processing.
4. Marketing strategy development: After concept development, a strategy is required for marketing the concept (automobile). In this step, strategies are made for deciding the target market, sales and profit targets, price of the product (automobile), mode of distribution into the market, etc. Apart from these, the marketing budget for the product (automobile) is also decided.
5. Business analysis: In this step, a final decision is taken on whether to accept or reject the concept/product (automobile). This decision is taken after analysing the product from an economic point of view, i.e. sales and profit targets, market share and competitiveness.
6. Product development and testing: In this step, prototypes/models of the automobile are made, which are in turn tested and analysed against various performance criteria. These processes include drawing and designing of the automobile; FE, CFD and crash analysis; lab testing; field testing; etc. After passing all the tests, the automobile is approved for development.
7. Test Marketing: A small scale test of the product is carried out in this step. The purpose is to measure
product appeal under the combined effect of salesmanship, advertising, sales promotion, distributor
incentives, public relations, etc.

There are basically 3 kinds of test marketing:

Standard test marketing: Full marketing campaign in a few chosen cities.
Controlled test marketing: It is done only through a very limited number of showrooms.
Simulated test marketing: It is done with a very limited number of consumers.

8. Commercialization: Finally, in this step, the automobile is fully launched in the market. It may initially be launched in some particular areas, with the market then being expanded gradually.
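The steps above act as a stage-gate funnel in which ideas are progressively filtered out at each gate. The sketch below illustrates this idea in Python; the gate names follow the steps above, while the candidate ideas, scores and thresholds are invented purely for illustration.

```python
# A stage-gate sketch of the idea funnel described above. Gate names
# follow the text; scores and thresholds are illustrative only.

GATES = [
    ("idea_screening", 0.5),     # market size, cost, rate of return
    ("concept_testing", 0.6),    # customer response, capability fit
    ("business_analysis", 0.7),  # sales and profit targets, market share
    ("product_testing", 0.8),    # lab and field test results
    ("test_marketing", 0.8),     # small-scale market response
]

def run_funnel(ideas):
    """Pass each idea through every gate; an idea is dropped at the first gate it fails."""
    survivors = dict(ideas)
    for gate, threshold in GATES:
        survivors = {name: scores for name, scores in survivors.items()
                     if scores[gate] >= threshold}
    return sorted(survivors)  # ideas cleared for commercialization

ideas = {
    "city_hatchback": {"idea_screening": 0.9, "concept_testing": 0.8,
                       "business_analysis": 0.8, "product_testing": 0.9,
                       "test_marketing": 0.9},
    "solar_roadster": {"idea_screening": 0.7, "concept_testing": 0.5,
                       "business_analysis": 0.9, "product_testing": 0.9,
                       "test_marketing": 0.9},
}

print(run_funnel(ideas))  # only ideas that clear every gate remain
```

Note how the second idea is eliminated at concept testing even though its later scores are high: in a stage-gate process, failing any single gate ends the idea's progress.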

5.2.1.2 Vertical Specific Product Development Process for medical devices:

The product development steps are shown in figure 5.8.

Figure 5.8. Vertical Specific Product Development Process for medical devices

Source: TCS
There are several steps which are quite specific for the development of a medical device. Such steps are
mentioned below:

1. Product concept: Initially, a concept for a new or revised device is drawn up.
2. Product requirements: Next, the basic requirements that the device is supposed to fulfil are listed, e.g. performance, operating conditions, reliability, biocompatibility, electromagnetic compatibility, disposal issues, complexity of manufacturing processes, sterility, etc.
3. Product specifications: In this step, detailed specifications of the device and its components are given. These specifications cover all features, quantified and qualified.
4. Testing: In this step, testing is carried out. There are three stages of testing:
Bench testing: Here, critical evaluation of the device is done to ensure that it remains in the required condition during application.
Animal testing: The devices are tested on non-human animals before being used on humans. Biocompatibility testing of the device materials is also done on animals. Even though animal anatomy differs from that of humans, these tests are still carried out on animals.
Clinical testing: At last, the devices are tested on a group of human subjects, in trials usually sponsored by the investigator.

5. Pilot plant scale-up manufacturing: Pilot plants are small industrial systems which are operated to generate information about the behaviour of the system for use in the design of larger facilities. These pilot plants are used to reduce the risk associated with the construction of large process plants. This is done with the help of chemical similitude studies, mathematical modelling, finite element analysis, computational fluid dynamics, etc. Depending upon the results obtained from the pilot plant, the device is scaled up to the actual required size and manufactured at that size.
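The three test stages above are strictly sequential: a device must pass bench testing before animal testing, and animal testing before clinical testing, before it can proceed to pilot-plant scale-up. A minimal sketch of that ordering (stage names from the text; the pass/fail results are invented):

```python
# Sequential test stages for a medical device, as described above.
# Each stage must pass before the next may begin; results here are
# illustrative only.

STAGES = ["bench", "animal", "clinical"]

def completed_stages(results):
    """Return the stages passed in order, stopping at the first failure
    or at the first stage with no recorded result."""
    passed = []
    for stage in STAGES:
        if not results.get(stage, False):
            break
        passed.append(stage)
    return passed

# A device that passes bench and animal testing but fails clinically
# may not proceed to pilot-plant scale-up.
print(completed_stages({"bench": True, "animal": True, "clinical": False}))
```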
Case Study: Product development of Tata Nano:

1. Idea generation: It was often noticed that a common man's family travels on a two-wheeler scooter with one child standing in front and another sitting between the father and mother. In addition, poor roads and bad weather make such travel dangerous and uncomfortable for the family. Here, the idea was to develop a cheap mode of transport which a common man could afford and which could provide comfort and safety even in poor travel conditions.
2. Idea screening: In response to the requirement, various ideas were brought up, like:
A four-wheeled open car
Rolled-up plastic curtains in place of windows
A four-wheeled car made of engineering plastics
An auto rickshaw with four wheels, etc.

But the market requirement was a proper four wheeled car, hence the existing model of Nano was chosen.

3. Concept development and testing: After the ideas were screened, concept development was done. Several features were decided to be incorporated in the upcoming car:
The car was to be built on a non-conventional platform.
All safety regulations were to be fulfilled.
It was to be produced at a scale double that of existing models.
The car was also to meet some international standards, so that if the home market rejected the car, it could be exported.
The car was to become an example to the entire world of the capability to accept such challenges, and more.

With these priorities, concepts were made and tested. Since the challenges were very difficult, there were several failures, but these failures did not stop the effort. Finally, the required model was developed.

4. Business analysis: The biggest challenge in this step was to limit the cost of the Nano to INR 1 lakh. It was clear that conventional design and manufacturing methods would not meet this target, so the focus was on:
Incorporating significantly cheaper technology.
Design that reduces material consumption.
Opting for alternative suppliers.
Opting for alternative materials.
Establishing a factory in a tax-free zone.
Getting maximum tax advantages.

The second challenge was to decide on the quantity to be produced. It was estimated that the demand for the Nano should be at least twice that of the Maruti 800, and hence an initial production capacity of 5,00,000 cars per annum was decided.

5. Product development: The required product (Nano) was developed with features like:

Bosch 624 c.c. twin-cylinder engine
4-speed manual gear box
All-aluminum engine
Light in weight, hence better mileage
Dimensions: length 3.1 m, width 1.5 m, height 1.6 m
Comfortable leg room
Safety standards met, etc.

6. Commercialization: Finally, after so many constraints, the Tata Nano was launched and was a success, although, due to delays in production, its cost eventually exceeded INR 1 lakh.

A product offers a number of benefits, but they cannot all be realised simultaneously; one benefit often has to be compromised for another.

5.2.2 Product Development Trade-offs:

A trade-off (or tradeoff) is a situation that involves losing one quality or aspect of something in return for gaining another. It usually implies a decision made with full comprehension of both the upside and downside of a particular choice; the term is also used in an evolutionary context, in which case the selection process acts as the "decision-maker". A trade-off is thus basically an exchange of one thing for another, better defined as the replacement of one benefit or advantage with another. Why must we choose only some of the available alternatives instead of all? Because in many situations we are bound to choose only certain options from the available alternatives. For example, while making a purchase, we cannot select the option that is simultaneously the cheapest and the most advanced: the price of a product obviously rises with its advancement or improvement. In such conditions, we need to balance our demands. When customers go to buy a car, they face constraints such as cost, performance, life, and advanced technology with ease of handling, and each user decides on a car as per his or her own constraints and suitability. The same holds in product development: the ideal material may not come in the ideal color. It may be difficult to mold, machine, or otherwise manufacture in the desired shape. It may not fit your budget. And even if that light, sleek, sexy little device you've created in prototype does seem perfect in every way, it may prove otherwise when it falls off the desk and all its little lights go out.
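One common way to make such a trade-off explicit is a weighted scoring matrix: each alternative is scored against each criterion, the scores are weighted by how much the buyer values that criterion, and the weighted totals are compared. The criteria, weights and scores below are purely illustrative, not real market data:

```python
# Weighted scoring matrix for the car-purchase trade-off described
# above. Weights and scores are illustrative assumptions.

weights = {"cost": 0.4, "performance": 0.3, "life": 0.2, "handling": 0.1}

# Scores on a 1-10 scale: a cheap basic car vs. an advanced costly one.
cars = {
    "basic_car":    {"cost": 9, "performance": 4, "life": 6, "handling": 5},
    "advanced_car": {"cost": 3, "performance": 9, "life": 8, "handling": 9},
}

def weighted_score(scores):
    # Sum of (criterion weight x criterion score) over all criteria.
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in cars.items():
    print(name, round(weighted_score(scores), 2))
```

With this cost-heavy weighting the basic car edges out the advanced one; shifting the weights towards performance reverses the decision, which is exactly the point: the "best" choice depends on which benefits the buyer is willing to trade away.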

Before developing a product, a company has to know the various ways of registering the product so that only the company itself may develop and manufacture it. The following section therefore throws light on intellectual property rights and confidentiality.

5.2.3 Intellectual Property Rights and Confidentiality

Intellectual property

Intellectual property (figure 5.9) refers to creations of the mind: inventions; literary and artistic works; and
symbols, names and images used in commerce. Intellectual property is divided into two categories:

Intellectual property

Copyrights: literary works (such as novels, poems and plays), films, music, artistic works (drawings, paintings, photographs and sculptures), and architectural designs.

Industrial property: patents for inventions, trademarks, industrial designs, and geographical indications.

Figure 5.9. Intellectual property

Intellectual Property rights

Intellectual property rights are like any other property right. They allow creators, or owners, of patents,
trademarks or copyrighted works to benefit from their own work or investment in a creation. These rights are
outlined in Article 27 of the Universal Declaration of Human Rights, which provides for the right to benefit
from the protection of moral and material interests resulting from authorship of scientific, literary or artistic
productions.

The importance of intellectual property was first recognized in the Paris Convention for the Protection of
Industrial Property (1883) and the Berne Convention for the Protection of Literary and Artistic Works (1886).
Both treaties are administered by the World Intellectual Property Organization (WIPO).

Patent

A patent is an exclusive right granted for an invention, i.e. a product or process that provides a new way of doing something, or that offers a new technical solution to a problem. A patent provides patent owners with protection for their inventions. Protection is granted for a limited period, generally 20 years from the date of filing of the patent application. A patent is a territorial right and can be granted and enforced only in a given country. That means if you want patent protection in Japan, you will have to file the patent application with the patent office in Japan.

Necessity of patents

Patents provide incentives to individuals by recognizing their creativity and offering the possibility of material
reward for their marketable inventions. These incentives encourage innovation, which in turn enhances the
quality of human life.

Kinds of protection patent offers

Patent protection means an invention cannot be commercially made, used, distributed or sold without the
patent owner's consent. Patent rights are usually enforced in courts that, in most systems, hold the authority
to stop patent infringement. Conversely, a court can also declare a patent invalid upon a successful challenge
by a third party.

Role played by patents in everyday life

Patented inventions have pervaded every aspect of human life, from electric lighting (patents held by Edison
and Swan) and sewing machines (patents held by Howe and Singer), to magnetic resonance imaging (MRI)
(patents held by Damadian) and the iPhone (patents held by Apple). In return for patent protection, all patent
owners are obliged to publicly disclose information on their inventions in order to enrich the total body of
technical knowledge in the world. This ever increasing body of public knowledge promotes further creativity
and innovation. Patents therefore provide not only protection for their owners but also valuable information
and inspiration for future generations of researchers and inventors.

Procedure to grant patent

The first step in securing a patent is to file a patent application. The application generally contains the title of
the invention, as well as an indication of its technical field. It must include the background and a description
of the invention, in clear language and enough detail that an individual with an average understanding of the
field could use or reproduce the invention. Such descriptions are usually accompanied by visual materials, such as drawings, plans or diagrams, that describe the invention in greater detail. The application also contains various claims, that is, information to help determine the extent of protection to be granted by the patent.

Patent-granting bodies

Patents are granted by national patent offices or by regional offices that carry out examination work for a group of countries, for example the European Patent Office (EPO) and the African Intellectual Property Organization (OAPI). Under such regional systems, an applicant requests protection for an invention in one or more countries, and each country decides whether to offer patent protection within its borders. The WIPO-administered Patent Cooperation Treaty (PCT) provides for the filing of a single international patent application that has the same effect as national applications filed in the designated countries. An applicant seeking protection may file one application and request protection in as many signatory states as needed.

Types of patents

The patents are classified as follows (figure 5.10):

Patent

Utility patent Design patent Plant patent

Figure 5.10. Types of patent

Utility patent: This is the most important type of patent; it is granted on the functional aspect of the invention. This type of patent is the most sought after and requires a lot of skill in drafting the application and prosecuting it before a patent office. The functional utility of the invention is protected.

Design patent: This type of patent is granted for the ornamental or external appearance of the invention. If a design is a functional necessity, then it cannot be registered as a design patent. For example, the aerodynamic shape of a plane cannot be registered as a design patent, as the shape is important for the smooth functioning of the invention itself.

Plant patent: This type of patent is granted for plant varieties produced through asexual reproduction.

Trademarks and Service marks

Trademarks

A trademark is a distinctive sign that identifies certain goods or services produced or provided by an individual
or a company. Its origin dates back to ancient times when craftsmen reproduced their signatures, or marks,
on their artistic works or products of a functional or practical nature. Over the years, these marks have evolved
into today's system of trademark registration and protection. The system helps consumers to identify and purchase a product or service based on whether its specific characteristics and quality, as indicated by its unique trademark, meet their needs.

Confidential business information that provides an enterprise with a competitive edge may be considered a trade secret. Usually these are manufacturing or industrial secrets and commercial secrets, including sales methods, distribution methods, consumer profiles, advertising strategies, lists of suppliers and clients, and manufacturing processes. Contrary to patents, trade secrets are protected without registration. A trade secret can be protected for an unlimited period of time, but a substantial element of secrecy must exist, so that, except by the use of improper means, there would be difficulty in acquiring the information. Considering the vast availability of traditional knowledge in the country, protection under this head will be crucial in reaping benefits from such knowledge. Trade secrets and traditional knowledge are also interlinked with geographical indications. A well-known example of a trade secret is the recipe formula for Coca-Cola.

Difference between Trademarks and Service marks

A trademark is a word, name, symbol, or device used in trade with goods to indicate the source of the goods and to distinguish them from the goods of others. A service mark is the same as a trademark except that it identifies and distinguishes the source of a service rather than a product. The terms "trademark" and "mark" are commonly used to refer to both trademarks and service marks. Trademark rights may be used to prevent
others from using a similar mark which may be confusing (i.e. a pizza delivery car with the same company
markings as a police car), but not to prevent others from making or selling the same goods or services under a
clearly different mark. Trademarks which are used in US interstate or foreign commerce may be registered
with the USPTO (US Patent and Trademark Office).

Copyrights

Copyright laws grant authors, artists and other creators protection for their literary and artistic creations, generally referred to as "works". A closely associated field is "related rights", i.e. rights related to copyright that
encompass rights similar or identical to those of copyright, although sometimes more limited and of shorter
duration. Works covered by copyright include, but are not limited to: novels, poems, plays, reference works,
newspapers, advertisements, computer programs, databases, films, musical compositions, choreography,
paintings, drawings, photographs, sculpture, architecture, maps and technical drawings.

Regulations of copyrights

Copyright and related rights protection is obtained automatically without the need for registration or other
formalities. However, many countries provide for a national system of optional registration and deposit of
works. These systems facilitate, for example, questions involving disputes over ownership or creation,
financial transactions, sales, assignments and transfer of rights.

Many authors and performers do not have the ability or means to pursue the legal and administrative
enforcement of their copyright and related rights, especially given the increasingly global use of literary, music
and performance rights. As a result, the establishment and enhancement of collective management
organizations (CMOs), or societies, is a growing and necessary trend in many countries. These societies can
provide their members with efficient administrative support and legal expertise in, for example, collecting,
managing and disbursing royalties gained from the national and international use of a work or performance.
Certain rights of producers of sound recordings and broadcasting organizations are sometimes managed
collectively as well.

Difference between patents and copyrights

There are important differences between patents and copyright. A copyright covers the expression of a given
work, but does not stop someone appropriating ideas embedded within that work. This mostly affects
software programs. If a program has a clever idea embedded within it and the writer wishes to protect the use
of those ideas, the writer will be unlikely to be able to do so through copyright but may be able to do so
through a patent. Bear in mind, however, that copyright protection lasts between 50 and 75 years (internationally), compared to a (US) patent life of 20 years from the date of application.

Benefit of copyright

Copyright and related rights protection is an essential component in fostering human creativity and
innovation. Giving authors, artists and creators incentives in the form of recognition and fair economic reward
increases their activity and output and can also enhance the results. By ensuring the existence and
enforceability of rights, individuals and companies can more easily invest in the creation, development and
global dissemination of their works. This, in turn, helps to increase access to and enhance the enjoyment of
culture, knowledge and entertainment the world over and also stimulates economic and social development.

Industrial design

An industrial design refers to the ornamental or aesthetic aspects of an article. A design may consist of three-
dimensional features, such as the shape or surface of an article, or two-dimensional features, such as
patterns, lines or color. Industrial designs are applied to a wide variety of industrial products and handicrafts:
from technical and medical instruments to watches, jewelry and other luxury items; from housewares and
electrical appliances to vehicles and architectural structures; from textile designs to leisure goods.

To be protected under most national laws, an industrial design must be new or original and non-functional.
This means that an industrial design is primarily of an aesthetic nature, and any technical features of the
article to which it is applied are not protected by the design registration. However, those features could be
protected by a patent.

Protection of industrial design

Industrial designs are what make an article attractive and appealing; hence, they add to the commercial value
of a product and increase its marketability. When an industrial design is protected, the owner (the person or entity that has registered the design) is assured an exclusive right and protection against unauthorized copying or imitation of the design by third parties. This helps to ensure a fair return on investment. An
effective system of protection also benefits consumers and the public at large, by promoting fair competition
and honest trade practices, encouraging creativity and promoting more aesthetically pleasing products.
Protecting industrial designs helps to promote economic development by encouraging creativity in the
industrial and manufacturing sectors, as well as in traditional arts and crafts. Designs contribute to the
expansion of commercial activity and the export of national products. Industrial designs can be relatively
simple and inexpensive to develop and protect. They are reasonably accessible to small and medium-sized
enterprises as well as to individual artists and crafts makers, in both developed and developing countries.

How to protect industrial designs

In most countries, an industrial design must be registered in order to be protected under industrial design law.
As a rule, to be registrable, the design must be new or original. Countries have varying definitions of such
terms, as well as variations in the registration process itself. Generally, "new" means that no identical or very
similar design is known to have previously existed. Once a design is registered, a registration certificate is
issued. Following that, the term of protection granted is generally five years, with the possibility of further
renewal, in most cases for a period of up to 15 years. Hardly any other subject matter within the realm of
intellectual property is as difficult to categorize as industrial designs. And this has significant implications for
the means and terms of its protection. Depending on the particular national law and the kind of design, an
industrial design may also be protected as a work of applied art under copyright law, with a much longer term
of protection than the standard 10 or 15 years under registered design law. In some countries, industrial
design and copyright protection can exist concurrently. In other countries, they are mutually exclusive: once
owners choose one kind of protection, they can no longer invoke the other. Under certain circumstances an
industrial design may also be protectable under unfair competition law, although the conditions of protection
and the rights and remedies available can differ significantly.

A system is composed of several components and has to serve various functions; hence, configuration management comes into the picture.

5.2.4 Security & Configuration management

Configuration

A configuration consists of the functional, physical, and interface characteristics of existing or planned
hardware, firmware, software or a combination thereof as set forth in technical documentation and
ultimately achieved in a product. The configuration is formally expressed in relation to a functional, allocated, or product configuration baseline.

Configuration management

Configuration management permits the orderly development of a system, subsystem, or configuration item.
A good configuration management program ensures that designs are traceable to requirements, that change
is controlled and documented, that interfaces are defined and understood, and that there is consistency
between the product and its supporting documentation. Configuration management provides documentation
that describes what is supposed to be produced, what is being produced, what has been produced, and what
modifications have been made to what was produced.

Configuration management is performed on baselines, and the approval level for configuration modification
can change with each baseline. In a typical system development, customers or user representatives control
the operational requirements and usually the system concept. The developing agency program office normally
controls the functional baseline. Allocated and product baselines can be controlled by the program office, the
producer, or a logistics agent depending on the life cycle management strategy. This sets up a hierarchy of
configuration control authority corresponding to the baseline structure. Since lower level baselines have to
conform to a higher-level baseline, changes at the lower levels must be examined to assure they do not
impact a higher-level baseline. If they do, they must be approved at the highest level impacted.

For example, suppose the only engine turbine assembly affordably available for an engine development
cannot provide the continuous operating temperature required by the allocated baseline. Then not only must
the impact of the change at the lower level (turbine) be examined, but the change should also be reviewed for
possible impact on the functional baseline, where requirements such as engine power and thrust might
reside.

Configuration management is supported and performed by integrated teams in an Integrated Product and
Process Development (IPPD) environment. Configuration management is closely associated with technical
data management and interface management. Both are essential for proper configuration management, and
the configuration management effort has to include them.

Security management

For any business organization, the security of its data, of whatever kind, is critically important (figure 5.11).
Security management capabilities help an organization control access to data, applications, and tools;
reduce IT administration effort; and maintain a high level of protection for its business systems.
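Controlling access to data and applications, as mentioned above, is commonly implemented with some form of access-control list. A minimal sketch follows; the roles, resources, and function name are invented for illustration:

```python
# Minimal role-based access-control sketch (roles and resources are illustrative).
PERMISSIONS = {
    "analyst": {"customer_lists": {"read"}},
    "finance": {"financial_data": {"read", "write"}},
    "admin":   {"customer_lists": {"read", "write"},
                "financial_data": {"read", "write"}},
}

def is_allowed(role, resource, action):
    """Grant access only if the role explicitly permits the action."""
    return action in PERMISSIONS.get(role, {}).get(resource, set())

print(is_allowed("analyst", "customer_lists", "read"))   # True
print(is_allowed("analyst", "financial_data", "read"))   # False
```

The default-deny behaviour (unknown roles or resources get no access) reflects the usual security principle of granting only what is explicitly permitted.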

Information Security: Information has been valuable since the dawn of mankind: for example, where to find
food or how to build shelter. As access to computer-stored data has increased, information security has
become correspondingly important. In the past, most corporate assets were hard or physical: factories,
buildings, land, raw materials, and so on. Today, far more assets are computer-stored information, such as
customer lists, proprietary formulas, marketing and sales information, and financial data. Some financial
assets exist only as bits stored in various computers.

FSIPD 265
Figure 5.11. Data security

ISMS (Information Security Management system)

An Information Security Management System (ISMS) is a way to protect and manage information based on a
systematic business-risk approach: to establish, implement, operate, monitor, review, maintain, and improve
information security. It is an organization-wide approach to information security.

ISO 27001

ISO 27001 is an internationally recognized, independent specification for information security
management. It provides an extensive checklist of best-practice security controls which must be considered
for use in an organisation's information security control framework. These controls include technical,
procedural, HR, and legal compliance controls, together with a rigorous system of internal and independent
external audits. ISO 27001 certification allows a company such as Interoute to demonstrate a robust
information security control environment and to manage security and reduce information risk consistently
across its business. By embedding ISO 27001 security controls into the design of its solutions, Interoute
controls the confidentiality, integrity, and availability of its customers' data holistically across the
infrastructure and platform technologies supporting those solutions, as well as its own network and service
management systems.

Interoute's expertise, along with its extensive portfolio of security solutions, can help an organisation
achieve its own certification, using Interoute's solution as a base to develop from. Its experienced security
professionals can apply their knowledge, together with the customer's solution and Interoute's security
products, to meet the customer's business technology needs. These security products include:

Firewalls
Web and URL filtering
Email filtering
Other security solutions
Professional services

PDCA Cycle: The PDCA Cycle (figure 5.12) is a checklist of the four stages you must go through to get from
'problem faced' to 'problem solved'. The four stages are Plan-Do-Check-Act, and they are carried out in the
cycle illustrated below.


Figure 5.12. Plan-Do-Check-Act

The concept of the PDCA Cycle was originally developed by Walter Shewhart, the pioneering statistician who
developed statistical process control at Bell Laboratories in the US during the 1930s; it is therefore often
referred to as 'the Shewhart Cycle'. It was taken up and promoted very effectively from the 1950s onward by
the famous quality management authority W. Edwards Deming, and is consequently known by many as 'the
Deming Wheel'. Use the PDCA Cycle to coordinate your continuous improvement efforts. It both emphasizes
and demonstrates that improvement programs must start with careful planning, must result in effective
action, and must move on again to careful planning in a continuous cycle.

Plan to improve your operations first by finding out what is going wrong (that is, identify the problems
faced), and come up with ideas for solving these problems.

Do the changes designed to solve the problems, on a small or experimental scale first. This minimizes
disruption to routine activity while testing whether the changes will work.

Check whether the small-scale or experimental changes are achieving the desired result. Also, continuously
check nominated key activities (regardless of any experimentation going on) so that you know the quality of
the output at all times and can identify new problems as they crop up.

Act to implement the changes on a larger scale if the experiment is successful. This means making the
changes a routine part of your activity. Also act to involve other people (other departments, suppliers, or
customers) affected by the changes and whose cooperation you need to implement them on a larger scale, or
who may simply benefit from what you have learned (you may, of course, already have involved these people
in the Do/trial stage).
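The four stages above can be sketched as a simple continuous-improvement loop. This is a minimal illustration only; the function names and the toy quality metric are assumptions made for the example:

```python
# Minimal sketch of the PDCA cycle as a continuous-improvement loop.
# The four stage functions are placeholders a real process would supply.
def pdca_cycle(plan, do, check, act, iterations=3):
    """Run Plan-Do-Check-Act repeatedly; each pass feeds the next plan."""
    outcomes = []
    for _ in range(iterations):
        ideas = plan()        # Plan: identify problems, propose changes
        trial = do(ideas)     # Do: try the changes on a small scale
        ok = check(trial)     # Check: did the trial achieve the target?
        act(trial, ok)        # Act: roll out if successful, else re-plan
        outcomes.append(ok)
    return outcomes

# Toy usage: each Do step raises quality by one unit until the target is met.
state = {"quality": 0, "rolled_out": False}

def plan():
    return {"target": 2}

def do(ideas):
    state["quality"] += 1
    return {"target": ideas["target"], "quality": state["quality"]}

def check(trial):
    return trial["quality"] >= trial["target"]

def act(trial, ok):
    if ok:
        state["rolled_out"] = True

print(pdca_cycle(plan, do, check, act))   # [False, True, True]
```

The point of the sketch is the loop itself: Check feeds back into the next Plan, so improvement is continuous rather than a one-off project.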

Terminal questions:

1. Write a short note on the engineering services industry.
2. Write down the differences between the product development approaches of industry and academia.
3. What do you understand by the term product development trade-off?
4. Explain the vertical-specific product development process for the aerospace industry.
5. Write a short note on intellectual property rights and confidentiality.
