IVth 7th Sem Year Question Bank B E Aeronautical PDF
COLLEGE OF ENGINEERING
IRUNGATTUKOTTAI
DEPARTMENT OF AERONAUTICAL
ENGINEERING
SEMESTER VII
AE6701 Avionics
ME6014 Computational Fluid Dynamics
AE6702 Experimental Stress Analysis
AE6010 Airframe Maintenance and Repair
AE6007 Fatigue And Fracture
GE6757 Total Quality Management
AE 6701 – AVIONICS
1. What is avionics?
Avionics means "aviation electronics". It comprises the electronic systems used on
aircraft, artificial satellites and spacecraft, including communications,
navigation, and the display and management of multiple systems.
2. Explain the advantage of using avionics in civil aircraft.
• Reducing the crew workload by automating tasks.
• The reduction in weight can be translated into more passengers or longer
range.
• To enable the flight crew to carry out the aircraft mission safely and
efficiently.
• All weather operation and reduction in maintenance costs.
3. Explain the advantage of using avionics in military aircraft.
• A single seat fighter or strike aircraft is lighter and costs less than an
equivalent two seat version.
• Elimination of the second crew member (navigator/observer/crew
member) results in reduction in training costs.
• Improved aircraft performance and control and handling and reduction in
maintenance costs
• Secure communication.
4. Give the general advantage of Avionics over the conventional aircraft
system.
• Increased safety
• Air traffic control requirements
• All weather operation
• Reduction in fuel consumption
• Improved aircraft performance and control and handling and reduction in
maintenance costs
5. Define the usage of avionics in space systems.
• Fly-by-wire control systems were used for vehicle attitude and translation
control.
• Sensors used around the aircraft for data acquisition.
• Redundancy system and autopilot.
• On board computers used in satellites for processing.
6. Explain “illities” of Avionics system.
• Capability
• Reliability
• Maintainability
• Certifiability
• Survivability (military)
• Availability
• Susceptibility
• Vulnerability
• Life cycle cost (military) or cost of ownership (civil)
• Technical risk
• Weight & power
7. Give various systems where the avionics used in aircrafts.
• Aircraft intercoms
• Wide Area Augmentation System
• Terrain awareness and warning system
• Ground proximity warning system
• Aircraft collision avoidance systems
• Display systems
• Traffic Collision Avoidance System
8. Explain the steps involved in design of avionics system.
• The three stages are:
i. Conceptual design - What will it do?
ii. Preliminary design - How much will it weigh?
iii. Detailed design - How many parts will it have?
9. What are digital computers?
A digital computer is a device that processes numerical information or, more
generally, any device that manipulates symbolic information according to
specified computational procedures. The term digital computer, or simply
computer, embraces calculators, computer workstations, and control computers
(controllers) for applications such as domestic appliances and industrial processes.
10. What is a volatile memory and give examples?
Volatile memory, also known as volatile storage, is computer memory that
requires power to maintain the stored information, unlike non-volatile memory
which does not require a maintained power supply. It has been less popularly
known as temporary memory.
Most forms of modern random access memory (RAM) are volatile storage.
11. What is aliasing?
In computing, aliasing describes a situation in which a data location in memory
can be accessed through different symbolic names in the program. Thus,
modifying the data through one name implicitly modifies the values associated to
all aliased names, which may not be expected by the programmer. As a result,
aliasing makes it particularly difficult to understand, analyze and optimize
programs. Aliasing analyses aim to compute useful information for
understanding aliasing in programs.
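The effect is easy to demonstrate. In this short Python sketch (illustrative names), two symbolic names are bound to the same underlying list, so a write through one name is visible through the other:

```python
# Two symbolic names bound to the same underlying data location.
flight_plan = ["KJFK", "KBOS"]
route = flight_plan          # 'route' aliases 'flight_plan'

route.append("KORD")         # modify the data through one name...

# ...and the change is visible through the other name.
print(flight_plan)           # ['KJFK', 'KBOS', 'KORD']
print(route is flight_plan)  # True: both names refer to one object
```

This is exactly the surprise the definition warns about: the programmer who wrote `route.append(...)` may not expect `flight_plan` to change.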
12. Differentiate between volatile and non volatile memories.
Volatile memory: the data is lost on reboot; all stored data is lost when power
is removed. RAM (random-access memory) is volatile.
Non-volatile memory: the data is saved to a hard drive or flash drive, or held in
a hard-coded chip; ROM (read-only memory) is one example. All data stored in
this type of memory is retained when the computer is shut down.
13. What is microprocessor?
A complex microcircuit (integrated circuit) or set of such chips that carries out the
functions of the processor of an information technology system; that is, it contains
a control unit (and clock), an arithmetic and logic unit, and the necessary registers
and links to main store and to peripherals.
14. Explain the registers of microprocessor?
In computer architecture, a processor register (or general purpose register) is a
small amount of storage available on the CPU whose contents can be accessed
more quickly than storage available elsewhere. Typically, this specialized storage
is not considered part of the normal memory range for the machine. Processor
registers are at the top of the memory hierarchy, and provide the fastest way for a
CPU to access data.
15. What is Accumulator?
In a computer's central processing unit (CPU), an accumulator is a register in
which intermediate arithmetic and logic results are stored. Without a register like
an accumulator, it would be necessary to write the result of each calculation
(addition, multiplication, shift, etc.) to main memory, perhaps only to be read
right back again for use in the next operation. Access to main memory is slower
than access to a register like the accumulator because the technology used for the
large main memory is slower (but cheaper) than that used for a register.
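A toy register-machine loop makes the point concrete (the instruction names are hypothetical, for illustration only): every arithmetic result stays in the accumulator, and only an explicit STORE touches main memory.

```python
# Minimal sketch of accumulator-style execution: each arithmetic
# instruction leaves its result in the accumulator (ACC), so no
# intermediate result has to round-trip through main memory.
def run(program, memory):
    acc = 0  # the accumulator register
    for op, addr in program:
        if op == "LOAD":      # ACC <- memory[addr]
            acc = memory[addr]
        elif op == "ADD":     # ACC <- ACC + memory[addr]
            acc += memory[addr]
        elif op == "STORE":   # memory[addr] <- ACC (the only write)
            memory[addr] = acc
    return acc

mem = {0: 5, 1: 7, 2: 0}
run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], mem)
print(mem[2])  # 12
```

Without the accumulator, the result of the ADD would have to be written to memory and read back before the next operation, which is the slow path the paragraph above describes.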
16. Explain the types of memories?
Volatile memory, also known as volatile storage, is computer memory that
requires power to maintain the stored information, unlike non-volatile memory
which does not require a maintained power supply. It has been less popularly
known as temporary memory. Most forms of modern random access memory
(RAM) are volatile storage, including dynamic random access memory (DRAM)
and static random access memory (SRAM).
Non-volatile memory, nonvolatile memory, NVM or non-volatile storage, is
computer memory that can retain the stored information even when not powered.
Examples of non-volatile memory include read-only memory, flash memory,
most types of magnetic computer storage devices (e.g. hard disks, floppy disks,
and magnetic tape), optical discs, and early computer storage methods such as
paper tape and punch cards.
17. Give few avionics architecture.
• First Generation Architecture (1940s–1950s)
i. Disjoint or Independent Architecture (MiG-21)
ii. Centralized Architecture (F-111)
• Second Generation Architecture (1960s–1970s)
i. Federated Architecture (F-16 A/B)
ii. Distributed Architecture (DAIS)
iii. Hierarchical Architecture (F-16 C/D, EAP)
• Third Generation Architecture (1980s–1990s)
i. Pave Pillar Architecture (F-22)
• Fourth Generation Architecture (Post 2005)
i. Pave Pace Architecture - JSF
ii. Open System Architecture
18. Explain Federated architecture.
In a federated architecture, data conversion occurs at the system level and the
data are sent in digital form; this is called a Digital Avionics Information
System (DAIS). Several standard data processors are often used to perform a
variety of low-bandwidth functions such as navigation, weapon delivery,
stores management and flight control, with the systems connected by a
time-shared multiplex highway. Resource sharing occurs at the last link in the
information chain, via controls and displays.
19. Explain centralized architecture.
As digital technology evolved, a central computer was added to integrate the
information from the sensors and subsystems. The central computing complex is
connected to the other subsystems and sensors through analog, digital, synchro
and other interfaces, using a variety of transmission methods, some of which
require signal conversion (A/D) when interfacing with the computer. Signal
conditioning and computation take place in one or more computers in an LRU
located in an avionics bay, with signals transmitted over a one-way data bus.
Data are transmitted from the systems to the central computer, and data
conversion takes place at the central computer.
20. How is federated architecture different from centralized architecture?
In a federated architecture, data conversion occurs at the system level and the
data are sent in digital form (Digital Avionics Information Systems, DAIS). It is
fully digital.
In a centralized architecture, data conversion takes place at the central
computer, and analog wiring is used.
21. Explain MIL-STD 1553B components?
• BUS CONTROLLER
• BUS MONITOR
• REMOTE TERMINAL
• TRANSMISSION MEDIA
22. Explain the status word of MIL-STD 1553B.
Status words are transmitted by the RT in response to command messages from
the BC and consist of:
• 3 bit-time sync pattern (same as for a command word)
• 5 bit address of the responding RT
• 11 bit status field
• 1 parity check bit.
The 11 bits in the status field are used to notify the BC of the operating condition
of the RT and subsystem.
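The bit layout above can be sketched in code. This Python fragment packs the 5-bit RT address, the 11-bit status field and the odd-parity bit (MIL-STD-1553 uses odd parity); the 3-bit-time sync is a Manchester waveform on the wire rather than data bits, so it is not represented here:

```python
def odd_parity(bits: int) -> int:
    """MIL-STD-1553 odd parity: the parity bit makes the total 1-count odd."""
    return 0 if bin(bits).count("1") % 2 == 1 else 1

def status_word(rt_address: int, status_field: int) -> int:
    """Pack the 17 data + parity bits of a status word (sync omitted)."""
    assert 0 <= rt_address < 32       # 5-bit RT address
    assert 0 <= status_field < 2048   # 11-bit status field
    bits = (rt_address << 11) | status_field
    return (bits << 1) | odd_parity(bits)

w = status_word(rt_address=5, status_field=0)   # healthy RT at address 5
print(f"{w:017b}")   # 00101000000000001
```

An all-zero status field (as here) is the normal "no fault" response; individual bits in the field flag conditions such as message error or busy.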
23. Explain the bus controller and Remote terminal of MIL-STD 1553B.
There is only one Bus Controller at a time on any MIL-STD-1553 bus. It initiates
all message communication over the bus.
A Remote Terminal can be used to provide:
• An interface between the MIL-STD-1553B data bus and an attached
subsystem
• A bridge between a MIL-STD-1553B bus and another MIL-STD-1553B
bus.
24. Explain ARINC 429 standard.
ARINC 429 is the technical standard for the predominant avionics data bus used
on most higher-end commercial and transport aircraft. It defines the physical and
electrical interfaces of a two-wire data bus and a data protocol to support an
aircraft's avionics local area network.
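An ARINC 429 word is 32 bits: an 8-bit label, a 2-bit SDI, a 19-bit data field, a 2-bit SSM and an odd-parity bit. A minimal packing sketch (field packing only; the on-wire bit ordering of the label, which is transmitted reversed, is not modeled):

```python
def arinc429_word(label: int, sdi: int, data: int, ssm: int) -> int:
    """Pack a 32-bit ARINC 429 word: label (8 bits), SDI (2),
    data (19), SSM (2), odd parity (1)."""
    assert 0 <= label < 256 and 0 <= sdi < 4
    assert 0 <= data < 2**19 and 0 <= ssm < 4
    word = label | (sdi << 8) | (data << 10) | (ssm << 29)
    parity = 0 if bin(word).count("1") % 2 == 1 else 1  # odd parity
    return word | (parity << 31)

w = arinc429_word(label=0o310, sdi=0, data=12345, ssm=0b11)
assert bin(w).count("1") % 2 == 1   # every valid word has an odd 1-count
```

Labels are conventionally written in octal (0o310 here is an assumed example value); the receiver checks the parity of the whole word to detect single-bit errors.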
25. Explain ARINC 629 standard.
ARINC 629 is a multi-transmitter protocol where many units share the same bus.
It was a further development of ARINC 429 especially designed for the Boeing
777.
26. What is an auto pilot?
An autopilot is a mechanical, electrical, or hydraulic system used to guide a
vehicle without assistance from a human being. Most people understand an
autopilot to refer specifically to aircraft, but self-steering gear for ships, boats,
spacecraft and missiles is sometimes also called by this term.
27. What is brick walling or partitioning in avionics architecture?
The purpose of partitioning is fault containment: a failure in one partition must
not propagate to cause failure in another partition. The function in a partition
depends on the correct operation of its processor and associated peripherals, and
partitioning is not intended to protect against their failure—this can be achieved
only by replicating functions across multiple processors in a fault-tolerant
manner.
28. Define Glass cockpit.
A glass cockpit is an aircraft cockpit that features electronic instrument displays.
Where a traditional cockpit relies on numerous mechanical gauges to display
information, a glass cockpit uses several displays driven by flight management
systems that can be adjusted to display flight information as needed. This
simplifies aircraft operation and navigation and allows pilots to focus only on the
most pertinent information. They are also popular with airline companies as they
usually eliminate the need for a flight engineer. In recent years the technology has
become widely available in small aircraft.
29. Define plasma panel.
A plasma display panel (PDP) is a type of flat panel display common to large TV
displays (32 inches or larger). Many tiny cells between two panels of glass hold a
mixture of noble gases. The gas in the cells is electrically turned into a plasma
which then excites phosphors to emit light.
30. Differentiate LED & LCD.
LEDs are based on the semiconductor diode. When the diode is forward biased
(switched on), electrons are able to recombine with holes and energy is released in
the form of light. This effect is called electroluminescence and the color of the
light is determined by the energy gap of the semiconductor. LEDs present many
advantages over traditional light sources including lower energy consumption,
longer lifetime, improved robustness, smaller size and faster switching. However,
they are relatively expensive and require more precise current and heat
management than traditional light sources.
A liquid crystal display (LCD) is a thin, flat panel used for electronically
displaying information such as text, images, and moving pictures. Among its
major features are its lightweight construction, its portability, and its ability to be
produced in much larger screen sizes than are practical for the construction of
cathode ray tube (CRT) display technology. Its low electrical power consumption
enables it to be used in battery-powered electronic equipment. It is an
electronically-modulated optical device made up of any number of pixels filled
with liquid crystals and arrayed in front of a light source (backlight) or reflector to
produce images in color or monochrome.
31. Explain CRT and its usage in aircraft displays.
The cathode ray tube (CRT) is a vacuum tube containing an electron gun (a
source of electrons) and a fluorescent screen, with internal or external means to
accelerate and deflect the electron beam, used to create images in the form of
light emitted from the fluorescent screen. In aircraft, CRTs were the display
technology of early glass cockpits and head-up displays, and have largely been
superseded by LCDs.
32. What is meant by DVI?
The Digital Visual Interface (DVI) is a video interface standard designed to
provide very high visual quality on digital display devices such as flat panel LCD
computer displays and digital projectors. It was developed by an industry
consortium, the Digital Display Working Group (DDWG). It is designed for
carrying uncompressed digital video data to a display.
33. What are MFD and its significance in Aircraft?
A Multi-function display (MFD) is a small screen (CRT or LCD) in an aircraft
surrounded by multiple buttons that can be used to display information to the pilot
in numerous configurable ways. Often an MFD will be used in concert with a
Primary Flight Display. MFDs are part of the digital era of modern airplanes
and helicopters. The first MFDs were introduced by air forces. The advantage of
an MFD over analog displays is that an MFD does not consume much space in
the cockpit.
Many MFDs allow the pilot to display their navigation route, moving map,
weather radar, NEXRAD, GPWS, TCAS and airport information all on the same
screen.
34. What is HOTAS?
HOTAS, an abbreviation for Hands On Throttle-And-Stick, is the name given to
the concept of placing buttons and switches on the throttle stick and flight control
stick in an aircraft's cockpit, allowing the pilot to access vital cockpit functions
and fly the aircraft without having to remove his hands from the throttle and flight
controls. Having all switches on the stick and throttle allows the pilot to keep his
"hands on throttle-and-stick", thus allowing him to remain focused on more
important duties than looking for controls in the cockpit.
35. Explain HUD?
A head-up display, or abbreviated as HUD, is any transparent display that
presents data without requiring the user to look away from his or her usual
viewpoint. The origin of the name stems from the user being able to view
information with his head "up" and looking forward, instead of angled down
looking at lower instruments. Although they were initially developed for military
aviation, HUDs are now used in commercial aircraft, automobiles, and other
applications.
36. Explain Navigation.
Navigation is the process of monitoring and controlling the movement of a craft
or vehicle from one place to another. It is also the term of art used for the
specialized knowledge used by navigators to perform navigation tasks. The word
navigate is derived from the Latin "navigare", meaning "to sail". All
navigational techniques involve locating the navigator's position compared to
known locations or patterns.
37. Explain the types of Navigation.
• Dead reckoning
• Navigation by Piloting
• Celestial navigation
• Electronic navigation
38. Explain Dead reckoning type of navigation.
Dead reckoning (DR) is the process of estimating one's current position based
upon a previously determined position, or fix, and advancing that position based
upon known or estimated speeds over elapsed time, and course. While traditional
methods of dead reckoning are no longer considered primary means of navigation,
modern inertial navigation systems, which also depend upon dead reckoning, are
very widely used.
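The dead-reckoning update rule (new position = old fix advanced by speed × time along the course) can be sketched as a flat-earth calculation in metres. This is illustrative only; a real DR computation works in geodetic coordinates:

```python
import math

def dead_reckon(north_m, east_m, speed_mps, course_deg, dt_s):
    """Advance a known fix by speed * time along the course.
    Flat-earth sketch: course is measured clockwise from north."""
    d = speed_mps * dt_s                 # distance run since the fix
    course = math.radians(course_deg)
    north = north_m + d * math.cos(course)
    east = east_m + d * math.sin(course)
    return north, east

# 100 m/s due east (course 090) for 60 s from the origin:
n, e = dead_reckon(0.0, 0.0, 100.0, 90.0, 60.0)
print(round(n, 3), round(e, 3))   # ~0.0 north, ~6000.0 east
```

Any error in the assumed speed or course accumulates with time, which is exactly why DR accuracy degrades between fixes.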
39. What is INS?
An Inertial Navigation System (INS) is a navigation aid that uses a computer,
motion sensors (accelerometers) and rotation sensors (gyroscopes) to
continuously calculate via dead reckoning the position, orientation, and velocity
(direction and speed of movement) of a moving object without the need for
external references. It is used on vehicles such as ships, aircraft, submarines,
guided missiles, and spacecraft.
40. What are different types of INS?
It is of two different configurations based on the inertial sensor placement. They
are
a. Stable or Gimballed platform.
b. Strap down platform
41. What is GPS?
The Global Positioning System (GPS) is a U.S. space-based global navigation
satellite system. It provides reliable positioning, navigation, and timing services to
worldwide users on a continuous basis in all weather, day and night, anywhere on
or near the Earth. GPS is made up of three parts: between 24 and 32 satellites
orbiting the Earth, four control and monitoring stations on Earth, and the GPS
receivers owned by users. GPS satellites broadcast signals from space that are
used by GPS receivers to provide three-dimensional location (latitude, longitude,
and altitude) plus the time.
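The receiver's basic measurement is the signal travel time. A minimal sketch of the range it implies (the numeric example is illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def pseudorange(transmit_t: float, receive_t: float) -> float:
    """Range implied by signal travel time. It is a *pseudo*range
    because the receiver clock bias is still folded in; solving four
    such equations (four satellites) yields latitude, longitude,
    altitude and the clock bias."""
    return C * (receive_t - transmit_t)

# A signal that travelled ~67 ms corresponds to roughly 20,000 km,
# on the order of a GPS satellite's altitude:
print(pseudorange(0.0, 0.0667) / 1000)   # ~20,000 km
```

Because 1 microsecond of clock error corresponds to about 300 m of range error, the clock bias must be solved for rather than ignored, hence the need for a fourth satellite.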
42. Explain about P and C/A codes.
Binary data that is modulated or "superimposed" on the carrier signal is referred
to as Code. Two main forms of code are used with NAVSTAR GPS: C/A or
Coarse/Acquisition Code (also known as the civilian code), is modulated and
repeated on the L1 wave every millisecond; the P-Code, or Precise Code, is
modulated on both the L1 and L2 waves and is repeated every seven days. The
(Y) code is a special form of P code used to protect against false transmissions;
special hardware, available only to the U.S. government, must be used to decrypt
the P(Y) code.
43. What is Flight control system?
An aircraft flight control system consists of flight control surfaces, the respective
cockpit controls, connecting linkages, and the necessary operating mechanisms to
control an aircraft's direction in flight. Aircraft engine controls are also considered
as flight controls as they change speed.
44. What is Actuator?
An actuator is a mechanical device for moving or controlling a mechanism or
system. An actuator typically is a mechanical device that takes energy, usually
created by air, electricity, or liquid, and converts that into some kind of motion.
45. Explain different types of actuator.
• plasma actuators
• pneumatic actuators
• electric actuators
• hydraulic cylinders
• linear actuators
46. What is FBW?
A fly-by-wire system actually replaces manual control of the aircraft with an
electronic interface. The movements of flight controls are converted to electronic
signals, and flight control computers determine how to move the actuators at each
control surface to provide the expected response. The actuators are usually
hydraulic, but electric actuators have also been used.
47. Explain the advantage of FBW over conventional FCS.
• Care free maneuvering characteristics
• Continuous automatic stabilization of the aircraft by computer control of
the control surfaces
• Auto pilot integration
• Good consistent handling which is sensibly constant over a wide flight
envelope and range of load conditions
• Enables a lighter, higher performance aircraft designed with relaxed
stability
48. What is strap down Navigation?
Strapdown systems have all their sensors mounted directly on the vehicle, so
the sensor platform changes orientation with the aircraft. Instead of mechanical
gyros holding a platform level, the system has three more-accurate gyros that
sense its orientation, together with the same three acceleration sensors.
Whereas the gimballed system just senses the orientation of the platform to get
the aircraft's attitude, the strapdown systems have three gyroscopes that sense the
rate of roll, pitch, and yaw. It integrates them to get the orientation, then
calculates the acceleration in each of the same axes as the gimballed system. Due
to the sensing of the rate of rotation, rather than just holding a platform level, very
accurate and sensitive gyroscopes are needed.
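A planar toy version of this integration chain shows the idea: integrate the sensed yaw rate to get heading, rotate the body-frame acceleration into the navigation frame, then integrate to velocity and position. Real systems are 3-D, use quaternions and compensate for gravity; this sketch is illustrative only:

```python
import math

def strapdown_step(state, gyro_z, accel_body, dt):
    """One 2-D strapdown update (x, y, vx, vy, heading)."""
    x, y, vx, vy, heading = state
    heading += gyro_z * dt                     # attitude from rate gyro
    ax, ay = accel_body
    # rotate body-frame acceleration into the navigation frame
    anx = ax * math.cos(heading) - ay * math.sin(heading)
    any_ = ax * math.sin(heading) + ay * math.cos(heading)
    vx += anx * dt                             # integrate to velocity...
    vy += any_ * dt
    x += vx * dt                               # ...then to position
    y += vy * dt
    return (x, y, vx, vy, heading)

state = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):                           # 1 m/s^2 forward for 10 s
    state = strapdown_step(state, 0.0, (1.0, 0.0), 0.1)
print(state[0])   # ~50 m travelled (x = a*t^2/2)
```

Note that any gyro error corrupts the heading used in the rotation step before the accelerations are even integrated, which is why strapdown gyros must be so accurate.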
49. What is FMS?
A Flight Management System is a fundamental part of a modern aircraft in that it
controls the navigation. The flight management system (FMS) is the avionics that
holds the flight plan, and allows the pilot to modify as required in flight. The FMS
uses various sensors to determine the aircraft's position. Given the position and
the flight plan, the FMS guides the aircraft along the flight plan. The FMS is
normally controlled through a small screen and a keyboard.
50. What is meant by jammers in electronic warfare?
Electronic jamming is a form of Electronic Warfare where jammers radiate
interfering signals toward an enemy's radar, blocking the receiver with highly
concentrated energy signals. The two main technique styles are noise techniques
and repeater techniques.
51. Give the difference between ECCM and ECM.
Electronic countermeasures (ECM) are a subsection of electronic warfare which
includes any sort of electrical or electronic device designed to trick or deceive
radar, sonar, or other detection systems like IR (infrared) and Laser. It may be
used both offensively or defensively in any method to deny targeting information
to an enemy. The system may make many separate targets appear to the enemy, or
make the real target appear to disappear or move about randomly. It is used
effectively to protect aircraft from guided missiles. Most air forces use ECM to
protect their aircraft from attack.
Electronic counter-countermeasures (ECCM) is a part of electronic warfare which
includes a variety of practices which attempt to reduce or eliminate the effect of
electronic countermeasures (ECM) on electronic sensors aboard vehicles, ships
and aircraft and weapons such as missiles. ECCM is also known as electronic
protective measures (EPM), chiefly in Europe. In practice, EPM often means
resistance to jamming.
52. Explain RADAR.
Radar is an object detection system that uses electromagnetic waves to identify
the range, altitude, direction, or speed of both moving and fixed objects such as
aircraft, ships, motor vehicles, weather formations, and terrain. The term RADAR
was coined in 1941 as an acronym for RAdio Detection And Ranging. A radar
system has a transmitter that emits microwaves or radio waves. These waves are
in phase when emitted, and when they come into contact with an object are
scattered in all directions. The signal is thus partly reflected back and it has a
slight change of wavelength (and thus frequency) if the target is moving.
Although the signal returned is usually very weak, the signal can be amplified
through use of electronic techniques in the receiver and in the antenna
configuration. This enables radar to detect objects at ranges where other
emissions, such as sound or visible light, would be too weak to detect.
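The two basic radar calculations implied above, range from echo delay and closing speed from the Doppler shift, can be sketched directly (example numbers are illustrative):

```python
C = 299_792_458.0  # propagation speed of the radar wave, m/s

def echo_range(round_trip_s: float) -> float:
    """The pulse travels out and back, so range = c * t / 2."""
    return C * round_trip_s / 2

def doppler_closing_speed(f_tx: float, f_shift: float) -> float:
    """For target speeds far below c, closing speed ~ c * df / (2 * f_tx)."""
    return C * f_shift / (2 * f_tx)

print(echo_range(1e-3) / 1000)            # a 1 ms echo -> ~150 km range
print(doppler_closing_speed(10e9, 20e3))  # 20 kHz shift at 10 GHz -> ~300 m/s
```

The factor of 2 appears in both formulas for the same reason: the wave makes a round trip, so both the delay and the frequency shift are doubled relative to a one-way path.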
53. Explain Certification.
Certification refers to the confirmation of certain characteristics of an object,
person, or organization. This confirmation is often, but not always, provided by
some form of external review, education, or assessment.
54. Explain V & V
Verification and validation is the process of checking that a product, service, or
system meets specifications and that it fulfills its intended purpose. These are
critical components of a quality management system such as ISO 9000.
Sometimes preceded with "Independent" (or IV&V) to ensure the validation is
performed by a disinterested third party.
55. Explain Reliability.
• The idea that something is fit for purpose with respect to time;
• The capacity of a device or system to perform as designed;
• The resistance to failure of a device or system;
• The ability of a device or system to perform a required function under
stated conditions for a specified period of time;
• The probability that a functional unit will perform its required function for
a specified interval under stated conditions.
• The ability of something to "fail well" (fail without catastrophic
consequences)
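Under the common constant-failure-rate assumption, the probabilistic definition above reduces to R(t) = exp(-t / MTBF). A minimal sketch (the MTBF value is an assumed example):

```python
import math

def reliability(t_hours: float, mtbf_hours: float) -> float:
    """Probability of operating failure-free for t hours under the
    constant-failure-rate (exponential) model: R(t) = exp(-t / MTBF)."""
    return math.exp(-t_hours / mtbf_hours)

# A 10-hour mission on a unit with a 10,000-hour MTBF:
print(reliability(10, 10_000))   # ~0.999, i.e. ~99.9% mission reliability
```

This is why avionics reliability requirements are usually stated as a probability over a mission time, not as a single lifetime figure.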
56. Explain maintainability.
The probability that a failed system can be made operable in a specified interval
or downtime is called as maintainability.
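Maintainability (expressed as mean time to repair, MTTR) combines with reliability (MTBF) in the standard steady-state availability formula, sketched here with assumed example values:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the long-run fraction of time the
    system is operable, A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# 2,000 h between failures, 4 h mean repair time:
print(availability(2000, 4))   # ~0.998
```

The formula shows why improving maintainability (shorter downtime) raises availability even when the failure rate is unchanged.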
57. Explain Electronic warfare.
Electronic warfare (EW) refers to any action involving the use of the
electromagnetic spectrum or directed energy to control the spectrum or to attack
the enemy. The purpose of electronic warfare is to deny the opponent the
advantage of, and ensure friendly unimpeded access to, the EM spectrum. EW can
be applied from air, sea, land, and space by manned and unmanned systems, and
can target communication, radar, or other services. EW includes three major
subdivisions: Electronic Attack (EA), Electronic Protection (EP), and Electronic
warfare Support (ES).
58. Explain the specific advantages of INS.
• It is self-contained, autonomous and unjammable.
• It provides data at a higher rate than GPS.
• INS is very accurate over short distances.
59. Explain Gimbaled INS.
Gimbaled systems have a platform in the device that is mounted in gimbals. The
platform carries two or more mechanical gyroscopes (rarely more than three)
that keep it level. On the platform, in addition to the gyroscopes, are usually
three accelerometers, one for each direction. This was the earlier type of INS. It
does not need accurate gyroscope orientation sensing; it only needs mechanical
gyroscopes to keep the platform level, a much less demanding task for the
gyroscopes. Additionally, since the accelerometers are already oriented (usually
north/south, east/west, and up/down), the integration to obtain velocity and
then position can be done by simpler, analog electronics.
60. Give few examples of integrated avionics system used in weapon system.
• Helmet Mounted Display (HMD)
• Head Level Display (HLD)
• Night Vision Goggles (NVG)
• Forward Looking Infra-Red Displays (FLIR)
61. Give few examples of Standards used in design of avionics system.
• Military standards - MIL-STD-1629A (Hardware FMEA), MIL-STD-882
(system safety program requirements)
• ARINC 429, 629 (Civilian a/c data bus) and MIL-STD-1553A, 1773
(Military a/c data buses)
• Civil Standards - FAR 25 : 1309 A (equipments, systems and
installation), FAR 25 : 581 (lightning protection systems), FAR 25 : 571
(control systems), FAR 25: 572 (stability augmentation systems)
62. Give few examples of integrated avionics system used in civil airlines.
• INS & GPS (Navigation)
• MFKs and MFDU (Display I/O)
• HUD
• Glass Cockpit
63. Explain the steps of certification.
• Assessment
• Validation
• Verification
Other major steps like
• Functional Hazard Assessment (FHA)
• Fault Tree Analysis (FTA)
• Failure Mode Effect Analysis (FMEA)
64. Explain the document support for Certification.
• In Military aircrafts, the documents support for certification
i. MIL-STD-1629A (Hardware)
ii. DOD-STD-2167 (Software)
• In Civil aircrafts, the document support for certification
o FAR 25 : 1309A (systems)
o RTCA-DO-178A (Software)
Within the overall task of hardware assessment and validation,
certification is perhaps the most difficult part for civil/military avionics
designers. Certification is the challenging process of negotiation and
compromise between the designers and the regulatory authorities
buttressed by technical analysis and expertise on both sides.
65. Explain the advantage of GPS over conventional navigation.
• Global coverage and assessment
• More precise
• High integrity and portable simple system
• Augments the accuracy of the self contained systems
66. Explain FBW over FBL.
• FBW (Fly-By-Wire): the concept of using a digital data bus to carry the
pilot's joystick movements, converted to electronic signals by suitable
transducers, to the actuators near the control surfaces. It eliminates most of
the weight of the control rods and push-pull systems, and this weight saving
allows the redundancy level of the FBW system to be raised.
• FBL (Fly-By-Light): the concept of using fibre-optic cables to carry the
pilot's joystick movements, converted to monochromatic light signals by
suitable transducers, to the actuators near the control surfaces. It eliminates
the amplification units, filter circuits and modulator units that appear at
high redundancy levels in FBW. FBL is more reliable, signals passing through
the fibre-optic cable do not degrade, and it contributes a further weight
reduction in the aircraft.
67. What is FBL?
• FBL (Fly-By-Light): the concept of using fibre-optic cables to carry the
pilot's joystick movements, converted to monochromatic light signals by
suitable transducers, to the actuators near the control surfaces. It eliminates
the amplification units, filter circuits and modulator units that appear at
high redundancy levels in FBW. FBL is more reliable, signals passing through
the fibre-optic cable do not degrade, and it contributes a further weight
reduction in the aircraft.
68. Compare INS and GPS.
INS                                      GPS
Self-contained system                    Not a self-contained system
Accuracy degrades with time              Accuracy level is always high
Weight is more (self-contained system)   Weight is comparatively less
No signal transmission/reception         Signals are transmitted/received
69. What is usage of night vision goggles?
• Night time flight is possible (As pilot can see the terrain like a near
morning)
• Close Ground Support during night time attack is enabled in rotor and
fighter planes
• Night time reconnaissance and surveillance is possible on NVG enabled
cameras
70. Explain advantage of EL over Plasma display.
• Less flickering
• Sustainable luminosity even during aging
• Light weight than plasma displays
• Simple light weight component
• Available in smaller size (unlike plasma displays, which are available only
at 32”)
71. Explain the need of communication system in airline.
• Renders a clear picture of aircrafts health during the complete mission
• Ensures safety landing and take-off guidance
• Provides suitable environment awareness to the crew members of flight to
direct the flying machine
• Acts as primary interfacing unit between pilot and the ATC or ground
station.
72. Explain the advantage of HMD over HUD?
• In HMD the gimbaled sensors enables the pilot to watch critical data in the
helmet in the directions through which he/she moves/looks, thus
facilitating him/her to watch the primary data always.
• HMD display formats are very similar to those of HUDs except for the
addition of helmet-pointing azimuth and elevation information and vectors
showing where the last target of interest was prior to looking down into
the cockpit or searching for another target.
73. Explain MFK and its usage.
• As the cockpits of modern aircraft have more and more controls packed into
them, a point is reached where there is no more space. Multifunction
keyboards (MFKs) offer a very attractive solution to this space problem: a
single panel of switches performs a variety of functions depending on the
phase of the mission or the keyboard menu selected.
• Multifunction keyboards can be implemented in several ways. The first
two ways use LEDs or LCDs in panels in a central location. Designs using
LEDs have arrays (typically ranging from five rows of three switches to
seven rows of five switches) of standard sized push button switches with
legends built into the surface of the switches.
74. Differentiate between civil and military communication standards.
Civil communication standards: data-bus speeds are generally low; the frequency band lies in the common range of communication devices; redundancy and integrity levels are moderate; maintenance is expressed in terms of Cost of Ownership (COO). Examples: ARINC 429, FAR-25-1309A.
Military communication standards: data-bus speeds are high; the frequency band lies in an isolated or restricted range of communication devices and varies from nation to nation; redundancy and integrity levels are high; maintenance is expressed in terms of Life Cycle Cost (LCC).
An inertial navigation system (INS) is a navigation aid that uses a computer, motion sensors (accelerometers) and rotation sensors (gyroscopes) to continuously calculate, via dead reckoning, the position, orientation, and velocity (direction and speed of movement) of a moving object without the need for external references. INSs have angular and linear accelerometers (for changes in position); some include a gyroscopic element (for maintaining an absolute angular reference). Inertial navigation is used on vehicles such as ships, aircraft, submarines, guided missiles, and spacecraft. Other terms used to refer to inertial navigation systems or closely related devices include inertial guidance system, inertial reference platform, inertial instrument, and many other variations.
Collision-avoidance systems
– traffic alert and collision avoidance system (TCAS), which can detect the
location of nearby aircraft, and provide instructions for avoiding a midair
collision. Smaller aircraft may use simpler traffic alerting systems such as
TPAS, which are passive (they do not actively interrogate the transponders of
other aircraft) and do not provide advisories for conflict resolution.
– To help avoid controlled flight into terrain (CFIT), aircraft use systems such
as ground-proximity warning systems (GPWS), which use radar altimeters as
a key element. One of the major weaknesses of GPWS is the lack of "look-
ahead" information, because it only provides altitude above terrain "look-
down". In order to overcome this weakness, modern aircraft use a terrain
awareness warning system (TAWS).
PART B
INTRODUCTION TO AVIONICS
1. What is a navigation system? Explain the various types of navigation with examples.
2. Explain in detail radar and electronic warfare, their salient features and usage.
3. What is certification? Explain the various steps involved in the certification of an avionics system.
4. Explain radio navigation in detail.
5. Explain in detail about INS.
6. Explain in detail about satellite navigation system.
7. Explain in detail about ADF, DME, VOR, LORAN, DECCA, OMEGA, ILS,
MLS.
8. What is a dead reckoning navigation system? Explain any one type in detail.
9. Explain satellite navigation system in detail. Explain inertial sensors.
UNIT I
The fundamental basis of almost all CFD problems is the Navier–Stokes equations, which define any single-phase (gas or liquid, but not both) fluid flow. These equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity, yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows (not transonic or hypersonic) these equations can be linearized to yield the linearized potential equations.
Historically, methods were first developed to solve the linearized potential equations.
Two-dimensional (2D) methods, using conformal transformations of the flow about a
cylinder to the flow about an airfoil were developed in the 1930s. The computer power
available paced development of three-dimensional methods. The first work using computers
to model fluid flow, as governed by the Navier-Stokes equations, was performed at Los
Alamos National Labs, in the T3 group. This group was led by Francis H. Harlow, who is
widely considered as one of the pioneers of CFD. From 1957 to late 1960s, this group
developed a variety of numerical methods to simulate transient two-dimensional fluid flows,
such as the particle-in-cell method (Harlow, 1957), the fluid-in-cell method (Gentry, Martin and
Daly, 1966), the vorticity stream function method (Jake Fromm, 1963), and the marker-and-cell
method (Harlow and Welch, 1965). Fromm's vorticity-stream-function method for 2D,
transient, incompressible flow was the first treatment of strongly contorting incompressible
flows in the world.
4. Do a brief literature survey on CFD and the origin of different codes and methods.
The first paper with three-dimensional model was published by John Hess and
A.M.O. Smith of Douglas Aircraft in 1967. This method discretized the surface of the
geometry with panels, giving rise to this class of programs being called Panel Methods. Their
method itself was simplified, in that it did not include lifting flows and hence was mainly
applied to ship hulls and aircraft fuselages. The first lifting Panel Code (A230) was described
in a paper written by Paul Rubbert and Gary Saaris of Boeing Aircraft in 1968. In time, more
advanced three-dimensional Panel Codes were developed at Boeing (PANAIR, A502),
Lockheed (Quadpan), Douglas (HESS), McDonnell Aircraft (MACAERO), NASA (PMARC)
and Analytical Methods (WBAERO, USAERO and VSAERO). Some (PANAIR, HESS and
MACAERO) were higher order codes, using higher order distributions of surface
singularities, while others (Quadpan, PMARC, USAERO and VSAERO) used single
singularities on each surface panel. The advantage of the lower order codes was that they ran
much faster on the computers of the time. Today, VSAERO has grown to be a multi-order
code and is the most widely used program of this class. It has been used in the development
of many submarines, surface ships, automobiles, helicopters, aircraft, and more recently wind
turbines. Its sister code, USAERO is an unsteady panel method that has also been used for
modeling such things as high speed trains and racing yachts. The NASA PMARC code from
an early version of VSAERO and a derivative of PMARC, named CMARC, is also
commercially available.
In the two-dimensional realm, a number of Panel Codes have been developed for
airfoil analysis and design. The codes typically have a boundary layer analysis included, so
that viscous effects can be modeled. Professor Richard Eppler of the University of Stuttgart
developed the PROFILE code, partly with NASA funding, which became available in the
early 1980s. This was soon followed by MIT Professor Mark Drela's XFOIL code. Both
PROFILE and XFOIL incorporate two-dimensional panel
codes, with coupled boundary layer codes for airfoil analysis work. PROFILE uses
a conformal transformation method for inverse airfoil design, while XFOIL has both a
conformal transformation and an inverse panel method for airfoil design.
An intermediate step between Panel Codes and Full Potential codes was provided by codes that used the Transonic Small Disturbance equations. In particular, the three-dimensional WIBCO code, developed by Charlie Boppe of Grumman Aircraft in the early 1980s, has seen heavy use.
Developers turned to Full Potential codes, as panel methods could not calculate the
non-linear flow present at transonic speeds. The first description of a means of using the Full
Potential equations was published by Earll Murman and Julian Cole of Boeing in 1970.
Frances Bauer, Paul Garabedian and David Korn of the Courant Institute at New York
University (NYU) wrote a series of two-dimensional Full Potential airfoil codes that were
widely used, the most important being named Program H. A further development of Program H was
produced by Bob Melnik and his group at Grumman Aerospace as Grumfoil. Antony
Jameson, originally at Grumman Aircraft and the Courant Institute of NYU, worked with
David Caughey to develop the important three-dimensional Full Potential code FLO22 in
1975. Many Full Potential codes emerged after this, culminating in Boeing's Tranair (A633)
code, which still sees heavy use.
The main task in fluid dynamics is to find the velocity field describing the flow in a given domain. To do this, one uses the basic equations of fluid flow, which we derive in this section. These encode the familiar laws of mechanics:
• conservation of mass;
• conservation of momentum.
In any domain, the flow equations must be solved subject to a set of conditions that act at the domain boundary. If the flow leads to compression of the fluid, we must also consider thermodynamics:
• conservation of energy.
Bernoulli's equation
An incompressible flow is a flow where the density ρ is constant. Let's assume we're dealing with an incompressible flow. From the momentum equation and the streamline condition, we can derive that

dp = −ρV dV.

This equation is called Euler's equation. Since the streamline condition was used in the derivation, it is only valid along a streamline. Integrating the Euler equation between point 1 and point 2 gives

p1 + (1/2)ρV1² = p2 + (1/2)ρV2².

In other words, p + (1/2)ρV² is constant along a streamline; this is Bernoulli's equation.
An inviscid flow is a flow without friction, thermal conduction or diffusion. It can be shown that inviscid flows are irrotational flows. For irrotational flows, p + (1/2)ρV² is constant even for different streamlines.
Continuity equation
In a low-speed wind tunnel the flow field variables can be assumed to be a function of x only, so A = A(x), V = V(x), p = p(x), etcetera. Such a flow is called a quasi-one-dimensional flow. From the continuity equation it can be derived that

ρ1 A1 V1 = ρ2 A2 V2

for two points in the tunnel. This applies to both compressible and incompressible flows. If the flow is incompressible, then ρ1 = ρ2, and the equation reduces to A1 V1 = A2 V2. If we combine this with Bernoulli's equation, we find

V1 = sqrt[ 2(p1 − p2) / (ρ((A1/A2)² − 1)) ],

which gives the tunnel velocity from a measured pressure difference.
If the flow is also inviscid, and thus irrotational, it follows that ∇ × V = 0. This implies that there is a velocity potential φ such that V = ∇φ. Combining this with the incompressible continuity equation ∇ · V = 0 gives

∇²φ = 0.

This simple but important relation is called Laplace's equation. It seems that the velocity potential satisfies Laplace's equation. But what about the stream function? We know that

u = ∂ψ/∂y and v = −∂ψ/∂x.

We can also remember the irrotationality condition, stating that ∂v/∂x = ∂u/∂y. Inserting the above relations into this condition gives

∇²ψ = 0.

So the stream function ψ also satisfies Laplace's equation, just like the velocity potential function φ.
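A minimal sketch of solving Laplace's equation numerically, here by Jacobi iteration on a unit square with hypothetical boundary values chosen so the exact solution is known (ψ = y, a uniform stream). This is an illustration, not a method from the text:

```python
import numpy as np

# Solve Laplace's equation for the stream function psi on a unit square
# by Jacobi iteration. Boundary values impose a uniform stream psi = y,
# whose exact interior solution is also psi = y.
n = 21
y = np.linspace(0.0, 1.0, n)
psi = np.zeros((n, n))           # psi[i, j]: i is the y index, j the x index
psi[0, :], psi[-1, :] = 0.0, 1.0             # bottom/top walls: psi = y
psi[:, 0] = psi[:, -1] = y                   # left/right boundaries: psi = y

for _ in range(2000):
    # Jacobi update: each interior point becomes the average of its neighbors
    psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1]
                              + psi[1:-1, 2:] + psi[1:-1, :-2])

exact = np.tile(y[:, None], (1, n))          # exact harmonic solution psi = y
print(np.abs(psi - exact).max())
```

Because the boundary data are consistent with a harmonic function, the iteration should converge to it everywhere in the interior.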
A method to simulate the flow around an airfoil is the so-called panel method.
We know that any combination of sources, doublets, vortices and such satisfies the Laplace
equation. However, which combination describes our flow? To find that out, we have to
use the boundary conditions. But how should we use them? We just use the panel method!
Simply apply the following steps.
• Divide the airfoil surface into N panels, each carrying a source distribution of (as yet unknown) strength σi.
• At every panel, we can find the velocity potential φ (as a function of the source strengths σ). We can also find the flow velocity ∂φ/∂n normal to the airfoil. This flow velocity should of course be 0. This gives us N equations. Using these conditions, we can solve for the source strengths σi.
Solving for the source strengths σ might seem like a difficult task. However, these equations can simply be put in a matrix, giving a linear system of the form Aσ = b that can be solved by standard methods.
13. What are source and sink flows?
Source flow: Consider a two-dimensional, incompressible flow where all the streamlines are straight lines emanating from a central point O, as shown at the left of the figure. Moreover, let the velocity along each of the streamlines vary inversely with distance from point O. Such a flow is called a source flow. Examining the figure, we see that the velocity components in the radial and tangential directions are Vr and Vθ, respectively, where Vθ = 0. The coordinate system in the figure is a cylindrical coordinate system, with the z axis perpendicular to the page. (Note that polar coordinates are simply the cylindrical coordinates r and θ confined to a single plane given by z = constant.) It is easily shown that (1) source flow is a physically possible incompressible flow, that is, ∇ · V = 0, at every point except the origin, where ∇ · V becomes infinite, and (2) source flow is irrotational at every point.
Consider a source of strength Λ and a sink of equal (but opposite) strength −Λ, separated by a distance l, as shown in figure a. At any point P in the flow, the stream function is

ψ = −(Λ/2π) Δθ,

where Δθ = θ1 − θ2, as seen from figure a. This is the stream function for a source-sink pair. Now in figure a, let the distance l approach zero while the absolute magnitudes of the strengths of the source and sink increase in such a fashion that the product lΛ remains constant. This limiting process is shown in figure b. In the limit, as l → 0 while lΛ remains constant, we obtain a special flow pattern defined as a doublet. The strength of the doublet is denoted by κ and is defined as κ = lΛ. The stream function for a doublet is obtained from the equation above as follows:

ψ = lim (l → 0, lΛ = const) [ −(Λ/2π) Δθ ] = −(κ/2π) (sin θ)/r.
Consider a flow where all the streamlines are concentric circles about a given point, as sketched in the figure. Moreover, let the velocity along any given circular streamline be constant, but let it vary from one streamline to another inversely with distance from the common center. Such a flow is called a vortex flow. Examining the figure, the velocity components in the radial and tangential directions are Vr and Vθ, respectively, where Vr = 0 and Vθ = constant/r. It is easily shown (try it yourself) that (1) vortex flow is a physically possible incompressible flow, that is, ∇ · V = 0 at every point, and (2) vortex flow is irrotational, that is, ∇ × V = 0, at every point except the origin.
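The "try it yourself" claim can be checked numerically: evaluate central-difference divergence and curl of the vortex field Vθ = constant/r on a patch away from the origin (an illustrative sketch, not from the text):

```python
import numpy as np

# Vortex flow V_r = 0, V_theta = c/r in Cartesian components:
# u = -c*y/r^2, v = c*x/r^2. Check div(V) ~ 0 and curl(V) ~ 0 on a
# grid that excludes the singular origin.
c = 1.0
h = 1e-3                                  # central-difference step
x = np.linspace(1.0, 2.0, 51)
y = np.linspace(1.0, 2.0, 51)
X, Y = np.meshgrid(x, y)

def u(x, y):
    return -c * y / (x**2 + y**2)         # u = -V_theta * sin(theta)

def v(x, y):
    return c * x / (x**2 + y**2)          # v =  V_theta * cos(theta)

div = ((u(X + h, Y) - u(X - h, Y)) / (2 * h)
       + (v(X, Y + h) - v(X, Y - h)) / (2 * h))
curl = ((v(X + h, Y) - v(X - h, Y)) / (2 * h)
        - (u(X, Y + h) - u(X, Y - h)) / (2 * h))
print(np.abs(div).max(), np.abs(curl).max())
```

Both quantities come out at the level of the finite-difference truncation error, consistent with the vortex being incompressible and irrotational away from the origin.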
16. What is lifting flow over a cylinder?
Consider the flow synthesized by the addition of the nonlifting flow over a cylinder and a vortex of strength Γ, as shown in the figure. The stream function for nonlifting flow over a circular cylinder of radius R is given by

ψ = (V∞ r sin θ)(1 − R²/r²).
17. Draw the source panel distribution over the surface of a body of arbitrary shape.
Figure: Source panel distribution over the surface of a body of arbitrary shape.
18. What is the difference between source panel and vortex panel methods?
The vortex panel method is directly analogous to the source panel method. However,
because a source has zero circulation, source panels are useful only for nonlifting cases. In
contrast, vortices have circulation, and hence vortex panels can be used for lifting cases.
For the source panel method, the n equations for the n unknown source strengths are routinely solved, giving the flow over a nonlifting body. In contrast, for the lifting case with vortex panels, in addition to the n equations applied at all the panels, we must also satisfy the Kutta condition.
19. Classify PDEs.
Figure: Domain and boundaries for the solution of hyperbolic equations; two-dimensional steady flow.
A significance of these characteristics is that information at point P influences only the region
between the two characteristics. For example, if in figure, we jabbed point P with a pin, i.e., if
we set up a small disturbance at point P, then this disturbance is felt at every point within
region I in figure, but only in that region. In this sense, region I is defined as the region of
influence of point P. Now imagine the two characteristics through P extended backward to the y axis. That portion of the y axis which is intercepted by the two characteristics is labeled ab. This has a corollary effect on boundary conditions for hyperbolic equations. For example, assume that boundary conditions are specified on the y axis (x = 0). That is, the dependent variables u and v are known along the y axis. Then the solution can be obtained by "marching forward" in the distance x, starting from the given boundary. However, the solution for u and v at point P will depend only on that part of the boundary between a and b, as shown in the figure. Information at a point c outside the interval ab is propagated along the characteristics through c and influences only region II in the figure. Point P is outside region II and hence does not feel the information from point c. Point P depends only on that part of the boundary which is intercepted by and included between the two retreating characteristic lines through point P, that is, interval ab. For this reason, the region to the left of point P, region III in the figure, is called the domain of dependence of point P; that is, properties at P depend only on what is happening in region III.
20. Briefly describe the characteristics of parabolic equations.
Figure: Domain and boundaries for the solution of parabolic equations in two
dimensions.
Starting with the initial data line ac, the solution between the boundaries cd and ab is
obtained by marching in the general x direction. The extension to the case of three
dimensions is straightforward, as sketched in the figure. Here, the parabolic equation has three independent variables x, y and z. Consider point P located in this space. Assume that the initial conditions are given over the area abcd in the yz plane, and that boundary conditions are given along the four surfaces abgh, cdef, ahed, and bgfc, which extend in the general x direction away from the perimeter of the initial data surface. Then the information at P influences the entire three-dimensional region to the right of P, contained within the boundary surfaces. This region is crosshatched in the figure. Starting with the initial data plane abcd, the solution is marched in the general x direction.
Figure: Domain and boundaries for the solution of elliptic equations in two dimensions.
Let us consider an elliptic equation in two independent variables x and y. The xy plane is sketched in the figure. The characteristic curves for an elliptic equation are imaginary; the methodology associated with the method of characteristics is therefore, for the most part, useless for the solution of elliptic equations. For elliptic equations, there are no limited
regions of influence or domains of dependence; rather, information is propagated everywhere
in all directions. For example, consider point P in the xy plane sketched in Figure. Assume
that the domain of the problem is defined as the rectangle abcd shown in figure and that P is
located somewhere inside this closed domain. This is already in contrast to the rather open
domains considered in figures and for hyperbolic and parabolic equations, respectively. Now
assume that we jab point P in figure with a needle; i.e., we introduce a disturbance at point P.
The major mathematical characteristic of elliptic equations is that this disturbance is felt
everywhere throughout the domain. Furthermore, because point P influences all points in the
domain, then in turn the solution at point P is influenced by the entire closed boundary abcd.
Therefore, the solution at point P must be carried out simultaneously with the solution at all
other points in the domain. This is in stark contrast to the “marching” solutions germane to
parabolic and hyperbolic equations.
If the solution to a partial differential equation exists and is unique, and if the solution depends continuously upon the initial and boundary conditions, then the problem is well-posed. In CFD, it is important to establish that a problem is well-posed before we attempt to carry out a numerical solution. When the blunt body problem was set up using the unsteady Euler equations, and a time-marching procedure was employed to approach the steady state at large times, starting with essentially arbitrary assumed initial conditions at time t = 0, the problem suddenly became well-posed.
There are different ways to discretize partial differential equations. The best known are finite-difference and finite-element methods, which are well suited to computer implementation. In the simplest method of finite differences, derivatives at a point (x, y) are approximated by difference quotients over a small interval, i.e., ∂u/∂x is replaced by Δu/Δx, where Δx is small and y is constant, and ∂u/∂y is replaced by Δu/Δy, where Δy is small and x is constant. Finite-difference solutions are usually satisfactory for practical applications. The finite-element method replaces the original function by a function that has some degree of smoothness over the global domain, but is a piecewise polynomial on simple cells such as small triangles or rectangles.
UNIT II
Imagine that we have a two-dimensional flow field which is governed by the Navier-Stokes equations, or, as the case may be, the Euler equations, as derived in the chapter. These are partial differential equations. An analytical solution of these equations would provide, in principle, closed-form expressions for u, v, p, ρ, etc., as functions of x and y, which could be used to give values of the flow-field variables at any point in the domain. On the other hand, if the partial derivatives in the governing equations are replaced by approximate algebraic difference quotients, expressed in terms of the flow-field variables at two or more of the discrete grid points shown in the figure, then the partial differential equations are totally replaced by a system of algebraic equations which can be solved for the values of the flow-field variables at the discrete grid points only. In this sense, the original partial differential equations have been discretized. Moreover, this method of discretization is called the method of finite differences. Finite-difference solutions are widely employed in CFD, and hence much of this chapter will be devoted to matters concerning finite differences.
The finite-difference representation follows from a Taylor series expansion about point (i, j):

u_{i+1,j} = u_{i,j} + (∂u/∂x)_{i,j} Δx + (∂²u/∂x²)_{i,j} (Δx)²/2 + (∂³u/∂x³)_{i,j} (Δx)³/6 + ...

This equation is mathematically an exact expression for u_{i+1,j} if (1) the number of terms is infinite and the series converges and/or (2) Δx → 0. Solving for (∂u/∂x)_{i,j} gives

(∂u/∂x)_{i,j} = (u_{i+1,j} − u_{i,j})/Δx − (∂²u/∂x²)_{i,j} Δx/2 − (∂³u/∂x³)_{i,j} (Δx)²/6 − ...

where the first term on the right is the finite-difference representation and the remaining terms are the truncation error. In this equation, the actual partial derivative evaluated at point (i, j) is given on the left side. The first term on the right side, namely

(u_{i+1,j} − u_{i,j})/Δx,

is the forward-difference representation of the derivative; the truncation error tells us what is being neglected in this approximation. Here the lowest-order term in the truncation error involves Δx to the first power; hence, the finite-difference expression is called first-order accurate.
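First-order accuracy can be demonstrated empirically: for a forward difference, halving Δx should roughly halve the error (a small sketch, not from the text):

```python
import math

# Forward-difference error for f = sin at x0 = 1, where the exact
# derivative is cos(1). A first-order scheme's error scales like dx.
f, dfdx = math.sin, math.cos
x0 = 1.0

def fd_error(dx):
    return abs((f(x0 + dx) - f(x0)) / dx - dfdx(x0))

e1, e2 = fd_error(1e-3), fd_error(5e-4)
print(e1 / e2)   # ratio near 2 indicates first-order accuracy
```

A second-order scheme (e.g. the central difference) would instead show an error ratio near 4 when Δx is halved.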
Time marching in the context of CFD is used to accomplish one or the other of the following
purposes:
To obtain a steady-state solution by means of assuming some arbitrary initial conditions for a
flow field, and then calculating the flow in steps of time, going out to a sufficiently large
number of time steps until a final steady-state flow is approached at large values of time. In
this situation, the final steady state is the desired result, and the marching is simply a means
to that end. The solution to the supersonic blunt body problem is a case in point.
To obtain an accurate timewise solution of an inherently unsteady flow, such as the time-
varying flow field over a pitching airfoil or the naturally unsteady flow pattern that results for
many separated flows.
With the complexity of the implicit approach relative to the explicit approach in mind, the immediate question is: Why deal with the implicit approach at all? Why not always use an explicit approach? Unfortunately, life is not that simple. We have yet to mention the most important difference between the explicit and implicit approaches. Note that the increments Δx and Δt appear in all the above difference equations. For the explicit approach, once Δx is chosen, then Δt is not an independent, arbitrary choice; rather, Δt is restricted to be equal to or less than a certain value prescribed by a stability criterion. If Δt is taken larger than the limit imposed by the stability criterion, the time-marching procedure will quickly go unstable, and our computer program will quickly shut down due to such things as numbers going to infinity or taking the square root of a negative number. In many cases, Δt must be very small to maintain stability; this can result in long computer running times to make calculations over a given interval of time. On the other hand, there are no such stability restrictions on an implicit approach. For most implicit methods, stability can be maintained over much larger values of Δt than for a corresponding explicit method; indeed, some implicit methods are unconditionally stable, meaning that any value of Δt, no matter how large, will yield a stable solution. Hence, for an implicit method, considerably fewer time steps are required to cover a given interval in time compared to an explicit method. Therefore, for some applications, even though the implicit approach requires more computation per time step due to its relative complexity, the fact that considerably fewer time steps are required to cover a given interval of time can actually result in a shorter run time on the computer compared to an explicit approach.
Explicit approach
Disadvantage: In terms of our above example, for a given Δx, Δt must be less than some limit imposed by stability constraints. In some cases, Δt must be very small to maintain stability; this can result in long computer running times to make calculations over a given interval of t.
15. Write the advantages and disadvantages of the implicit method.
Implicit approach
Advantage: Stability can be maintained over much larger values of Δt; hence considerably fewer time steps are needed to make calculations over a given interval of t. This results in less computer time.
Disadvantage: Since massive matrix manipulations are usually required at each time step, the computer time per time step is much larger than in the explicit approach.
Disadvantage: Since large Δt can be taken, the truncation error is large, and the use of implicit methods to follow the exact transients (time variations of the dependent variable) may not be as accurate as an explicit approach. However, for a time-marching solution in which the steady state is the desired result, this timewise inaccuracy is not important.
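The stability contrast can be seen in a toy problem (illustrative, not from the text): explicit and fully implicit Euler steps for the 1D heat equation, with a Δt deliberately beyond the explicit stability limit:

```python
import numpy as np

# 1D heat equation u_t = alpha * u_xx, u = 0 at both ends. The explicit
# scheme requires r = alpha*dt/dx**2 <= 1/2; here r = 1, so the explicit
# march blows up while the implicit march stays bounded.
n, alpha = 21, 1.0
dx = 1.0 / (n - 1)
dt = dx**2 / alpha                 # gives r = 1.0, twice the explicit limit
r = alpha * dt / dx**2

m = n - 2                          # number of interior points
x = np.linspace(0.0, 1.0, n)
u0 = np.sin(np.pi * x) + 1e-6 * np.sin((n - 2) * np.pi * x)  # seed a high mode

T = (np.diag(np.full(m - 1, 1.0), -1) - 2.0 * np.eye(m)
     + np.diag(np.full(m - 1, 1.0), 1))      # second-difference operator
u_exp = u0[1:-1].copy()
u_imp = u0[1:-1].copy()
for _ in range(200):
    u_exp = u_exp + r * (T @ u_exp)                      # explicit (forward) Euler
    u_imp = np.linalg.solve(np.eye(m) - r * T, u_imp)    # implicit (backward) Euler

print(np.abs(u_exp).max(), np.abs(u_imp).max())
```

The seeded high-frequency mode is amplified without bound by the explicit step but damped by the implicit one, matching the stability discussion above.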
Discretization error: the difference between the exact analytical solution of the partial differential equation and the exact (round-off-free) solution of the corresponding difference equation. From our discussion in this section, the discretization error is simply the truncation error for the difference equation plus any errors introduced by the numerical treatment of the boundary conditions.
If we let
A = analytical solution of the partial differential equation,
D = exact solution of the difference equation,
N = numerical solution from a real computer with finite accuracy,
then
Discretization error = A − D
Round-off error = e = N − D
From equation, we can write
N=D+e
where, again, e is the round-off error, which for the remainder of our discussion in
this section we will simply call error for brevity. The numerical solution N must satisfy the
difference equation.
17. What is the Courant number?
When the time derivative in the model hyperbolic equation is represented by

∂u/∂t ≈ [ u_i^{n+1} − (1/2)(u_{i+1}^n + u_{i−1}^n) ] / Δt,

the resulting scheme is called the Lax method, after the mathematician Peter Lax who first proposed it. If we now assume an error of the form ε_m(x, t) = e^{at} e^{i k_m x}, as done previously, and substitute this form into the equation, the amplification factor becomes

e^{aΔt} = cos(k_m Δx) − iC sin(k_m Δx),

where

C = c Δt / Δx.

In this equation, C is called the Courant number. Stability requires C ≤ 1, that is, Δt ≤ Δx/c, for the numerical solution to be stable. Moreover, this requirement is called the Courant-Friedrichs-Lewy condition, generally written as the CFL condition. It is an important stability criterion for hyperbolic equations. The CFL condition dates back to 1928; the original work can be found in the reference.
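The CFL condition can be illustrated by running the Lax scheme for linear advection at Courant numbers just below and just above 1 (an illustrative sketch with periodic boundaries, not from the text):

```python
import numpy as np

# Lax scheme for u_t + c*u_x = 0 on a periodic grid:
# u_i^{n+1} = (u_{i+1} + u_{i-1})/2 - (C/2)*(u_{i+1} - u_{i-1}).
def lax_run(C, steps=200, n=100):
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    u = np.sin(x) + 1e-6 * np.sin(25.0 * x)     # seed the worst mode, k*dx = pi/2
    for _ in range(steps):
        up, um = np.roll(u, -1), np.roll(u, 1)  # u_{i+1}, u_{i-1} (periodic)
        u = 0.5 * (up + um) - 0.5 * C * (up - um)
    return np.abs(u).max()

stable, unstable = lax_run(0.9), lax_run(1.2)
print(stable, unstable)
```

At C = 0.9 the amplification factor magnitude is below one for every mode, so the solution stays bounded; at C = 1.2 the seeded short-wavelength mode grows by a factor of 1.2 per step and swamps the solution.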
Flux-Vector Splitting
Here the Jacobian matrix A can be diagonalized as A = T Λ T⁻¹, where Λ is a diagonal matrix with the eigenvalues of A as the diagonal terms. Recall the discussion in Chapter 3 concerning the definition of characteristic curves, and the emphasis that information concerning a flow field travels along these characteristic curves. Moreover, we have seen that the eigenvalues of the Jacobian matrices give the slopes of the characteristic lines; for an unsteady flow, the values of these eigenvalues represent the velocity and direction of propagation of information. It would seem natural that a numerical scheme for solving the flow equations should be consistent with the velocity and direction with which information propagates throughout the flow field. Indeed, this is nothing more than obeying the physics of the flow.
Strictly speaking, the central difference schemes which have been highlighted
throughout this book do not always follow the proper flow of information throughout the flow
field. In many cases, they draw numerical information from outside the domain of
dependence of a given grid point; as discussed at the end of sec, this can compromise the
accuracy of the solution. For flow fields which involve smooth, continuous variations of the
flow-field variables, this does not appear to cause a major problem. We have seen some
examples where a central difference scheme works quite well: the shock-free nozzle flows, the continuous expansion wave, and the smoothly varying Couette flow. In all these cases, a central difference scheme (such as MacCormack's scheme) works reasonably well. In fact, there is a mathematical reason for this, dealing with the analytic continuation properties of smooth flows.
The equation is written in operator form; for example, the expression

[ (∂/∂x)Aⁿ + (∂/∂y)Bⁿ ] U^{n+1}

denotes

∂(Aⁿ U^{n+1})/∂x + ∂(Bⁿ U^{n+1})/∂y,
and similarly on the right-hand side of the equation. Examining the equation more closely, we note that the right-hand side is a known number at time level n; all the unknowns are on the left-hand side. Question: How many unknowns do we have on the left-hand side? The answer, of course, depends on what type of finite-difference expression we choose to represent the derivatives. For example, if we use the familiar central differences on the left-hand side, a five-point difference module will be needed to support the difference scheme, as sketched in the figure. In turn, we will have five unknowns on the left-hand side of the equation, namely U_{i+1,j}^{n+1}, U_{i−1,j}^{n+1}, U_{i,j}^{n+1}, U_{i,j+1}^{n+1}, and U_{i,j−1}^{n+1}. Clearly, we have lost the tridiagonal form; in the above expression, in addition to the terms involving the three diagonals, namely U_{i−1,j}^{n+1}, U_{i,j}^{n+1}, and U_{i+1,j}^{n+1}, we also have terms off the three diagonals, namely U_{i,j+1}^{n+1} and U_{i,j−1}^{n+1}. Indeed, these terms lead to a pentadiagonal matrix. The matrix manipulation associated with the solution of such a system is very computationally intensive; we have lost the tremendous computational advantage of the tridiagonal form. The reason for this problem is the multidimensional nature of the equations: the simultaneous appearance of both the x and y derivatives in the equation.
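Tridiagonal systems of the kind discussed here are normally solved with the Thomas algorithm (tridiagonal LU) in O(N) operations rather than a general dense solve. A minimal sketch, checked against a dense solve:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal.
    a[0] and c[-1] are unused. Forward elimination + back substitution."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Check against a dense solve on a small diffusion-like system
n = 8
a = np.full(n, -1.0); a[0] = 0.0
c = np.full(n, -1.0); c[-1] = 0.0
b = np.full(n, 3.0)
d = np.arange(1.0, n + 1.0)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
err = np.abs(thomas(a, b, c, d) - np.linalg.solve(A, d)).max()
print(err)
```

The diagonally dominant test matrix guarantees the elimination is stable without pivoting, which is why the Thomas algorithm is appropriate for diffusion-type operators.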
This idea has its roots in the classic alternating-direction-implicit (ADI) procedure developed in the middle 1950s by Peaceman, Rachford, and Douglas. The ADI method essentially splits the unsteady two-dimensional problem described by the equation into two separate one-dimensional problems at each time step: the first stage deals with the unknowns associated with the x derivative evaluated at an intermediate time level n + 1/2, namely U_{i−1,j}^{n+1/2}, U_{i,j}^{n+1/2}, and U_{i+1,j}^{n+1/2}, which yields an easily solved tridiagonal form; the second stage deals with the unknowns associated with the y derivatives evaluated at time level n + 1, namely U_{i,j−1}^{n+1}, U_{i,j}^{n+1}, and U_{i,j+1}^{n+1}, which again yields a tridiagonal form. In this fashion, the solution at time level n + 1 is achieved by two applications of the tridiagonal solution procedure. The concept of the ADI method is nicely described in the reference, which should be consulted for more details.
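The two-stage ADI idea can be sketched for the 2D heat equation: an x-implicit sweep to level n + 1/2, then a y-implicit sweep to n + 1. This is illustrative only; dense solves stand in for the tridiagonal solver, and all values are hypothetical:

```python
import numpy as np

# One Peaceman-Rachford ADI step, repeated, for u_t = u_xx + u_yy on the
# unit square with u = 0 on the walls. dt is chosen well above the
# explicit limit (dx**2/4) to show the splitting stays stable.
m = 19                             # interior points per direction
dx = 1.0 / (m + 1)
dt = 2.0 * dx**2                   # 8x the explicit stability limit
r = dt / (2.0 * dx**2)             # diffusion number for each half step

T = (np.diag(np.full(m - 1, 1.0), -1) - 2.0 * np.eye(m)
     + np.diag(np.full(m - 1, 1.0), 1))      # 1D second-difference operator
L = np.eye(m) - r * T                        # implicit operator per sweep
x = np.linspace(dx, 1.0 - dx, m)
U = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))   # initial field

for _ in range(50):
    Uh = np.linalg.solve(L, U + r * (U @ T))          # stage 1: x-implicit, y-explicit
    U = np.linalg.solve(L, (Uh + r * (T @ Uh)).T).T   # stage 2: y-implicit, x-explicit

print(np.abs(U).max())
```

Each `solve` here represents a batch of independent one-dimensional implicit sweeps; in production code each would be a Thomas-algorithm pass, which is where ADI gets its efficiency.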
The ADI philosophy described above has been extended to the solution of the
governing flow equations via the Beam and Warming scheme described earlier, leading to a
procedure called approximate factorization. In this procedure, we represent the equation in a
somewhat "factored form," namely,

$$\left[I + \frac{\Delta t}{2}\delta_x A^{n}\right]\left[I + \frac{\Delta t}{2}\delta_y B^{n}\right]U^{n+1}
= \left[I + \frac{\Delta t}{2}\delta_x A^{n}\right]\left[I + \frac{\Delta t}{2}\delta_y B^{n}\right]U^{n}
- \Delta t\left(\frac{\partial F}{\partial x} + \frac{\partial G}{\partial y}\right)^{n}$$
If you mentally carry out the multiplication of the two factors on both the left and right
sides of the equation, you will see that the factored equation is not precisely the same as the
original equation; indeed, it has some extra terms, namely,

$$\frac{\Delta t^{2}}{4}\,\delta_x A^{n}\,\delta_y B^{n}\,U^{n+1}$$

and

$$\frac{\Delta t^{2}}{4}\,\delta_x A^{n}\,\delta_y B^{n}\,U^{n}$$

which do not appear in the original equation. On the other hand, these terms involve
$\Delta t^{2}$, and they do not affect the second-order accuracy originally embodied by the
scheme. Therefore, we can replace the original equation with the factored form. The factored
form is called approximate factorization (approximate because of the aforementioned
leftover terms that we simply live with).
The factors on the left and right sides of the factored equation are identical; hence,
bringing the right-hand-side factors to the left-hand side, we can factor out the expression
$\Delta U^{n} = U^{n+1} - U^{n}$ and, using this delta notation, write

$$\left[I + \frac{\Delta t}{2}\delta_x A^{n}\right]\left[I + \frac{\Delta t}{2}\delta_y B^{n}\right]\Delta U^{n}
= -\,\Delta t\left(\frac{\partial F}{\partial x} + \frac{\partial G}{\partial y}\right)^{n}$$

In the following subsections, we will only discuss the general nature of these ideas so as to
familiarize you with the essence of each. The purpose of these discussions is to ease your
transition to more advanced studies.
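The leftover $\Delta t^{2}$ cross term can be checked directly with matrix algebra. In the sketch below, random matrices stand in for the difference operators $\delta_x A^n$ and $\delta_y B^n$; this illustrates only the algebra, not the actual flux Jacobians.

```python
import numpy as np

# Check that the factored operator differs from the unfactored one by the
# (dt^2/4)*(dxA)(dyB) cross term, using random matrices as stand-ins.
rng = np.random.default_rng(0)
n, dt = 5, 0.01
dxA = rng.standard_normal((n, n))       # stands in for delta_x A^n
dyB = rng.standard_normal((n, n))       # stands in for delta_y B^n
I = np.eye(n)

factored = (I + 0.5 * dt * dxA) @ (I + 0.5 * dt * dyB)
unfactored = I + 0.5 * dt * dxA + 0.5 * dt * dyB
cross = (dt**2 / 4) * dxA @ dyB         # the leftover O(dt^2) term

print(np.allclose(factored, unfactored + cross))   # True
```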
UNIT IV
The algorithm arising from the Taylor series expansion is the Taylor-Galerkin
method (TGM). Zienkiewicz and his co-workers [Zienkiewicz and Codina, 1995] have for
the past decade applied the concept of characteristic Galerkin methods (CGM), which
produce results similar to TGM in dealing with convection-dominated problems for both
compressible and incompressible flows.
The major issues in CFD as observed in FDM are as follows: (1) difficulty in
satisfying the conservation of mass in incompressible flows (the incompressibility
condition), resulting in checkerboard-type pressure oscillations; (2) shock discontinuities in
compressible flows; and (3) convection-dominated flows in both incompressible and
compressible flows. Mixed methods, penalty methods and pressure correction methods were
developed to cope with the incompressibility condition.
7. What are the numerical methods other than the FDM, FEM and FVM?
There are many numerical methods other than the FDM, FEM and FVM which are
based on the standard Eulerian coordinates. They include boundary element methods
(BEM), coupled Eulerian-Lagrangian (CEL) methods, particle-in-cell (PIC) methods, and
Monte Carlo methods (MCM).
The Galerkin method described above leads to unstable and inaccurate solutions for
fluid dynamics equations in which the flow is convection dominated. In this case, we must
use methods of weighted residuals with modified test functions.
11. What are the applications of the numerical diffusion test function?
Grid generation is often considered the most important and most time-
consuming part of a CFD simulation. The quality of the grid plays a direct role in the quality
of the analysis, regardless of the flow solver used. Additionally, the solver will be more
robust and efficient when using a well-constructed mesh. It is important for the CFD analyst
to know and understand all of the various grid generation methods. Only by knowing all the
methods can he or she select the right tool to solve the problem at hand.
Structured grid methods take their name from the fact that the grid is laid out
in a regular repeating pattern called a block. These types of grids utilize quadrilateral
elements in 2D and hexahedral elements in 3D in a computationally rectangular array.
Although the element topology is fixed, the grid can be shaped to be body fitted through
stretching and twisting of the block. Really good structured grid generators utilize
sophisticated elliptic equations to automatically optimize the shape of the mesh for
orthogonality and uniformity.
The major drawback of structured block grids is the time and expertise
required to lay out an optimal block structure for an entire model. Often this comes down to
past user experience and brute force placement of control points and edges. Some geometries,
e.g. shallow cones and wedges, do not lend themselves to structured block topologies. In
these areas, the user is forced to stretch or twist the elements to a degree which drastically
affects solver accuracy and performance. Grid generation times are usually measured in days
if not weeks.
The advantage of unstructured grid methods is that they are very automated
and, therefore, require little user time or effort. The user need not worry about laying out
block structure or connections. Additionally, unstructured grid methods are well suited to
inexperienced users because they require little user input and will generate a valid mesh under
most circumstances. Unstructured methods also enable the solution of very large and detailed
problems in a relatively short period of time. Grid generation times are usually measured in
minutes or hours.
The major drawback of unstructured grids is the lack of user control when
laying out the mesh. Typically any user involvement is limited to the boundaries of the mesh
with the mesher automatically filling the interior. Triangle and tetrahedral elements have the
problem that they do not stretch or twist well, therefore, the grid is limited to being largely
isotropic, i.e. all the elements have roughly the same size and shape. This is a major problem
when trying to refine the grid in a local area; often the entire grid must be made much finer in
order to get the point densities required locally. Another drawback of the methods is their
reliance on good CAD data. Most meshing failures are due to some (possibly minuscule)
error in the CAD model. Unstructured flow solvers typically require more memory and have
longer execution times than structured grid solvers on a similar mesh. Post processing the
solution on an unstructured mesh requires powerful tools for interpolating the results onto
planes and surfaces of rotation for easier viewing.
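The elliptic idea mentioned above can be sketched very simply: fix the boundary nodes of a body-fitted block and let the interior node coordinates satisfy a discrete Laplace equation. This is a simplification of the full Winslow/Thompson elliptic equations, and the wavy bottom boundary is an invented example.

```python
import numpy as np

# Simplified elliptic grid smoothing: interior (x, y) node coordinates
# satisfy a discrete Laplace equation with boundary nodes held fixed.
# (Real generators solve the Winslow/Thompson equations instead.)
ni, nj = 21, 11
xi = np.linspace(0.0, 1.0, ni)

# Boundary curves: wavy bottom wall, straight top and sides (invented).
x = np.zeros((ni, nj))
y = np.zeros((ni, nj))
for j in range(nj):
    t = j / (nj - 1)
    bottom = 0.1 * np.sin(2 * np.pi * xi)     # curved lower wall
    x[:, j] = xi
    y[:, j] = (1 - t) * bottom + t * 1.0      # algebraic starting grid

for _ in range(500):                           # Jacobi sweeps on the interior
    x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                            + x[1:-1, 2:] + x[1:-1, :-2])
    y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                            + y[1:-1, 2:] + y[1:-1, :-2])

print(x.shape, y[:, 0].min(), y[:, -1].max())
```

Because the interior update is an average of neighbours, the smoothed coordinates obey a maximum principle: no interior node can drift outside the range set by the boundary curves, which is what gives elliptic methods their characteristic smoothness.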
PART – B
1. Derive the energy equation for a viscous flow in partial differential non-
conservation form.
2. Derive the continuity equation for inviscid flow in partial differential non-conservation
form.
3. Write down elliptic, parabolic and hyperbolic partial differential equations as
applicable to CFD.
4. Explain the grid generation technique based on PDE and summarize the advantages
of the elliptic grid generation method.
5. Obtain the 2D steady compressible continuity equation in transformed coordinates for the
transformation $\xi = x$, $\eta = \ln(y + 1)$.
6. Write down the procedure for the calculation of pressure coefficient distribution around a
circular cylinder using the source panel technique.
Discuss the vortex panel method applied to lifting flows over a flat plate.
7. Discuss the source panel method for the flow past an oscillating cylinder.
8. State and explain the difference between explicit and implicit methods with suitable
examples.
9. How do you determine the accuracy of the discretization process? What are the uses and
difficulties of approximating the derivatives with higher order finite difference schemes?
How do you overcome these difficulties?
10. Explain the strong and weak formulations of a boundary value problem.
11. Explain the description of Prandtl boundary layer equation and its solution
methodology.
12. Study the stability behaviour of the second order wave equation by the Von Neumann stability
method.
13. What are quadrilateral Lagrange elements and isoparametric elements in FEM?
14. Solve the simplified Sturm-Liouville equation
$$\frac{d^{2}y}{dx^{2}} + y = F$$
with boundary conditions $y(0) = 0$ and $\left.\dfrac{dy}{dx}\right|_{x=1} = 0$, using the Galerkin finite
element method.
16. What is strong formulation? Explain with the help of one dimensional boundary value
problem.
17. Explain Runge-Kutta and multi-stage time stepping.
18. Discuss the properties of discretization schemes and explain upwind discretization
applied to FVM.
19. What is cell centered formulation? Explain with the help of the control volume
semi-discretization equation $\Omega_{ij}\,\partial U/\partial t + \oint F \cdot \mathbf{n}\,ds = 0$.
20. State and explain the spurious modes for Runge-Kutta cell vertex formulation in
FVM.
21. Compare and contrast direct methods and iteration methods in solving a system
simultaneous linear algebraic equations.
22. Solve the following equations by Gauss-Seidel Iteration
Method: 5x+2y+z=12
x+4y+2z=15
x+2y+5z =20.
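Question 22 can be checked with a few lines of code. The sketch below writes out the Gauss-Seidel update for that diagonally dominant system; the iteration count is an arbitrary choice.

```python
# Gauss-Seidel iteration for question 22:
#   5x + 2y + z = 12,  x + 4y + 2z = 15,  x + 2y + 5z = 20
# Each unknown is solved from its own row using the latest values.
x = y = z = 0.0
for _ in range(100):
    x = (12 - 2 * y - z) / 5
    y = (15 - x - 2 * z) / 4
    z = (20 - x - 2 * y) / 5

print(round(x, 6), round(y, 6), round(z, 6))   # converges to x=1, y=2, z=3
```

The coefficient matrix is diagonally dominant, which guarantees convergence of the iteration.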
23. Solve the following equations by Gauss-Elimination method:
x+3y+6z=2
3x-y+4z=9
x-4y+2z =7
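Question 23 can be verified the same way with a small forward-elimination and back-substitution routine; the partial pivoting shown is added for robustness, not required by the question.

```python
# Gauss elimination with partial pivoting for question 23:
#   x + 3y + 6z = 2,  3x - y + 4z = 9,  x - 4y + 2z = 7
A = [[1.0, 3.0, 6.0, 2.0],
     [3.0, -1.0, 4.0, 9.0],
     [1.0, -4.0, 2.0, 7.0]]            # augmented matrix [A | b]
n = 3

for k in range(n):                      # forward elimination
    p = max(range(k, n), key=lambda i: abs(A[i][k]))   # partial pivot row
    A[k], A[p] = A[p], A[k]
    for i in range(k + 1, n):
        f = A[i][k] / A[k][k]
        for j in range(k, n + 1):
            A[i][j] -= f * A[k][j]

sol = [0.0] * n                         # back substitution
for i in range(n - 1, -1, -1):
    s = A[i][n] - sum(A[i][j] * sol[j] for j in range(i + 1, n))
    sol[i] = s / A[i][i]

print([round(v, 6) for v in sol])       # [2.0, -1.0, 0.5]
```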
24. Explain the steps to solve a fluid dynamics problem.
25. Derive the stability condition for CTCS discretization of second order wave equation using
Von Neumann stability analysis.
26. Derive the finite difference expressions for a second order derivative with forward,
backward and central difference approximations.
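The orders of accuracy asked about in question 26 can be observed numerically. In the sketch below, f = sin and the sample point are arbitrary choices; halving h should reduce the central-difference error by about 4 (second order) and the one-sided error by about 2 (first order).

```python
import math

# Second-derivative approximations for f = sin at x0 (exact f'' = -sin(x0)).
f, x0 = math.sin, 1.0
exact = -math.sin(x0)

def central(h):
    # Central difference: O(h^2) accurate.
    return (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2

def forward(h):
    # One-sided (forward) difference: O(h) accurate.
    return (f(x0) - 2 * f(x0 + h) + f(x0 + 2 * h)) / h**2

# Error ratios when h is halved reveal the order of accuracy.
rc = abs(central(0.1) - exact) / abs(central(0.05) - exact)
rf = abs(forward(0.1) - exact) / abs(forward(0.05) - exact)
print(rc, rf)   # roughly 4 (second order) and roughly 2 (first order)
```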
27. Obtain the CFL condition for Lax method of discretization of first order wave equation.
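The CFL limit of question 27, $C = a\,\Delta t/\Delta x \le 1$ for the Lax scheme, can also be demonstrated empirically. The grid size, mode and Courant numbers below are illustrative choices.

```python
import numpy as np

def lax_max_amplitude(C, nsteps=100, N=100):
    """Advance u_t + a u_x = 0 with the Lax scheme at Courant number C
    on a periodic grid and return the final max |u|."""
    i = np.arange(N)
    u = np.sin(2 * np.pi * (N // 4) * i / N)   # a short-wavelength mode
    for _ in range(nsteps):
        up = np.roll(u, -1)                     # u_{i+1} (periodic)
        um = np.roll(u, 1)                      # u_{i-1}
        u = 0.5 * (up + um) - 0.5 * C * (up - um)
    return np.abs(u).max()

print(lax_max_amplitude(0.8))   # stays bounded: stable for C <= 1
print(lax_max_amplitude(1.2))   # grows without bound: CFL violated
```

For this mode the Lax amplification factor has magnitude C, so the C = 1.2 run grows by a factor of 1.2 per step.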
28. Derive the continuity equation with differential approach in conservation form and from this
obtain non conservation differential form.
29. Explain the various computer graphic techniques used in CFD.
30. Explain the need for turbulence modeling in dealing with CFD problems. What are the
various turbulence models used in CFD problems.
31. Write an algorithm for obtaining the inverse of a matrix.
32. Consider steady state heat loss through a straight long fin, with the temperatures of the fin base
and the surrounding fluid being Tb and Tf respectively. Assume the heat loss from the end face
of the fin to be negligible. Derive the governing equation for the problem.
33. Distinguish between discretization and round-off errors. Compare them with suitable
examples.
34. Show that forward time and central space differencing for the first order wave equation results
in an unstable scheme using the Von Neumann stability criterion and comment.
35. Derive the continuity equation with integral approach in non conservation form and from
this obtain conservation integral form.
36. Express the complete Navier-Stokes equations and derive Bernoulli’s equation from it
explaining the assumptions made in the process.
37. Discuss the various relaxation techniques.
38. Derive the Quasi one-dimensional compressible flow equations for flow through a nozzle.
Explain the method of capturing the shock in dealing with the nozzle.
39. Present the algorithm for explicit and implicit method and outline the solution procedure.
40. Write down the governing equation for the thermal field and present the discretised form of
the equation for a flow inside a channel. Show how upwinding is carried out and outline the
procedure.
41. Derive the unsteady mass conservation equation in differential form and show how density
is updated using the MacCormack method for a quasi one dimensional compressible flow in
a CD nozzle.
42. For the continuity equation for two dimensional transient compressible flow, show how
density is updated using the MacCormack scheme.
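The predictor-corrector structure asked about in questions 41-42 can be sketched on the linear advection equation $u_t + a\,u_x = 0$, used here as a stand-in for the full continuity equation; the grid and Courant number are illustrative choices.

```python
import numpy as np

# MacCormack scheme for u_t + a u_x = 0 on a periodic grid:
# forward-difference predictor, backward-difference corrector.
N, a, C = 100, 1.0, 0.5                 # C = a*dt/dx (Courant number)
x = np.arange(N) / N
dx = 1.0 / N
dt = C * dx / a
u = np.sin(2 * np.pi * x)               # initial density-like profile

nsteps = int(round(1.0 / (a * dt)))     # advect over exactly one period
for _ in range(nsteps):
    # Predictor: forward difference in x.
    up = u - C * (np.roll(u, -1) - u)
    # Corrector: average of old value and predictor, backward difference.
    u = 0.5 * (u + up - C * (up - np.roll(up, 1)))

# After one full period the profile should return to its starting shape.
err = np.abs(u - np.sin(2 * np.pi * x)).max()
print(err)
```

For a linear equation MacCormack reduces to the Lax-Wendroff scheme, so the residual error here is the usual second-order dispersion error.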
43. Explain the SIMPLE algorithm and show how the pressure is determined.
44. Derive the continuity equation for an incompressible flow field in differential form and state
the assumptions made.
45. Derive the momentum equation in Cartesian co-ordinates and state the assumptions for NS
equations.
46. Derive the forward and central difference approximations to the first derivative along with
the leading error term.
47. Derive the stability criterion for the explicit scheme for one dimensional transient
conduction.
48. Derive the stability criterion for the wave equation (CFL condition) from physical or
numerical considerations.
ANNA UNIVERSITY OF TECHNOLOGY, CHENNAI
221082 -COMPUTATIONAL FLUID DYNAMICS
TIME – 3 Hours MAXIMUM: 100 Marks
PART – A (10 x 2 = 20 Marks)
1. What are the important applications of CFD in engineering?
2. Write the 3-dimensional momentum equation.
3. What are the advantages and disadvantages of stream function-vorticity method?
4. Differentiate between structured and unstructured grid.
5. What is meant by CFL condition?
6. Define discretization and round off error.
7. Discuss the need for upwind type discretization.
8. What are the various relaxation techniques?
9. What are the effects of turbulence on the time-averaged Navier-Stokes equations?
10. What are the needs for turbulence models?
PART – B (5 x16 = 80 Marks)
11. a) Derive the energy equation for a viscous flow in partial differential non-
conservation form. (16)
(Or)
b) (i) Explain uniform and non-uniform grids and their necessity. (8)
(ii) Explain briefly about numerical errors and grid independence test. (8)
12. a) A long, cylindrical fuel element of radius b = 1 cm and thermal conductivity k = 25
W/(m·°C) generates energy at a constant rate of qg = 5 × 10⁸ W/m³. The boundary surface at
r = b can be assumed to be maintained at 100 °C. Assuming one-dimensional, radial heat
flow, calculate the radial temperature distribution in the fuel element by using finite
differences. Compare the results with the exact solution of the
problem. (16)
(Or)
b) Derive the equation for two-dimensional transient heat conduction in a square plate
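Problem 12(a) can be checked numerically. The sketch below (node count is an arbitrary choice) discretizes $(1/r)\,d/dr(r\,dT/dr) = -q_g/k$ with symmetry at $r = 0$ and $T(b) = 100\,$°C, and compares with the exact parabolic profile $T = T_s + q_g(b^2 - r^2)/(4k)$.

```python
import numpy as np

# Finite-difference solution of (1/r) d/dr(r dT/dr) = -qg/k, 0 <= r <= b,
# with symmetry at r = 0 and T(b) = Ts. Data taken from the question.
k, b, qg, Ts = 25.0, 0.01, 5e8, 100.0
N = 20                                  # number of intervals (arbitrary)
dr = b / N
r = np.linspace(0.0, b, N + 1)

A = np.zeros((N + 1, N + 1))
rhs = np.full(N + 1, -qg / k)
A[0, 0], A[0, 1] = -4 / dr**2, 4 / dr**2          # symmetry at r = 0
for i in range(1, N):
    A[i, i - 1] = (r[i] - dr / 2) / (r[i] * dr**2)
    A[i, i + 1] = (r[i] + dr / 2) / (r[i] * dr**2)
    A[i, i] = -(A[i, i - 1] + A[i, i + 1])
A[N, N] = 1.0
rhs[N] = Ts                                        # surface held at Ts

T = np.linalg.solve(A, rhs)
T_exact = Ts + qg * (b**2 - r**2) / (4 * k)        # exact parabolic profile
print(T[0], T_exact[0])    # centerline temperature, about 600 degC here
```

Because the exact profile is quadratic in r, this conservative difference scheme reproduces it essentially to machine precision.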
13. a) Discuss SIMPLE procedure of Patankar & Spalding in detail. (16)
(Or)
b) State and explain the difference between explicit and implicit methods with suitable
examples. (16)
14. a) (i) Elaborate the finite difference approach for computation of thermal boundary layer
flows. (8)
(ii) An iron rod L = 5 cm long of diameter D = 2 cm with thermal conductivity k = 50
W/(m·°C) protrudes from a wall and is exposed to an ambient at T∞ = 20 °C and
h = 100 W/(m²·°C). The base of the rod is at T0 = 320 °C, and its tip is insulated. Assuming
one-dimensional steady-state heat flow, calculate the temperature distribution along the rod
and the rate of heat loss into the ambient by using finite differences. Compare the finite-
difference solution with the exact analytical solution of this problem. (8)
(Or)
b) (i) Discuss the upwind scheme and its advantages. (8)
(ii) Derive an energy balance equation of a square plate using convection heat transfer
Method (8)
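Question 14(a)(ii) lends itself to the same kind of check. The sketch below uses the stated data with $m^2 = 4h/(kD)$ for a circular rod and the standard insulated-tip exact solution; the node count is an arbitrary choice.

```python
import math
import numpy as np

# 1D fin equation d2T/dx2 - m^2 (T - Tinf) = 0, T(0) = T0, dT/dx(L) = 0.
# Data from the question; m^2 = h P / (k A) = 4 h / (k D) for a round rod.
L, D, k, h, Tinf, T0 = 0.05, 0.02, 50.0, 100.0, 20.0, 320.0
m2 = 4 * h / (k * D)                   # = 400, so m = 20 and mL = 1
N = 50                                 # intervals (arbitrary choice)
dx = L / N

A = np.zeros((N + 1, N + 1))
rhs = np.zeros(N + 1)
A[0, 0] = 1.0
rhs[0] = T0                            # base temperature
for i in range(1, N):
    A[i, i - 1] = A[i, i + 1] = 1 / dx**2
    A[i, i] = -2 / dx**2 - m2
    rhs[i] = -m2 * Tinf
A[N, N - 1] = 2 / dx**2                # insulated tip via a ghost node
A[N, N] = -2 / dx**2 - m2
rhs[N] = -m2 * Tinf

T = np.linalg.solve(A, rhs)

# Exact solution: T = Tinf + (T0 - Tinf) cosh(m (L - x)) / cosh(m L)
m = math.sqrt(m2)
x = np.linspace(0.0, L, N + 1)
T_exact = Tinf + (T0 - Tinf) * np.cosh(m * (L - x)) / np.cosh(m * L)
print(T[-1], T_exact[-1])              # tip temperatures, nearly equal
```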
AE6702 - EXPERIMENTAL STRESS ANALYSIS
PART-A
1. Define Measurement:
(i) The standard used for comparison purposes must be accurately defined
& should be commonly acceptable.
(ii) The standard must be of the same character as the measurand (i.e., the
unknown quantity or the quantity under measurement).
(iii) The apparatus used and the method accepted for the purposes of
comparison must be provable.
(ii) Indirect Method: Measuring systems are used in indirect methods for
measurement purposes.
4. What are the uses of measuring instruments?
1. Balancing the unknown force against the known gravitational force either
directly
2. Transferring the unknown force to a fluid pressure and then measuring the
resulting fluid pressure. Hydraulic and pneumatic load cells are used for
transferring the force.
11. Tell something about ‘static characteristics’ and ‘static calibration’ in
measurements:
- Instrumental errors
- Environmental errors
- Observational errors
Static correction is the negative value of absolute static error ie, Cs= -Eo
19. A meter reads 127.50V and the true value of the voltage is 127.43v.
Determine (a) static error and (b) static correction for this instrument.
20. A thermometer reads 95.45 °C and the static correction given in the
21. An inclined limb manometer used for measurement of flow rate reads
0.161 × 10⁻³ m³/s. The true value of the flow rate is 0.159 × 10⁻³ m³/s.
Determine (a)
The relative error (fractional error) is defined as the ratio of the error to
the specified magnitude (nominal magnitude) of a quantity.
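Questions 19-21 involve only the definitions above. A quick numeric check, with the readings taken from the questions and the relative error referred here to the true value:

```python
# Static error, static correction, and relative error from the readings
# in questions 19 and 21.
measured_v, true_v = 127.50, 127.43         # voltmeter (question 19)
static_error = measured_v - true_v           # +0.07 V
static_correction = -static_error            # -0.07 V

measured_q, true_q = 0.161e-3, 0.159e-3      # flow rate (question 21)
relative_error = (measured_q - true_q) / true_q

print(round(static_error, 2), round(static_correction, 2))
print(round(100 * relative_error, 2), '%')   # about 1.26 %
```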
1. Mechanical Extensometers
2. Optical extensometers
4. Electrical extensometers
5. Pneumatic extensometers.
than 1000: 1
3. Low input force: The input force required to cause displacement should
be
1. Wedge magnification
2. Screw magnification
3. Compound magnification
4. Lever magnification
34. Give the magnification and gauge length of the Porter-Lipp strain gauge.
Gauge length is 25 mm
1. Very compact
2. Light weight
The magnification may vary from 300 to 2000 depending upon the
model. The gauge length varies from 12.5 to 25 mm.
length of 50 mm.
38. Give the minimum strain value that the scratch gauge can sense.
The minimum strain that a scratch gauge can sense is above 100
microstrains.
1. Compact in size
adhesive bonding
3. It can measure scratch under all types of loading (static, shock, fatigue)
42. For the following reading find the deformation sensitivity & strain
sensitivity
factor = 5
43. Give the formula to obtain fundamental frequency for a vibrating wire
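The formula itself does not appear above; the standard result for a taut wire of length $l$, tension $T$ and mass per unit length $m$ (a textbook relation, not quoted from this document) is

$$f = \frac{1}{2l}\sqrt{\frac{T}{m}}$$

In a vibrating-wire strain gauge, strain changes the tension $T$ and hence shifts the measured frequency.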
PART – B
1. Explain in detail the Principles of Measurements.
2. Write short notes on:
(a) Accuracy
(b) Sensitivity
(c) Range
3. Write a short account of the various types of strain gauges. Give their
special advantages and limitations.
4. What are the basic characteristics of a strain gauge? Which factors should
be considered
5. What are the various types of Mechanical strain gauges? Explain
Huggenberger tensometer in detail.
6. What are the various types of optical strain gauges? Explain the
Tuckerman gauge in detail.
7. Explain the construction and working of Acoustical strain gauge.
8. What are the different types of electrical strain gauges? Describe a
capacitance strain gauge and give its uses and limitations.
UNIT –II
ELECTRICAL RESISTANCE STRAIN GAUGES
PART – A
In this type of gauge the losses in the magnetic circuit are varied by
changing the thickness or position of the high-loss element inserted in the
magnetic field.
7. Give the formula for the impedance of a coil to the passage of alternating
current.
F = frequency in hertz
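Only the symbol definition survives above; for a coil of resistance $R$ and inductance $L$ carrying alternating current of frequency $f$, the standard impedance (a textbook result, not taken from this document) is

$$Z = \sqrt{R^{2} + (2\pi f L)^{2}}$$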
A thin paper sheet or metal sheet with strain gauge wire is bonded with an
adhesive material to the structure under study.
The foil strain gauge has metal foil photo-etched in a grid pattern on an
electric insulator of thin resin, with gauge leads attached.
14. Give the formula for electrical capacity between parallel plates in
capacitance
strain gauges
N- No of plates
Photo-etching is the act of producing a grid configuration on metal foil with
the help of a photographic process.
High resistance
2. Rectangular rosette
a) Three element
b) Four element
4. T-delta rosette
21. Give the formula for strain measured by a strain gauge in particular angles
22. Give the relation on between principal stress and principal strains
23. What methods are available for computing strain rosette data?
1. Analytical Solutions
2. Graphical Solutions
4. Nomographic Solutions
5. Geometrical Computers.
PART - B
UNIT – III
PHOTOELASTICITY
PART - A
1. Define Light
Light is usually defined as the radiation that can affect the human eye.
Light from a source that emits a continuous spectrum with equal energy
for every wave length is called white light.
The locus of points on different radial lines from the source exhibiting the
same
5. Define Ray?
A line normal to the wave front, indicating the direction of propagation of
the waves is called a ray.
Waves in which the vibrations are along the direction of their travel are
known as longitudinal waves.
The light having vibration only along a single straight line perpendicular
to the
10. What methods are available to obtain plane polarized light?
11. What are the disadvantages of the Nicol prism when we use it to
obtain plane polarized light?
1. Costly
2. Intensity is Poor.
The ratio of the velocity of light in air to the velocity in the medium is
called the
The wavelength of any given frequency is the distance traveled during
one complete vibration.
The phase of vibration at any instant defines the stage of the cycle
reached at that instant.
17. Give the general equation of motion of a transverse light wave propagating
in the Z-direction.
The crossed-crossed setup is called the standard setup of the circular
polariscope.
26. Give the most commonly used methods for compensation techniques
6. Photometric method.
27. What are the techniques used to determine the stresses at the inner layers of
the
PART – B
PART – A
i) The directions of the principal stresses, which are perpendicular to the cracks
(isostatics).
ii) The estimated magnitude of the larger principal stress by means of the
isoentatics (the loci of crack ends).
iii) The maximum principal stress theory is assumed to govern the failure of the
coating.
iv) Residual stresses in the calibration strip and specimen are the same.
i) It provides whole field data for both magnitude and direction of principal
stresses and does not require a tedious point-to-point method.
The moiré effect occurs whenever two similar but not quite identical arrays
of equally spaced lines or dots are arranged so that one array can be viewed
through the other.
i) Geometrical approach
iv) Moiré’s fringes can be used to determine isopachics and thus can act as
an extensometer.
iii) The gratings do not spoil the specimen as is the case in the brittle
lacquer and strain gauge methods.
PART – B
1. What are the assumptions made while analyzing brittle coatings? Derive
expressions for coating stresses.
3. Explain briefly the refrigeration technique used for brittle lacquers. What
are the effects of this technique on brittle lacquers?
4. What are the various types of brittle coating available? Discuss their
important features.
5. Describe the calibration method generally used for brittle coating. How can
true threshold strains be determined by this method?
7. Explain the Moiré’s method in brief and discuss the fundamental properties
of the Moire’s fringes.
8. What are the two techniques used for Moiré’s fringe analysis? Discuss the
displacement approach in detail?
9. Describe the shadow moiré method in detail and give its uses.
UNIT-V
NON-DESTRUCTIVE TESTING
PART – A
quantity available.
- Is simple to analyze
- High sensitivity
PART – B
AE6010 - AIRFRAME MAINTENANCE AND REPAIR
1. What is welding?
Welding is a process used for joining metal parts by either fusion or forging.
2. What are the joining methods used in aircraft?
Bolting
Riveting,
Brazing,
Soldering,
Bonding,
Welding.
3. Give reasons why the welding process is the best joining method.
Rigidity
Simplicity
Low weight
High strength
4. What is Gas welding? List its types?
Gas welding is accomplished by heating the ends and edges of metal to a molten state with a
high temperature flame.
Types:
Oxy-acetylene
Oxy-hydrogen
5. What is electric resistance welding?
It is a process in which a low-voltage, high-amperage current is applied to the metal to
be welded through heavy, low-resistance copper conductors.
6. What are the types of electric resistance welding?
Butt welding
Spot
7. List the equipment used in the oxy-acetylene welding technique.
Cylinder (Oxygen, acetylene)
Pressure regulator
Welding torch
Hose
Special wrench
Spark lighter
Goggles , Gloves
Fire extinguisher
8. What type of fire extinguisher is used in oxy-acetylene welding?
The carbon dioxide type is used, a chemical powder extinguisher intended specially for gas or oil fires.
10. How to identify the oxygen- hose & acetylene –hose?
Oxygen – right hand thread
Acetylene - left hand thread
11. What are the types of NDT?
Eddy current method
Ultrasonic
Dye-penetrant
Magnetic particle
12. List the types of plastic used in aircraft?
Acrylic plastics
Cellulose acetates
13. What is thermosetting plastic?
1. State the characteristics of a good weld and explain how a ‘welded-patch repair’ is carried out.
2. Explain in detail the procedures and instructions for setting up ‘Acetylene
Welding Equipment’.
3. Explain in detail the various NDT methods used in aircraft maintenance.
4. (i). State the five fundamental types of welded joints.
(ii). Explain the characteristics of a good weld.
(iii). What is plasma Arc welding?
(iv). Differentiate between MIG and TIG welding.
5. Explain welding jigs and fixtures used in the aircraft industry?
6. Explain maintenance of welding equipment.
7. Explain different types of inert gas welding?
8. Explain NDI checks used in sheet method?
9. Explain rivet repair design?
UNIT II
1. Discuss about the maintenance and repair of plastic components.
2. Explain in detail about the inspection and repair of composite components.
3. Explain the common method of cementing transparent plastics.
4. Explain a typical example of the procedure used in the repair of a mat-molded
assembly.
5. Explain NDI methods for composites inspection?
6. Explain cementing of plastics.
UNIT III
1. Describe the important guidelines for installation and rigging of control surfaces.
2. Explain the methods used for checking the track of the main rotor of a helicopter.
3. Explain the procedure of jacking an aircraft.
4. Describe the various methods used in tracking the main rotor of a helicopter.
5. Explain control surface rigging?
6. Explain procedure for AC leveling?
7. Explain the symmetry check on aircraft?
UNIT IV
1. Describe the inspections on fixed-gear and retractable landing gear carried
out for maintenance of landing gear.
2. Write short notes on Fire Protection system and Ice Protection system.
3. Write the procedure for servicing and maintenance of oxygen system in any airplane.
4. Write short notes on:
(i). water and waste system inspection.
(ii) fire protection system inspection.
5. Explain inspection and maintenance of fire protection system?
6. Explain inspection and maintenance of instruments?
7. Explain inspection and maintenance of fixed and retractable landing gear.
UNIT V
1. Describe in brief about the three categories of hazardous materials in aircraft
maintenance.
2. Explain quoting an example of trouble shooting procedure followed in aircraft
servicing.
3. Write in detail the safety precautions to be followed while storing and handling
hazardous materials.
4. Explain the causes and remedial actions of the following defects:
(i). Aircraft has a tendency to fly one wing low
(ii). Aircraft has a tendency to be nose or tail heavy.
5. Explain with diagram on position and warning system used in aircraft?
6. Explain trouble shooting with and without chart?
AE6007 - FATIGUE AND FRACTURE
PART A
1. Define endurance limit
2. What is a notch?
3. What is stress concentration factor
4. What is Neuber’s stress concentration factor
5. What is plastic stress concentration factor
6. Draw notched S.N curve
7. What is fatigue
8. Draw the SN curve
9. Explain Goodman, Gerber and soderberg relations
10. What are stress reversals
PART-B
1. Explain S-N curve of fatigue tests
2. Explain the effect of mean stress, Goodman, Gerber and Soderberg relations
3. Explain notches and stress concentrations
4. Explain Neuber’s stress concentration factors in detail
5. Explain plastic stress concentration factors in detail
6. Explain Notched S.N curves in detail
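For reference against question 2 above, the three mean-stress criteria, written with stress amplitude $\sigma_a$, mean stress $\sigma_m$, endurance limit $\sigma_e$, ultimate strength $\sigma_u$ and yield strength $\sigma_y$ (standard textbook forms, not quoted from this document):

$$\text{Goodman:}\ \frac{\sigma_a}{\sigma_e} + \frac{\sigma_m}{\sigma_u} = 1,\qquad
\text{Gerber:}\ \frac{\sigma_a}{\sigma_e} + \left(\frac{\sigma_m}{\sigma_u}\right)^{2} = 1,\qquad
\text{Soderberg:}\ \frac{\sigma_a}{\sigma_e} + \frac{\sigma_m}{\sigma_y} = 1$$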
PART A
1. What is low cycle fatigue
2. What is high cycle fatigue
3. Explain Coffin-Manson’s relation
4. Explain transition life
5. What is cyclic strain hardening
6. What is strain softening
7. Explain the analysis of load histories
8. Name the cycle counting techniques
9. What is cumulative damage
10. What is Miner’s theory
PART B
1. Explain Low cycle fatigue and high cycle fatigue in detail
2. Explain Coffin-Manson’s relation
3. Explain the analysis of load histories in detail
4. Explain cycle counting techniques in detail
5. Explain Miner’s theory.
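Miner’s theory (Part A questions 9-10 and question 5 above) is simply the sum of damage fractions $\sum n_i/N_i$, with failure predicted when the sum reaches 1. The load blocks in this sketch are invented for illustration.

```python
# Miner's cumulative damage rule: failure is predicted when
# sum(n_i / N_i) reaches 1. Load blocks here are hypothetical.
blocks = [
    (1.0e4, 1.0e5),   # (applied cycles n_i, cycles to failure N_i)
    (2.0e4, 5.0e4),
]
damage = sum(n / N for n, N in blocks)
remaining = 1.0 - damage               # fraction of life left

print(round(damage, 6), round(remaining, 6))   # 0.5 0.5
```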
PART A
1. What are the phases in fatigue life?
2. Explain dislocation
3. Explain stress analysis of cracked bodies
4. What is potential energy and surface energy
5. What is fracture toughness
6. What are the stress intensity factors
7. Write the Griffith’s equation
8. What is nucleation ?
9. Explain the crack initiation process
10. Explain the crack growth process
PART B
1. Explain the phases in fatigue life in detail
2. Explain the strength and stress analysis of cracked bodies
3. Explain Griffith theory
4. Explain Irwin-Orwin extension of Griffith’s theory
5. Explain the effect of thickness on fracture toughness and stress intensity factors
for typical geometries.
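Question 7 of the preceding Part A asks for Griffith’s equation; the standard plane-stress form for a through crack of half-length $a$ in a wide plate, with Young’s modulus $E$ and specific surface energy $\gamma_s$ (a textbook result, not quoted from this document), is

$$\sigma_c = \sqrt{\frac{2E\gamma_s}{\pi a}}$$

and the related stress intensity factor for the same geometry is $K = \sigma\sqrt{\pi a}$.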
PART A
1. What is safe life?
2. What are fail-safe design philosophies?
3. What is the importance of fracture mechanics in aerospace structures?
4. Explain the methods of fatigue testing
PART B
1. Explain safe life and fail safe design philosophies in details
2. Importance of fracture mechanics in aerospace structures?
3. Application of fracture mechanics to composite materials and structures
PART A
1. What are the common causes of failure
2. Explain the principle of failure analysis?
3. Explain the fracture mechanics approach to failure problems
4. What are the techniques of failure analysis
5. Explain service failure mechanisms
6. What is brittle fracture?
7. What is ductile fracture?
8. Distinguish brittle fracture and ductile fracture
9. What is fatigue fracture?
10. What are wear failures?
11. What are fretting failures
12. What are environment induced failures
13. What are the high temperature failures
14. What are faulty heat treatment failures?
15. What are design failures
16. What are processing failures?
PART B
1. What are the various techniques of failure analysis? Explain in detail
2. Explain service failure mechanisms
3. Explain brittle and ductile fracture in detail
4. Explain wear failure and fretting failures
5. Explain environment induced failures and high temperature failures
6. Explain faulty heat treatment and design failures
GE6757
TOTAL
QUALITY
MANAGEMENT
UNIT I-INTRODUCTION
PART - A
1. Define quality.
It is defined as the process of planning to design and obtain a better quality product
or service and to attain new breakthrough goals.
5. Constantly improve the process of planning, production, and service- this system includes people.
10. Eliminate slogans/targets asking for increased productivity without providing methods
12. Remove barriers that stand between workers and their pride of workmanship.
14. Put all emphasis in the company to work to accomplish the transformation.
A vision statement outlines what a company wants to be. It focuses on tomorrow; it is inspirational;
it provides clear decision-making criteria; and it is timeless.
14. Define Mission Statement?
A mission statement outlines what the company is now. It focuses on today; it identifies the
customer(s); it identifies the critical process(es); and it states the level of performance. It has
been said that a vision is something to be pursued, while a mission is something to be
accomplished.
15. Define Quality Policy?
The overall intentions and direction of an organization regarding quality, as formally expressed by
top management.
1. Recognize that top management and all organizational units are fully committed to quality.
Quality is everyone’s responsibility, with quality leadership focused on Customer Satisfaction.
2. Define quality as the elimination of variation through an increase in process capability and
reduction in cycle time.
3. Adopt the defect prevention approach to quality rather than defect detection.
4. Establish a cooperative environment for teamwork and mutual problem solving among all
employees.
5. Make incremental, sustained improvements in quality and productivity through ongoing
training and application in statistical techniques.
6. Involve suppliers and customers in process and unit cost optimization.
PART B
2. Features –Secondary characteristics, added features, such as remote control. Though this
attribute is a secondary characteristic, it necessarily supplements the basic functioning of the
product.
3. Conformance –Meeting specifications or industry standards. How far the product’s physical and
performance characteristics match the set standards is called conformity.
4. Reliability –Consistency of performance over time, average time for the unit to fail. Under
prescribed conditions of use of the product the probability of surviving over a specified period is
termed as reliability of that product.
5. Durability –Useful life includes repair. The quantum of use a customer gets from a product
before it wears out beyond further use or when a replacement is essential is called durability.
6. Service –Resolution of problems and complaints, ease of repair. The possibility to repair a
product quickly and with ease is serviceability.
7. Response –Human-to-human interface, such as the courtesy of the dealer. It refers to the degree to which they react and act quickly to resolve problems.
8. Aesthetics –Sensory characteristics such as exterior finish. It is the manner in which a product looks, feels, tastes or smells.
9. Reputation –Past performance and other intangibles, such as being ranked first.
Accessibility and convenience –Whether the service is easy to get, or must the customer influence the service provider to obtain the required service?
Accuracy –This is with regard to whether the service is done correctly even in the first instance.
Responsiveness –Whether the service person reacts and acts quickly to resolve problems.
Quality planning is one of the foremost functions of the members in the organization to achieve the quality objectives.
Quality planning is nothing but forecasting the future activities related to quality.
The whole organization should be involved in the implementation of Quality planning.
Quality planning acts as a road map for the members in the organization to achieve the
quality goal.
The Quality planning organizes the activities as a team from the beginning of the project.
Thus the manufacturing department works simultaneously with design and Engineering department
before finalizing the detailed specifications.
Various steps in Quality planning are
Prevention Costs: Costs of activities that are specifically designed to prevent poor quality. Examples of "poor quality" include coding errors, design errors, mistakes in the user manuals, as well as badly documented or unmaintainably complex code.
Appraisal Costs: Costs of activities designed to find quality problems, such as code inspections and
any type of testing.
Failure Costs: Costs that result from poor quality, such as the cost of fixing bugs and the cost of
dealing with customer complaints.
Internal Failure Costs: Failure costs that arise before your company supplies its product to the
customer.
External Failure Costs: Failure costs that arise after your company supplies the product to the
customer, such as customer service costs, or the cost of patching a released product and distributing
the patch.
Total Cost of Quality: The sum of costs: Prevention + Appraisal + Internal Failure + External
Failure.
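The sum above can be sketched as a trivial Python function; the function name and the annual figures below are hypothetical illustrations, not part of the question bank:

```python
def total_cost_of_quality(prevention, appraisal, internal_failure, external_failure):
    """Total quality cost = prevention + appraisal
    + internal failure + external failure costs."""
    return prevention + appraisal + internal_failure + external_failure

# Hypothetical annual figures, in thousands
total = total_cost_of_quality(prevention=120, appraisal=80,
                              internal_failure=150, external_failure=50)
# total == 400
```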
The cost of quality, or total quality cost, is defined as the sum of resources spent on prevention plus resources spent on appraisal plus the expenditures and economic impact of failures. The objective of Total Quality Cost is to achieve measurable improvement in material quality and quality cost reduction on a systematic basis.
To survive, companies had to make major changes in their quality programs. Many hired
consultants and instituted quality training programs for their employees. A new concept of quality
was emerging. One result is that quality began to have a strategic meaning. Today, successful
companies understand that quality provides a competitive advantage. They put the customer first
and define quality as meeting or exceeding customer expectations.
Since the 1970s, competition based on quality has grown in importance and has generated tremendous interest, concern, and enthusiasm. Companies in every line of business are focusing on improving
quality in order to be more competitive. In many industries quality excellence has become a
standard for doing business. Companies that do not meet this standard simply will not survive. The
importance of quality is demonstrated by national quality awards and quality certifications that are
coveted by businesses.
The term used for today’s new concept of quality is total quality management, or TQM. The accompanying figure represents the old and new concepts of quality. The old concept is reactive, designed to correct quality problems after they occur. The new concept is proactive, designed to build quality into the product and process design.
The 14 points are obviously the responsibilities of top management. No one else can carry them out.
Quality is everybody's job, but quality must be led by management. The 14 points apply anywhere,
to small organizations as well as to large ones. The management of a service industry has the same
obligations and the same problems as management in manufacturing.
1. Create constancy of purpose toward improvement of product and service, with a plan to become competitive and to stay in business. Decide whom top management is responsible to.
2. Adopt the new philosophy. We are in a new economic age. We can no longer live with commonly
accepted levels of delays, mistakes, defective materials, and defective workmanship.
3. Cease dependence on mass inspection. Require, instead, statistical evidence that quality is built
in, to eliminate need for inspection on a mass basis. Purchasing managers have a new job, and must
learn it.
4. End the practice of awarding business on the basis of price tag. Instead, depend on meaningful
measures of quality, along with price. Eliminate suppliers that can not qualify with statistical
evidence of quality.
5. Find problems. It is management's job to work continually on the system (design, incoming
materials, composition of material, maintenance, improvement of machine, training, supervision,
retraining).
8. Drive out fear, so that everyone may work effectively for the company.
9. Break down barriers between departments. People in research, design, sales, and production must work as a team, to foresee problems of production that may be encountered with various materials and specifications.
10. Eliminate numerical goals, posters, and slogans for the work force, asking for new levels of
productivity without providing methods.
12. Remove barriers that stand between the hourly worker and his right to pride of workmanship.
14. Create a structure in top management that will push every day on the above 13 points
Performance involves fitness for use, a phrase that indicates that a product and/or service is ready for the customer’s use at the time of sale. Other considerations are:
Availability- which is the probability that a product will operate when needed.
Develop procedures for complaint resolution that empower front-line personnel.
Analyse complaints, but understand that complaints do not always fit into neat categories.
Work to identify process and material variations and then eliminate the root cause. "More inspection is not corrective action."
When a survey response is received, a senior manager should contact the customer and strive to resolve the concern.
Provide a monthly complaint report to the quality council for their evaluation and, if needed, the assignment of process improvement teams.
Organization
Customer care
Communication
Leadership
The customer code of ethics is a code which the employee is expected to sign. It is:
Keep promises to the customer.
Return the call to the customer as quickly as possible.
Extend assistance to the customers as needed. There should not be any let up.
Maintain a neat and acceptable environment in the work place as well as office.
Abraham Maslow stated that motivation could be explained in terms of needs and that there are five levels:
Level 1 –survival
Level 2 –security
Level 3 –social
Level 4 –esteem
Level 5 –self-actualization
An employee survey helps to assess the current state of employee relations, identify trends, measure the effectiveness of program implementation, and increase communication effectiveness. The survey covers personality characteristics such as anxiety and self-esteem in the organization and the ability to participate in the organization; management styles such as consideration of subordinates and commitment to quality; job attitudes such as job satisfaction, social support at work and co-workers’ commitment to quality; and aspects of the work itself such as task variety, autonomy and importance.
Empowerment is an environment in which people have the ability , the confidence and
the commitment to take the responsibility and ownership to improve the process and
initiate the necessary steps to satisfy customer requirements within well defined
boundaries in order to achieve organizational values and goals.
1.Sponsor
2.Team charter
3.Team composition
4.Training
5.Ground rules
6.Clear objectives
7.Accountability
8.Well-defined decision procedures
9.Resources
10.Trust
11.Effective problem solving
12.Open communication
13.Appropriate leadership
14.Balanced participation
15.Cohesiveness
The purpose of performance appraisal is to let the employees know how they are doing and to provide a basis for promotion, salary increases, counseling and other purposes relating to the employee’s future. Employees should be aware of the process of appraisal, and the parameters of evaluation should be known to them. The appraisal should point out the employee’s strengths and weaknesses.
An external customer can be defined in many ways, such as the one who uses the product or service, the one who purchases the product or service, or the one who influences the sale of the product or service. An external customer exists outside the organization and falls into three categories –current, prospective and lost customers. An internal customer is equally important: every function, whether it is engineering, order processing or production, has an internal customer –each receives a product or service and in exchange provides a product or service. Every person in a process is considered a customer of the preceding operation.
Know thyself
Know your employees
Establish a positive attitude
Share the goals
Monitor progress
Develop interesting work like job rotation, job enlargement and job enrichment.
Communicate effectively
Celebrate the success.
Quality circles (QC) are groups of people from one work unit who voluntarily meet together on a regular basis to identify, analyse and solve problems relating to quality and problems in other areas. They choose their own problems, discuss among themselves and try to arrive at a viable solution for implementation. These quality circles are quite successful in Japan, though success of equal magnitude has not been achieved in other countries.
Continuous improvement is derived from the Japanese term KAIZEN which means small but
continuous improvement.
26. What is Continuous process improvement?
Continuous process improvement is the heart of the TQM process. It consists of measuring key quality parameters and taking active steps to improve them. TQM demands structured improvement programmes in all areas of business administration, customer services, product quality and so on. The main aim of continuous process improvement is to improve the level of customer satisfaction while reducing the cost of attaining it. The organization should strive to achieve perfection and quality by continuously improving the production process and business.
32.What is 5 –S Practice?
5-S (JAPANESE 5-S PRACTICE) is the key for Total Quality Environment. The 5-S Practice
is a technique used to establish and maintain quality environment in an organization.
The 5-S Stands for five Japanese words.
1.Seiri (Organize)
2.Seiton (Put things in order)
3.Seiso (Clean up)
4.Seiketsu (Standardise)
5.Shitsuke (Discipline)
The logic behind the 5-S Practice is that organization, neatness, cleanliness, standardization
and discipline at the work place are the basic requirements for producing high quality products
and services, with high productivity and no wastage.
The PDSA Cycle was first developed by Walter Shewhart and then modified by Deming as the PDCA Cycle. PDSA stands for PLAN, DO, STUDY, ACT and is used for testing ideas that may create an improvement. It can be used to test ideas for improvement quickly and easily based on existing ideas, research, feedback, theory, audit etc., or practical ideas that have been proven to work elsewhere. It is a very effective improvement technique and it uses simple measurements to monitor the effect of changes over time. It encourages
starting with small changes, which can build in to larger improvements through successive
quick cycles of changes. The PDSA Cycle has been used for decades as an effective tool for
continuous improvement. This method is well established and validated and is particularly
suited for small and dynamic organizations.
Quality improvement activities are concerned with both sporadic and chronic
problems.
The approaches for solving chronic problems differ from those for solving sporadic problems.
Sporadic Problems.
Sporadic problems are quality problems that occur all of a sudden due to several reasons.
Sporadic problems are attacked by the control sequence.
Sporadic problems are dramatic and receive immediate attention.
It is easy to identify and eradicate the sporadic problems. The causes for the sporadic
problems have to be analyzed and corrective measures have to be taken to eradicate
those problems completely.
Chronic problem
A chronic problem is a long-standing problem, and it is very difficult to find a solution for it.
Chronic problems are not dramatic because they exist for a long time
Chronic problems use the breakthrough sequence.
Chronic problems are like chronic disease, which exist forever and it is often difficult
to solve.
A continuous improvement can be achieved in Chronic problems by using statistical
quality control techniques.
Kaizen is a Japanese word which means small but continuous improvement. It means ongoing improvement involving everyone, including managers and workers. The Kaizen philosophy assumes that our way of life, whether our professional life, social life or personal life, deserves to be constantly improved. In the Kaizen philosophy, improvement in all areas of business such as cost, meeting delivery schedules, employee safety and skill development, supplier relations, new product development and productivity all enhance the quality of the firm. Thus, any activity directed towards improvement falls under the Kaizen umbrella.
Activities such as establishing traditional quality control systems, installing robotics and advanced technology, instituting employee suggestion systems, maintaining equipment and implementing JIT production systems all lead to improvement, and all can be reduced to one word, namely KAIZEN.
Dr. Genichi Taguchi, a mechanical engineer who has won four Deming awards, introduced the quality loss function concept, which combines cost, target and variation in one metric, with specifications being of secondary importance. Taguchi defined quality as the loss imparted to society from the time a product is shipped. Societal losses include failure to meet customer requirements, failure to meet ideal performance, and harmful side effects.
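Taguchi's loss function is the quadratic L(y) = k(y − m)², where m is the target value and k is a constant determined from the loss at a known deviation. A minimal Python sketch, with hypothetical cost and dimension figures (not from the question bank):

```python
def taguchi_loss(y, target, k):
    """Taguchi quadratic loss L(y) = k * (y - target)**2.
    Loss is zero only at the target and grows with any deviation,
    even when y is still inside the specification limits."""
    return k * (y - target) ** 2

# Hypothetical: a 0.5 mm deviation from target costs 4 units,
# so k = 4 / 0.5**2 = 16 units per mm^2.
k = 4 / 0.5 ** 2
loss_on_target = taguchi_loss(10.0, 10.0, k)    # 0.0
loss_off_target = taguchi_loss(10.25, 10.0, k)  # 1.0
```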
There are five types of quality problems. These problems can be classified into:
1. Compliance problem –These arise when a structured system, operating to an agreed standard, still produces unacceptable results.
2. Unstructured problem –These arise in activities that are not covered by a standardized system or procedure.
3. Efficiency problem –These arise when the system performs acceptably from the customer’s viewpoint but is costly or troublesome to operate.
4. Process design problem –These arise because of poor process design.
5. Product design problem –These arise because of poor product design.
Capability index is the ratio of the tolerance (the specification width) to the process capability. There are two measures.
One (Cp) indicates the ability of the process to meet the specifications.
Another (Cpk) also indicates the centering of the process on the target.
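As an illustration (not part of the question bank), the two measures can be sketched in Python using the standard definitions Cp = (USL − LSL)/6σ and Cpk = min(USL − μ, μ − LSL)/3σ; the specification limits below are hypothetical:

```python
def process_capability(usl, lsl, mean, sigma):
    """Compute the two common capability indices.
    Cp compares the tolerance (USL - LSL) with the natural process
    spread (6 sigma); Cpk also penalizes off-center processes."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# A perfectly centered process: Cp and Cpk coincide
cp, cpk = process_capability(usl=10.6, lsl=9.4, mean=10.0, sigma=0.1)
```

For a centered process Cp equals Cpk; as the mean drifts toward either limit, Cpk falls below Cp.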
42.Define Partnering
Partnering is a relationship between two or more parties based upon trust, dedication to common goals and objectives, and understanding of each participant’s expectations and values. Its key elements are:
1. Long-term commitment
2. Trust
3. Shared vision
The following are the different approaches towards continuous process improvement.
Juran trilogy –Juran’s approach to quality improvement is from a cost-oriented perspective.
Shewhart’s Plan–Do–Study–Act (PDSA) Cycle –This approach applies scientific methods for continuous improvement of quality.
5S for workplace organization to improve quality.
KAIZEN –The Japanese approach to quality improvement.
PART –B
Features
Features or attributes of a product or service are psychological, time-oriented, contractual, ethical and technological. Features are secondary characteristics of a product or service. For example, the primary function of an automobile is transportation, whereas a car stereo system is a feature of an automobile.
Service
An emphasis on customer service is emerging as a method for organizations to give the customer added value. Customer service is an intangible: it is made up of many small things that are not quantifiable, yet contribute greatly to customer satisfaction. Providing excellent customer service is different from, and more difficult than, achieving product excellence.
Warranty
The product warranty represents an organization’s promise of a quality product, backed up by a guarantee of customer satisfaction. It also represents a public commitment to guarantee a level of service sufficient to satisfy the customer. A warranty forces the organization to focus on the customer’s definition of quality, and it generates feedback by providing information on product or service quality. It also forces the organization to develop a corrective action system. A warranty encourages customers to buy a service by reducing the risk of the purchase decision, and it generates more sales from existing customers by enhancing loyalty.
Price
Customers are willing to pay a higher price to obtain value. They evaluate the organization’s performance against that of its competitors to determine who provides the greatest value. Ongoing efforts must be made by everyone having contact with customers to identify, verify and update each customer’s perception of value in relation to each product or service.
Reputation
Customers rate organizations on their overall experience with them, not just the product. A good experience is repeated to six people while a bad experience is repeated to fifteen; therefore it is very difficult to create a favourable impression.
Telecom service:
1.Availability
2.Connection establishment
3.Connection retention.
4.Connection quality
5.Billing integrity
Hotels: They have to be highly sensitive to personal desires, attitudes, wishes and tastes. It depends upon
1.Appropriate reservation facility
2.Provision of suitable room facility
3.Availability of food and bar services
4.Other services such as parking, transportation, gift shops, telephone service, laundry.
Restaurants:
Transportation:
1.Effectiveness in protection
2.Charges of transportation
3.Efficiency of packing
4.Efficiency of storage
5.Efficiency of delivery
3.List and explain the five levels in Maslow’s theory and relate it to customer satisfaction
Level 2
It means a safe place of work and job security. These are very important to employees. It is a real motivating factor if an organization takes an interest in the well-being of its employees. Actually, a threat of losing the job cannot be a motivating factor to employees. It is not limited to job security alone; it includes having privacy in doing the job, by having a cabin, a storeroom or cupboard to lock up personal items, and adjustable furniture.
Level 3
It is the need to belong. This is a must for human beings. The greatest punishment to an employee is to keep him in isolation in an office environment. In such a situation he would feel utmost despair and might go to the extreme step of ending his life. Employees need to be provided with formal social areas like canteens and conference rooms, and informal areas like water coolers, coffee vending machines or notice boards.
Level 4
It is pride and self-worth. Everyone wants to be recognized in the organization in which he works. Business cards, office rooms furnished with the needed furniture and a good ambience definitely enhance his talent and motivate him. It is quite interesting to note that he longs for all these to satisfy his ego under the guise of self-respect. Seeking advice from employees on matters of common interest will create and sustain a feeling of self-esteem in employees.
Level 5
It deals with the ascent of the ladder to higher positions. Really intelligent and efficient employees need to be promoted. Stagnation in the existing position for too long a period will not let employees sustain enthusiasm, and no amount of coaxing can then be considered motivating. An interesting point at this juncture is worth noting: an employee is prone to think that if his job security is endangered, why not revert back to the previous level.
Herzberg extended the general work of Maslow by using empirical research to develop his theory on employee motivation. He found that people were motivated by recognition, responsibility, achievement, advancement and the work itself. These factors were labeled motivators. In addition, he found that bad feelings were associated with low salary, minimal fringe benefits, poor working conditions, ill-defined organizational policies and mediocre technical supervision. These job-related factors were labeled dissatisfiers or hygiene factors, which implies that they are preventable. It is important to realize that dissatisfiers are often extrinsic in nature and motivators are often intrinsic. The presence of the extrinsic conditions does not necessarily motivate employees; however, their absence results in dissatisfaction among employees. The absence of motivating factors does not make employees dissatisfied, but when motivating factors are present they provide strong levels of motivation that result in good job performance for the individual and the organization. In general, dissatisfiers must be taken care of before motivators can take effect; the dissatisfiers correspond to Maslow’s lower levels and the motivators are equivalent to the upper levels.
1.Organization
2.Customer care
3.Communication
4.Front-line people
5.Leadership
Lead by example
Listen to the front line people.
Strive for continuous process improvement
Empowerment is an environment in which people have the ability , the confidence and
the commitment to take the responsibility and ownership to improve the process and
initiate the necessary steps to satisfy customer requirements within well defined
boundaries in order to achieve organizational values and goals.
The word empowerment is not to be confused with delegation or job enrichment. Delegation is distributing and entrusting work to others. Employee empowerment requires the individual to be held responsible for accomplishing a whole task. Besides, the employee, by having been empowered, becomes accountable for the work. The following conditions are necessary for employee empowerment.
A team is defined as a group of people working together to achieve a common objective or goal. The various types of teams are:
ii) Cross-functional team:
It comprises about six to ten members representing different functional areas like engineering, marketing, accounting, production, quality and HR. It may also have a customer and supplier. The life of this team is also temporary.
2.Team charter –It is a document that contains the team boundaries, the background of the problem, and the team’s authority and duties. It also identifies the team members, leader, timekeeper and facilitator.
3.Team composition –The size should rarely exceed ten people except in the case of a natural work team. Larger teams have the problem of conflict.
5.Ground rules –They contain the rules of operation and conduct. There should be an open discussion of what is tolerated and what is not.
11.Effective problem solving –Decisions are based on the problem-solving method.
12.Open communication –Members actively listen without interruption, speak with clarity and directness, ask questions and say what they mean.
13.Appropriate leadership –All teams need leadership, whether imposed by the quality council or emerging when someone rises as a leader.
14.Balanced participation –All members must be actively involved in the team’s activities.
1.Team leader:
2.Facilitator –The facilitator is not a member of the team. He is a neutral assistant and may not be needed with a mature team. His role:
Supports the leader in facilitating the team during the initial stages.
Focuses on how the decisions are made.
Intervenes when necessary to keep the team on track.
Does not perform the activities of the team.
Provides feedback on the team’s performance.
Documents the main ideas of the discussion, the issues raised, decisions made, actions and future agenda.
Presents the documents for the team’s review.
Participates as a member.
5.Team member
3.Dominating participants –They like to hear themselves talk, and they dominate the meeting. The solution is to structure discussion on the key issues for equal participation and have the team agree on the need for balanced participation.
4.Reluctant participants –They feel shy or are unsure of themselves; they must be encouraged to participate.
6.Rush to accomplish –The team is being pushed by one or more members who are impatient for results. Confront the rusher offline and explain the effects of impatience.
7.Attribution –It is guessing at a person’s motives rather than asking their opinion. The solution is to find out the real meaning of the problem.
8.Discounts and plops –These arise when members fail to give credit to another’s opinion or no one responds to a statement, which then "plops". The solution is to reinforce active listening and support the discounted member.
9.Wanderlust –It happens when members lose track of the meeting’s purpose or want to avoid a sensitive topic. Solutions are to use an agenda with time estimates and redirect the conversation back to the agenda.
10.Feuding team members –They disrupt the entire team with their disagreement. Solutions are to get the adversaries to discuss the issues offline, offer to facilitate the discussion and encourage them to form some contract about their behaviour.
There are different stages in the life cycle of teams. Bruce Tuckman found that there were four stages to team development: forming, storming, norming and performing.
1.Forming –Here the members become aware of the team boundaries. They are not familiar with each other, they are cautious, and their communication is formal. Time is spent in organizing and training. The leader charters the team. They should meet to evaluate the problem posed by the management, determine the type of training team members may need, and identify the appropriate team leader.
2.Storming –The members start to realize the amount of work that lies ahead, and they resist working as a team. There is a great deal of conflict. Each individual, particularly in a cross-functional team, brings both hierarchical and functional baggage, differences in goals, differences in perception, different work ethics, sense of time, career and family priorities, and attitudes towards authority. The team leader should handle the conflict:
Ask each member to list what the other side should do.
Have each side write ten questions for their opponents. This will allow them to signal their major concerns about the other side’s position. The answers often lead to compromise.
Convince the team members that they sometimes have to admit they are wrong.
Respect the experts on the team.
4.Performing –They have settled their relations and expectations. They better understand the project and begin performing by diagnosing and solving problems and choosing and implementing changes. Members work to achieve their objectives effectively and efficiently.
5.Adjourning –This stage is reserved for temporary teams. They should evaluate their performance and determine the lessons learned. They also need a celebration to recognize the team members’ contribution to the organization.
2.Incompatible rewards and compensation –Organizations make little effort to reward team performance. Because of the strong focus on individual rewards, it is difficult for individuals to buy into the team concept.
3.First-line supervisor resistance –Supervisors are reluctant to give up power and are concerned about job security.
5.Lack of management support –Management must provide the resources and buy into the quality council/sponsor system.
7.Lack of union support –Union support is needed for the team to be successful.
8.Project scope too large –The team and organization are not clear on what is reasonable, or management is abdicating its responsibility to guide the team.
9.Project objectives are not significant –Management has not defined what role the team will play in the organization.
10.No clear measure of success –The team is not clear about its charter and goals.
11.No time to do improvement work –The values and beliefs of the organization are not compatible with the team’s work, and individual duties crowd out the team’s progress.
12.Team is too large –The organization lacks methods for involving people in ways other than team membership.
13.Trapped in groupthink –Team members all have a mindset that no actions are taken until everyone agrees with every decision.
Serves as a continual reminder that the organization regards quality and productivity
as important.
Offers the organization a visible technique to thank high achievers for outstanding
performance.
Provides employees a specific goal to work toward. It motivates the employees to improve the process.
Boosts morale in the work environment by creating a healthy sense of competition among individuals and teams seeking recognition.
The very purpose of performance appraisal is to make employees know how they are functioning in the organization. The employees should know the parameters for measurement. There are seven steps for improving performance appraisal:
2.Require work-team or group evaluations that are at least equal in emphasis to individual-focused evaluations.
The organization should strive to achieve perfection and quality by continuously improving the production process and business. It is very difficult to attain perfection. Continuous improvement is derived from the Japanese term KAIZEN, which means small but continuous improvement. Continuous process improvement is the heart of the TQM process. It consists of measuring key quality parameters and taking active steps to improve them. TQM demands structured improvement programmes in all areas of business administration, customer services, product quality and so on.
The main aim of continuous process improvement is to improve the level of customer satisfaction.
The following line diagram shows the steps towards implementing continuous process improvement.
ADDIE or Analysis, Design, Development, Implement and Evaluate model has five
phases.
a) Analysis Phase :
In this phase the aim is to identify areas of opportunity and to target specific problems. These areas of opportunity and problems are identified based on brainstorming sessions, process-definition sessions, recommendations forwarded to the team by organizational members, and various other analysis techniques.
b) Design Phase
This phase generates solutions through brainstorming sessions. Here the required resources are identified to implement the chosen solution, and the baselines to measure the outcomes are also identified.
c) Development Phase
d) Implementation phase
Execute the approved solution
e) Evaluation Phase
In this phase, measurement tools are built; implementation is monitored and
measurements to baseline are evaluated. Evaluation is not only reserved for the last but is
done throughout the entire process.
JURAN TRILOGY
Dr. Joseph Moses Juran’s major contribution was in the field of quality management.
Dr. Joseph Juran is also called a father of quality. He was a professor and quality consultant, and he conceptualized the Pareto principle, on which many industries depend. Its universal application makes it one of the most useful concepts and tools of modern-day management.
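The Pareto principle, the idea that a "vital few" causes account for most of the effect, can be sketched as a small Python routine; the defect names and counts below are hypothetical illustrations:

```python
def pareto_vital_few(causes, threshold=0.8):
    """Rank causes by count and return the 'vital few' that together
    account for at least `threshold` (e.g. 80%) of all occurrences."""
    ranked = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(causes.values())
    vital, cumulative = [], 0
    for cause, count in ranked:
        vital.append(cause)
        cumulative += count
        if cumulative / total >= threshold:
            break
    return vital

# Hypothetical defect tallies from a final-inspection log
defects = {"scratches": 48, "misalignment": 27, "porosity": 12,
           "wrong label": 8, "discoloration": 5}
vital_few = pareto_vital_few(defects)
# Here the first three causes already cover 87% of all defects
```

In practice these ranked counts are drawn as a Pareto chart: bars in descending order with a cumulative-percentage line, so improvement effort can be directed at the vital few first.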
Dr. Joseph Moses Juran wrote many books on quality. His Quality Control Handbook became the standard reference work on quality control for quality departments in most organizations. This handbook gives much useful information for improving the performance of an organization by improving the quality of its goods and services. His classic book Managerial Breakthrough presents a more general theory of quality management; it was the first book to describe a step-by-step sequence for breakthrough improvement. Today this process has evolved into Six Sigma, and it acts as a basis for quality initiatives worldwide.
In 1979 Juran founded Juran Institute, an organization aimed at providing Research and
pragmatic solutions to enable organizations from any Industry to learn the tools and
techniques for managing Quality.
Juran describes quality from the customer perspective as having two aspects.
Higher quality means a greater number of features that meet customer needs.
The second aspect is that the product should be free from troubles.
The Juran Trilogy, published in 1986, was accepted worldwide as the basis for quality
management.
The Juran Trilogy describes three basic components or processes required to manage quality
in the organization. They are
Quality Planning
Quality improvement
Quality Control
The Quality Trilogy model was developed by Juran at the Juran Institute. The Quality Trilogy
provides a model of how an organization can improve its bottom line by better understanding
the relationship between the processes that plan, control and improve quality.
Quality planning - The process for designing products, services and processes to meet new
breakthrough goals.
Steps involved in quality planning are:
Setting the goals
Identify the customers
Determine the needs of those customers
Translate those needs into business language
Develop a product that can respond to those needs
Optimize the product features so as to meet both the organization's needs and the customer's needs
Quality Improvement
Develop a process to produce a product of good quality.
Optimize the process
Quality Control
Prove that the process can produce the products under operating conditions with minimum
inspection.
Transfer the process into operations.
PDSA CYCLE
The PDSA cycle was first developed by Walter Shewhart and was later modified by Deming
as the PDCA cycle. PDSA stands for Plan, Do, Study, Act. The cycle is used to test ideas that
may create an improvement: it can test ideas for improvement quickly and easily, based on
existing ideas, research, feedback, theory, audit etc., or practical ideas that have been proven
to work elsewhere. It is a very effective improvement technique and it uses simple
measurements to monitor the effect of changes over time. It encourages starting with small
changes, which can build into larger improvements through successive quick cycles of
change. The PDSA cycle has been used for decades as an effective tool for continuous
improvement. The method is well established and validated, and is particularly suited to
small and dynamic organizations.
STEP – 1 PLAN
Identify what change you think will create improvement and then plan the test of that
change. What is the objective of introducing the change? Plan how you are going to collect
information about the differences that occur, and how you will know whether the change has
worked or not. The change should bring out differences that are measurable in isolation. A
major change can be broken into smaller and more manageable chunks.
STEP – 2 DO
Put the plan into practice. Then test the change by collecting data. This stage involves
carrying out the plan. It is important that the DO stage is kept as short as possible, although
there may be changes that can only be measured over long periods. Record any unexpected
events, problems and observations, and start analyzing the data.
STEP – 3 STUDY
Review and reflect on the data collected in the previous step. Complete the analysis of the
data. Find out whether there has been any improvement in the process. Did your expectations
meet the reality? Find out what could have been done differently.
STEP – 4 ACT
Make further changes or amendments after you have studied the results, and collect data
again. Carry out an amended plan and measure any difference.
5-S PRACTICE
1. Seiri (Organize)
2. Seiton (Put things in order)
3. Seiso (Clean up)
4. Seiketsu (Standardise)
5. Shitsuke (Discipline)
The English equivalents , their meanings and typical examples are shown in the following
table
1. Seiri (Organize)
Seiri is about separating the things which are necessary for the job from those that are
unnecessary, keeping the number of necessary things as low as possible and at a convenient
location. A differentiation should be made between necessary and unnecessary things.
4. Seiketsu (Standardise)
Seiketsu means continually and repeatedly maintaining neatness and cleanliness in the
organization. It claims both personal cleanliness and the cleanliness of the environment. The
emphasis is on visual management, transparency in storage (put appropriate labels) and
standardization.
5. Shitsuke (Discipline)
Discipline means instilling the ability of doing things the way they are supposed to be done.
Discipline is a process of repetition and practice. The emphasis in self-discipline is on creating
a work force with good habits.
The logic behind the 5-S Practice is that organization, neatness, cleanliness, standardization
and discipline at the work place are the basic requirements for producing high quality products
and services, with high productivity and no wastage.
KAIZEN Vs INNOVATION
Kaizen is different from innovation. Innovation results in large, short term and radical changes
in products and processes. Innovation is dramatic, a real attention getter and often championed
by a few proponents. Kaizen, on the other hand, is continuous improvement involving every
employee at every level of the organization. Kaizen is focused on small, frequent and
gradual improvements over the long term. The comparison between kaizen and innovation is
given in the following table.
24. How can everyone be involved in improvement activities (Kaizen philosophy)?
Top Management
Middle Management
Supervisors
Workers
Workers engage in improvement through suggestion systems and small group activities, self-
development programs for problem solving and enhanced Job performance skills.
The foundation for quality improvement in the kaizen philosophy is the use of statistically
based tools for problem solving and training for all levels of management and workers.
In the kaizen philosophy, quality control and engineering knowledge are made available to
shop-floor personnel so that they can solve their own problems better.
Organisations should choose their suppliers very carefully. When choosing partners, they
should use the existing performance of the partner, minimum targets, agreed targets and the
supplier's attitude as the criteria for supplier selection. Before selecting a supplier, the
organization must first evaluate whether its requirements (such as raw materials and other
items) actually need a supplier in the first place. This question arises because the
organization has a choice to make or buy a particular item. The decision is a strategic one
that must be made as early as the design stage.
Suppliers must be selected based on the following criteria:
Find a supplier who understands and appreciates the management philosophy of the
organization.
The supplier should have a stable management system. To determine this point, several
questions have to be asked, such as: Is there a quality policy statement with the supplier
that includes quality objectives and a commitment to quality? Is the policy understood at
all levels of management? Are they implementing it? Does the management have a
scheduled review of its quality system to determine its effectiveness?
The ability of the supplier to maintain high technical standards and his capability of
dealing with future technological innovations.
The ability of the supplier to supply the right quantity and quality of raw material at the
right time.
The ability of the supplier to meet sudden increase in requirement of raw materials
(Probably due to sudden change in demand)
The ability of the supplier to provide the raw material at the right price.
Easy accessibility of the supplier in terms of communication
The sincerity of the supplier in implementing the contract provisions.
The supplier has an effective quality system such as ISO 9000 or QS 9000 etc., and
other improvement programs.
The customer and supplier shall have agreed upon specifications that are
mutually developed, justifiable and unambiguous.
The supplier shall have no product-related lot rejection for a significant period
of time, say, one year, or a significant number of lots.
The supplier shall have no non-product related rejections for a stated period of
time or number of lots. Non product related rejections are wrong count of items
sent to supplier or billing error etc.
The supplier shall have no negative non-product related incidents for a stated
period of time. This criterion covers incidents or problems that occur even
though inspection and tests showed conformance to specifications.
The supplier shall have a fully documented quality system such as ISO 9000 or
QS 9000.
The supplier shall have successfully passed an on-site system evaluation. This
evaluation could be by a third party such as an ISO registrar or by the customer
itself.
The supplier must conduct inspections and tests on his premises to achieve
quality.
The supplier shall have the ability to provide timely inspection and test data to
the customer, because this is needed by the customer when the product
arrives.
There are six techniques used for presenting performance measures, namely
b) Control chart
c) Capability Index
28. Explain the basic concepts of performance measures and mention the areas
required to measure performance.
PART A
1. What are the seven quality tools?
The seven quality tools are the Pareto diagram, cause-and-effect diagram, check sheet,
histogram, scatter diagram, control chart and stratification.
2. Define the Pareto diagram?
It is a process tool to classify data and rank categories in descending order of occurrence,
to separate significant categories from trivial ones. It is constructed by separating data into
categories, counting the occurrences in each category, arranging categories from highest to
lowest frequency, and drawing and labeling bars for each category.
3. Define the Ishikawa diagram?
It is also called the fishbone diagram; it is a process tool used to identify the possible
causes of a particular effect.
4. Explain the histogram?
A histogram presents data in a form that visually illustrates the frequency of occurrence of
values. It shows both the process variation and the type of distribution that the collected
data entails.
Measures of central tendency are measures of the location of the middle or the center
of a distribution. The definition of "middle" or "center" is purposely left somewhat vague so
that the term "central tendency" can refer to a wide variety of measures. The mean is the most
commonly used measure of central tendency.
In any manufacturing process, variability is inherent and cannot be eliminated fully,
though it can be controlled to some extent. The extent of variability decides Go/No-Go, or
acceptance or rejection, of the products. Statistics renders immense help in assessing this
variability quantitatively and in taking corrective action promptly before any disaster
occurs as a consequence. A process capability study is a statistical technique to assess the
variation in the ability of the process during the conversion of feed material.
It is defined as the quality performance capability of the process with given process
factors under normal, in-control conditions. Based on the results of any continuously
measured process, the standard deviation is calculated by taking the square root of the
variance, and is used to calculate the indices of process capability, namely Cp and Cpk.
Process capability studies are needed to:
i. Predict the extent to which the process will be able to hold tolerances or customer
requirements.
ii. Choose, from among competing processes, the most suitable one for meeting customer
requirements.
iii. Redesign and implement a new process that eliminates the sources of variability now at
work.
Potential capability index (Cp): The potential process capability measures the overall
performance of the process and is measured as the ratio of the difference between the upper
specification limit (USL) and the lower specification limit (LSL) to six times the standard
deviation (σ).
Cp = Design tolerance / Process tolerance
Cp = (USL - LSL) / 6σ
Performance capability index (Cpk): The index Cp measures the precision of the process
by assessing overall performance, considering both positive and negative deviation. This
alone is not sufficient, since there is every chance of a lack of accuracy in the process. To
assess accuracy, the clustering of values around or away from the center value is
calculated. Using the performance capability index, the clustering effect relative to the
lower limit is calculated as Cpkl and relative to the upper limit as Cpku, and Cpk is the
minimum of the two.
Cpkl = (μ - LSL) / 3σ
Cpku = (USL - μ) / 3σ
Cpk = min(Cpku, Cpkl)
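The Cp and Cpk calculations above can be sketched in a few lines of Python; the specification limits and process parameters below are hypothetical values chosen for illustration.

```python
def process_capability(usl, lsl, mean, sigma):
    """Return (Cp, Cpk) for the given specification limits and process parameters."""
    cp = (usl - lsl) / (6 * sigma)      # potential capability: Cp = (USL - LSL) / 6*sigma
    cpku = (usl - mean) / (3 * sigma)   # clustering relative to the upper limit
    cpkl = (mean - lsl) / (3 * sigma)   # clustering relative to the lower limit
    return cp, min(cpku, cpkl)          # Cpk = min(Cpku, Cpkl)

# hypothetical process: spec limits 4 to 16, centred at 10, sigma = 2
cp, cpk = process_capability(usl=16, lsl=4, mean=10, sigma=2)
print(cp, cpk)  # 1.0 1.0 (process exactly fills the tolerance and is centred)
```

Note that Cpk drops below Cp as soon as the process mean drifts off centre, which is exactly the lack of accuracy the text describes.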
The arithmetic mean is what is commonly called the average: when the word "mean" is
used without a modifier, it can be assumed that it refers to the arithmetic mean. The mean is
the sum of all the scores divided by the number of scores. The formula in summation
notation is μ = ΣX/N, where μ is the population mean and N is the number of scores.
Median: When there is an odd number of numbers, the median is simply the middle
number. For example, the median of 2, 4, and 7 is 4. When there is an even number of
numbers, the median is the mean of the two middle numbers. Thus, the median of the numbers
2, 4, 7, 12 is (4+7)/2 = 5.5.
Mode is the most frequently occurring score in a distribution and is used as a measure
of central tendency. The advantage of the mode as a measure of central tendency is that its
meaning is obvious.
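The three measures of central tendency can be computed directly with Python's standard statistics module; the score lists below are hypothetical examples.

```python
from statistics import mean, median, mode

scores = [2, 4, 4, 7, 12]
print(mean(scores))           # 5.8  (sum of scores / number of scores)
print(median(scores))         # 4    (middle number of an odd-length list)
print(mode(scores))           # 4    (most frequently occurring score)
print(median([2, 4, 7, 12]))  # 5.5  (mean of the two middle numbers)
```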
11. What are types of Control Charts? Explain Variable charts? Explain Attribute
charts?
The control chart is a very important tool in the analyze phase of the Six Sigma
improvement methodology. It is used to judge whether the process is predictable and to
identify special causes of variation so that they can be acted on, ensuring that the
performance of the process is under control. The original concept of the control chart was
proposed by Walter A. Shewhart in 1924, and the tool has been used extensively in industry
since the Second World War, especially in Japan and the USA after about 1980. Control
charts offer the study of variation and its sources. They provide process monitoring and
control, and can also give direction for improvements. They can separate special-cause
from common-cause issues of a process, and they give early identification of special causes
so that there can be timely resolution before many poor-quality products are produced.
Shewhart control charts track processes by plotting data over time. These charts can
track either variables or attribute process parameters. The types of variable charts are process
mean (x), range (R), standard deviation (s), individual value (x) and moving range (Rs). The
attribute types are fraction nonconforming (p), number of nonconforming items (np), number
of nonconformities (c), and nonconformities per unit (u).
The typical control limits are plus and minus 3 standard deviations limits using about
20-30 data points. When a point falls outside these limits, the process is said to be out of
control. When a point falls inside these limits, the process is said to be under control.
There are various types of control charts, depending on the nature and quantity of the
characteristics we want to supervise. The following control charts are the most often used ones
depending on whether the data are continuous or discrete. These charts are called Shewhart
control charts. Note that for continuous data, the two types of chart are simultaneously used in
the same way as a single control chart.
The following formulae apply to the calculation of CL, UCL and LCL for the x-bar
(average) chart. Here, A2 and d2 are the frequently used constants for control charts.
Table 4.1 contains CL, UCL and LCL for the respective control charts.
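A minimal sketch of the x-bar chart limit calculation, assuming the standard A2 formulae (UCL = grand mean + A2·R-bar, LCL = grand mean − A2·R-bar); the grand mean, average range and subgroup size below are hypothetical.

```python
def xbar_limits(grand_mean, r_bar, a2):
    """CL, UCL and LCL for the x-bar (average) control chart."""
    cl = grand_mean                # centre line = grand mean of the subgroup averages
    ucl = grand_mean + a2 * r_bar  # upper control limit
    lcl = grand_mean - a2 * r_bar  # lower control limit
    return cl, ucl, lcl

# hypothetical process: grand mean 10.0, average range 2.0; A2 = 0.577 is the
# tabulated constant for subgroups of size n = 5
cl, ucl, lcl = xbar_limits(10.0, 2.0, 0.577)
print(cl, ucl, lcl)
```

A point plotted outside [LCL, UCL] would signal an out-of-control condition, as described above.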
Step 4. Draw the control chart and check for special causes
The control chart can now be drawn, with CL, UCL and LCL. The samples used for calculating
the control limits are then plotted on the chart to determine if the samples used to
calculate the control limits embody any special causes of variation. Special causes exist if any
of the following alarm rules apply:
It was proposed by De Moivre, an English mathematician, in the year 1733. A random
variable X is said to have a normal distribution with parameters μ and σ² (mean and
variance) if its density function is given by
f(x) = (1 / (σ√(2π))) e^(-(x - μ)² / 2σ²)
Where X ~ N(μ, σ²), X can be converted into the standard normal variable Z ~ N(0, 1)
using the relationship of transformation z = (X - μ) / σ, whose probability density function
is
f(z) = (1 / √(2π)) e^(-z² / 2)
If we have a sample of size n, and the characteristics are y1, y2, y3, ..., yn, then the sample
mean and variance are calculated as
ȳ = (y1 + y2 + y3 + ... + yn) / n
s² = Σ (yi - ȳ)² / (n - 1), summed over i = 1 to n
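The density function, the z-transformation and the sample statistics above translate directly into a short Python sketch; the numbers used are illustrative.

```python
import math

def normal_pdf(x, mu, sigma):
    """f(x) = (1 / (sigma * sqrt(2*pi))) * exp(-(x - mu)**2 / (2 * sigma**2))"""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def standardize(x, mu, sigma):
    """Transform X ~ N(mu, sigma^2) into Z ~ N(0, 1): z = (x - mu) / sigma."""
    return (x - mu) / sigma

def sample_stats(y):
    """Sample mean and (n - 1)-denominator sample variance of y1..yn."""
    n = len(y)
    ybar = sum(y) / n
    var = sum((yi - ybar) ** 2 for yi in y) / (n - 1)
    return ybar, var

print(standardize(12, 10, 2))         # 1.0
print(sample_stats([2, 4, 4, 4, 6]))  # (4.0, 2.0)
```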
Six Sigma is a business improvement strategy that seeks to drive out waste from the process
with the application of statistical tools and techniques in a rigorous manner.
In statistical terms, Six Sigma means 3.4 defects per million opportunities.
Six Sigma methodologies are classified into two improvement-oriented types, namely
DMAIC and DFSS.
21. Define PPM?
Parts per million (PPM): the number of defective items out of one million inspected items.
22. Define DPU?
Defects per unit (DPU): the ratio between the number of units defective and the number of
units inspected.
DPU = Number of units defective / Number of units inspected
23. Define DPO?
Defects per opportunity (DPO): the ratio between the defects per unit and the number of
independent opportunities for non-conformance per unit (m).
DPO = DPU / m
24. Define Yield?
YIELD = 1 - DPU
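The PPM, DPU, DPO and yield definitions above translate directly into code; the defect counts and opportunity count below are hypothetical.

```python
def ppm(defectives, inspected):
    """Parts per million: defective items per one million inspected."""
    return defectives / inspected * 1_000_000

def dpu(defectives, inspected):
    """Defects per unit: units defective / units inspected."""
    return defectives / inspected

def dpo(dpu_value, m):
    """Defects per opportunity: DPU / opportunities for non-conformance per unit."""
    return dpu_value / m

u = dpu(7, 500)     # 7 defective units out of 500 inspected
print(ppm(7, 500))  # 14000.0
print(u)            # 0.014
print(dpo(u, 10))   # assuming 10 opportunities per unit
print(1 - u)        # yield = 1 - DPU
```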
26. What are the New Seven Management tools?
The affinity diagram is used to generate ideas, then organize these ideas in a logical
manner. The first step in developing an affinity diagram is to post the problem (or issue)
where everyone can see it. Next, team members write their ideas for solving the problem on
cards and post them below the problem. Seeing the ideas of other members of the team helps
everyone generate new ideas. As the idea generation phase slows, the team sorts the ideas into
groups, based on patterns or common themes. Finally, descriptive title cards are created to
describe each group of ideas.
A tree diagram assists teams in exploring all the options available to solve a problem,
or accomplish a task. The tree diagram actually resembles a tree when complete. The trunk of
the tree is the problem or task. Branches are major options for solving the problem, or
completing the task. Twigs are elements of the options. Leaves are the means of
accomplishing the options.
The matrix diagram allows teams to describe relationships between lists of items. A
matrix diagram can be used to compare the results of implementing a new manufacturing
process to the needs of a customer. For example, if the customer's main needs are low cost
products, short lead times, and products that are durable; and a change in the manufacturing
process can provide faster throughput, larger quantities, and more part options; then the only
real positive relationship is between the shorter lead time and the faster throughput.
The other process outcomes—larger quantities and more options—are of little value to
the customer. This matrix diagram, relating customer needs to the manufacturing process
changes, would be helpful in deciding which manufacturing process to implement.
The activity network diagram graphically shows total completion time, the required
sequence of events, tasks that can be done simultaneously, and critical tasks that need
monitoring. In this respect, an activity network diagram is similar to the traditional PERT
chart used for activity measurement and planning.
31. What is SPC?
Statistical process control (SPC) is the application of statistical techniques to monitor and
control a process.
32. What are the types of histograms?
There are five types of histograms depending on the type of distribution. They
are:
1.Bell shaped distribution
2.Double peaked distribution
3.Plateau distribution
4.Comb distribution
5.Skewed distribution
33.What is an attribute?
PART B
1. Explain Pareto chart?
The Pareto chart was introduced in the 1940s by Joseph M. Juran, who named it after the
Italian economist and statistician Vilfredo Pareto (1848–1923). It is applied to distinguish
the "vital few from the trivial many," as Juran put it. The Pareto chart is closely related to
the so-called 80/20 rule: "80% of the problems stem from 20% of the causes," or, in Six
Sigma terms, "80% of the poor values in Y stem from 20% of the Xs."
In the Six Sigma improvement methodology, the Pareto chart has two primary
applications. One is for selecting appropriate improvement projects in the define phase.
Here it offers a very objective basis for selection, based on, for example, frequency of
occurrence, cost saving and improvement potential in process performance. The other
primary application is in the analyze phase for identifying the vital few causes (Xs) that
will constitute the greatest improvement in Y if appropriate measures are taken.
2) Define the period of time for the diagram –for example, weekly, daily, or shift. Quality
improvements over time can later be made from the information determined within this
step.
5) Plot the number of occurrences of each characteristic in descending order in a bar graph
along with a cumulative percentage overlay.
6) Trivial columns can be lumped under one column designation; however, care must
be exercised not to omit small but important items.
Table 4.2 shows a summary table in which a total of 50 claims during the first month of
2002 are classified into six different reasons. Figure 4.4 is the Pareto chart of the data
in Table 4.2.
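A Pareto analysis like the one in Table 4.2 can be sketched as follows; since the table itself is not reproduced here, the claim categories and counts are hypothetical (they merely total 50, matching the example).

```python
from collections import Counter

# hypothetical claim categories totalling 50 claims for one month
claims = (["fitting"] * 21 + ["appearance"] * 13 + ["packaging"] * 8
          + ["delivery"] * 4 + ["billing"] * 3 + ["other"] * 1)

counts = Counter(claims).most_common()  # rank categories in descending order
total = sum(n for _, n in counts)
cumulative = 0
for reason, n in counts:
    cumulative += n
    # count plus cumulative-percentage overlay, as in a Pareto chart
    print(f"{reason:<10} {n:>3} {100 * cumulative / total:6.1f}%")
```

The cumulative column makes the "vital few" visible: here the top two hypothetical categories already account for well over half of all claims.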
2. Explain the cause and effect diagram and Check sheet ?
Cause-and-effect diagram
An effective tool as part of a problem-solving process is the cause-and-effect diagram, also
known as the Ishikawa diagram (after its originator) or fishbone diagram. This technique is
useful to trigger ideas and promote a balanced approach in group brainstorming sessions
where individuals list the perceived sources (causes) with respect to outcomes (effect). As
shown in Figure 4.1, the effect is written in a rectangle on the right-hand side, and the
causes are listed on the left-hand side. They are connected with arrows to show the
cause-and-effect relationship.
Check sheet
The check sheet is used for the specific data collection of any desired characteristics of a
process or product that is to be improved. It is frequently used in the measure phase of the
Six Sigma improvement methodology, DMAIC. For practical purposes, the check sheet is
commonly formatted as a table. It is important that the check sheet is kept simple and that
its design is aligned to the characteristics that are measured. Consideration should be given
as to who should gather the data and what measurement intervals to apply. For example,
Figure 4.2 shows a check sheet for defect items in an assembly process of automobile
radios.
3. Explain the Histogram and Scatter diagram?
Histogram
It is meaningful to present data in a form that visually illustrates the frequency of
occurrence of values. In the analysis phase of the Six Sigma improvement methodology,
histograms are commonly applied to learn about the distribution of the data within the
results Ys and the causes Xs collected in the measure phase and they are also used to
obtain an understanding of the potential for improvements.
To create a histogram when the response only takes on discrete values, a tally is simply
made each time a discrete value occurs. After a number of responses are taken, the
tally for the grouping of occurrences can then be plotted in histogram form. For example,
Figure 4.3 shows a histogram of 200 rolls of two dice, where, for instance, the sum of the
dice was two for eight of these rolls. However, when making a histogram of response data
that are continuous, the data need to be placed into classes or groups. The area of each bar
in the histogram is made proportional to the number of observations within each data value
or interval. The histogram shows both the process variation and the type of distribution
that the collected data entails.
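The dice example can be reproduced with a small simulation; this is an illustrative sketch, not the book's actual Figure 4.3 data.

```python
import random
from collections import Counter

random.seed(1)  # fixed seed so the sketch is repeatable
sums = [random.randint(1, 6) + random.randint(1, 6) for _ in range(200)]
tally = Counter(sums)

# print a text histogram: one '*' per occurrence of each discrete value
for value in range(2, 13):
    print(f"{value:>2} | {'*' * tally.get(value, 0)}")
```

The tallies cluster around 7, showing both the variation and the (triangular) shape of the distribution, just as the text describes.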
Scatter diagram
The scatter plot is a useful way to discover the relationship between two factors, X and Y,
i.e., the correlation. An important feature of the scatter plot is its visualization of the
correlation pattern, through which the relationship can be determined. In the improve
phase of the Six Sigma improvement methodology, one often searches the collected data
for Xs that have a special influence on Y. Knowing the existence of such relationships, it is
possible to identify input variables that cause special variation of the result variable. It can
then be determined how to set the input variables, if they are controllable, so that the
process is improved. When several Xs may influence the values of Y, one scatter plot
should be drawn for each combination of the Xs and Y. When constructing the scatter
diagram, it is common to place the input variable, X, on the X-axis and the result variable,
Y, on the Y-axis. The two variables can now be plotted against each other and a scatter of
plotted points appears. This gives us a basic understanding of the relationship between X
and Y, and provides us with a basis for improvement.
Table 4.3 shows a set of data depicting the relationship between the process temperature
(X) and the length of the plastic product (Y) made in the process. Figure 4.5 shows a
scatter diagram of the data in Table 4.3.
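Since Table 4.3 is not reproduced here, the temperature and length readings below are hypothetical; the sketch computes the Pearson correlation coefficient that a scatter diagram visualizes.

```python
def correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# hypothetical process temperature (X) and product length (Y) readings
temperature = [140, 145, 150, 155, 160]
length = [20.1, 20.4, 20.8, 21.1, 21.5]
print(round(correlation(temperature, length), 3))  # close to +1: strong positive correlation
```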
4. Explain the Measures of Tendency and Dispersion?
Measures of central tendency are measures of the location of the middle or the center of a
distribution. The definition of "middle" or "center" is purposely left somewhat vague so
that the term "central tendency" can refer to a wide variety of measures. The mean is the
most commonly used measure of central tendency. The following measures of central
tendency are discussed in this text:
Mean
Median
Mode
OR
Central tendency is measured in three ways: mean, median and mode. The mean is simply
the average score of a distribution. The median is the center, or middle score within a
distribution. The mode is the most frequent score within a distribution. In a normal
distribution, the mean, median and mode are identical.
Arithmetic Mean
The arithmetic mean is what is commonly called the average: when the word "mean" is
used without a modifier, it can be assumed that it refers to the arithmetic mean. The mean
is the sum of all the scores divided by the number of scores. The formula in summation
notation is μ = ΣX/N, where μ is the population mean and N is the number of scores. If the
scores are from a sample, then the symbol M refers to the mean and N refers to the sample
size; the formula for M is the same as the formula for μ. The mean is a good measure of
central tendency for roughly symmetric distributions but can be misleading in skewed
distributions, since it can be greatly influenced by extreme scores. Therefore, other
statistics such as the median may be more informative for distributions, such as reaction
time or family income, that are frequently very skewed.
The formal definition of the arithmetic mean is μ = E[X], where μ is the population mean
of the variable X and E[X] is the expected value of X.
The sample size is very simply the size of the sample. If there is only one sample, the
letter "N" is used to designate the sample size. If samples are taken from each of "a"
populations, then the small letter "n" is used to designate size of the sample from each
population. When there are samples from more than one population, N is used to indicate
the total number of subjects sampled and is equal to (a)(n). If the sample sizes from the
various populations are different, then n 1 would indicate the sample size from the first
population, n 2 from the second, etc. The total number of subjects sampled would still be
indicated by N.
Geometric Mean
The geometric mean is the nth root of the product of the scores. Thus, the geometric mean
of the scores 1, 2, 3, and 10 is the fourth root of 1 x 2 x 3 x 10, which is the fourth root of
60, which equals 2.78. The formula can be written as: geometric mean = (ΠX)^(1/N),
where ΠX means to take the product of all the values of X.
The median is the middle of a distribution: half the scores are above the median and half
are below the median. The median is less sensitive to extreme scores than the mean and
this makes it a better measure than the mean for highly skewed distributions.
Computation of Median
When there is an odd number of numbers, the median is simply the middle number. For
example, the median of 2, 4, and 7 is 4.
When there is an even number of numbers, the median is the mean of the two middle
numbers. Thus, the median of the numbers 2, 4, 7, 12 is (4+7)/2 = 5.5.
The mode is the most frequently occurring score in a distribution and is used as a measure
of central tendency. The advantage of the mode as a measure of central tendency is that its
meaning is obvious
A random variable X is said to have a normal distribution with parameters μ and σ² if its
density function is given by
f(x) = (1 / (σ√(2π))) e^(-(x - μ)² / 2σ²)
Where X ~ N(μ, σ²), it can be converted into the standard normal variable Z ~ N(0, 1)
using the relationship of transformation z = (X - μ) / σ, whose probability density function
is
f(z) = (1 / √(2π)) e^(-z² / 2)
If we have a sample of size n, and the characteristics are y1, y2, y3, ..., yn, then the sample
mean and variance are calculated as
ȳ = (y1 + y2 + y3 + ... + yn) / n
s² = Σ (yi - ȳ)² / (n - 1), summed over i = 1 to n
If we use an X-bar - R chart, in which there are k subgroups of size n, σ can be estimated
as
σ̂ = R̄ / d2, where R̄ = (Σ Ri) / k
Here Ri is the range of each subgroup and d2 is a constant that depends on the sample size
n. The value of d2 can be found in the X-bar - R chart constants table.
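The R-bar/d2 estimate can be sketched as follows; the subgroup ranges are hypothetical, and d2 = 2.326 is the tabulated constant for subgroups of size n = 5.

```python
def estimate_sigma(ranges, d2):
    """sigma-hat = R-bar / d2, where R-bar is the mean of the k subgroup ranges."""
    r_bar = sum(ranges) / len(ranges)
    return r_bar / d2

# hypothetical ranges from k = 8 subgroups of size n = 5 (d2 = 2.326)
ranges = [4.0, 5.0, 3.0, 6.0, 5.0, 4.0, 6.0, 3.0]
print(round(estimate_sigma(ranges, d2=2.326), 3))  # 1.935
```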
Characteristics
The curve is bell-shaped and symmetrical about the line x = μ.
The mean, median and mode of the distribution coincide, so it is called symmetrical.
The original concept of the control chart was proposed by Walter A. Shewhart in 1924, and
the tool has been used extensively in industry since the Second World War, especially in
Japan and the USA after about 1980. Control charts offer the study of variation and its
source. They can give process
monitoring and control, and can also give direction for improvements. They can separate
special from common cause issues of a process. They can give early identification of
special causes so that there can be timely resolution before many poor quality products are
produced.
Shewhart control charts track processes by plotting data over time in the form shown in
Figure 4.3. This chart can track either variables or attribute process parameters. The types
of variable charts are process mean (x), range (R), standard deviation (s), individual value
(x) and moving range (Rs). The attribute types are fraction nonconforming (p), number of
nonconforming items (np), number of nonconformities (c), and nonconformities per unit
(u).
The typical control limits are plus and minus 3 standard deviations limits using about
20-30 data points. When a point falls outside these limits, the process is said to be out of
control. When a point falls inside these limits, the process is said to be under control.
There are various types of control charts, depending on the nature and quantity of the
characteristics we want to supervise. The following control charts are the most often used
ones depending on whether the data are continuous or discrete. These charts are called
Shewhart control charts. Note that for continuous data, the two types of chart are
simultaneously used in the same way as a single control chart.
Besides these charts, the following new charts for continuous data have been suggested
and studied: the CUSUM (cumulative sum) chart, the MA (moving average) chart, the
GMA (geometric moving average) chart and the EWMA (exponentially weighted moving
average) chart.
The following formulae apply to the calculation of CL, UCL and LCL for the x-bar
(average) chart.
Here, A2 and d2 are the frequently used constants for control charts. Table 4.1
contains CL, UCL and LCL for the respective control charts.
Step 4. Draw the control chart and check for special causes
The control chart can now be drawn with CL, UCL and LCL. The samples used for
calculating the control limits are then plotted on the chart to determine whether they
embody any special causes of variation. Special causes exist if any of the standard alarm
rules apply (for example, a point falling outside the control limits).
MEASURE PHASE: Sigma calculation and process capability calculation
ANALYZE PHASE: Control chart, Pareto chart, cause and effect analysis and histogram
IMPROVE PHASE: Develop a potential solution
CONTROL PHASE: Develop standards and procedures
Project team
The team is formed to solve the critical-to-quality problem, and its members are given
training in the quality tools. According to the level of training received, members are
awarded a "belt" which represents their role in the six-sigma organization.
Executive leaders
He or she should set the tone and direction for the six-sigma effort, and must be committed
to it and promote it throughout the organization.
Champions
He or She serves as a coach in supporting the project teams for implementing the six sigma
and providing them the required resource for their work.
Master Black belts
He or she possesses the highest level of technical proficiency and must be an expert in
six-sigma tools.
Black belts
He or she is the backbone of the six-sigma culture and takes responsibility for the routine
work and results of six-sigma projects. Black belts are key agents, fully dedicated and
thoroughly trained in six-sigma techniques and tools.
Green Belt
He or she is responsible for the collection and analysis of the data needed to improve the
process. Green belts usually receive more simplified training than black belts and work on
projects only part time.
Team Members
He or She must have sufficient functional expertise relevant to the project undertaken.
They must be able to work well with others and have technical skills to contribute
significantly to the team.
MEASURE PHASE
It measures the current performance of the inputs, outputs and process, and calculates the
sigma level. In this phase we perform:
I] Sigma calculation
Parts per Million (PPM): the number of defective items out of one million inspected items.
Defects per Unit (DPU): the ratio of the number of defective units to the number of units
inspected.
DPU = Number of units defective / Number of units inspected
Defects per Opportunity (DPO): the ratio of defects per unit to the number of independent
opportunities for non-conformance per unit (m).
DPO = DPU / m
YIELD = 1 - DPU
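A minimal sketch of these definitions (the function and argument names are illustrative; m is the number of independent opportunities per unit):

```python
def six_sigma_metrics(defective_units, inspected_units, m):
    """Compute DPU, DPO, PPM and yield from inspection counts."""
    dpu = defective_units / inspected_units  # Defects per Unit
    dpo = dpu / m                            # Defects per Opportunity
    ppm = dpo * 1_000_000                    # defects per million opportunities
    process_yield = 1 - dpu                  # YIELD = 1 - DPU
    return dpu, dpo, ppm, process_yield
```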
PROCESS CAPABILITY
In any manufacturing process, the variability is inherent and cannot be eliminated fully
though it can be controlled to some extent. The extent of variability decides Go, Non-Go
or Acceptance/Rejection of the products. Statistics renders immense help in assessing this
variability quantitatively and in taking corrective action promptly before any disaster
occurs as a consequence. A process capability study is a statistical technique used to assess
the variation in the ability of the process during the conversion of feed material.
Process capability
It is defined as the quality performance capability of the process with given process
factors under normal, in-control conditions. Based on continuously measured process
results, the standard deviation (the square root of the variance) is calculated and used to
compute the process capability indices CP and Cpk. The need for a process capability
study is to:
i. Predict the extent to which the process will be able to hold tolerances or customer
requirements.
ii. Choose, from among competing processes, the most appropriate one for meeting
customer requirements.
iii. Redesign and implement a new process that eliminates the sources of variability now
at work.
a) Process Capability Index (CP): CP = (USL - LSL) / 6σ
b) Performance Capability Index (Cpk): The index CP measures only the precision of the
process, i.e., its overall spread, considering both positive and negative deviations. This
alone is not sufficient, since the process may still lack accuracy. To assess accuracy, the
clustering of values around or away from the centre value is calculated. Using the
performance capability index, the clustering effect with respect to the lower limit is
measured by CPKL and with respect to the upper limit by CPKU; the minimum of the two
is Cpk.
Cpkl = (μ - LSL) / 3σ
Cpku = (USL - μ) / 3σ
Cpk = MIN(Cpku, Cpkl)
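The CP and Cpk calculations can be sketched as follows (USL, LSL, the process mean and standard deviation are assumed to be known inputs):

```python
def process_capability(usl, lsl, mean, sigma):
    """Return (Cp, Cpk) for a process with the given limits and statistics."""
    cp = (usl - lsl) / (6 * sigma)     # precision (spread) only
    cpku = (usl - mean) / (3 * sigma)  # clustering towards the upper limit
    cpkl = (mean - lsl) / (3 * sigma)  # clustering towards the lower limit
    return cp, min(cpku, cpkl)         # Cpk = MIN(Cpku, Cpkl)
```

When the mean drifts off-centre, Cpk falls below Cp, exposing the lack of accuracy the text describes.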
ANALYZE PHASE
It analyzes the gap between the current performance level and the desired performance
level. Problems are identified and prioritized for solution by identifying their root causes.
Pareto Chart
It is a process tool used to classify data and rank categories in descending order of
occurrence, separating the significant categories from the trivial ones. This is done by
sorting the data into categories, counting the occurrences in each category, arranging the
categories from highest to lowest frequency, and drawing and labelling a bar for each
category.
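The count-and-rank procedure can be sketched as follows (drawing the bars is omitted; the function, whose name is illustrative, returns the ranked table with cumulative percentages):

```python
from collections import Counter

def pareto_table(observations):
    """Rank defect categories by frequency with cumulative percentages."""
    counts = Counter(observations).most_common()  # highest frequency first
    total = sum(c for _, c in counts)
    running, table = 0, []
    for category, count in counts:
        running += count
        table.append((category, count, round(100 * running / total, 1)))
    return table
```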
Cause and Effect Diagram
It is also called the Fishbone diagram; it is a process tool used to identify the possible
causes of a particular effect.
IMPROVE PHASE
It involves generating improvement solutions for the problem and choosing for
implementation the best one, which will satisfy the goals.
CONTROL PHASE
It involves putting measures in place to ensure that the new process is monitored and
continuously improved.
The interrelationship digraph allows teams to look for cause and effect relationships
between pairs of elements. The team starts with ideas that seem to be related and
determines if one causes the other. If idea 1 causes idea 5, then an arrow is drawn from 1
to 5. If idea 5 causes idea 1, then the arrow is drawn from 5 to 1. If no cause is ascertained,
no arrow is drawn. When the exercise is finished, it is obvious that ideas with many
outgoing arrows cause things to happen, while ideas with many incoming arrows result
from other things.
A tree diagram assists teams in exploring all the options available to solve a problem, or
accomplish a task. The tree diagram actually resembles a tree when complete. The trunk of
the tree is the problem or task. Branches are major options for solving the problem, or
completing the task. Twigs are elements of the options. Leaves are the means of
accomplishing the options.
The prioritization matrix helps teams select from a series of options based on weighted
criteria. It can be used after options have been generated, such as in a tree diagram
exercise. A prioritization matrix is helpful in selecting which option to pursue. The
prioritization matrix adds weights (values) to each of the selection criteria to be used in
deciding between options. For example, if you need to install a new software system to
better track quality data, your selection criteria could be cost, leadtime, reliability, and
upgrades. A simple scale, say 1 through 5, could be used to prioritize the selection criteria
being used. Next, you would rate the software options for each of these selection criteria
and multiply that rating by the criteria weighting.
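The weighting-and-rating procedure can be sketched as follows. The criteria come from the software-selection example above; the option names, weights and ratings are hypothetical:

```python
def prioritize(criteria_weights, ratings):
    """Score options by weighted criteria; return the best option and all scores.

    criteria_weights: {criterion: weight}, e.g. on a 1-5 scale.
    ratings: {option: {criterion: rating}} on the same scale.
    """
    scores = {option: sum(criteria_weights[c] * r[c] for c in criteria_weights)
              for option, r in ratings.items()}
    return max(scores, key=scores.get), scores
```

Each option's score is the sum of (criterion weight x option rating), so a heavily weighted criterion dominates the selection.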
The matrix diagram allows teams to describe relationships between lists of items. A
matrix diagram can be used to compare the results of implementing a new manufacturing
process to the needs of a customer. For example, if the customer's main needs are low cost
products, short leadtimes, and products that are durable; and a change in the manufacturing
process can provide faster throughput, larger quantities, and more part options; then the
only real positive relationship is the shorter leadtime to the faster throughput.
The other process outcomes—larger quantities and more options—are of little value to the
customer. This matrix diagram, relating customer needs to the manufacturing process
changes, would be helpful in deciding which manufacturing process to implement.
The process decision program chart can help a team identify things that could go wrong,
so corrective action can be planned in advance. The process decision program chart starts
with the problem. Below this, major issues related to the problem are listed. Below the
issues, associated tasks are listed. For each task, the team considers what could go wrong
and records these possibilities on the chart. Next, the team considers actions to prevent
things from going wrong. Finally, the team selects which preventive actions to take from
all the ones listed.
The activity network diagram graphically shows total completion time, the required
sequence of events, tasks that can be done simultaneously, and critical tasks that need
monitoring. In this respect, an activity network diagram is similar to the traditional PERT
chart used for activity measurement and planning.
UNIT IV - TQM TOOLS AND TECHNIQUES - II
PART-A
Quality Function Deployment (QFD) is a TQM tool which ensures that customer
requirements are met throughout the design process and also in the production systems.
The primary planning tool in QFD is the House of Quality. The House of Quality is a set
of matrices used to translate the voice of the customer into technical design requirements
that meet specific target values and characteristics of the final product. Because of its
structure, it is referred to as the 'House of Quality'.
Taguchi has defined quality as the loss imparted to society from the time a product is
shipped.
There are three common quality loss functions:
1. Nominal-the-best
2. Smaller-the-better
3. Larger-the-better
Predictive maintenance is the process of using data and statistical tool to determine
when a piece of equipment will fail.
Preventive maintenance is the process of periodically performing activities, such as
lubrication, on the equipment to keep it running.
Down time losses are measured by equipment availability (A) using the equation:
Availability A = (T/P) * 100
where T = operating time (T = P - D), P = planned operating time and D = down time.
Failure Mode and Effect Analysis (FMEA) is an analytical technique which combines the
technology and experience of the people
To identify foreseeable failure modes of a product(or) process
To plan for its elimination.
The debug (early-failure) period shows a high failure rate at the initial stages because of
inappropriate use or flaws in the design or manufacturing.
1. Design FMEA
2. Process FMEA
1. Noise
2. Vibration
3. Erratic operation
4. poor performance
5. Lack of stability.
Reduced speed losses are measured by tracking performance efficiency using the
equation:
Performance efficiency E = (C * N / T) * 100
where C = cycle time, N = number of units produced and T = operating time.
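The availability and performance-efficiency equations above can be sketched together (time quantities are assumed to be in consistent units, e.g. minutes; the function names are illustrative):

```python
def availability(planned_time, down_time):
    """A = (T/P) * 100, where operating time T = P - D."""
    operating_time = planned_time - down_time
    return (operating_time / planned_time) * 100

def performance_efficiency(cycle_time, units_produced, operating_time):
    """E = (C * N / T) * 100."""
    return (cycle_time * units_produced / operating_time) * 100
```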
1. Dissatisfiers: are the needs that are expected in a product (or) service
2. Satisfiers: are needs that customers say they want
3. Exciters/Delighters: are new or innovative features that customers do not expect.
1. L- Matrix
2. T- Matrix
3. Y- Matrix
4. X- Matrix
5. C- Matrix.
1. Internal benchmarking
2. Competitive benchmarking
3. Process benchmarking.
PART B –QUESTIONS
There are six steps in benchmarking process which are given below.
1. Deciding what to benchmark
2. Understanding current performance
3. Planning
4. Studying others
5. Learning from the data
6. Using the findings and taking action.
Deciding what to Benchmark
The initial stage of benchmarking is to determine what to benchmark and against whom to
do so. Improvement to best-in-class levels in some areas will contribute greatly to market
and financial success, whereas improvement in other areas will have no significant impact.
Planning
Once the internal processes are understood, a benchmarking team should be formed. A
team represents different perspectives, special skills and a variety of business connections. The
team will decide what type of benchmarking to perform, what type of data are to be collected,
and the method of collection.
A benchmarking partner may be any person or organization that supplies you with
information. Contacts with suppliers, consultants, customers and people within the
organization are gold mines of information.
The benchmarking planning process requires the examination of several outside
organizations. Normally a process is divided into a number of sub-processes, and a single
organization is rarely best-in-class for all of them. Hence, multiple organizations should be
studied for benchmarking.
Studying Others
For studying other organizations, three techniques are used:
1. Questionnaires
2. Site visits
3. Focus Groups.
Learning from the Data
The objective of this step is to identify and analyse the gaps between current performance
and best practices; the collected data are useful for identifying performance gaps between
benchmarking partners.
The objective of this step is to develop strategies and action plans to bridge the negative
gap. To effect the change, the findings should be communicated to the people within the
organization who can enable improvement. The findings should be transformed into goals
and objectives, and action plans should be developed to implement the new processes.
By understanding the differences, managers are able to organize their improvement efforts
to meet the goals. Hence, benchmarking is used to set goals and objectives and to meet
them by improving processes.
5. When managers and workers are aware of external information, they are usually
much more motivated to attain the set goals.
6. No one can argue that attaining the new goal is impossible, since it can be proved
that another organization has already achieved it.
7. Benchmarking is time and cost efficient because benchmarking process involves
imitation and adaptation rather than invention.
8. Benchmarking reduces some of the planning, testing and prototyping effort since it
copies the working model of an improved process.
9. Benchmarking helps to identify the current position of a business and determine
the priorities for improvement.
10. Benchmarking allows comparisons with previous Benchmarking profiles and
against recognized best practices.
11. Benchmarking encourages regular monitoring of process and continuous
improvement.
12. Benchmarking increases the competitiveness of the company by demonstrating
environmental improvements to customers and shareholders.
The Japanese have developed the concept of Quality Function Deployment (QFD). The
Quality Function Deployment (QFD) is a TQM tool which ensures requirements are met throughout
the design process and also in the production systems.
QFD is basically a philosophy and a set planning and communication tool that focuses
on customer requirements in coordinating the design, manufacturing and marketing of
goods.
1.Dissatisfiers : are the needs that are expected in a product (or)
service. In a car, safety measures and cushioning seats are known as
dissatisfiers.
These features are generally not stated by customers but assumed as
a given. If they are not present, then the customer will be dissatisfied.
2. Satisfiers : are the needs that customers say they want. Air –
conditioning and Compact Disc player in a car are the examples
of satisfiers. Fulfilling these needs creates satisfaction.
3. Exciters/delighters: are new (or) innovative features that customers do
not expect . Antilock brakes and collision avoidance systems are known
as examples of exciters/delighters. The presence of such unexpected
features leads to high perceptions of quality.
5. a) Explain Taguchi's Quality Loss Function.
Dr. Genichi Taguchi, a mechanical engineer who has won four Deming awards,
introduced the Quality Loss Function concept, which combines cost, target and variation
in one metric, with specifications being of secondary importance. Furthermore, he
developed Robust Design, in which noise factors are taken into account to ensure that the
system functions correctly.
Taguchi has defined quality as the loss imparted to society from the time a
product is shipped. Societal losses include failure to meet customer requirements,
failure to meet ideal performance and harmful side effects. The various losses due to
production are raw material, energy and labour consumed on unusable products (or) toxic-by –
products. Consider the following example to illustrate loss-to-society concept. There are three
stages in the evolution of polythene bag thickness.
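As a sketch, the nominal-the-best loss function is L(y) = k(y - m)^2, where m is the target value and k is a cost constant; the batch-average form k(s^2 + (y-bar - m)^2) follows from it. The numeric values in the usage below are hypothetical:

```python
def taguchi_loss(y, target, k):
    """Nominal-the-best loss for a single item: L(y) = k * (y - target)**2."""
    return k * (y - target) ** 2

def average_loss(values, target, k):
    """Average loss for a batch: k * (variance + (mean - target)**2)."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    return k * (variance + (mean - target) ** 2)
```

The batch form shows why Taguchi penalizes variation even when every item is within specification: the variance term contributes loss on its own.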
The four phases of QFD
Phase 1: Market analysis to establish knowledge about current customer requirements
which are considered as critical for customer satisfaction, and the translation of these
requirements into product characteristics.
Phase 2: Translation of critical product characteristics into component characteristics, i.e., the
product’s parts.
Phase 3: Translation of critical component characteristics into process
characteristics. Phase 4: Translation of critical process characteristics into production
characteristics, i.e., instructions and measurements.
9. Draw the matrix of QFD.
10. Draw the FMEA form.
UNIT V
1. Why do we need a Quality system?
In order to assure the quality of a product, the manufacturer must ensure its
quality. So, to ensure this quality it is necessary to make a systematic study and control
check at every stage of production. It is also essential to take critical review of efforts
and achievements of the company with respect to the quality of the product. Thus it is
necessary to develop a standard quality system.
The ISO 9000 system is a quality management system that can be adopted by
all types of organizations belonging to government, public, private, (or) joint sectors.
The ISO 9000 system shows the way in creating products by preventing
deficiencies, instead of conducting expensive post product inspections and rework.
In two party quality system, the supplier of the product (or) service would
develop a quality system that would conform to his standard. The customer would then
audit this system for acceptability. Here the supplier and customer form the two
parties.
In a two-party registration system, after auditing it may be found that the customer's
quality requirements are not met. To avoid the costs incurred in multiple audits, a standard
quality system must be developed and audited by a third party registration system.
by a third party registration system.
The ISO 9000, QS 9000, ISO 14000 and other quality systems are such third party
registration systems, which indicate to customers or potential customers that the supplier
has a quality system in place and that it is being monitored.
Quality audit can be classified into two types –internal and external audit.
1. System Audit
2. Process Audit
3. Product Audit
4. Adequacy Audit
5. Compliance Audit
10. What is the use of QS 9000?
The QS 9000 standard defines the fundamental quality expectations from the
suppliers of production and service parts. The QS 9000 standard uses ISO 9000 as its
base with much broader requirements.
ISO 14000 standard gives the company a background on which to base its
Environmental Management System (EMS). This system can be joined with other
quality standards and can be implemented together to achieve the organizations
environmental targets.
The equivalents of the above standards in Indian Standards System , developed by the Bureau
of Indian Standards are as below:
The other quality systems are AS 9100, used in the aerospace industry; ISO/TS 16949,
called Quality Systems - Automotive Suppliers: Particular Requirements for the
Application of ISO 9001; TS 9000, a consolidation of the various quality system
requirements within the telecommunications industry; QS 9000; and ISO 14000. Of these,
QS 9000 uses ISO 9000 as its foundation, but its requirements are much broader.
One of the greatest strengths of ISO 14000 is that it establishes a process that spreads
responsibility and participation to every individual of the organization. It teaches employees
the effect on the environment of their own work duties, how these can be minimized, what the
benefits can be and what negative consequences can be if responsibilities are ignored.
20.What is EMS?
EMS stands for Environment management system, which gives the procedures and
methods to save the industries from pollution.It is more or less an abatement measure of
environment degradation effects caused by industries.
PART B
In order to assure the quality of a product, the manufacturer must ensure its
quality. So, to ensure this quality it is necessary to make a systematic study and control
check at every stage of production. It is also essential to take critical review of efforts
and achievements of the company with respect to the quality of the product. Thus it is
necessary to develop a standard quality system.
The quality assurance system should provide for contract review to ensure that customer
requirements are adequately defined and documented and that the company has the
capability to meet these requirements.
Quality assurance comes from process control. Therefore, quality assurance systems include
documented procedures for production, installation, and service activities; the appropriate
equipment and working environment; methods for monitoring and controlling critical quality
characteristics; approval processes for equipment; criteria for workmanship, such as written
standards, samples, or illustrations; and maintenance activities. Process control also includes
monitoring the accuracy and variability of equipment, operator knowledge and skills, the
accuracy of measurement results and data used, and environmental factors.
Traditionally, quality assurance was a result of mass final inspection. Heavy reliance on inspection
proliferated because of the industrial revolution and the division of labor. The role of the
inspection department was to seek out defective items in production and remove them
prior to shipment.
Deming tried to eliminate mass inspection. According to him, the true purpose of
inspection is to provide information to control and improve the process effectively.
Inspection and/or testing is performed at three points in the production process: at receipt
of incoming materials, during the manufacturing process, and upon completion of
production.
2.Explain briefly about ISO 9000 Series of standards.
The history of the ISO 9000 family is a story both of success and of misunderstanding.
Anybody trying to implement the ISO norm in an enterprise will be confronted with both these
sides, the latter when middle management start becoming involved. People with no experience
of quality assurance find the ISO norm a bit difficult to handle, or at least hard to understand,
without experience or help. This is not only because of the formal structure of the ISO
standard, but also because of its technical language: even the words "shall", "must" and "have"
are assigned their own specific meanings.
As a first step, it is necessary to understand the history and structure of the ISO norm. ISO
stands for International Organization for Standardization, a Geneva-based worldwide
federation. The ISO 9000 standard was first published in 1987, one of its sources being the
BS5750 series 1979, developed by the British Standard Institution (BSI) on the basis of
existing military standards. The Single European Marketing Directive on Standards and
Certification stipulates that the application of ISO 9000 should be encouraged among its
member countries. In clause 0, the EN ISO 9000-1 states:
Organizations -- industrial, commercial or governmental -- supply products intended to satisfy
a customer's needs and/or requirements ... To be competitive and to maintain good economic
performance, organizations/suppliers need to employ increasingly effective and efficient
systems ... Customer requirements are often incorporated in specifications. However,
specifications may not in themselves guarantee that a customer's requirement will be met
consistently, if there are any deficiencies in the organizational system to supply and support
the product. These concerns have led to the development of quality system standards and
guidelines that complement relevant product requirements given in the technical
specifications.
The ISO 9000 family comprises the following parts:
-- ISO 9000-1: 1994, Quality management and quality assurance standards -- Part 1:
Guidelines for selection and use. Any organization which is contemplating the development of
a quality system should refer to these guidelines.
-- ISO 9000-2: 1993, Quality management and quality assurance standards -- Part 2: Generic
guidelines for the application of ISO 9001, ISO 9002 and ISO 9003. These guidelines should
be consulted when assistance is needed in the implementation of ISO 9001, 9002 or 9003.
-- ISO 9000-3: 1991, Quality management and quality assurance standards -- Part 3:
Guidelines for the application of ISO 9001 to the development, supply and maintenance of
software. These guidelines are not relevant for public employment services.
-- ISO 9000-4: 1993, Quality management and quality assurance standards -- Part 4: Guide
to dependability program management. This guide is not relevant for public employment
services.
-- ISO 9001: 1994, Quality systems -- Model for quality assurance in design, development,
production, installation and servicing. This model is relevant for public employment services
(especially head offices).
-- ISO 9002: 1994, Quality systems -- Model for quality assurance in production, installation
and servicing. This model is relevant for public employment services (especially for local
offices).
-- ISO 9003: 1993, Quality systems -- Model for quality assurance in final inspection and
tests.
-- ISO 9004-1: 1994, Quality management and quality system elements. This provides useful
guidance for public employment services.
-- ISO 9004-2: 1994, Quality management and quality system elements. This also provides
guidance for (public employment) services.
-- ISO 10011-1: 1990 Guidelines for auditing quality systems -- Part 1: Auditing.
-- ISO 10011-2: 1991 Guidelines for auditing quality systems -- Part 2: Qualification criteria
for quality systems auditors.
-- ISO 10011-3: 1991 Guidelines for auditing quality systems -- Part 3: Management of audit
programmes.
-- ISO 10012-1: 1992 Quality assurance requirements for measuring equipment -- Part 1:
Metrological confirmation system for measuring equipment.
-- ISO/TR 13425: 1993 Guidelines for the selection of statistical methods in standardization
and specification.
ISO standards
ISO - 9000 - basic quality assurance definitions and guidance
ISO - 9001 - quality system for design, production, & service
ISO - 9002 - production and installation
ISO - 9003 - final inspection
Advantages
Better documentation
Greater Quality awareness by employees
Higher perceived quality in the market
Reduced customer quality audits
ISO 9001 defines 20 elements necessary for a quality management system, as listed below:
The company has to define its commitment to a quality policy, which is understood,
implemented and maintained at all levels of the organization, and to define its quality goals.
Responsibilities and authorities have to be defined and documented. The company must
provide adequate resources and appoint a member of the management as a representative for
quality management. At least once a year, a management review must be held and recorded to
evaluate the quality system.
A quality manual, covering all elements of the ISO standard, has to be prepared to document
the quality system. Procedures must be documented and controlled. The company has to
prepare a quality plan to ensure that quality requirements are understood and fulfilled.
The company has to establish and maintain documented procedures for contract review, to
document the customers' requirements and ensure the capability to fulfill the contract or order
requirements. Records of contract review shall be maintained.
All documents relevant for quality have to be controlled to ensure that the pertinent issues of
appropriate documents are available at all locations. When necessary, they are to be replaced
by updated versions. Changes shall be reviewed and approved by the same
organization/person that performed the original review or approval.
Purchasing (Element 6)
The company must monitor the flow of purchasing and evaluate the subcontractor's ability to
fulfill specified requirements.
Goods supplied by the customer have to be recorded. It must be ensured that they are
separately controlled and stored to prevent loss or damage.
Where appropriate, purchased and delivered products or services must be made traceable
through documentation or batches.
All processes of production or service that directly affect quality must be documented and
planned and carried out under controlled conditions to add consistency to the process. Control
of process parameters and product characteristics must ensure that the specified requirements
are met.
The company must ensure receiving inspection and testing, in-process inspection and testing,
and final inspection and testing. These inspections and tests must be recorded.
Control of inspection, measuring and test equipment
The items of equipment used for inspection, measuring and testing must be identified and
recorded. They must be controlled, calibrated and checked at prescribed intervals.
The status of the product or service must be identified at all stages as conforming or
nonconforming. This is to ensure that only conforming products or services are dispatched or
used.
The company must establish procedures to ensure that nonconforming products or services are
prevented from unintended use. The disposal of nonconforming products must be determined
and recorded.
Documented procedures must be established to ensure that products are not damaged and
reach the customer in the required condition.
All records related to the quality system must be identified, collected and stored together. The
quality records demonstrate conformity with specified requirements and verify effective
operation of the quality system.
The company shall establish and maintain documented procedures for identifying training
needs and must have a training record for each employee.
Where servicing is a specific requirement, the company must establish and maintain
documented procedures for performing, verifying and reporting that the servicing meets the
specified requirements.
The company must establish and maintain documented procedures to implement and control
the application of statistical techniques which have been identified as necessary for
performance information.
This structure looks very theoretical at first glance, but this is because ISO 9000 stipulates the
elements of a quality management system for any enterprise, irrespective of its branch of
activity. "ISO 9000 is not a prescriptive standard, it does not detail the how but rather the
what. This allows each individual company to define how it intends to comply with the
standard in a way that best suits that company's method of operation".It is possible that some
of the elements are of no relevance or almost no relevance in specific sectors. For example,
elements 11 and 12 are not relevant for AMS Salzburg, and element 15 is only marginally
relevant.
The 20 elements (or the relevant ones) of ISO 9001 must be addressed in a quality manual and
in operational procedures (possibly set out in a procedure manual) which comply with the
standards set in the quality manual. The quality manual defines and documents the quality
policy of the company. It is a statement of the company's intention to pursue a quality policy.
The operational procedures set out the specific way in which ISO 9000 is implemented
throughout the company's business process. Both types of document are required by ISO
9000: 1994. Almost every element of the standard requires records. Besides these, there may
be other documents in the company, for example work instructions, specifications, check-lists,
charts, data sheets, lists, forms and so on. Some of them are used to record events, but they are
not directly required by the ISO norm, which allows the company a great deal of flexibility
regarding whether or not to use such documents. AMS Salzburg, for example, decided to add
work instructions (which define how an activity is performed) to documented procedures, as
work instructions are liable to alteration. In the management of change, it is relatively easy to
replace the work instructions addressed in the operating procedures, without touching the
basic processes.
The ISO 9000 series of Quality Standards does indicate key characteristics of a properly
functioning Quality System, but how they are implemented is the responsibility of the
organization. ISO documentation must reflect what the company does, not what it thinks the
ISO auditor will want to hear. In determining whether procedural documentation is required,
look at the skill sets of the people performing the task as well as any unique requirements the
company may have for completing the task. In many cases, documentation will not be required
because there is no unique process and/or the person has been trained in how to complete the
task.
This chapter describes the types of audits that government and nongovernmental audit
organizations conduct, or arrange to have conducted, of government organizations, programs,
activities, functions, and funds. This description is not intended to limit or require the types
of audits that may be conducted or arranged. In conducting these types of audits, auditors
should follow the applicable standards included and incorporated in the chapters that follow.
All audits begin with objectives, and those objectives determine the type of audit to be
conducted and the audit standards to be followed. The types of audits, as defined by their
objectives, are classified in these standards as financial audits or performance audits.
Audits may have a combination of financial and performance audit objectives or may have
objectives limited to only some aspects of one audit type. For example, auditors conduct audits
of government contracts and grants with private sector organizations, as well as government
and nonprofit organizations, that often include both financial and performance objectives.
These are commonly referred to as "contract audits" or "grant audits." Other examples of such
audits include audits of specific internal controls, compliance issues, and computer-based
systems. Auditors should follow the standards that are applicable to the individual objectives
of the audit.
The internal audits were carried out by the Managing Director and the quality system
representative, who drew up the audit plan, the audit check-list and the formats for the audit
report and for the reports by the regional office experts on their particular fields. The
Managing Director and the quality system representative audited the local offices in respect of
the quality system and the departments of the regional office in every respect. The experts
audited the local offices in respect of their particular business areas. The objective of the audit
was to verify that quality activities and related results comply with the definitions given in
operating procedures and work instructions, and to determine the effectiveness of the quality
system. The internal audit was announced several days before it took place, and all the
required records (see the check-list below) were delivered. The first audits focused on
document control as the basis for the first stage in implementing the ISO norm. We then had to
look closely into whether the operating procedures were in conformity with actual processes
and were appropriate. Nonconformities were noted and recorded, but it was striking how well
the quality system worked despite the short implementation period. We conducted two audits
before the assessment to ensure that corrective action could be taken, and this approach was
effective in ensuring that AMS Salzburg was in good
shape to face the assessment.
Internal audit check-list:
• Completed list of participants
• Audit questions/inspection carried out
• Deficiency report completed
• Final discussions held
• Report signed
Section 3.5 of ISO 14001 defines an environmental management system as "the part of the
overall management system that includes organizational structure, planning activities,
responsibilities, practices, procedures, processes, and resources for developing, implementing,
achieving, reviewing, and maintaining the environmental policy." ISO 14001 was developed
independently of ISO 9000, to fulfill environmental rather than quality needs.
An EMS is a structured plan to address the impacts a company or organization has on the
environment. The EMS is implemented and checked to ensure that plan goals are being met.
With the plan being revised to meet new goals, the EMS can guide a company toward
continual environmental improvement.
A basic condition for any EMS is compliance with applicable environmental laws, regulations,
and permits. An effective EMS goes beyond compliance to provide an organization with a
systematic approach to the development, implementation, and maintenance of an
environmental policy.
In response to widespread acceptance of the ISO 9000 quality management standards and to
the proliferation of various environmental management systems, the International
Organization for Standardization formed Technical Committee (TC) 207 to begin
development of the ISO 14000 series of environmental management standards in 1992. As TC
207 carefully crafted the draft EMS standard (ISO 14001), companies around the world began
to assess their existing environmental systems to learn what changes would be needed to meet
ISO 14001.
Because many companies in the United States were not prepared to step up to ISO 9000 in
the early 1990s and had to struggle to catch up with their European and Asian counterparts,
U.S. companies are now carefully tracking the increase of certifications to ISO 14001. While there
are relatively few EMS certifications in the U.S., many savvy companies are aligning their
environmental management systems to conform to ISO 14001.
The ISO 14001 standard provides specific requirements for an EMS and shares some common
management system principles with the ISO 9000 series of standards, including the "plan-do-
check-act model" mentioned above and the requirement for top management commitment.
The basic focus of the ISO 14000 series of standards is environmental protection, while the
ISO 9000 series of standards focuses on quality and customer needs.
It should be noted that ISO 9000 is not a prerequisite for ISO 14001, although companies that
have both have successfully integrated the two management systems.
An effective EMS provides many benefits to the implementing organization, its customers and
stakeholders, and to regulators, including: