
Re-entry of Space Vehicles
…competition in launch services, microgravity research and space technology development. These systems will also confer an important strategic advantage in the conduct of materials and life science research. The objective of this paper is to provide a modest degree of understanding of the complex interrelations that exist between performance requirements, mission constraints, vehicle design and trajectory selection for a typical re-entry mission. A brief presentation of the flight regimes and of the structural loading and heating environments experienced by both non-lifting and lifting re-entry vehicles is given.

The Space Shuttle

In its 23-year history, the NASA space shuttle program has seen exhilarating highs and devastating lows. The fleet has taken astronauts on dozens of successful missions, resulting in immeasurable scientific gains. But this success has had a serious cost: in 1986, the Challenger exploded during launch, and on February 1st of 2003, the Columbia broke up during re-entry over Texas.

This seminar report covers the following points:

- A BRIEF HISTORY OF THE SPACE SHUTTLE
- THE SPACE SHUTTLE MISSION
- SPACE PLANES AND THE REPLACEMENT OF SPACE SHUTTLES

The seminar also takes a brief look at the latest space planes, namely the "hypersonic planes with air-breathing engines" that NASA plans to roll out for space exploration purposes.

Spacecraft Simulator

This paper deals with the design and development of a spacecraft simulator in the most economical way. The simulator has been designed for educating and training control engineers. The greatest difficulty in implementing a spacecraft control law is that ground-based experiments must take place in a 1-g environment, whereas the actual spacecraft will operate under 0-g conditions. This paper discusses how to create a 0-g, torque-free environment. The paper describes the spacecraft platform, which houses the various spacecraft components; the advantages of using an aluminium platform are discussed in this portion. The subsystem description covers the air bearing, batteries and related items, including the SRA 300 spherical air bearing and the electrical systems in the simulator. A dynamic measuring unit (DMU-AHRS) is chosen as the spacecraft attitude sensor; this portion describes the DMU-AHRS sensor unit, which provides the Euler angles, angular rates and linear accelerations. The section on momentum wheel speed and acceleration sensing discusses why encoders are used instead of tachometers for measuring wheel speed. The wheels, motors, power amplifiers and all the hardware and software that handle the hardware control are also described. Filtering and identification of motor parameters deals with filter design (a Butterworth or Chebyshev type-1 filter), estimation of motor damping and the choice of reaction wheels, where the effect of friction torque on the choice of reaction wheel is examined. Identification of motor dynamics covers identification of the motor transfer function using a least-squares technique and identification of the moment of inertia, followed by a conclusion.
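To make the identification step concrete, the following is a minimal sketch (not taken from the paper) of fitting a first-order motor model to logged wheel-speed data with ordinary least squares; the sample period, input signal, true parameters and noise level are all assumed for illustration.

import numpy as np

dt = 0.01                                  # assumed sample period [s]
t = np.arange(0.0, 5.0, dt)
u = np.ones_like(t)                        # step voltage command (assumed input)
K_true, tau_true = 50.0, 0.8               # "unknown" gain and time constant (made up)
rng = np.random.default_rng(1)
w = K_true * (1.0 - np.exp(-t / tau_true)) + 0.2 * rng.standard_normal(t.size)

# Discretized first-order model: w[k+1] = a*w[k] + b*u[k]
A = np.column_stack([w[:-1], u[:-1]])
y = w[1:]
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

tau_est = -dt / np.log(a)                  # back out continuous-time parameters
K_est = b / (1.0 - a)
print(f"estimated K = {K_est:.1f}, tau = {tau_est:.2f} s")

The same least-squares machinery extends to estimating the friction torque and moment of inertia once the wheel-speed data has been low-pass filtered.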

Fuel Cells on Aerospace


Fuel cells have been used for decades on spacecraft, where they generate electrical power from stored hydrogen and oxygen that are carried in cryogenic liquid form. The world's petroleum production is about to pass its peak, so a worldwide development effort is being directed into adapting these high-efficiency fuel cells to powering automobiles, buses and trucks. Fuel cells are more efficient than the secondary batteries currently used. Alkaline fuel cells were used first, but nowadays fuel cells using a Proton Exchange Membrane (PEM) are used.

A fuel cell can, theoretically, deliver 500 kilowatt-hours per kilogram of hydrogen plus oxygen. Today's best lithium batteries can deliver around 120 kWh per kilogram. Fuel cells can convert fuel to electric power with an efficiency of over 80%; even a diesel engine cannot do better than 40% at its optimum speed and load.

In the past, fuel cells contained a platinum catalyst that is very costly. Recently, a nickel-tin catalyst that works has been discovered. As a result, fuel cells are a possible substitute for batteries in spacecraft. The different power requirements in an aircraft and the applications of fuel cells in aerospace are discussed in this paper. The fuel cell will find applications that lie beyond the reach of the internal combustion engine. Once low-cost manufacturing is feasible, this power source will transform the world and bring great wealth potential to those who invest in this technology. It is said that the fuel cell will be as revolutionary in transforming our technology as the microprocessor has been. Once fuel cell technology has matured and is in common use, our quality of life will improve and the environmental degradation caused by burning fossil fuels will decrease. It is generally accepted, however, that the maturing process of the fuel cell will not be as rapid as that of microelectronics.
Active Aperture Phased Array Radar (AAPAR)

…time allows them to operate with performance levels well above those that can be achieved by conventional radars.

Their ability to make effective use of all the available RF power and to minimize RF losses also makes them a good candidate for future very-long-range radars. The AAPAR can provide many benefits in meeting the performance that will be required by tomorrow's radar systems; in some cases it will be the only possible solution.

It provides the radar system designer with an almost infinite range of possibilities. This flexibility, however, needs to be treated with caution: the complexity of the system must not be allowed to grow such that it becomes uncontrolled and unstable. The AAPAR breaks down the conventional walls between the traditional system elements (antenna, transmitter, receiver, etc.), so the AAPAR design must be treated holistically.

Strict requirements on the integrity of the system must be enforced. Rigorous techniques must be used to ensure that the overall flow-down of requirements from the top level is achieved and that testability of the requirements can be demonstrated under both quiescent and adaptive conditions.
Magnetic Amplifier

…circuit is made for less than the amount of power controlled, and hence power amplification is achieved. The non-linear reactive element is a saturable reactor. When used in combination with a set of high-grade rectifiers, it exhibits power amplification properties in the sense that small changes in control power result in considerable changes in output power. The basic component of a magnetic amplifier, as mentioned above, is the saturable reactor. It consists of a laminated core of some magnetic material; the hysteresis loop of the reactor core is narrow and steep. A schematic diagram of a simple saturable-core reactor shows a control winding and an a.c. winding wound on two limbs. The control winding, which has a large number of turns, is fed from a d.c. supply. By varying the control current, it is possible to vary the degree of saturation of the core over a wide range. The other winding, called the a.c. winding or gate winding, which has a smaller number of turns, is fed from an a.c. source, the load being connected in series with it.

The property of the reactor which makes it behave as a power amplifier is its ability to change the degree of saturation of the core when the control-winding mmf (magnetomotive force, i.e., ampere-turns), established by the d.c. excitation, is changed. The a.c. winding presents a high impedance when the core is unsaturated, and progressively lower impedances as the core is increasingly saturated. When the core is completely saturated, the impedance of the a.c. winding becomes negligibly small and the full a.c. voltage appears across the load. Small values of current through the control winding, which has a large number of turns, determine the degree of saturation of the core and hence change the impedance of the output circuit and control the flow of current through the load. By making the ratio of control-winding turns to a.c.-winding turns large, an extremely high value of output current can be controlled by a very small amount of control current. The saturable-core reactor circuit shown in Fig. has certain serious disadvantages. The core gets partially desaturated in the half-cycle in which the a.c. winding mmf opposes the control-winding mmf. This difficulty is overcome by employing a rectifier in the output circuit, as shown in Fig.; here the desaturating (demagnetising) effect of that half-cycle of the output current is blocked by the rectifier. On the other hand, the output and control winding mmfs aid each other to effect saturation in the half-cycle in which current passes through the load, thus making the reactor a self-saturating magnetic amplifier. Another difficulty is that a high voltage is induced in the control winding due to transformer action. To prevent this voltage from driving current into the d.c. circuit, a high inductance should be connected in series with the control winding. This, however, slows down the response of the control system and hence of the overall system. The saturable core is generally made of a saturable ferromagnetic material. For magnetic amplifiers of lower ratings, the usual transformer-type construction using silicon steel (3 to 3.5 per cent Si) is used. The use of high-quality nickel-iron alloy materials, however, makes possible much higher-performance amplifiers of smaller size and weight. In order to realize the advantages of these materials, use is made of a toroidal core configuration.
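As a rough illustration of the turns-ratio argument above, the ideal ampere-turn balance N_control x I_control = N_gate x I_gate gives the current gain of a saturable-reactor amplifier. The turns counts below are hypothetical, chosen only to show the scale of the effect.

N_CONTROL = 2000      # turns on the d.c. control winding (assumed)
N_GATE = 100          # turns on the a.c. gate winding (assumed)

def load_current_mA(control_current_mA: float) -> float:
    """Average gate (load) current from the ideal mmf balance."""
    return control_current_mA * N_CONTROL / N_GATE

for i_c in (1.0, 2.0, 5.0):
    print(f"control {i_c:.1f} mA -> load {load_current_mA(i_c):.0f} mA")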

Electrical Impedance Tomography

Tomography involves the reconstruction of the internal structural information within an object, mathematically, from a series of projections.

The projection here is the visual information probed using an emanation, i.e., a physical process such as radiation, wave motion, a static field or electric current, which is used to study an object from outside. Medical tomography primarily uses X-ray absorption, magnetic resonance, positron emission, and sound waves (ultrasound) as the emanation. Nonmedical areas of application and research use ultrasound and many different frequencies of the electromagnetic spectrum, such as microwaves and gamma rays, for probing the visual information.

Besides photons, tomography is regularly performed using electrons and neutrons. In addition to absorption of the particles or radiation, tomography can be based on the scattering or emission of radiation, or even on electric current. When electric current is consecutively fed through different available electrode pairs and the corresponding voltage is measured consecutively by all remaining electrode pairs, it is possible to create an image of the impedance of different regions of the volume conductor by using certain reconstruction algorithms. This imaging method is called impedance imaging.

Because the image is usually constructed in two dimensions from a slice of the volume conductor, the method is also called impedance tomography, ECCT (electric current computed tomography), or simply electrical impedance tomography (EIT). Electrical impedance tomography is an imaging technology that applies time-varying currents to the surface of a body and records the resulting voltages in order to reconstruct and display the electrical conductivity and permittivity in the interior of the body. The technique exploits the passive electrical properties of tissues, such as resistance and capacitance, and aims to use the differences in these properties between tissues to generate a tomographic image. Human tissue is not simply conductive: there is evidence that many tissues also demonstrate a capacitive component of current flow, and therefore it is appropriate to speak of the specific admittance (admittivity) or specific impedance (impedivity) of tissue rather than the conductivity; hence, electrical impedance tomography. Thus, EIT is an imaging method which may be used to complement X-ray tomography (computed tomography, CT), ultrasound imaging, positron emission tomography (PET), and others.
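As a minimal sketch of the reconstruction idea only (not the specific algorithm of any EIT system), the linearized problem maps small conductivity changes to boundary-voltage changes through a sensitivity (Jacobian) matrix, which can then be inverted with regularized least squares. The matrix and measurements below are random placeholders, not real electrode data.

import numpy as np

M, N = 208, 256                      # assumed: 16-electrode system, coarse pixel grid
rng = np.random.default_rng(0)
J = rng.standard_normal((M, N))      # placeholder sensitivity matrix
true_dsigma = np.zeros(N)
true_dsigma[100:110] = 1.0           # a small conductivity anomaly (pixels 100-109)
dv = J @ true_dsigma + 0.01 * rng.standard_normal(M)   # simulated voltage changes

# Tikhonov-regularized least squares: minimize ||J x - dv||^2 + lam * ||x||^2
lam = 1.0
x = np.linalg.solve(J.T @ J + lam * np.eye(N), J.T @ dv)
print("reconstructed peak at pixel", int(np.argmax(x)), "(true anomaly spans 100-109)")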
Solid State Lighting

The basic principle behind the emission of light is that charge-carrier pairs recombining in a semiconductor with an appropriate energy band gap generate light. In a forward-biased diode, little recombination occurs in the depletion layer; most occurs within a few microns of either the P region or the N region, depending on which one is lightly doped.

LEDs produce narrow-band radiation, with wavelength determined by the energy band gap of the semiconductor. Solid-state electronics have been replacing their vacuum tube predecessors for almost five decades. In the next decade, however, LEDs will be bright, efficient and inexpensive enough to replace conventional lighting sources (i.e. incandescent bulbs and fluorescent tubes).
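A quick way to see how the band gap sets the emission wavelength is the relation lambda = hc/Eg, roughly 1240 nm*eV divided by the gap; the band-gap values below are approximate and given only for illustration.

H_C_EV_NM = 1239.84          # h*c in eV*nm

def peak_wavelength_nm(band_gap_eV: float) -> float:
    """Approximate emission wavelength for a direct-gap semiconductor."""
    return H_C_EV_NM / band_gap_eV

for material, eg in [("GaAs", 1.42), ("AlInGaP (red)", 1.9), ("InGaN (blue)", 2.7)]:
    print(f"{material}: Eg = {eg} eV -> ~{peak_wavelength_nm(eg):.0f} nm")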

Recent developments in AlGaP and AlInGaP blue and green semiconductor growth technology have enabled applications where anything from a single unit to several millions of these indicator LEDs can be packed together, to be used in full-color signs, automotive tail lamps, traffic lights, etc. Still, the preponderance of applications requires that the viewer look directly into the LED. This is not "solid state lighting".

Artificial lighting sources share three common characteristics:
- They are rarely viewed directly: light from the source is seen as a reflection off the illuminated object.
- The unit of measure is the kilolumen or higher, not the millilumen or lumen as in the case of LEDs.
- Lighting sources are predominantly white, with CIE color coordinates producing excellent color rendering.

Today there is no such "solid state lamp" in commercial use. However, high-power LED sources are being developed, which will evolve into lighting sources.
Immersion Lithography

Gordon Moore observed that not only was the number of components on an integrated circuit doubling yearly, but that it was doing so at minimum cost.

One of the main factors driving the improvements in complexity and cost of ICs is improvements in optical
lithography and the resulting ability to print ever smaller features.

Recently, optical lithography, the backbone of the industry for 45 years, has been pushing up against a number of physical barriers that have led to massive investments in the development of alternative techniques such as SCALPEL, extreme ultraviolet (EUV) and others.

Since the mid eighties, the demise of optical lithography has been predicted as being only a few
years away, but each time optical lithography approaches a limit, some new technique pushes out the useful
life of the technology.

The recent interest in immersion lithography offers the potential for optical lithography to be given a reprieve to
beyond the end of the decade.
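One way to see why immersion buys extra resolution is the Rayleigh criterion R = k1 * lambda / NA: filling the gap between the final lens and the wafer with water (refractive index about 1.44) allows a numerical aperture greater than 1. The k1 and NA values below are assumed, for illustration only.

k1 = 0.3                      # assumed process factor
wavelength_nm = 193.0         # ArF excimer laser wavelength
NA_dry, n_water = 0.93, 1.44
NA_wet = NA_dry * n_water     # hyper-NA made possible by water immersion

for label, na in (("dry", NA_dry), ("water immersion", NA_wet)):
    print(f"{label}: minimum half-pitch ~ {k1 * wavelength_nm / na:.0f} nm")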

Surface Mount Technology

SMT is completely different from insertion mounting. In practice, the choice depends on the availability and cost of surface-mount components, so the designer often has no option but to mix through-hole and surface-mount elements. At every step, surface mount technology calls for automation with intelligence.

Electronic products are becoming miniature with improvements in integration and interconnection on the chip itself, and in device-to-device (D-to-D) interconnections. Surface Mount Technology (SMT) is a significant contributor to D-to-D interconnection costs.

In SMT, the following are important:

1. D-to-D interconnection costs.
2. Signal integrity and operating speeds.
3. Device-to-substrate interconnection methods.
4. Thermal management of the assembled package.

D-to-D interconnection costs have not decreased as much as those of the ICs: a computer-on-a-chip costs less than the surrounding component interconnections. The problem of propagation delay, which is effectively solved at the device level, resurfaces as interconnections between the devices are made.

The new IC packages, having greater integration of functions, smaller size and weight, and finer lead pitch, dictate newer methods of design, handling, assembly and repair. This has given new directions to design and process approaches, which are addressed by SMT. Currently, D-to-D interconnections at the board level are based on soldering, the method of joining the discrete components.

The leads of the components are inserted into holes drilled as per the footprint, and soldered. In the early decades, manual skills were used to accomplish insertion as well as soldering, as the component sizes were big enough to be handled conveniently. There have been tremendous efforts to automate the insertion of component leads into their corresponding holes and to solder them en masse, but the leads have always posed problems for auto-insertion. The reluctance in America to use manual, skilled labour resulted in the emergence of SMT, which carries automation with it as a precondition for success.

Tri-Gate Transistor

In today's planar transistors, electronic signals travel as if on a flat, one-way road. This approach has served the semiconductor industry well since the 1960s. But as transistors shrink to less than 30 nanometers (billionths of a meter), the increase in current leakage means that transistors require increasingly more power to function correctly, which generates unacceptable levels of heat.

Intel's tri-gate transistor employs a novel 3-D structure, like a raised, flat plateau with vertical sides, which
allows electronic signals to be sent along the top of the transistor and along both vertical sidewalls as well. This
effectively triples the area available for electrical signals to travel, like turning a one-lane road into a three-lane
highway, but without taking up more space. Besides operating more efficiently at nanometer-sized geometries,
the tri-gate transistor runs faster, delivering 20 percent more drive current than a planar design of comparable
gate size.
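The "three-lane highway" picture can be put into rough numbers with the usual effective-width estimate for a fin-shaped channel, W_eff = W_fin + 2*H_fin; the dimensions below are assumed for illustration, not Intel's actual geometry.

W_fin_nm, H_fin_nm = 20.0, 30.0            # assumed fin width and height
W_eff = W_fin_nm + 2 * H_fin_nm            # top surface plus two sidewalls
print(f"planar width {W_fin_nm:.0f} nm -> tri-gate effective width {W_eff:.0f} nm "
      f"({W_eff / W_fin_nm:.1f}x the drive-current footprint)")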

The tri-gate structure is a promising approach for extending the TeraHertz transistor architecture Intel
announced in December 2001. The tri-gate is built on an ultra-thin layer of fully depleted silicon for reduced
current leakage. This allows the transistor to turn on and off faster, while dramatically reducing power
consumption. It also incorporates a raised source and drain structure for low resistance, which allows the
transistor to be driven with less power. The design is also compatible with the future introduction of a high K
gate dielectric for even lower leakage.
Intel researchers have developed the "tri-gate" transistor design, one of the major breakthroughs in VLSI technology. The transistor is aimed at bringing down transistor size in accordance with Moore's Law, so the various problems faced by transistors of very small size have to be overcome. A reduction in power dissipation is another aim, in order to develop low-power microprocessors and flash memories.
Tri-gate transistors show excellent DIBL, good subthreshold slope, high drive current and much better short-channel performance compared to bulk CMOS transistors. The drive current is increased by almost 30%. The thickness requirement on the Si layer is also relaxed by about 2-3 times relative to a bulk CMOS transistor.
Tri-gate transistors are expected to replace the planar nanometer transistors in Intel microprocessors by 2010. 60 nm tri-gate transistors have already been fabricated and 40 nm tri-gate transistors are under fabrication. The tri-gate transistor is going to play an important role in decreasing the power requirements of future processors. It will also help to increase the battery life of mobile devices.
DSP Enhanced FPGA

Advances in VLSI technology have enabled the development of computationally intensive signal processing and communication systems on FPGAs and Application Specific Integrated Circuits (ASICs).

These advancements also offer solutions to historically intractable signal processing problems, resulting in major new market opportunities and trends. Traditionally, off-the-shelf Digital Signal Processors (DSPs) have been used for signal-processing-specific applications.

Exploiting parallelism in algorithms and mapping them onto VLIW processors is tedious and does not always give an optimal solution, and there are applications where even multiples of these DSPs cannot handle the computational needs.

Recent advances in speed, density, features and low cost have made FPGAs a very attractive choice for mapping high-rate signal processing and communication systems, especially when the processing requirements are beyond the capabilities of off-the-shelf DSPs.

In many designs a combination of DSP and FPGA is used: the more structured and arithmetic-demanding parts of the application are mapped onto the FPGA, and the less structured parts of the algorithms are mapped onto off-the-shelf DSPs.

This seminar presents the basic structure of FPGAs and the different features of DSP-enhanced FPGAs.

Low Power Wireless Sensor Network

Given the large number of nodes that may be deployed and the long required system lifetimes, replacing the battery is not an option. Sensor systems must utilize the minimal possible energy while operating over a wide range of operating scenarios. This paper presents an overview of the key technologies required for low-energy distributed microsensors.

These include:

- power-aware computation/communication component technology
- low-energy signaling and networking
- system partitioning considering computation/communication trade-offs
- a power-aware software infrastructure

FinFET Technology

…12 nm. The formation of an ultra-thin fin enables suppressed short-channel effects.

It is an attractive successor to the single gate MOSFET by virtue of its superior electrostatic properties and
comparative ease of manufacturability.

Since the fabrication of MOSFET, the minimum channel length has been shrinking continuously. The motivation
behind this decrease has been an increasing interest in high speed devices and in very large scale integrated
circuits.

The sustained scaling of the conventional bulk device requires innovations to circumvent the barriers of fundamental physics constraining the conventional MOSFET device structure. The limits most often cited are control of the density and location of dopants providing a high Ion/Ioff ratio and finite subthreshold slope, and quantum-mechanical tunneling of carriers through the thin gate oxide, from drain to source, and from drain to body.

The channel depletion width must scale with the channel length to contain the off-state leakage Ioff. This leads to higher doping concentrations, which degrade the carrier mobility and cause junction edge leakage due to
tunneling. Furthermore, the dopant profile control, in terms of depth and steepness, becomes much more
difficult.

The gate oxide thickness tox must also scale with the channel length to maintain gate control, proper threshold
voltage VT and performance. The thinning of the gate dielectric results in gate tunneling leakage, degrading the
circuit performance, power and noise margin.
Terahertz Transistor

…MOS transistors on a single chip. The integration of one billion transistors into a single chip will become a reality before 2010.

The semiconductor industry faces an environment that includes increasing chip complexity,
continued cost pressures, increasing environmental regulations, and growing concern about energy
consumption. New materials and technologies are needed to support the continuation of Moore's
law.
Moore's Law was first postulated in 1965, and it has driven the research, development and investment in the semiconductor industry for more than three decades. The observation that the number of transistors per integrated circuit doubles every eighteen to twenty-four months is well known to industry analysts and to many of the general public.

However, what is sometimes overlooked is the fact that Moore's Law is an economic paradigm: that is, the cost of a transistor on an integrated circuit needs to be reduced by one half every two years.
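A small arithmetic sketch shows how this compounding works; the two-year doubling period is taken from the statement above, and the 1971 starting point (the Intel 4004, with about 2,300 transistors) is used purely as an illustrative baseline.

start_year, start_count = 1971, 2300        # Intel 4004 as an illustrative baseline
doubling_period_years = 2.0

def transistors(year: int) -> float:
    """Transistor count implied by doubling every two years."""
    return start_count * 2 ** ((year - start_year) / doubling_period_years)

for year in (1981, 1991, 2001, 2010):
    print(f"{year}: ~{transistors(year):,.0f} transistors per chip")
# Cost per transistor falls by roughly the inverse of the same factor.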

This type of cost reduction cannot be sustained for an extended period by straightforward continuous
improvement of existing technologies.

The semiconductor industry will face a number of challenges during this decade where new materials and new
technologies will need to be introduced to support the continuation of Moore's Law.
Radiation Hardened Chips

…can lead to a disaster. The vast emptiness of space is filled with radiation, mostly from the sun.

A normal computer will not be able to work properly in such an environment unless it is given proper shielding; thus special types of chips, radiation-hardened (rad-hard) chips, are used.

Radiation-hardened chips are made by two methods:

- Radiation hardening by process (RHBP)
- Radiation hardening by design (RHBD)

The latter method is much more cost-effective and has great potential for the future. It has been demonstrated that RHBD techniques can provide immunity from total-dose and single-event effects in commercially produced circuits.
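As one illustration of the design-level idea (triple modular redundancy is a common RHBD technique, though the circuits referred to above may use other methods), three copies of a logic block feed a majority voter so that a single upset bit is out-voted.

def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote over three redundant copies."""
    return (a & b) | (b & c) | (a & c)

golden = 0b1011_0101
upset = golden ^ 0b0000_1000            # a single-event upset flips one bit in one copy
print(bin(majority(golden, upset, golden)))   # the voter restores the golden value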

Commercially produced RHBD memories, microprocessors and application-specific integrated circuits are now being used in the defense and space industries.

Rad-hard chips have great scope in military applications and in protecting critical data (both industrial and domestic) from the vagaries of man and nature.
Current trends throughout military and space sectors favor the insertion of commercial off-the-shelf
(COTS) technologies for satellite applications.

However, there are also unique concerns about assuring reliable performance in the presence of ionizing-particle environments, which are present in all orbits of interest. This seminar details these concerns from two important perspectives: premature device failure from total ionizing dose, and single-particle effects, which can cause both permanent failures and soft errors.

ZigBee - zapping away wired worries

In recent years, wireless standards development has focused mainly on achieving ever higher data rates. During this time, applications that required lower data rates but had some other special requirements were neglected, in the sense that no open standard was available.

Either these applications were abandoned in the wireless arena or they were implemented using proprietary standards, hurting the interoperability of the systems.

ZigBee is a wireless standard that caters to this particular sector. Potential applications of ZigBee include home automation, wireless sensor networks, patient monitors, etc. The key features of these applications, and hence the aims of ZigBee, are:

1. Low Cost
2. Low Power for increased battery life (see the sketch after this list)
3. Low Range
4. Low Complexity
5. Low Data Rates
6. Co-existence with other long-range Wireless Networks
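To see how the low-power and low-data-rate goals translate into battery life, here is a back-of-the-envelope sketch; the current draws, duty cycle and battery capacity are assumed values, not figures from the standard.

capacity_mAh = 1000.0           # assumed battery capacity
i_active_mA, i_sleep_mA = 30.0, 0.003
duty_cycle = 0.001              # radio active 0.1% of the time (assumed)

i_avg = duty_cycle * i_active_mA + (1 - duty_cycle) * i_sleep_mA
print(f"average current {i_avg * 1000:.0f} uA -> "
      f"~{capacity_mAh / i_avg / 24 / 365:.1f} years of battery life")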

The ZigBee standard is maintained by the ZigBee Alliance, a spin-off of the HomeRF group, an unsuccessful home-automation consortium.

It is built upon the IEEE 802.15.4 protocol, which is intended for LR-WPANs (Low-Rate Wireless Personal Area Networks).

In this seminar a general overview of ZigBee is followed by an analysis of how ZigBee and the underlying 802.15.4 provide the aims mentioned. A brief comparison with other solutions is also given.
Space Elevator

…when a vehicle blasts off, pushing through Earth's gravitational pull requires great amounts of fuel, but once it gets out of our atmosphere, the rest is easy.

If you could cut out that "blast off" portion, space travel would be easier and much more fuel-efficient.

In a space elevator scenario, a maglev vehicle would zoom up the side of an exceedingly tall structure to a transfer point, where passengers would then board a craft to the Moon, Mars, or any other distant destination.

If it all sounds like too much science fiction, take a look at the requirements for making the space elevator a reality. A new material called the carbon nanotube has been developed that is 100 times as strong as steel but with only a fraction of the weight. The carbon nanotube is the idea that makes this all sound much more achievable.

This concept, which is very fuel-efficient and which brings space tourism closer to the common man, brings the newly developed carbon nanotube to light.
Vacuum Braking System

A moving train possesses kinetic energy, which has to be removed in order to cause it to stop. The simplest way of doing this is to convert the energy into heat.

The conversion is usually done by applying a contact material to the rotating wheels or to discs attached to the
axles. The material creates friction and converts the kinetic energy into heat. The wheels slow down and eventually
the train stops. The material used for braking is normally in the form of a block or pad.

The vast majority of the world's trains are equipped with braking systems which use compressed air as the force
used to push blocks on to wheels or pads on to discs. These systems are known as "air brakes" or "pneumatic
brakes". The compressed air is transmitted along the train through a "brake pipe". Changing the level of air
pressure in the pipe causes a change in the state of the brake on each vehicle. It can apply the brake, release it or
hold it "on" after a partial application. The system is in widespread use throughout the world. An alternative to the
air brake, known as the vacuum brake, was introduced around the early 1870s, the same time as the air brake.

Like the air brake, the vacuum brake system is controlled through a brake pipe connecting a brake valve in the
driver's cab with braking equipment on every vehicle. The operation of the brake equipment on each vehicle
depends on the condition of a vacuum created in the pipe by an ejector or exhauster. The ejector, using steam on a
steam locomotive, or an exhauster, using electric power on other types of train, removes atmospheric pressure from
the brake pipe to create the vacuum. With a full vacuum, the brake is released. With no vacuum, i.e. normal
atmospheric pressure in the brake pipe, the brake is fully applied.
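A back-of-the-envelope calculation shows the kind of force such a brake cylinder can develop from the pressure difference acting on its piston; the piston diameter and vacuum level below are assumed for illustration, not taken from any particular rolling stock.

import math

piston_diameter_m = 0.55            # assumed large vacuum-brake cylinder
delta_p_pa = 0.7 * 101325           # roughly 21 inHg of vacuum below atmosphere (assumed)

area_m2 = math.pi * (piston_diameter_m / 2) ** 2
force_kN = delta_p_pa * area_m2 / 1000
print(f"piston force ~ {force_kN:.0f} kN")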
Highly Productive and Reconfigurable Manufacturing System (HIPARMS)

The highly productive and reconfigurable manufacturing system (HIPARMS) project has developed highly productive, general-purpose and multi-functional machine tools to structure the manufacturing system with a
small number of machines. HIPARMS successfully reduces the functional constraint of products with the
introduction of the reconfigurable machine tools. The project is also advancing the development of a
reconfigurable operation system for higher flexibility. Dividing the total production process into small element
processes has been pursued since the Industrial Revolution, to promote productivity and quality in the machine
industry. Typically, such a manufacturing system is found in automotive industries as a Transfer Machining Line
(TR) developed by General Motors. Its high performance single-task machine enables its simple one-production
line to easily track a defect of the product. Process-dividing, however, has resulted in a long
production line, often characterized by a lower uptime-ratio, long lead-time, and poor production flexibility,
which confines itself to a very limited number of products. In addition, recently, the machining
time of each elemental machining process has been drastically decreased by the rapid development of near-
net-shape forming of a work-piece driven by the material processing technologies, and higher machining speed
actively promoted by advanced tooling. Other aspects of the total production time, however, such as
non-machining time, for transfer, loading, and reloading of work pieces have remained unchanged and are,
therefore, a greater contributor to the total manufacturing time. Furthermore, the continually growing
investment to construct an automated handling system of work-pieces has evolved into a serious cost
problem. In response to these problems, Process integration and Reconfigurability concepts recently
have been proposed.
Arsenic Contamination of Drinking Water

Heavy metal contamination in water bodies is widespread. The most dangerous among them are arsenic, lead,
mercury, zinc and copper. Water contamination with arsenic is a worldwide concern. Arsenic is a ubiquitous element, distributed in the atmosphere, the aquatic environment, soils, sediments and organisms. Arsenic is a group V metalloid which is tasteless, odourless and brittle in nature. It is grey or tin-white in colour and occurs as oxide, hydrate, sulphide, arsenite and arsenate; it also exists in volcanic lava. Due to the carcinogenic property of arsenic, environmental regulatory agencies around the world are reviewing the maximum contamination level in drinking water, which is the main source of arsenic intake. The Central Public Health and Environmental Engineering Organisation (CPHEEO) has prescribed the limit for arsenic in drinking water as 0.05 mg/l.
Fuel Energizer

The 'Fuel Energizer' helps us to reduce petrol, diesel or cooking gas consumption by up to 28%; in other words, this is equivalent to buying the fuel at up to 28% cheaper prices.

When fuel flows through the powerful magnetic field created by the Magnetizer Fuel Energizer, the hydrocarbons change their orientation and the molecules in them change their configuration. At the same time the intermolecular forces are considerably reduced, so the oil particles are finely divided. This has the effect of ensuring that the fuel actively interlocks with oxygen, producing a more complete burn in the combustion chamber. Hence, by establishing correct fuel-burning parameters through proper magnetic means (the Fuel Energizer), we can assume that an internal combustion engine is getting maximum energy per litre, as well as an environment with the lowest possible level of toxic emissions.




Delhi Metro Underground Construction

The underground section of the Delhi Metro is about 11 kilometers in length, with 10 stations, bored tunnels and cut-and-cover tunnels. This paper describes in brief some of the methods used in the construction of the 10 underground stations. The construction methods (cut-and-cover) for the underground stations depend to a large extent on the geological profile of the area where the stations are located; the geological profile of the underground Delhi Metro alignment passes through rock and alluvium.

These are the methods and innovations which have been used in this project; only some of the most important methods have been described here. These methods have helped in transferring technology to the engineers, supervisors and local contractors, and will go a long way in redefining construction management.


Non-Intrusive Load Monitoring

Load monitoring is the process of determining the types of individual electrical loads operating on a power system during a given period of time. Non-intrusive monitoring is the monitoring of any system or a parameter of the system without actually entering the system, i.e., without the system feeling the presence of the monitoring system.

Hence, non-intrusive load monitoring is the monitoring of the electrical load coming onto the system without
actually entering the premises.
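A toy sketch of the basic non-intrusive idea: watch the aggregate power signal at the service entrance for step changes and match them against known appliance signatures. The power readings, threshold and signatures below are made up for illustration only.

import numpy as np

power = np.array([100, 102, 101, 1601, 1600, 1598, 98, 99, 2299, 2300, 100], float)
signatures = {"kettle": 1500.0, "heater": 2200.0}   # assumed steady-state deltas in watts

edges = np.diff(power)
for i, step in enumerate(edges):
    if abs(step) > 200:                              # ignore small fluctuations
        name = min(signatures, key=lambda k: abs(abs(step) - signatures[k]))
        state = "ON" if step > 0 else "OFF"
        print(f"t={i + 1}: {name} switched {state} (step {step:+.0f} W)")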

The interest among the research community in obtaining individual load information in a network, such as the nature of the load, its energy consumption, the duration for which it comes onto the system, and the times of day when it was on and off, has given rise to many methods of doing so.

However, the majority of these methods are intrusive in nature, since they require the installation of sensors and other data acquisition hardware on each device in the circuit. This not only makes the system intrude into the consumer's property but also makes it very expensive.

Load monitoring is the technique of tracking the various types of electrical loads coming onto the power system during a period of time. NALM is the monitoring of any system or a parameter of a system without actually entering the system. NALM has a wide range of applications, such as power monitoring of devices, monitoring of remote loads and monitoring of individual loads.

Also, CIMS (Centre for Intelligent Monitoring Systems) is used to monitor a wide range of parameters such as temperature, light, pressure, colour, sound, radio frequency, and electrical current and voltage.

To conclude, non-intrusive load monitoring is a very cost-effective monitoring system for many applications. It can also provide a very convenient and effective method of gathering load data compared to the traditional means of placing sensors on each of the individual components of the load.

Voltage Sag

Voltage sags are usually caused by faults on the utility transmission line or distribution system, or within the customer facility.

It is a temporary voltage drop below 90% of the nominal voltage level. Voltage sags and momentary power
interruptions are probably the most important power quality problems affecting industrial and large commercial
customers.
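Using the 90% definition above, a simple check on per-cycle RMS voltage readings flags a sag and reports its depth and duration; the readings below are illustrative values, not measured data.

nominal = 230.0
rms = [230, 229, 231, 198, 185, 190, 228, 230]   # one RMS reading per cycle (assumed)

in_sag = [v < 0.9 * nominal for v in rms]
depth = min(rms) / nominal
print(f"sag detected: {any(in_sag)}, duration {sum(in_sag)} cycles, "
      f"remaining voltage {depth:.0%} of nominal")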

This paper describes the causes of voltage sags, their impacts on equipment operations and possible solutions. This
paper focuses on system faults as the major cause of voltage sags.

The sensitivity of different types of equipment, including programmable logic controllers and motor contactors, is
analyzed. Then the range of fault locations on the power system that can cause problems is estimated (area of
vulnerability).

Available methods of power conditioning for this sensitive equipment are also described in this paper.

The problems of voltage sags can be tackled in two ways: by conditioning the incoming power, or by improving the ride-through capability of the sensitive equipment. If system power conditioning is expensive, it may be economical in the long term to improve the design of the equipment.
- Mitigation of voltage sags requires careful inspection of the characteristics of the process and of the nature and origin of events.
- The installation of mitigation devices (normally the only choice for the customers) can be seen as a short-term solution. The mitigation capability of these devices is mainly limited by the energy storage capacity.
- Only improvement of system performance (for long, deep sags) and of equipment tolerance (for short, shallow sags) can solve the problem in the long term.


Wind From Sun Power Plant

Consider the example of the sea breeze. The sun heats both land and sea, but the land heats up more quickly and reaches a higher temperature than the sea. The air over the land becomes hotter than the air over the sea, and the hot air rises, creating an area of lower air pressure (close to the surface). Air moves from the area of higher pressure over the sea to the area of lower pressure over the land. The cool sea air heats up as it moves over the land and so it rises, creating a cycle. The result of this cycle is a steady wind moving from the sea to the land. In this example from nature, the land is acting like a solar collector, changing sunlight into heat. The heated land heats the air and creates a wind. Wind turbines can harvest this wind energy. A power plant built on this principle would imitate the same type of system that occurs in nature, but with a greater degree of control and predictability, resulting in a more reliable wind with a higher average wind speed.

Electromagnetic Bomb

The next Pearl Harbor will not announce itself with a searing flash of nuclear light or with the plaintive wails of those dying of Ebola or its genetically engineered twin. You will hear a sharp crack in the distance. By the time you mistakenly identify this sound as an innocent clap of thunder, the civilized world will have become unhinged.

Fluorescent lights and television sets will glow eerily bright, despite being turned off. The aroma of ozone mixed
with smoldering plastic will seep from outlet covers as electric wires arc and telephone lines melt. Your Palm
Pilot and MP3 player will feel warm to the touch, their batteries overloaded.

Your computer, and every bit of data on it, will be toast. And then you will notice that the world sounds
different too. The background music of civilization, the whirl of internal-combustion engines, will have stopped.
Save a few diesels, engines will never start again. You, however, will remain unharmed, as you find yourself
thrust backward 200 years, to a time when electricity meant a lightning bolt fracturing the night sky.

Anyone who's been through a prolonged power outage knows that it's an extremely trying experience. Within
an hour of losing electricity, you develop a healthy appreciation of all the electrical devices you rely on in life.

A couple hours later, you start pacing around your house. After a few days without lights, electric heat or TV,
your stress level shoots through the roof. But in the grand scheme of things, that's nothing. If an outage hits an
entire city, and there aren't adequate emergency resources, people may die from exposure, companies may
suffer huge productivity losses and millions of dollars of food may spoil.

If a power outage hit on a much larger scale, it could shut down the electronic networks that keep governments
and militaries running. We are utterly dependent on power, and when it's gone, things get very bad, very fast.

An electromagnetic bomb, or e-bomb, is a weapon designed to take advantage of this dependency. But instead of simply cutting off power in an area, an e-bomb would actually destroy most machines that use electricity.

Generators would be useless, cars wouldn't run, and there would be no chance of making a phone call. In a
matter of seconds, a big enough e-bomb could thrust an entire city back 200 years or cripple a military unit.


