
www.ijraset.com                                         Special Issue-1, October 2014
SJ Impact Factor-3.995                                   ISSN: 2321-9653
International Journal for Research in Applied Science & Engineering Technology (IJRASET)

Implementation of HDLC Protocol Using Verilog


K. Durga Bhavani1, B. Venkanna2, K. Gayathri3
1,2Dept. of ECE, RGUKT-Basar
3Dept. of ECE, Intell Engg. College, Anantapur

Abstract- A protocol is required to transmit data successfully over any network and also to manage the rate at which data is transmitted. HDLC is the High-level Data Link Control protocol established by the International Organization for Standardization (ISO), and it is widely used in digital communications. HDLC is the most commonly used Layer 2 protocol and is suitable for the bit-oriented packet transmission mode. This paper discusses the Verilog modeling of a single-channel HDLC Layer 2 protocol controller and its implementation using Xilinx.
Keywords- High Level Data Link Control (HDLC), Frame Check Sequence (FCS), Cyclic Redundancy Check (CRC)
I. INTRODUCTION

The HDLC protocol is the high-level data link control protocol established by the International Organization for Standardization (ISO). It is widely used in digital communication and forms the basis of many other data link control protocols [2]. HDLC is commonly implemented in ASIC (Application Specific Integrated Circuit) devices, in software, and so on.

The objective of this paper is to design and implement a single-channel controller for the HDLC protocol, which is the most basic and prevalent synchronous, bit-oriented Data Link layer protocol. The HDLC protocol (High Level Data Link Control) is also important in that it forms the basis for many other Data Link Control protocols, which use the same or similar formats and the same mechanisms as employed in HDLC.

HDLC has been so widely implemented because it supports both half-duplex and full-duplex communication lines, and point-to-point (peer-to-peer) and multi-point networks [1]. The protocols outlined in HDLC are designed to permit synchronous, code-transparent data transmission. Other benefits of HDLC are that the control information is always in the same position, and that the specific bit patterns used for control differ dramatically from those representing data, which reduces the chance of errors.

II. HDLC PROTOCOL

The HDLC Protocol Controller is a high-performance module for the bit-oriented packet transmission mode. It is suitable for Frame Relay, X.25, ISDN B-Channel (64 kbit/s) and D-Channel (16 kbit/s). The Data Interface is 8 bits wide, synchronous, and suitable for interfacing to transmit and receive FIFOs. Information is packaged into an envelope called a FRAME [4]. An HDLC frame is structured as follows:

FLAG (8 bits) | ADDRESS (8 bits) | CONTROL (8/16 bits) | INFORMATION (variable) | FCS (16 bits) | FLAG (8 bits)

Table 1. HDLC Frame

A. Flag
Each Frame begins and ends with the Flag Sequence, which is the binary sequence 01111110. If a piece of data within the frame to be transmitted contains a series of 5 or more 1s, the transmitting station must insert a 0 to distinguish this set of 1s in the data from the flags at the beginning and end of the frame. This technique of inserting bits is called bit-stuffing [3].

B. Address
The Address field is of programmable size, a single octet or a pair of octets. The field can contain the value programmed into the transmit address register at the time the Frame is started.

C. Control
HDLC uses the control field to determine how to control the
communications process. This field contains the commands, responses and sequence numbers used to maintain the data-flow accountability of the link; it defines the function of the frame and initiates the logic to control the movement of traffic between sending and receiving stations.
D. Information or Data
This field is not always present in an HDLC frame. It is only present when the Information Transfer Format is being used in the control field. The information field contains the actual data the sender is transmitting to the receiver.
E. FCS
The Frame Check Sequence field is 16 bits. The FCS is transmitted least significant octet first, which contains the coefficient of the highest term in the generated check polynomial. The FCS field is calculated over all bits of the address, control, and data fields, not including any bits inserted for transparency. It also does not include the flag sequences or the FCS field itself. The end of the data field is found by locating the closing flag sequence and removing the Frame Check Sequence field (receiver section) [5].
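The framing rules above can be illustrated with a short, self-contained Python sketch. It is a minimal model only, not the Verilog implementation described in this paper; the helper names (bit_stuff, fcs16, build_frame) are invented for the illustration, and the CRC conventions (CCITT polynomial 0x1021 in reflected form, initial value 0xFFFF, final complement) are assumed from the 16-bit FCS of ISO/IEC 13239 [3].

```python
FLAG = [0, 1, 1, 1, 1, 1, 1, 0]  # 01111110 frame delimiter

def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s (Section II.A)."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == 1 else 0
        if ones == 5:            # five 1s seen: stuff a 0 so data never mimics the flag
            out.append(0)
            ones = 0
    return out

def fcs16(data_bytes):
    """16-bit FCS over address+control+data (CRC-CCITT style, assumed conventions)."""
    crc = 0xFFFF
    for byte in data_bytes:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1  # 0x8408 = reflected 0x1021
    crc ^= 0xFFFF
    return crc.to_bytes(2, "little")  # least significant octet transmitted first

def to_bits(data_bytes):
    return [(byte >> i) & 1 for byte in data_bytes for i in range(8)]  # LSB first

def build_frame(address, control, payload):
    """FLAG | ADDRESS | CONTROL | INFO | FCS | FLAG, with bit-stuffing between the flags."""
    body = bytes([address, control]) + payload
    body += fcs16(body)                       # FCS excludes flags and stuffed bits
    return FLAG + bit_stuff(to_bits(body)) + FLAG

# Example: a one-byte information field
frame = build_frame(address=0x3F, control=0x03, payload=b"\x7E")
print(frame)
```

A receiver would mirror these steps: locate the flags, remove any 0 that follows five consecutive 1s, recompute the FCS over the unstuffed address, control and data bits, and compare it with the received FCS.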
III. HDLC MODULE DESIGN
In this design, the HDLC procedure contains two modules, i.e. an encoding-and-sending module (transmitter) and a receiving-and-decoding module (receiver). The function diagram is shown below.

Fig. 1. HDLC Block Design

From this diagram we can see that the transmitter module includes a transmit register unit, address unit, FCS generation unit, zero insertion unit, flag generation unit, control and status register unit, and a transmit frame timer and synchronization logic unit. The receiver module includes a receive register unit, address detect unit, FCS calculator unit, zero detection unit, flag detection unit, receive control and status register unit, and a frame timer and synchronization logic unit.
A. Transmitter Module
The Transmit Data Interface provides a byte-wide interface between the transmission host and the HDLC Controller. The transmit data is loaded into the controller on the rising edge of Clock when the write strobe input is asserted. The start and end bytes of a transmitted HDLC Frame are indicated by asserting the appropriate signals with the same timing as the data byte.
The HDLC Controller will, on receipt of the first byte of a new packet, issue the appropriate Flag Sequence and transmit the Frame data while calculating the FCS. When the last byte of the Frame is seen, the FCS is transmitted along with a closing Flag. Extra zeros are inserted into the bit stream to avoid transmission of the control flag sequence within the Frame data.
The transmit data is available on the TxD pin with appropriate timing to be sampled by Clk. If TxEN is de-asserted, transmission is stalled and the TxD pin is disabled.
A transmit control register is provided which can enable or disable the channel. In addition it is possible to force the transmission of the HDLC Abort sequence. This will cause the currently transmitted Frame to be discarded. The transmit section can be configured to automatically restart after an abort, with the next frame, or to remain stalled until the host microprocessor clears the abort.
B. Receiver Module
The HDLC Controller Receiver accepts a bit stream on
port RxD. The data is latched on the rising edge of Clock
under the control of the Enable input RxEN. The Flag
Detection block searches the bit stream for the Flag
Sequence in order to determine the Frame boundary. Any
stuffed zeros are detected and removed, and the FCS is
calculated and checked. Frame data is placed on the Receive
Data Interface and made available to the host. In addition,
Flag information is passed over indicating the Start and the
End byte of the HDLC Frame as well as showing any error
conditions which may have been detected during receipt of
the Frame.
In normal HDLC protocol mode, all Receiver Frames are
presented to the host on the output register. A status register

is provided which can be used to monitor the status of the Receiver Channel and indicate whether the packet currently being received contains any errors.

IV. RESULTS

Fig. 2. Simulation Waveform

Device Utilization Report (Clock Frequency: 78.2 MHz):

Resource             | Used | Avail | Utilization
IOs                  |   60 |   180 | 33.33%
Function Generators  |  205 |  1536 | 13.35%
CLB Slices           |  103 |   768 | 13.41%
Dffs or Latches      |  108 |  1536 | 7.03%

Table 2. Synthesis Report

V. CONCLUSION
We designed the HDLC protocol sending and receiving modules at the RTL level in Verilog and tested them successfully. The design has the following advantages: it is easy to program and modify, it is suitable for different standards of HDLC procedures, and it can be matched with other chips having different interfaces. The proposed method is therefore useful for many applications, such as a communication protocol link for RADAR data processing.
REFERENCES
[1] Implementation of HDLC protocol Using FPGA, [IJESAT] International Journal of Engineering Science & Advanced Technology, ISSN: 2250-3676, Volume-2, Issue-4, pp. 1122-1131.
[2] M. Sridevi, Dr. P. Sudhakar Reddy, International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 5, September-October 2012, pp. 2217-2219.
[3] ISO/IEC 13239, Information technology - Telecommunications and information exchange between systems - High-level data link control (HDLC) procedures, International Organization for Standardization, pp. 10-17, July 2002.
[4] A. Tanenbaum, Computer Networks, Prentice Hall of India, 1993.
[5] Mitel Semiconductor, MT8952B HDLC Protocol Controller, Mitel Semiconductor Inc., pp. 2-14, May 1997.

Iterative MMSE-PIC Detection Algorithm for MIMO OFDM Systems

Gorantla Rohini Devi1, K.V.S.N. Raju2, Buddaraju Revathi3
1Department of ECE, 2Head of ECE Department, 3Asst. Professor, Department of ECE,
SRKR Engineering College, Bhimavaram, AP, India

Abstract- Wireless communication systems are required to provide high data rates, which is essential for many services such as video, high-quality audio and mobile integrated services. When data transmission is affected by fading and interference effects, the information is altered. The Multiple Input Multiple Output (MIMO) technique is used to reduce multipath fading. Orthogonal Frequency Division Multiplexing (OFDM) is one of the promising technologies to mitigate ISI. The combination of MIMO and OFDM offers high spectral efficiency and diversity gain against multipath fading channels. Different types of detectors, namely ZF, MMSE, PIC and Iterative PIC, are considered. These detectors improve the quality of the received signal in a high-interference environment. Implementations of these detectors verified the improvement in BER versus SNR performance. The Iterative PIC technique gives the best performance in a noisy environment compared to ZF, MMSE and PIC.
Keywords: Orthogonal Frequency Division Multiplexing (OFDM), Multiple Input Multiple Output (MIMO), Zero Forcing (ZF), Minimum Mean Square Error (MMSE), Parallel Interference Cancellation (PIC), Bit Error Rate (BER), Signal to Noise Ratio (SNR), Inter Symbol Interference (ISI), Binary Phase Shift Keying (BPSK).
I. INTRODUCTION

In wireless communication the signal from a transmitter is transmitted to a receiver along a number of different paths, collectively referred to as multipath. These paths may cause interference with one another and result in the original data being altered; this is known as multipath fading. Furthermore, wireless channels suffer from co-channel interference (CCI) from other cells that share the same frequency channel, leading to distortion of the desired signal and low system performance. Therefore, wireless systems must be designed to mitigate fading and interference to guarantee reliable communication.
High data rate wireless systems with very small symbol periods usually face unacceptable Inter Symbol Interference (ISI) originating from multipath propagation and its inherent delay spread. Orthogonal Frequency Division Multiplexing (OFDM) has emerged as one of the most practical techniques for data communication over frequency-selective fading channels, turning them into a set of flat-fading sub-channels; OFDM is therefore one of the promising technologies to mitigate ISI. On the other hand, to increase the spectral efficiency of the wireless link, Multiple-Input Multiple-Output (MIMO) systems are employed [1]. MIMO is an antenna technology used at both the transmitter and the receiver of a wireless radio link. MIMO exploits the space dimension to improve wireless system capacity, range, and reliability. A MIMO system can be employed to transmit several data streams in parallel at the same time and on the same frequency but from different transmit antennas.
MIMO systems arise in many modern communication channels such as multiple-user communication and multiple-antenna channels. It is well known that the use of multiple transmit and receive antennas promises substantial performance gains when compared to single-antenna systems. The combination of MIMO and OFDM is very natural and beneficial, since OFDM enables support of more antennas and larger bandwidth and simplifies equalization in MIMO systems. The MIMO-OFDM system offers high spectral efficiency and good diversity gain against multipath fading channels [2][3].
The performance of a MIMO system depends on the detection technique used at the MIMO receiver. The detector that minimizes the bit error rate (BER) is the maximum likelihood (ML) detector, but the ML detector is impractical because its computational complexity is exponential. On the other hand, linear detectors, such as zero-forcing (ZF) and minimum mean square error (MMSE) receivers, have low decoding complexity, but their detection performance decreases in proportion to the number of transmit antennas.
Therefore, there has been a study of a low-complexity nonlinear receiver, namely the parallel interference cancellation (PIC) receiver, which decodes the data streams in parallel through nulling and cancelling. The PIC algorithm [4] relies on a parallel detection of the received block: at each step, the contributions of the other symbols are subtracted from the received block. PIC detection is used to reduce the complexity and prevent error propagation. The PIC detection uses the reconstructed signal
to improve the detection performance through an iterative process. The iterative MMSE-PIC detection algorithm [5][6] is the best detection technique among the nonlinear receivers considered. To improve the performance of the overall system, the output of the detector is fed back as the input of the PIC detection stage. By exchanging information between the MIMO detector and the decoder, the performance of the receiver can be greatly enhanced, and as the number of iterations increases the bit error rate (BER) performance improves.
PIC cancels the interference in parallel, which reduces the interference and therefore increases the reliability of the decision process. The channel is taken as a flat-fading Rayleigh multipath channel and the modulation as BPSK. MIMO-OFDM technology has been investigated as the infrastructure for next generation wireless networks.
II. SYSTEM MODEL

MIMO Techniques:
Consider a MIMO-OFDM system with L transmitting and P receiving antennas. When the MIMO technique of spatial multiplexing is applied, encoding can be done jointly over the multiple transmitter branches.

Fig. 1. Schematic of PIC detection for MIMO OFDM system

The block diagram in Figure 1 consists of two users, one user acting as source while the other acts as destination; the two users interchange their information as source at different instants of time. In the MIMO channel model, L transmit antennas carry the same data for transmission, while the receiver has P antennas.
The binary data are converted into a digitally modulated signal using the BPSK modulation technique and then converted from serial to parallel. The digitally modulated symbols are applied to the IFFT block, and after the transformation the time-domain OFDM signal appears at the output of the IFFT. After that, a Cyclic Prefix (CP) is added to mitigate the ISI effect. This information is sent to a parallel-to-serial converter, the information symbols are transmitted over the MIMO channel, and AWGN noise is added at the receiver side.
At the receiver side, serial-to-parallel conversion occurs first and the cyclic prefix is removed. The received signal samples are sent to a fast Fourier transform (FFT) block to demultiplex the multi-carrier signals, and a ZF / MMSE / PIC / Iterative-PIC detector is used for separating the user signals at each element of the receiver antenna array. Finally, the demodulated outputs and the resulting data are combined to obtain the binary output data.
Current MIMO systems, including MISO and SIMO systems that use MIMO techniques to improve the performance of wireless systems, can be divided into two kinds. One is spatial multiplexing, which provides a linear capacity gain in relation to the number of transmitting antennas, and the other is spatial diversity schemes, which can reduce the BER and improve the reliability of the wireless link.

A. Spatial Multiplexing

The transmission of multiple data streams over more than one antenna is called spatial multiplexing. It yields a linear (in the minimum of the number of transmit and receive antennas) capacity increase, compared to systems with a single antenna at one or both sides of the wireless link, at no additional power or bandwidth expenditure. The corresponding gain is available if the propagation channel exhibits rich scattering, and it can be realized by the simultaneous transmission of independent data streams in the same frequency band. The receiver exploits differences in the spatial signatures induced by the MIMO channel on the multiplexed data streams to separate the different signals, thereby realizing a capacity gain.

B. Diversity Schemes

Two or more copies of the signal are sent over different paths by using multiple antennas at the transmitting and receiving sides. The antenna spacing is chosen in such a way that the interference between the signals can be avoided. Diversity schemes are used to improve link reliability. Spatial diversity improves the signal quality and achieves a higher signal-to-noise ratio at the receiver side. Diversity gain is obtained by transmitting the data signal over multiple independently fading dimensions in time, frequency, and
space, and by performing proper combining in the receiver. Spatial diversity is particularly attractive compared to time or frequency diversity, as it does not incur any expenditure in transmission time or bandwidth. Diversity provides the receiver with several (ideally independent) replicas of the transmitted signal and is therefore a powerful means to combat fading and interference and thereby improve link reliability.
Two kinds of spatial diversity are considered: transmitter diversity and receiver diversity. There are two well-known space-time coding schemes, the space-time block code (STBC) and the space-time trellis code (STTC).
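The transmit/receive chain described in this section can be summarized in a short simulation sketch. The following Python/NumPy fragment is only an illustrative model of the described chain (BPSK mapping, IFFT, cyclic prefix, a 2x2 flat Rayleigh channel per subcarrier with AWGN, and the FFT at the receiver); the array sizes, variable names and per-subcarrier channel assumption are choices made for this sketch, not details taken from the paper's Matlab code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_tx, n_rx, cp = 64, 2, 2, 16      # subcarriers, antennas, cyclic-prefix length

# BPSK mapping of random bits, one stream per transmit antenna
bits = rng.integers(0, 2, size=(n_tx, n_sub))
x_freq = 1 - 2 * bits                      # 0 -> +1, 1 -> -1

# OFDM modulation: IFFT per antenna, then cyclic prefix
x_time = np.fft.ifft(x_freq, axis=1)
x_cp = np.hstack([x_time[:, -cp:], x_time])

# Flat Rayleigh fading per subcarrier and AWGN (frequency-domain equivalent model)
H = (rng.standard_normal((n_sub, n_rx, n_tx)) +
     1j * rng.standard_normal((n_sub, n_rx, n_tx))) / np.sqrt(2)
snr_db = 10.0
noise_var = 10 ** (-snr_db / 10)

# Receiver: drop the CP, FFT back to the frequency domain, apply H and add noise per tone
y_time = x_cp[:, cp:]
y_freq_tx = np.fft.fft(y_time, axis=1)
y = np.empty((n_sub, n_rx), dtype=complex)
for k in range(n_sub):
    n = np.sqrt(noise_var / 2) * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
    y[k] = H[k] @ y_freq_tx[:, k] + n

# y[k] and H[k] are what the ZF / MMSE / PIC detectors of Section III operate on
print(y.shape, H.shape)
```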
III. PROPOSED DETECTION ALGORITHM FOR MIMO-OFDM SYSTEMS

Co-channel interference is one of the major limitations in cellular telephone networks. In a cellular network such as 3G or beyond 3G (4G), the co-channel interference is caused by frequency reuse. Our main idea is to reject the co-channel interference in MIMO-OFDM cellular systems. To eliminate the inter-symbol interference (ISI), different equalization techniques for highly interfering channels are used. MIMO-OFDM detection methods consist of linear and nonlinear detection methods: the linear equalizers are ZF [7] and MMSE [8], and the nonlinear equalizers are PIC and Iterative PIC.
1. Zero Forcing (ZF) Equalizer:

The Zero Forcing equalizer is a linear equalization algorithm used in communication systems; it inverts the frequency response of the channel. The output of the equalizer has an overall response of one for the symbol being detected and an overall zero response for the other symbols. If possible, this results in the removal of the interference from all other symbols in the absence of noise. Zero Forcing is a linear method that does not consider the effects of noise; in fact, the noise may be enhanced in the process of eliminating the interference.

Consider a 2x2 MIMO system. The received signal on the first antenna is given by

y_1 = h_{1,1} x_1 + h_{1,2} x_2 + n_1 = [h_{1,1} \; h_{1,2}] [x_1 ; x_2] + n_1    (1)

The received signal on the second antenna is given by

y_2 = h_{2,1} x_1 + h_{2,2} x_2 + n_2 = [h_{2,1} \; h_{2,2}] [x_1 ; x_2] + n_2    (2)

where y_1 and y_2 are the received symbols on the first and second antenna, h_{i,j} is the channel from the j-th transmit antenna to the i-th receive antenna, x_1 and x_2 are the transmitted symbols, and n_1 and n_2 are the noise terms on the first and second receive antennas, respectively.

The sampled baseband representation of the signal is given by

y = Hx + n    (3)

where y is the received symbol vector, H is the channel matrix, x is the transmitted symbol vector and n is the noise vector. For a system with N_T transmit antennas and N_R receive antennas, the MIMO channel at a given time instant may be represented as the N_R x N_T matrix

H = [ H_{1,1}  H_{1,2}  ...  H_{1,N_T} ;  H_{2,1}  H_{2,2}  ...  H_{2,N_T} ;  ... ;  H_{N_R,1}  H_{N_R,2}  ...  H_{N_R,N_T} ]    (4)

To solve for x, we find a matrix W which satisfies WH = I. The Zero Forcing (ZF) detector meeting this constraint is given by

W = (H^H H)^{-1} H^H    (5)

where W is the equalization matrix and H is the channel matrix. This matrix is known as the pseudo-inverse for a general m x n matrix, where

H^H H = [ h^*_{1,1}  h^*_{2,1} ; h^*_{1,2}  h^*_{2,2} ] [ h_{1,1}  h_{1,2} ; h_{2,1}  h_{2,2} ]    (6)

It is clear from the above equation that the noise power may increase because of the factor (H^H H)^{-1}. Using the ZF equalization approach, the receiver can obtain an estimate of the two transmitted symbols x_1 and x_2, i.e.

[ \hat{x}_1 ; \hat{x}_2 ] = (H^H H)^{-1} H^H [ y_1 ; y_2 ]    (7)

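A compact NumPy sketch of the ZF detector of Equations (5) and (7) is given below. It is an illustration only; the 2x2 example values and variable names are invented for the sketch.

```python
import numpy as np

# Example 2x2 channel, transmitted BPSK symbols and noise (values invented for illustration)
H = np.array([[0.8 + 0.3j, 0.2 - 0.1j],
              [0.1 + 0.4j, 0.9 - 0.2j]])
x = np.array([1.0, -1.0])
n = 0.1 * (np.random.randn(2) + 1j * np.random.randn(2))
y = H @ x + n                                  # y = Hx + n, Equation (3)

# ZF equalization matrix, Equation (5): W = (H^H H)^{-1} H^H
W_zf = np.linalg.inv(H.conj().T @ H) @ H.conj().T

x_hat = W_zf @ y                               # Equation (7)
bits_hat = np.where(x_hat.real >= 0, 1, -1)    # BPSK decision
print(x_hat, bits_hat)
```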
2. Minimum Mean Square Error (MMSE) Equalizer:

An MMSE estimator is a method that minimizes the mean square error (MSE), which is a universal measure of estimator quality. The most important characteristic of the MMSE equalizer is that it does not usually eliminate ISI completely, but instead minimizes the total power of the noise and ISI components in the output. If the mean square error between the transmitted symbols and the outputs of the detector, or equivalently the received SNR, is taken as the performance criterion, the MMSE detector [9] is the optimal detector that seeks a balance between cancellation of the interference and reduction of noise enhancement.
The received signal on the first receive antenna is

y_1 = h_{1,1} x_1 + h_{1,2} x_2 + n_1 = [h_{1,1} \; h_{1,2}] [x_1 ; x_2] + n_1    (8)

The received signal on the second antenna is

y_2 = h_{2,1} x_1 + h_{2,2} x_2 + n_2 = [h_{2,1} \; h_{2,2}] [x_1 ; x_2] + n_2    (9)

where y_1 and y_2 are the received symbols on the first and second antenna, h_{i,j} is the channel from the j-th transmit antenna to the i-th receive antenna, x_1 and x_2 are the transmitted symbols, and n_1 and n_2 are the noise terms on the first and second receive antennas.
The above equations can be represented in matrix notation as

[ y_1 ; y_2 ] = [ h_{1,1}  h_{1,2} ; h_{2,1}  h_{2,2} ] [ x_1 ; x_2 ] + [ n_1 ; n_2 ]    (10)

that is, y = Hx + n. To solve for x, we need to find a matrix W which satisfies WH = I. The Minimum Mean Square Error (MMSE) linear detector meeting this constraint is given by

W = (H^H H + N_0 I)^{-1} H^H    (11)

Using MMSE equalization, the receiver can obtain an estimate of the two transmitted symbols x_1 and x_2, i.e.

[ \hat{x}_1 ; \hat{x}_2 ] = (H^H H + N_0 I)^{-1} H^H [ y_1 ; y_2 ]    (12)

3. Parallel Interference Cancellation (PIC):

Here the users' symbols are estimated in a parallel manner. PIC detects all layers simultaneously by subtracting the interference from the other layers, regenerated from the estimates of the ZF or MMSE criterion. PIC detection is used to reduce the complexity and prevent error propagation. The parallel MMSE detector consists of two or more stages: the first stage gives a rough estimate of the substreams and the second stage refines the estimate. The output can also be further iterated to improve the performance.
The first stage is implemented by using either the ZF or the MMSE detection algorithm. The MMSE detector, which minimizes the mean square error between the actually transmitted symbols and the output of the linear detector, is

W = (H^H H + N_0 I)^{-1} H^H    (13)

Using the MMSE detector, the output of the first stage is

d = Dec(W y)    (14)

where W is the equalization matrix, which is assumed to be known, and Dec(.) is the decision operation; in each step a vector symbol is nulled. This can be written as

S = I d    (15)

where I is the identity matrix and d contains the rough estimated symbols of the MMSE stage. The PIC detection algorithm can then be expressed as

R = y - H S    (16)

where S contains the estimated symbols of the MMSE equalizer, regenerated through the appropriate columns of the channel matrix. The refined estimate is

Z = Dec(W R)    (17)

where R is the output of the PIC equalizer, W is the MMSE equalization matrix and Z contains the estimated symbols of the PIC equalizer.
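A sketch of the MMSE first stage and the parallel cancellation step of Equations (13)-(17) is shown below in Python/NumPy. It is a simplified illustration, not the authors' Matlab code; the hard-decision helper, the example channel, the noise variance N0 and the per-layer form of the cancellation are assumptions made for the sketch.

```python
import numpy as np

def dec(v):
    """BPSK hard decision Dec(.)"""
    return np.where(np.real(v) >= 0, 1.0, -1.0)

rng = np.random.default_rng(1)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = np.array([1.0, -1.0])                    # transmitted BPSK symbols
N0 = 0.1                                     # noise variance (assumed)
n = np.sqrt(N0 / 2) * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y = H @ x + n

# First stage: MMSE filter, Eq. (13), and rough decisions d = Dec(W y), Eq. (14)
W = np.linalg.inv(H.conj().T @ H + N0 * np.eye(2)) @ H.conj().T
d = dec(W @ y)
S = np.eye(2) @ d                            # S = I d, Eq. (15)

# Parallel cancellation: for each layer, subtract the regenerated interference of the others
Z = np.empty(2)
for i in range(2):
    S_others = S.copy()
    S_others[i] = 0                          # keep only the interfering layers
    R = y - H @ S_others                     # R = y - H S, Eq. (16), per layer
    Z[i] = dec(W[i] @ R)                     # Z = Dec(W R), Eq. (17)
print(d, Z)
```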
4. Iterative PIC detection:

The estimated signal from the decoder is used to reconstruct the transmitted coded signal, and the PIC detection uses the reconstructed signal to improve the detection performance through an iterative process.
PIC estimates and subtracts all the interference for each user in parallel in order to reduce the time delay. In the iteration process, the output of the PIC detector is fed back as its input. Combining MMSE detection with PIC cancellation directly impacts the global performance of the system and also the associated complexity; the complexity is directly linked to the number of iterations used for detection.
The Iterative PIC detection scheme for the MIMO system is given by:

For i = 1 : n_T
    c = y - \sum_{j=1}^{n_T - 1} H(:, j) Z_j
    E = Dec(W c)    (18)

where E is the estimate of the transmitted symbols of the iterative PIC detector, W is the MMSE equalization matrix, c is the output of the iterative PIC cancellation, and n_T is the number of transmitting antennas.
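The iteration of Equation (18) can be sketched as follows in Python/NumPy. Again this is only an illustrative reading of the algorithm, reusing an MMSE first stage; the number of iterations, the helper names and the per-layer interpretation of the cancellation sum are assumptions.

```python
import numpy as np

def dec(v):
    return np.where(np.real(v) >= 0, 1.0, -1.0)   # BPSK decision Dec(.)

rng = np.random.default_rng(2)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = np.array([-1.0, 1.0])
N0 = 0.1
y = H @ x + np.sqrt(N0 / 2) * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

W = np.linalg.inv(H.conj().T @ H + N0 * np.eye(2)) @ H.conj().T   # MMSE matrix
Z = dec(W @ y)                                   # initial estimates from the MMSE stage
n_T = 2
for _ in range(3):                               # a few PIC iterations, Eq. (18)
    E = np.empty(n_T)
    for i in range(n_T):
        others = [j for j in range(n_T) if j != i]
        c = y - H[:, others] @ Z[others]         # cancel the regenerated interference
        E[i] = dec(W[i] @ c)
    Z = E                                        # feed the detector output back as input
print(Z)
```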

IV. SIMULATION RESULTS

All simulation results are obtained using the four equalizers (ZF, MMSE, PIC and Iterative PIC) in a MIMO-OFDM system. A Rayleigh fading channel is assumed and the BPSK modulation scheme is used. Channel estimation as well as synchronization is assumed to be ideal. We analyze the BER performance of the data transmission in Matlab software.

Fig. 2. BER for BPSK modulation with ZF and MMSE equalizers in 2x2 MIMO-OFDM system.

From the plot it is clear that, for the case of pure equalization, the 2x2 MIMO-OFDM system with the MMSE equalizer outperforms the ZF equalizer. The modulation scheme employed here is BPSK.

Fig. 3. Performance comparison of PIC and Iterative PIC equalizers in 2x2 MIMO-OFDM system.

From the plot it is clear that, for the case of pure equalization, the 2x2 MIMO-OFDM system with the Iterative PIC equalizer outperforms the PIC equalizer. The coded BER of the proposed scheme is produced after iteration; as the number of iterations increases, the BER is significantly improved. From the simulation results, the proposed Iterative PIC scheme is quite effective compared to PIC. The modulation scheme employed here is BPSK.

Fig. 4. Performance comparison of ZF, PIC and Iterative PIC equalizers in 2x2 MIMO-OFDM system.

From the plot it is clear that, for the case of pure equalization, the 2x2 MIMO-OFDM system with the Iterative PIC equalizer outperforms the ZF and PIC equalizers. The coded BER of the proposed scheme is produced after iteration; as the number of iterations increases, the BER is significantly improved. The Zero Forcing equalizer removes all ISI and is ideal only when the channel is noiseless. From the simulation results, the proposed Iterative PIC scheme is quite effective compared to ZF and PIC. The modulation scheme employed here is BPSK.

Fig. 5. Performance comparison of ZF, MMSE, PIC and Iterative PIC equalizers in 2x2 MIMO-OFDM system.

From the plot it is clear that, for the case of pure equalization, the 2x2 MIMO-OFDM system with the Iterative PIC equalizer outperforms the ZF, MMSE, and PIC equalizers. The coded BER of the proposed Iterative PIC scheme is produced after iteration; as the number of iterations increases, the BER is significantly improved. From the simulation results, the proposed scheme is quite effective in all simulation configurations. The Iterative PIC detection scheme is also better in diversity gain, since the interference coming from the other layers is completely cancelled. The modulation scheme employed here is BPSK.

V. CONCLUSION

The combination of MIMO and OFDM is used to improve the spectral efficiency and the wireless link reliability in wireless communication systems. An Iterative PIC scheme for MIMO-OFDM transmission is presented, including the feasibility of using the a priori information of the transmit sequence from the MMSE compensation. The performance of the Iterative PIC detection technique is better than that of ZF, MMSE and PIC using the BPSK modulation scheme in a high-interference environment. The simulation results show that the performance of the proposed scheme is greatly improved compared to the other detection receivers for MIMO-OFDM systems.

VI. FUTURE SCOPE

Any type of modulation technique, such as QPSK or QAM, can be integrated together with the channel encoding part.

REFERENCES
[1] I. E. Telatar, "Capacity of multiple-antenna Gaussian channels," Eur. Trans. Telecommun., vol. 10, no. 6, pp. 585-595, Nov./Dec. 1999.
[2] G. J. Foschini and M. J. Gans, "On limits of wireless communications in a fading environment when using multiple antennas," Wirel. Pers. Commun., vol. 6, no. 3, pp. 311-335, Mar. 1998.
[3] A. Paulraj, R. Nabar, and D. Gore, Introduction to Space-Time Wireless Communications, 1st ed. Cambridge, U.K.: Cambridge Univ. Press, 2003.
[4] Junishi Liu, Zhendong Luo, Yuanan Liu, "MMSE-PIC MUD for CDMA-based MIMO-OFDM systems," IEEE Trans. Commun., vol. 1, Oct. 2005.
[5] Hayashi, H. Sakai, "Parallel Interference Canceller with Adaptive MMSE Equalization for MIMO-OFDM Transmission," France Telecom R&D Tokyo.
[6] Z. Wang, "Iterative Detection and Decoding with PIC Algorithm for MIMO-OFDM Systems," Int. J. Communications, Network and System Sciences, Aug. 2009.
[7] V. Jagan Naveen, K. Murali Krishna, K. Raja Rajeswari, "Performance analysis of equalization techniques for MIMO systems in wireless communication," International Journal of Smart Home, Vol. 4, No. 4, October 2010.
[8] Dhruv Malik, Deepak Batra, "Comparison of various detection algorithms in a MIMO wireless communication receiver," International Journal of Electronics and Computer Science Engineering, Vol. 1, No. 3, pp. 1678-1685.
[9] J. P. Coon and M. A. Beach, "An investigation of MIMO single-carrier frequency-domain MMSE equalization," in Proc. London Communications Symposium, 2002, pp. 237-240.


Computational Performances of OFDM using Different Pruned Radix FFT Algorithms

Alekhya Chundru1, P. Krishna Kanth Varma2
1M.Tech Student, 2Asst. Professor, Department of Electronics and Communications,
SRKR Engineering College, Andhra Pradesh, India
Abstract- The Fast Fourier Transform (FFT) and its inverse (IFFT) are very important algorithms in signal processing, software-defined radio, and the most promising modulation technique, Orthogonal Frequency Division Multiplexing (OFDM). From the standard structure of OFDM we can see that the IFFT/FFT modules play a vital role in any OFDM-based transceiver. When zero-valued inputs/outputs outnumber the nonzero inputs/outputs, the general IFFT/FFT algorithm for OFDM is no longer efficient in terms of execution time. It is possible to reduce the execution time by pruning the FFT. In this paper we have implemented a novel and efficient input zero traced radix DIF FFT pruning algorithm (based on radix-2, radix-4 and radix-8 DIF FFT). An intuitive comparison of the computational complexity of the OFDM system has been made in terms of the complex calculations required, using different radix Fast Fourier Transform techniques with and without pruning. The transform techniques considered are the radix-2 FFT, radix-4 FFT, radix-8 FFT, mixed radix 4/2, mixed radix 8/2 and split radix 2/4. With intuitive mathematical analysis, it has been shown that with the reduced complexity offered by pruning, OFDM performance can be greatly improved in terms of the calculations needed.
Index terms- OFDM (Orthogonal Frequency Division Multiplexing), Fast Fourier Transform (FFT), Pruning Techniques, MATLAB.
I. INTRODUCTION

Orthogonal Frequency Division Multiplexing (OFDM) is a modulation scheme that allows digital data to be efficiently and reliably transmitted over a radio channel, even in multipath environments [1]. In an OFDM system, Discrete Fourier Transforms (DFT) / Fast Fourier Transforms (FFT) are used instead of banks of modulators. The FFT is an efficient tool in the fields of signal processing and linear system analysis; the DFT was not widely utilized until the FFT was proposed. But the inherent contradiction between the FFT's spectrum resolution and its computational time consumption limits its application. To match the order or requirement of a system, the common method is to extend the input data sequence x(n) by padding a number of zeros at the end of it, which is responsible for an increased computational time, even though calculation on the undesired frequencies is unnecessary. OFDM-based cognitive radio [2] has the capability to nullify individual subcarriers to avoid interference with the licensed user, so there can be a large number of zero-valued inputs/outputs compared to the non-zero terms. The conventional radix FFT algorithms are then no longer efficient in terms of complexity, execution time and hardware architecture. Several researchers have proposed different ways to make the FFT faster by pruning the conventional radix FFT algorithms.
In this paper we have proposed an input zero traced radix DIF FFT pruning algorithm for different radix FFT algorithms, suitable for an OFDM-based transceiver. The computational complexity of implementing the radix-2, radix-4, radix-8, mixed radix and split radix Fast Fourier Transform with and without pruning has been calculated in an OFDM system and their performance compared. The results show that the IZTFFTP versions of the radix algorithms are more efficient than those without pruning.
II. OFDM SYSTEM MODEL

OFDM is a kind of FDM (Frequency Division Multiplexing) technique in which a data stream is divided into a number of bit streams that are transmitted through sub-channels [3].
The characteristic of these sub-channels is that they are orthogonal to each other. As the data transmitted through a sub-channel at a particular time are only a portion of the data transmitted through the channel, the bit rate in a sub-channel can be kept much lower. After splitting the data into N parallel data streams, each stream is then mapped to a tone at a
unique frequency, and the tones are combined together using the Inverse Fast Fourier Transform (IFFT) to yield the time-domain waveform to be transmitted [4]. After the IFFT is done, the time-domain signals are converted to serial data and a cyclic extension is added to the signal; then the signal is transmitted. At the receiving side we do the reverse process to get the original data from the received one [4,5].
In case of a deep fade, several symbols in a single-carrier system are damaged seriously, but in parallel transmission each of the N symbols is only slightly affected. So even though the channel is frequency selective, each sub-channel is flat or only slightly frequency selective. This is why OFDM provides good protection against fading [6].
In an OFDM system there are N sub-channels. If N is high, it is very complex to design a system with N modulators and demodulators. Fortunately, it can be implemented alternatively using the DFT/FFT to reduce this high complexity. A detailed system model for the OFDM system is shown in Figure 1 [5,6].

Figure 1: OFDM System Model
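The IFFT/cyclic-extension transmit path and the matching FFT receive path described above can be captured in a few lines of Python. This is a minimal single-antenna sketch under assumed parameters (N = 64 tones, cyclic prefix of 16, BPSK mapping, ideal channel); it is not the paper's MATLAB implementation.

```python
import numpy as np

N, CP = 64, 16                                  # number of sub-carriers, cyclic-prefix length
rng = np.random.default_rng(0)

bits = rng.integers(0, 2, N)
tones = 1.0 - 2.0 * bits                        # BPSK: map each parallel stream to one tone

# Transmitter: combine the tones with an IFFT, then prepend the cyclic extension
time_signal = np.fft.ifft(tones)
tx = np.concatenate([time_signal[-CP:], time_signal])

# Receiver: reverse process - drop the cyclic prefix and apply the FFT
rx = tx                                         # ideal channel for this sketch
tones_hat = np.fft.fft(rx[CP:])
bits_hat = (tones_hat.real < 0).astype(int)

assert np.array_equal(bits, bits_hat)           # perfect recovery over the ideal channel
```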

Tukey provided a lot of ways to reduce the computational


complexity. From that, many fast DFT algorithms have been
developing to reduce the large number of the computational
complexity, and these fast DFT algorithms are named fast
Fourier transform (FFT) algorithms. Decomposing is an
important role in the FFT algorithms. There are two
decomposed types of the FFT algorithm. One is decimation-intime (DIT), and the other is decimation-in-frequency (DIF).
There is no difference in computational complexity between
these two types of FFT algorithm. Different Radix DIF
algorithms we used are
A. Radix-2 DIF FFT Algorithm
Decomposing the output frequency sequence X[k] into the
even numbered points and odd numbered points is the key
component of the Radix-2 DIF FFT algorithm [6]. We can
divide X[k] into 2r and 2r+1, then we can obtain the following
equations
2

2 +1 =
= 0,1,2, . . ,

(1)

(2)

Because the decomposition of the Equation (1) and


Equation (2) are the same, we only use Equation (1) to explain
as shown in Equation (3).

(3)

Finally, by the periodic property of twiddle factors, we can


get the even frequency samples as

Figure1: OFDM System Model


III. FOURIER TRANSFORM ALGORITHM
Discrete Fourier Transform (DFT) computational
complexity is so high that it will cause a long computational
time and large power dissipation in implementation. Cooley and

= 0,1,2, . . ,

+ /2 )

Similarly, the odd frequency samples is

Page 11

(4)

X[2r+1] = \sum_{n=0}^{N/2-1} ( x[n] - x[n + N/2] ) W_N^{n} W_{N/2}^{rn}, \quad r = 0, 1, ..., N/2 - 1    (5)

From Equations (4) and (5) we can pick out the same components, x[n] and x[n + N/2], so we can combine the two equations into one basic butterfly unit, shown in Figure 2. The solid line means that x[n] is added to x[n + N/2], and the dotted line means that x[n + N/2] is subtracted from x[n].

Figure 2: The butterfly signal flow graph of radix-2 DIF FFT

We can use the same approach to further decompose the N-point DFT into even smaller DFT blocks. The radix-2 DIF FFT thus reduces the number of multiplications by about a factor of 2, showing the significance of the radix-2 algorithm for efficient computation. So this algorithm can compute the N-point FFT in N/2 cycles.
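Equations (4) and (5) translate directly into a recursive routine. The following Python sketch implements a plain radix-2 DIF FFT for a power-of-two length N, with each stage split into the (x[n] + x[n+N/2]) and (x[n] - x[n+N/2])·W_N^n halves; it is written for clarity rather than speed, and the function name is ours.

```python
import numpy as np

def radix2_dif_fft(x):
    """Radix-2 decimation-in-frequency FFT (Eqs. (4)-(5)); len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    half = N // 2
    w = np.exp(-2j * np.pi * np.arange(half) / N)          # twiddle factors W_N^n
    even_in = x[:half] + x[half:]                          # feeds X[2r]
    odd_in = (x[:half] - x[half:]) * w                     # feeds X[2r+1]
    X = np.empty(N, dtype=complex)
    X[0::2] = radix2_dif_fft(even_in)                      # even frequency samples
    X[1::2] = radix2_dif_fft(odd_in)                       # odd frequency samples
    return X

x = np.random.randn(16) + 1j * np.random.randn(16)
assert np.allclose(radix2_dif_fft(x), np.fft.fft(x))
```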
B. Radix-4 DIF FFT
When the number of data points N can be expressed as a power of 4 (N = 4^M, where M is the number of stages), we can employ the radix-4 algorithm [9] instead of the radix-2 algorithm for more efficient computation. The radix-4 DIF fast Fourier transform (FFT) expresses the DFT equation as four summations, then divides it into four equations, each of which computes every fourth output sample. The following equations illustrate radix-4 decimation in frequency:

X(k) = \sum_{n=0}^{N-1} x(n) W_N^{nk}    (6)

     = \sum_{n=0}^{N/4-1} x(n) W_N^{nk} + \sum_{n=N/4}^{N/2-1} x(n) W_N^{nk} + \sum_{n=N/2}^{3N/4-1} x(n) W_N^{nk} + \sum_{n=3N/4}^{N-1} x(n) W_N^{nk}    (7)

Equation (7) can thus be expressed as

X(4r+k) = \sum_{n=0}^{N/4-1} [ x(n) + (-j)^k x(n + N/4) + (-1)^k x(n + N/2) + (j)^k x(n + 3N/4) ] W_N^{kn} W_{N/4}^{rn}    (8)

So Equation (8) can be expressed as four N/4-point DFTs. The simplified butterfly signal flow graph of the radix-4 DIF FFT is shown in Figure 3.

Figure 3: The simplified butterfly signal flow graph of radix-4 DIF FFT

This algorithm results in (3/8) N log2 N complex multiplications and (3/2) N log2 N complex additions. So the number of multiplications is reduced by 25%, but the number of additions is increased by 50%.

C. Radix-8 DIF FFT
Compared with the conventional radix-2 and radix-4 FFT algorithms, the advantage of the radix-8 FFT algorithm is that it further decreases the complexity, especially the number of complex multiplications, in implementation. We can split Equation (6) and replace the index k with eight parts, 8r, 8r+1, 8r+2, 8r+3, 8r+4, 8r+5, 8r+6 and 8r+7. Hence we can rewrite Equation (6) and obtain Equation (9):

X(8r+k) = \sum_{n=0}^{N/8-1} [ \sum_{m=0}^{7} x(n + mN/8) W_8^{mk} ] W_N^{kn} W_{N/8}^{rn}, \quad k = 0, 1, ..., 7    (9)

The butterfly graph can be simplified as shown in Figure 4.

Figure 4: The simplified butterfly signal flow graph of radix-8 DIF FFT
D. Mixed Radix DIF FFT
There are two kinds of mixed-radix DIF FFT algorithms. The first kind refers to a situation arising naturally when a radix-q algorithm, where q = 2^m > 2, is applied to an input series consisting of N = 2^k q^s equally spaced points, where 1 <= k < m. In this case, out of necessity, k steps of the radix-2 algorithm are applied either at the beginning or at the end of the transform, while the rest of the transform is carried out by s steps of the radix-q algorithm. For example, if N = 2^(2m+1) = 2 * 4^m, the mixed-radix algorithm [7][8] combines one step of the radix-2 algorithm and m steps of the radix-4 algorithm. The second kind of mixed-radix algorithm in the literature refers to those specialized for a composite N = N_0 N_1 N_2 ... N_k; different algorithms may be used depending on whether the factors satisfy certain restrictions. Only the 2 * 4^m case of the first kind of mixed-radix algorithm is considered here.
The mixed-radix 4/2 butterfly unit is shown in Figure 5.

Figure 5: The butterfly signal flow graph of mixed-radix-4/2 DIF FFT

It uses both the radix-2^2 and the radix-2 algorithms to perform fast FFT computations and can process FFTs whose length is not a power of four. The mixed-radix 4/2 unit calculates four butterfly outputs X(0)~X(3). The butterfly unit has three complex multipliers and eight complex adders.

E. Split-Radix FFT Algorithms
The split-radix FFT algorithm uses two or more parallel radix decompositions in every decomposition stage to fully exploit the advantages of the different fixed-radix FFT algorithms. As a result, a split-radix FFT algorithm generally has lower adder and multiplier counts than the fixed-radix FFT algorithms, while remaining applicable to all power-of-2 FFT lengths.
The odd frequency terms have more computational complexity than the even frequency terms, so we can further decompose the odd terms to reduce the complexity. If we use the radix-2 DIF FFT algorithm for the even frequency terms and the radix-2^2 DIF FFT algorithm for the odd parts, we obtain the split-radix 2/4 algorithm [10,11], as shown in Equations (10)-(12):

X(2k) = \sum_{n=0}^{N/2-1} ( x(n) + x(n + N/2) ) W_{N/2}^{nk}, \quad k = 0, 1, ..., N/2 - 1    (10)

X(4k+1) = \sum_{n=0}^{N/4-1} [ ( x(n) - x(n + N/2) ) - j ( x(n + N/4) - x(n + 3N/4) ) ] W_N^{n} W_{N/4}^{nk}    (11)

X(4k+3) = \sum_{n=0}^{N/4-1} [ ( x(n) - x(n + N/2) ) + j ( x(n + N/4) - x(n + 3N/4) ) ] W_N^{3n} W_{N/4}^{nk}    (12)

Thus the N-point DFT is decomposed into one N/2-point DFT without additional twiddle factors and two N/4-point DFTs with twiddle factors. The N-point DFT is obtained by successive use of these decompositions up to the last stage; thus we obtain a DIF split-radix-2/4 algorithm. The signal flow graph of the basic butterfly cell of the split-radix-2/4 DIF FFT algorithm is shown in Figure 6.

Figure 6: The butterfly signal flow graph of split-radix-2/4 DIF FFT

For the basic butterfly cell we have

X(0) = x(n) + x(n + N/4) + x(n + N/2) + x(n + 3N/4)
X(1) = ( x(n) - x(n + N/2) ) - j ( x(n + N/4) - x(n + 3N/4) )
X(2) = x(n) - x(n + N/4) + x(n + N/2) - x(n + 3N/4)
X(3) = ( x(n) - x(n + N/2) ) + j ( x(n + N/4) - x(n + 3N/4) )    (13)

As a result, the even and odd frequency samples of each basic processing block are not produced in the same stage of the complete signal flow graph. This property causes irregularity of the signal flow graph, because the signal flow graph has an L-shaped topology.

IV. PRUNING TECHNIQUES

To increase the efficiency of the FFT technique, several pruning and other techniques have been proposed by many researchers. In this paper, we have implemented a new pruning technique, IZTFFTP (input zero traced FFT pruning), by simple modifications that include some arithmetic shortcuts to reduce the total execution time.
Zero tracing: in a wide-band communication system a large portion of the frequency channel may be unoccupied by the licensed user, so the number of zero-valued inputs is much greater than the number of non-zero valued inputs in an FFT/IFFT operation at the transceiver. This algorithm then gives the best response in terms of reduced execution time, by reducing the number of complex computations required for twiddle factor calculation. IZTFFTP has a strong searching condition, with an array for storing the input and output values after every iteration of the butterfly calculation. Whenever the input search finds a zero at any input, the corresponding calculation is simply omitted, using a condition based on the radix algorithm in use.
A. Input Zero Traced Radix-2 DIF FFT Pruning
In radix-2, since we couple two inputs to obtain two outputs, we have 4 combinations of those two inputs at the radix-2 butterfly. There exist only three conditions based upon zeros at the input:
No zero at input: no pruning happens in this case; the butterfly calculations are the same as conventional radix-2.
Any one input zero: the output is only the copied version of the available input; the butterfly calculations are reduced compared to conventional radix-2.
All inputs zero: the output is zero, and no butterfly calculation is needed.
B. Input Zero Traced Radix-4 DIF FFT Pruning
In radix-4, since we couple four inputs to obtain four outputs, we have 16 combinations of those four inputs at the radix-4 butterfly. For radix-4 pruning there exist only five conditions based upon zeros at the input:
No zero at the input: no pruning takes place; the butterfly calculations are the same as radix-4.
Any one input zero: the output is only the copied version of the remaining inputs available; the butterfly calculations are reduced compared to radix-4.

Any two inputs zero: the output is only the copied version of the remaining two inputs available; the butterfly calculations are reduced compared to radix-4 pruning with one zero at the input.
Any three inputs zero: the output is only the copied version of the single remaining input available; the butterfly calculations are reduced compared to radix-4 pruning with two zeros at the input.
All inputs zero: the output is zero, and no butterfly calculation is needed.

C. Input Zero Traced Radix-8 DIF FFT Pruning
In radix-8, since we couple eight inputs to obtain eight outputs, we have 256 combinations of those eight inputs at the radix-8 butterfly. For radix-8 pruning the conditions are again based only upon the number of zeros at the input. Similarly to radix-4 pruning, the output is the copied version of the non-zero inputs; the more zeros there are at the input, the fewer mathematical calculations are needed compared to radix-8.

D. Input Zero Traced Mixed Radix DIF FFT Pruning
If we consider mixed radix 4/2, it uses the combination of radix-2 pruning and radix-4 pruning. Similarly, mixed radix 8/2 uses the combination of radix-2 pruning and radix-8 pruning.
E. Input Zero Traced Split Radix DIF FFT Pruning
If we consider split radix 2/4, it uses the combination of radix-2 pruning and radix-4 pruning.
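The zero-tracing rule for the radix-2 butterfly (skip the arithmetic when one or both inputs are zero) can be illustrated with a small Python sketch. This is our own simplified rendering of the idea for a single DIF stage, not the authors' implementation; the zero test, the operation counter and the function name are choices made for the illustration.

```python
import numpy as np

def radix2_dif_stage_pruned(x):
    """One radix-2 DIF stage with input-zero tracing; returns the two half-length
    sub-sequences and the number of butterflies actually computed."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    half = N // 2
    top = np.zeros(half, dtype=complex)      # x[n] + x[n+N/2]
    bot = np.zeros(half, dtype=complex)      # (x[n] - x[n+N/2]) * W_N^n
    ops = 0
    for n in range(half):
        a, b = x[n], x[n + half]
        if a == 0 and b == 0:                # all-zero inputs: outputs stay zero, no work
            continue
        if b == 0:                           # one input zero: outputs are copies of a
            top[n] = a
            bot[n] = a * np.exp(-2j * np.pi * n / N)
        elif a == 0:
            top[n] = b
            bot[n] = -b * np.exp(-2j * np.pi * n / N)
        else:                                # no zero: full conventional butterfly
            top[n] = a + b
            bot[n] = (a - b) * np.exp(-2j * np.pi * n / N)
        ops += 1
    return top, bot, ops

# Sparse input typical of an OFDM block with many nulled sub-carriers
x = np.zeros(16, dtype=complex)
x[[1, 5]] = [1.0, -1.0]
print(radix2_dif_stage_pruned(x)[2], "butterflies computed out of", 8)
```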
V. RESULTS
In order to compare the computational complexities of the different radix DIF FFT algorithms on OFDM, calculations based on the OFDM block sizes have been performed; these are given in Table 1, with the corresponding comparison with pruning in Table 2. The speed improvement factors from the non-pruned to the pruned versions of the different radix algorithms are given in Table 3.

OFDM    Radix-2      Radix-4      Radix-8      Mixed        Mixed        Split
Block                                          Radix-4/2    Radix-8/2    Radix-2/4
Size    cm   cadd    cm   cadd    cm   cadd    cm   cadd    cm   cadd    cm   cadd
2       1    2       -    -       -    -       -    -       -    -       -    -
4       4    8       3    8       -    -       -    -       -    -       0    8
8       12   24      -    -       7    24      10   24      -    -       4    24
16      32   64      24   64      -    -       28   64      22   64      12   64
32      80   160     -    -       -    -       64   160     60   160     36   160
64      192  384     144  384     112  384     160  384     152  384     92   384

Table 1: Comparison of complex multiplications (cm) and complex additions (cadd) of different radix algorithms without pruning
OFDM    Radix-2      Radix-4      Radix-8      Mixed        Mixed        Split
Block                                          Radix-4/2    Radix-8/2    Radix-2/4
Size    cm   cadd    cm   cadd    cm   cadd    cm   cadd    cm   cadd    cm   cadd
2       0    2       -    -       -    -       -    -       -    -       -    -
4       0    8       3    8       -    -       -    -       -    -       0    8
8       12   24      -    -       7    24      8    24      -    -       4    24
16      31   64      24   64      -    -       26   64      22   64      12   64
32      76   160     -    -       -    -       64   160     60   160     36   160
64      179  384     141  384     112  384     157  384     152  384     90   384

Table 2: Comparison of complex multiplications (cm) and complex additions (cadd) of different radix algorithms with pruning

FFT     Radix-2   Radix-4   Radix-8   Mixed       Mixed       Split
Size                                  Radix-4/2   Radix-8/2   Radix-2/4
8       1         -         1         1.25        -           1
16      1.03      1         -         1.07        1           1
32      1.05      -         -         1           1           1
64      1.07      1.02      1         1.01        1           1

Table 3: Speed improvement factor from without to with pruning, in terms of multiplications
The output shows a significant reduction of computational complexity, by reducing the total number of complex operations, i.e. both the multiplications and the additions, compared to the ordinary radix FFT operations. The complex multiplications and additions are compared for the different radix and pruned algorithms. The comparison of complex multiplications for the different radix DIF FFT algorithms is shown in Figure 7, and for the different input zero traced radix DIF FFT pruned algorithms in Figure 8.

Figure 7: Comparison of complex multiplications for


different radix DIF FFT

Figure 8: Comparison of complex multiplications for


different Radix DIF FFT pruned algorithms
VI. CONCLUSION
The computational performance of an OFDM system depends on the FFT, since in an OFDM system the FFT works as a
modulator. If the complexity decreases, then the speed of the OFDM system increases. The results show that the input zero traced radix DIF FFT pruned algorithms are much more efficient than the plain radix DIF FFT algorithms, as they take much less time to compute when the number of zero-valued inputs/outputs is greater than the number of non-zero terms, while maintaining a good trade-off between time and space complexity; the approach is also independent of the input data set.
REFERENCES
[1] B. E. E. P. Lawrey, Adaptive Techniques for Multi-User OFDM, Ph.D. Thesis, James Cook University, Townsville, 2001, pp. 33-34.
[2] J. Mitola III, "Cognitive Radio: An Integrated Agent Architecture for Software Defined Radio," Ph.D. Thesis, Dept. of Teleinformatics, Royal Institute of Technology (KTH), Stockholm, Sweden, May 2000.
[3] S. Chen, Fast Fourier Transform, Lecture Note, Radio Communications Networks and Systems, 2005.
[4] OFDM for Mobile Data Communications, The International Engineering Consortium WEB ProForum Tutorial, 2006. http://www.iec.org.
[5] Andrea Goldsmith, Wireless Communications, Cambridge University Press, 2005, ISBN: 978052170416.
[6] J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, 2002, pp. 448-475.
[7] E. Chu and A. George, Inside the FFT Black Box: Serial & Parallel Fast Fourier Transform Algorithms, CRC Press LLC, 2000.
[8] B. G. Jo and M. H. Sunwoo, "New Continuous-Flow Mixed-Radix (CFMR) FFT Processor Using Novel In-Place Strategy," Electronics Letters, vol. 52, no. 5, May 2005.
[9] Charles Wu, "Implementing the Radix-4 Decimation-in-Frequency (DIF) Fast Fourier Transform (FFT) Algorithm Using a TMS320C80 DSP," Digital Signal Processing Solutions, January 1998.
[10] P. Duhamel and H. Hollmann, "Split-radix FFT Algorithm," Electronics Letters, vol. 20, pp. 14-16, Jan. 1984.
[11] H. V. Sorensen, M. T. Heideman and C. S. Burrus, "On Computing the Split-radix FFT," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 152-156, Feb. 1986.


Chaos CDSK Communication System


Arathi. C
M.Tech Student, Department of ECE,
SRKR Engineering College,
Bhimavaram, India
Abstract: In recent years chaotic communication systems have emerged as an alternative to conventional spread spectrum systems. The chaotic carrier used in this kind of modulation-demodulation scheme has unique properties that make it suited for secure and multi-user communications. The security of a chaos communication system is superior to other digital communication systems, because it has characteristics such as non-periodicity, wide bandwidth, non-predictability, easy implementation and sensitivity to the initial condition. In this paper, a new approach for communication using chaotic signals is presented.
Keywords- Chaos Communication System, CDSK
I. INTRODUCTION

Previous digital communication technology continually used linear systems. However, as this technology reached its basic limits, people started to improve the performance of nonlinear communication systems by applying chaos communication systems to nonlinear systems [1]. Chaos communication systems have characteristics such as non-periodicity, wide bandwidth, non-predictability and easy implementation. A chaos communication system is also determined by the initial conditions of its equation, and it is sensitive to the initial condition, because the chaos signal changes into a different signal when the initial condition is changed [2]. A chaos signal is a randomly and non-linearly generated signal. If the initial conditions of the chaos signal are not known exactly, users of the chaos system cannot predict the value of the chaos signal because of its sensitive dependence on initial conditions [1][3]. Owing to these characteristics, the security of a chaos communication system is superior to other digital communication systems.
Due to security and other advantages, chaos communication systems are being studied continuously. In existing research, in order to overcome the disadvantage that the bit error rate (BER) performance of this system is poor, the chaos communication system has been evaluated for BER performance according to different chaos maps, to find the chaos map that has the best BER performance [4]. In addition, chaos users evaluate the BER performance according to the chaos modulation system [5][6] and propose new chaos maps with the best BER performance.
In this paper, the BER performance of the chaotic CDSK system is evaluated in AWGN and Rayleigh fading channels. In an existing study, we proposed a novel chaos map in order to improve the BER performance [7], and we named this novel chaos map the "Boss map".

II. CHAOTIC SYSTEM

A chaotic dynamical system is an unpredictable, deterministic and uncorrelated system that exhibits noise-like behavior through its sensitive dependence on its initial conditions, and it generates sequences similar to PN sequences. Chaotic dynamics have been successfully employed in various engineering applications such as automatic control, signal processing and watermarking. Since the signals generated by chaotic dynamic systems are noise-like, super-sensitive to initial conditions and have a spread, flat spectrum in the frequency domain, it is advantageous to carry messages with this kind of signal, which is wide-band and offers high communication security. Numerous engineering applications of secure communication with chaos have been developed [8].
III.

CHAOTIC SIGNALS

A chaotic sequence is non-converging and non-periodic


sequence that exhibits noise-like behavior through its sensitive
dependence on its initial condition [1]. A large number of uncorrelated, random-like, yet deterministic and reproducible signals can be generated by changing the initial value. The sequences generated in this way by chaotic systems are called chaotic sequences [8].
Chaotic sequences have been proven easy to generate
and store. Merely a chaotic map and an initial condition are
needed for their generation, which means that there is no need
for storage of long sequences. Moreover, a large number of
different sequences can be generated by simply changing the
initial condition. More importantly, chaotic sequences can be the
basis for very secure communication. The secrecy of the

transmission is important in many applications. The chaotic
sequences help achieve security from unwanted reception in
several ways. First of all, the chaotic sequences make the
transmitted signal look like noise; therefore, it does not attract
the attention of an unfriendly receiver. In addition, an eavesdropper would have a much larger set of possibilities to search through in order to obtain the code sequences [4][8].
Chaotic sequences are created using discrete chaotic maps. Although the sequences so generated are completely deterministic and sensitive to the initial condition, they have characteristics similar to those of random noise. Remarkably, the maps can generate large numbers of these noise-like sequences with low cross-correlations. The noise-like feature of the chaotic spreading code is very desirable in a communication system, as it greatly enhances the LPI (low probability of intercept) performance of the system [4].
These chaotic maps are used to generate unlimited numbers of sequences with different initial parameters to carry different user paths, meaning that different users spread their spectra based on different initial conditions [8].
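As an illustration of this sensitive dependence, the short Python sketch below iterates a simple chaotic map (the logistic map, used here only as an example; the maps studied later in this paper are the Tent and Boss maps) from two initial values that differ by one part in a billion and shows how quickly the two sequences diverge.

import numpy as np

def logistic_map(x0, r=3.99, n=60):
    # iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

a = logistic_map(0.200000000)     # reference initial condition
b = logistic_map(0.200000001)     # initial condition perturbed by 1e-9

# the gap grows from ~1e-9 to order 1 within a few tens of iterations,
# so a receiver without the exact initial condition cannot reproduce the sequence
print(np.abs(a - b)[::10].round(6))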
IV. SYSTEM OVERVIEW

A. Correlation delay shift keying system
The CDSK system has an adder in the transmitter. Earlier modulation systems used a switch in the transmitter instead, and problems of power waste and eavesdropping arise because the reference is transmitted twice. CDSK was proposed to overcome these problems: by replacing the switch with an adder in the transmitter, the transmitted signal is not repeated [9].
Figure 1: Transmitter of CDSK system

The CDSK transmitter consists of an adder in which the delayed chaotic signal, multiplied by the information bit, is added to the chaotic signal produced by the chaos signal generator; the information bit, spread by the spreading factor, multiplies the delayed chaotic signal. The transmitted signal is

s_k = x_k + d * x_{k-L}                              (1)

where equation (1) is the signal sent from the transmitter, x_k is the chaotic signal, d is the information bit (+1 or -1) and L is the delay.

Figure 2: Receiver of CDSK system

The CDSK receiver is a correlator-based receiver used to recover the symbol. The received signal and a delayed copy of the received signal are multiplied, and the products are summed over the spreading factor. The sum is then passed through a threshold, and the information bit is recovered by decoding. Information bits can be recovered only when the delay time and spreading factor match the exact values used in the transmitted signal.
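The following is a minimal, self-contained Python sketch of the CDSK transmit/receive chain described above. It assumes the standard tent map as the chaotic reference generator and uses illustrative values for the spreading factor, delay and noise level; it is not the simulation setup used for the results in Section V.

import numpy as np

rng = np.random.default_rng(0)

def tent(x, alpha=1.9999):
    # standard tent map, used here as the chaotic reference generator (an assumption)
    return alpha * min(x, 1.0 - x)

def cdsk_tx(bits, beta, L, x0=0.1):
    # CDSK transmitter, equation (1): s_k = x_k + d * x_{k-L}, each bit spread over beta chips
    n = len(bits) * beta
    x = np.empty(n + L)
    x[0] = x0
    for k in range(1, n + L):
        x[k] = tent(x[k - 1])
    x -= x.mean()                              # zero-mean reference so the correlator works
    ref, delayed = x[L:], x[:-L]
    spread = np.repeat(bits, beta)             # each information bit multiplies beta delayed chips
    return ref + spread * delayed

def cdsk_rx(r, beta, L, n_bits):
    # correlator receiver: sum r_k * r_{k-L} over each bit period, then threshold
    prod = r[L:] * r[:-L]
    decisions = []
    for i in range(n_bits):
        seg = prod[i * beta : (i + 1) * beta - L]      # stay inside the bit period
        decisions.append(1 if seg.sum() > 0 else -1)
    return np.array(decisions)

bits = rng.choice([-1, 1], size=200)
beta, L = 64, 5                                # illustrative spreading factor and delay
s = cdsk_tx(bits, beta, L)
r = s + 0.1 * rng.standard_normal(s.size)      # mild additive white Gaussian noise
print("bit errors:", int(np.sum(cdsk_rx(r, beta, L, len(bits)) != bits)))

Recovery only succeeds when the receiver uses the same spreading factor and delay as the transmitter, which is exactly the property noted above.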
B. Chaos maps
In this paper, the chaos maps used are the Tent map and the Boss map. In an earlier study we proposed the Boss map, a novel chaos map, for BER performance improvement [8].

Figure 3: Trajectory of tent map

Figure (3) shows the trajectory of the Tent map. The x-axis and y-axis of figure (3) are x_n and x_{n+1}, and the Tent map has a triangular trajectory. In its standard form the Tent map is

x_{n+1} = α * min(x_n, 1 - x_n)                      (2)

Equation (2) of the Tent map uses the previous output value as the current
Page 18

www. ijraset.com
SJ Impact Factor-3.995

Special Issue-1, October 2014


ISSN: 2321-9653

International Journal for Research in Applied Science & Engineering


Technology(IJRASET)
input value; the trajectory in figure (3) is obtained with initial value 0.1 and parameter α = 1.9999.
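A short sketch of equation (2): iterating the Tent map (in the standard form assumed above) from the initial value 0.1 with α = 1.9999 and collecting the (x_n, x_{n+1}) pairs that trace the triangular trajectory of figure (3).

import numpy as np

def tent_step(x, alpha=1.9999):
    # x_{n+1} = alpha * min(x_n, 1 - x_n): triangular return map on [0, 1]
    return alpha * np.minimum(x, 1.0 - x)

x = np.empty(2000)
x[0] = 0.1                                    # initial value used for figure (3)
for n in range(1, x.size):
    x[n] = tent_step(x[n - 1])

pairs = np.column_stack([x[:-1], x[1:]])      # (x_n, x_{n+1}) points of the trajectory
print(pairs[:5].round(4))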

Figure 4: Trajectory of boss map

Figure (4) shows the trajectory of the Boss map, a novel map proposed in order to improve the BER performance. Unlike the Tent map, the x-axis and y-axis of the Boss map are x_n and y_n, and it draws a pyramid-shaped trajectory.

The Boss map, equation (3), is a two-dimensional map in x_n and y_n whose defining expressions involve the constants 0.45, 0.503 and 0.3. The form of equation (3) is similar to that of the Tent map, because the Boss map was obtained by transforming the Tent map, and the trajectory of the Boss map in figure (4) is obtained with initial value 0.1 and parameter α = 2.5.
V. PERFORMANCE EVALUATION

In this paper, the BER performance of the chaotic CDSK system in the AWGN (additive white Gaussian noise) channel and the Rayleigh fading channel is evaluated for the Tent map and the Boss map.
In the AWGN channel, figure (5) shows the BER performance of the chaotic CDSK system, evaluated for both the Tent map and the Boss map. We observe that the BER performance of the Boss map is better than that of the Tent map at each stage, i.e., at different values of SNR the Boss map performs better than the Tent map. We also observe that at low SNR the BER is the same for both maps, but as the SNR increases the BER of the Boss map falls below that of the Tent map.

Figure 5: BER analysis in AWGN channel
In the Rayleigh fading channel, figure (6) shows the BER performance of the chaotic CDSK system, again evaluated for both the Tent map and the Boss map. At low values of SNR the BER performance is the same for both maps, but as the SNR increases the BER performance of the Boss map becomes better than that of the Tent map.

Figure 6: BER performance in Rayleigh fading channel
VI. CONCLUSION

In this paper, a new type of communication system using chaos is presented. Chaotic sequences are non-periodic sequences that are sensitive to their initial conditions and are generated using chaos maps. The CDSK system using

chaos has many advantages over other systems, but the BER performance of chaos communication systems is poor. In order to improve this, we proposed a new chaos map that has better BER performance than the existing map. The chaotic CDSK system was evaluated in AWGN and Rayleigh fading channels, and we observed that the BER performance of the chaos system with the Boss map is better than with the Tent map, which improves the BER of the CDSK communication system.
VII. FUTURE SCOPE

Chaos communication systems increase the number of transmitted symbols by spreading and transmitting information bits according to the characteristics of the chaos map, so research that improves the data transmission speed is necessary for chaos communication systems. If multiple antennas are applied to a chaos communication system, the data capacity grows in proportion to the number of antennas, so applying multiple-input multiple-output (MIMO) techniques to the chaos communication system is a promising direction.

REFERENCES
[1] M. Sushchik, L. S. Tsimring and A. R. Volkovskii, "Performance analysis of correlation-based communication schemes utilizing chaos," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 47, no. 12, pp. 1684-1691, Dec. 2000.
[2] Q. Ding and J. N. Wang, "Design of frequency-modulated correlation delay shift keying chaotic communication system," IET Communications, vol. 5, no. 7, pp. 901-905, May 2011.
[3] Chen Yi Ping, Shi Ying and Zhang Dianlun, "Performance of differential chaos-shift-keying digital communication systems over several common channels," 2nd International Conference on Future Computer and Communication (ICFCC), vol. 2, pp. 755-759, May 2010.
[4] Suwa Kim, Junyeong Bok and Heung-Gyoon Ryu, "Performance evaluation of DCSK system with chaotic maps," International Conference on Information Networking (ICOIN), pp. 556-559, Jan. 2013.
[5] S. Arai and Y. Nishio, "Noncoherent correlation-based communication systems choosing different chaotic maps," Proc. IEEE Int. Symp. on Circuits and Systems, New Orleans, USA, pp. 1433-1436, June 2007.
[6] Jun-Hyun Lee and Heung-Gyoon Ryu, "New Chaos Map for CDSK Based Chaotic Communication System," The 28th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC 2013), Yeosu, Korea, pp. 775-778, July 2013.
[7] M. A. Ben Farah, A. Kachouri and M. Samet, "Design of secure digital communication systems using DCSK chaotic modulation," International Conference on Design and Test of Integrated Systems in Nanoscale Technology (DTIS 2006), pp. 200-204, Sept. 2006.
[8] Ned J. Corron and Daniel W. Hahs, "A new approach to communication using chaotic signals," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 44, no. 5, May 1997.
[9] Wai M. Tam, Francis C. M. Lau and Chi K. Tse, "Generalized Correlation-Delay-Shift-Keying Scheme for Non-coherent Chaos-Based Communication Systems," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 53, no. 3, March 2006.


Design and Analysis of Water Hammer Effect


in a Network of Pipelines
V. Sai Pavan Rajesh
Department of Control Systems, St. Mary's Group of Institutions, Jawaharlal Nehru Technological University, Hyderabad,
Main Road, Kukatpally Housing Board Colony, Kukatpally, Hyderabad, Telangana, India.
Abstract - A system can be destroyed by a transient if it is not provided with adequate protection devices. In general, a transient takes place when the parameters of the normal flow are distorted with respect to time. Rapid closing of a valve in a pipe network results in a hydraulic transient known as water hammer, which occurs due to a sudden change in the pressure and velocity of the flow with respect to time. Due to this impulsive action, pressure surges are induced in the system and travel along the pipe network with rapid fluid acceleration, leading to dramatic effects such as pipeline failure and damage to the system. Considering the importance of hydraulic transient analysis, we design a system capable of verifying a pipe network containing fluid flow. This paper demonstrates the design of different pipe structures in a pipeline network and the analysis of various parameters, such as excess pressure distribution, velocity variations and water hammer amplitude with respect to time, using COMSOL Multiphysics v4.3. The magnitude of the water transient in the pipeline network at different pressure points is discussed in detail.
Keywords - COMSOL, Pressure distribution, Velocity variation, Water Hammer.
I. INTRODUCTION

The key to the conservation of water is good water measurement practice. As fluid runs in a water distribution system, system flow control depends on the opening or closing of valves and the starting and stopping of pumps. When these operations are performed very quickly, they convert the kinetic energy carried by the fluid into strain energy in the pipe walls, causing hydraulic transient phenomena [1] to arise in the water distribution system: a pulse wave of abnormal pressure is generated which travels through the pipe network. Pressure surges or fluid transients formed in pipelines in this way are referred to as water hammer. This oscillatory form of unsteady flow, generated by sudden changes, results in system damage or failure if the transients are not minimized. The steady-state flow conditions are altered by this effect [2], disturbing the initial flow conditions of the system, and the system then tends to settle into a new steady-state condition. The intensity of the water hammer effect depends on the rate of change of the velocity or momentum. Conventional water hammer analyses provide information, under operational conditions, on two unknown parameters: the pressure and the velocity within the pipe system. Effects such as unsteady friction, acoustic radiation to the surroundings or fluid-structure interaction are not taken into account in the standard theory of water hammer, but were considered in a more general approach [3]. Mechanisms acting all along the pipe section, such as axial stresses in the pipe, and at specific points in the pipe system, such as unrestrained valves, fall under the fluid-structure-interaction extension of the conventional water hammer method.

Figure 1. Pipe connected to control valve at the end with water inlet from reservoir.

In the past three decades, since a large number of water hammer events occurred in light-water-reactor power plants [4], a number of comprehensive studies on the phenomena associated with water hammer events have been performed. In general, water hammer can occur in any thermal-hydraulic system and is extremely dangerous for such a system since, if the induced pressure exceeds the pressure range of a pipe given by the manufacturer, it can lead to failure of the pipeline integrity. Water hammers occurring at power plants are due to rapid valve operation [5], void-induced operation, and condensation-induced water hammer [6]. In existing nuclear power plants, water hammer can occur in the case of an inflow of sub-cooled water into pipes or other parts of the equipment which are filled with steam or a steam-water mixture [7]. The water hammer theory has also been proposed to account for a number of effects in biofluids under mechanical stress, as in the case of the origin of Korotkoff sounds during blood pressure measurement [8, 9], or the development of a fluid-filled cavity within the spinal cord [10]. In the voice production system, the human vocal folds act as a valve [11] which induces pressure waves at a specific point in the airways (the glottis) through successive compressing and decompressing actions (the glottis opens and closes repeatedly); Ishizaka was probably the first to advocate, in 1976, the application of the water hammer theory when discussing the input acoustic impedance looking into the trachea [12]. More recently, the water hammer theory was invoked in the context of tracheal wall motion detection [13]. In general, water utilities, industrial pipeline systems, hydropower plants, chemical industries, and the food and pharmaceutical industries all face this water transient problem.
The present work reports the design of different pipe channels and the analysis of the pressure distribution and velocity variation produced along the pipe flow network when subjected to one pressure measuring point. Various parameters such as the inlet input pressure, wall thickness and measurement point are changed for the analysis.

II. USE OF COMSOL MULTIPHYSICS

The software package selected to model and simulate the pipe flow module was COMSOL Multiphysics Version 4.3, a powerful interactive environment for modelling and simulation. COMSOL Multiphysics was selected because there was previous experience and expertise regarding its use as well as confidence in its capabilities. This finite element method based commercial software package is used to produce a model and study the flow of liquid in different channels. The software provides the flexibility to select the required module from the model library, which consists of COMSOL Multiphysics, the MEMS module, the microfluidics module, the particle tracing module etc., along with live links to MATLAB. Using tools like parameterized geometry, interactive meshing and custom solver sequences, you can quickly adapt to the ebbs and flows of your requirements. At present this software can solve almost any multiphysics problem and it recreates real-world multiphysics systems without varying their material properties. The operation of this software is easy to understand and easy to implement, in various respects, for designers, in the form of a finite element analysis system.

Figure 2. Multiphysics modelling and simulation software - COMSOL

As the valve in this model is assumed to close instantaneously, the generated water hammer pulse has a step-function-like shape. Solving this problem correctly requires well-posed numerics. The length of the pipe is meshed with N elements, giving a mesh size dx = L/N. For the transient solver to be well behaved, changes within a time step dt must take place on lengths smaller than the mesh size. This gives the CFL condition

CFL = 0.2 = c * dt / dx                              (1)

meaning that changes during the time dt move at most 20 % of the mesh length dx; increasing the mesh resolution therefore also requires decreasing the time step. This version of the software also helps in designing the required geometry free-hand, and the model can be analysed from multiple angles, as it provides rotation flexibility while working with it.

In water hammer, a pressure wave is a disturbance that propagates energy and momentum from one point to another through a medium without significant displacement of the particles of that medium. A transient pressure wave subjects system piping and other facilities to oscillation between high and low pressures. These cyclic loads and pressures can have a number of adverse effects on the hydraulic system. Hydraulic transients can cause hydraulic equipment in a pipe network to fail if the transient pressures are excessively high. If the pressures exceed the pressure ratings of the pipeline, failure through pipe or joint rupture, or bend or elbow movement, may occur. Conversely, excessively low (negative) pressures can result in buckling, implosion and leakage at pipe joints during sub-atmospheric phases. Low-pressure transients are normally experienced on the downstream side of a closing valve, especially when the pipeline is terminated by the valve.

III. THEORETICAL BACKGROUND

Water hammer theory dates to the 19th century, when several authors contributed to the analysis of this effect. Among them, Joukowsky [14] conducted a systematic study of the water distribution system in Moscow and derived a formula that bears his name, relating pressure changes Δp to velocity changes Δv:

ΔP = ρ c ΔU                                          (2)

where ρ is the fluid mass density and c is the speed of sound. This relation is commonly known as the Joukowsky equation, but it is sometimes referred to as either the Joukowsky-Frizell or the Allievi equation. For a compressible fluid in an elastic tube, c depends on the bulk elastic modulus of the fluid K, on the elastic modulus of the pipe E, on the inner radius of the pipe D, and on its wall thickness. The water hammer equations are some version of the compressible fluid flow equations, and the choice of version is problem-dependent: basic water hammer neglects friction and damping mechanisms, classic water hammer takes into account fluid-wall friction, and extended water hammer allows for pipe motion and dynamic fluid-structure interaction [15, 16].

When the valve is closed, energy losses are introduced in the system and are normally prescribed by means of an empirical law in terms of a loss coefficient. This coefficient, ordinarily determined under steady flow conditions, is known as the valve discharge coefficient. It enables the flow response to be quantified in terms of the valve action through a relationship between flow rate and pressure for each opening position of the valve; the discharge coefficient provides a critical piece of otherwise missing information for the water hammer analysis. Because the relationship between pressure and flow rate is often of quadratic type, the empirical coefficient is defined in terms of the squared flow rate.
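As a quick numerical illustration of equations (1) and (2), the Python sketch below computes the CFL-limited time step for a given mesh and the Joukowsky pressure rise for a sudden velocity change. The wave speed, density and velocity change used here are illustrative stand-ins, not results from the model.

def cfl_time_step(c, pipe_length, n_elements, cfl=0.2):
    # equation (1): CFL = c*dt/dx  ->  dt = CFL*dx/c, with dx = L/N
    dx = pipe_length / n_elements
    return cfl * dx / c

def joukowsky_surge(rho, c, delta_u):
    # equation (2): pressure rise dP = rho * c * dU for a sudden velocity change
    return rho * c * delta_u

# illustrative values: 20 m pipe, 400 mesh elements, wave speed ~1000 m/s,
# water density 1000 kg/m^3 and a 1 m/s velocity change at the valve
dt = cfl_time_step(c=1000.0, pipe_length=20.0, n_elements=400)
dp = joukowsky_surge(rho=1000.0, c=1000.0, delta_u=1.0)
print(f"dt = {dt*1e6:.1f} microseconds, dP = {dp/1e5:.1f} bar")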

A water distribution system comprising a short length of pipe (i.e., < 2,000 ft {600 m}) will usually be less vulnerable to problems with hydraulic transients. This is because wave reflections, e.g., at tanks, reservoirs and junctions, tend to limit further changes in pressure and counteract the initial transient effects. An important consideration is dead ends, which may be caused by the closure of check valves that lock pressure waves into the system in a cumulative fashion. Wave reflections will produce both positive and negative pressures; as a result, the effect of dead ends must be carefully evaluated in transient analysis. These pressure-surge analyses provide the most effective and viable means of identifying weak spots, predicting potentially negative effects of hydraulic transients under a number of worst-case scenarios, and evaluating how they may be avoided and controlled. Basic pressure-surge modelling is based on the numerical conservation of mass and linear momentum equations; for this, an Arbitrary Lagrangian-Eulerian (ALE) [17] numerical solution helps in providing an accurate solution. On the other hand, poorly calibrated hydraulic network models result in poor prediction of pressure surges, leading to more hydraulic transients. Especially in more complex systems, the cumulative effect of several types of devices which influence water hammer may have an adverse effect. However, even in simple cases, for example when pumping water into a reservoir, operations very unfavourable with regard to water hammer may take place. For example, after the failure of the pump, the operator may start it again; much depends on the instant of this restart. If it is done at a time when the entire water hammer effect has died down, it is an operation for which the system must have been designed.

IV. DESIGN PROCEDURE

The design and analysis of the hydraulic transient in a pipe flow includes creating the geometry, defining the parameters for the required geometry, and providing the mesh and inputs. The 3D model is constructed in the drawing mode of COMSOL Multiphysics. A pipe of length L = 20 m is constructed, assuming that one end is connected to a reservoir and a valve is placed at the other end. The pipe is designed with an inner radius of 398.5 mm, a wall thickness of about 8 mm and a Young's modulus of 210 GPa. In order to verify the pressure distribution, a pressure-sensor measurement point was arranged at a distance of z0 = 11.15 m from the reservoir, and flow was sent into the pipe with an initial flow rate of Q0 = 0.5 m3/s.

Figure 3. Single pipe line with closing valve at the output

Figure 4. Three pipe line intersection in a network.

After designing the geometry for a flow channel in a pipe, materials are selected from the material browser: water (liquid) and structural steel are chosen from the built-in section, with the corresponding edges selected for the water flow and the steel pipe model sections. Pipe properties are then defined one by one, first selecting the round shape from the shape list of the pipe shape. Initially the reservoir acts as a constant pressure source producing p0, equal to 1 atm. As the fluid is allowed to flow from the reservoir tank into the pipe model, the fluid enters the left boundary of the pipe and leaves the right boundary of the pipe with the valve in the open

condition. As the valve is open, water flows at a steady flow rate; at time t = 0 seconds the valve on the right-hand side is closed instantaneously, creating a disturbance in the normal flow and leading to a change in discharge at the valve. As a result of the compressibility of the water and the elastic behaviour of the pipe, a sharp pressure pulse is generated that travels upstream from the valve. The water hammer wave speed c is given by the expression

1/(ρ c^2) = 1/(ρ c_s^2) + β_A                        (3)

where c_s is the isentropic speed of sound in the bulk fluid (1481 m/s), while the second term represents the component due to the pipe flexibility; ρ is the water density and β_A the pipe cross-sectional compressibility. This results in an effective wave speed of 1037 m/s. The instantaneous closure of the valve results in a water hammer pulse of amplitude ΔP given by Joukowsky's fundamental equation [18]:

ΔP = ρ c u_0                                         (4)

where u_0 is the average fluid velocity before the valve was closed. An exact solution can only be obtained based on the verification of the pipe system and valve point [19]. The study was extended to different pipe line intersection models based on the reference model [20]. To design a three-axis pipe line intersection, three polygons are chosen from the geometry "more primitives" menu, where the first polygon's x coordinates correspond to (1, z0, L) while y and z remain (0, 0, 0); for the second polygon the y coordinates are (1, z0, L) and the remaining are left at (0, 0, 0); and in a similar way the z coordinates of the third polygon are (1, z0, L). The resulting geometry is depicted in figure 4.
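Using the pipe parameters quoted above (inner radius 398.5 mm, initial flow rate Q0 = 0.5 m3/s) and the effective wave speed of 1037 m/s, a short Python sketch of equation (4) gives the expected order of magnitude of the water hammer amplitude. The water density assumed below (1000 kg/m3) is not stated explicitly in the text.

import math

rho = 1000.0                      # assumed water density, kg/m^3
c_eff = 1037.0                    # effective wave speed quoted in the text, m/s
r = 0.3985                        # pipe inner radius, m
Q0 = 0.5                          # initial flow rate, m^3/s

area = math.pi * r ** 2           # pipe cross-sectional area
u0 = Q0 / area                    # mean velocity before valve closure, ~1 m/s
dP = rho * c_eff * u0             # equation (4): Joukowsky amplitude
print(f"u0 = {u0:.2f} m/s, water hammer amplitude dP = {dP/1e6:.2f} MPa")

The resulting estimate of roughly 1 MPa is comparable in order of magnitude with the excess pressures listed later in Table 1.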

V. RESULTS AND DISCUSSION

Using this multiphysics modelling and simulation software, three different kinds of studies were carried out. Study 1 is the time-dependent study based on the fluid flow and its interaction with the pipe, which yields the pressure distribution along the pipe line together with the velocity variation. Study 2 is based not only on the excess-pressure measurement point but also on the valve point. Study 3 corresponds to the pressure profile in the pipe line at different times. As it is a time-dependent study, the time range is edited in the study settings section from range (0, 0.005, 0.75) to range (0, 1e-3, 0.24). From Study 1, right-click and select "show default solver", expand the solver window, click the time-stepping section, mark the maximum-step check box and type dt in the associated edit field. Right-clicking Study 1 then computes the results, giving the pressure along the pipe line at t = 0.24 s together with the velocity variation. The results are analysed by transforming the pipe line into different geometries. From the analysis of the results we conclude that when more pipe lines are interconnected, water transients can take place more easily and cause damage to the system.

VI. MESHING

Meshing provides the required outputs anywhere on the proposed structure for the given input. Numerical ripples are visible on the point graphs of the excess pressure history at the pressure sensor and of the water hammer amplitude. As the closure of the valve is instantaneous, the pressure profile has a step-like nature, which is difficult to resolve numerically. The ripples can be reduced by increasing the mesh resolution parameter N, so the number of mesh points N selected in this model is about 400. In this model, meshing is done for the edges of the pipe, where the maximum element size is defined as L/N m (L = 20 m and N = 400) and the minimum element size is 1 mm.

VII. SIMULATION

In this study, the simulations are performed using the fundamental equation of water hammer theory that relates pressure changes Δp to velocity changes Δv according to equation (2). The simulation comprises the application of different initial input pressures at the inlet for different

pipe network sections. Pressure measurement points are changed along the pipe length L and the solution is computed for the time interval from T = 0 to T = 0.24 seconds. Both the velocity and the pressure are measured over this interval. Other parameters, such as the water hammer amplitude and the maximum and minimum pressure for the two different geometries, along with the velocity variations, are listed in Table 1.

Table 1. Pressure distribution and velocity variation values for single pipe line & three pipe lines geometry.

Parameters                                        Single pipe                   Three pipes
Min. pressure at T = 0.07 s                       -9.579*10^5 Pa                -1.577*10^5 Pa
Max. pressure at T = 0.07 s                       4.047*10^5 Pa                 1.1617*10^6 Pa
Velocity variation range at T = 0.23 s            4.257*10^-4 to 1.1632 m/s     0.2672 to 1.2742 m/s
Excess pressure distribution along the pipe
for t = 0.24 s                                    1.35*10^6 Pa                  -0.85*10^6 Pa
VII. CONCLUSION

A flow channel was designed in a pipe network and its reaction to the closing of the valve was analysed using COMSOL Multiphysics Version 4.3. Simulation of the proposed model is performed by changing the initial flow rates and the pipe networks, to explore the variations in fluid properties such as pressure distribution and velocity variation with respect to time. When the inlet mean velocity is increased, the magnitude of the water hammer amplitude remains the same, but the chance of a water transient is greater, which results in easier breakdown of the pipe section. When multiple pipe lines were connected, the maximum pressure distribution and velocity variation were much smaller, even though the water hammer amplitude remained the same across the different cases. A positive pressure difference exists when multiple pipes are connected, whereas a negative pressure difference exists for the single pipe line geometry, which indicates that a network of pipe lines results in a smaller water transient effect. This study can be extended by observing the changes in the flow when the pipe line is inclined and by using T- and L-shaped piping geometries. A further extension was made to a micro-piping system by changing the dimensions of the geometry from meters to micrometers. This study helps in building micro piping network systems that are used in biomedical applications and in the automobile industry.

VIII. ACKNOWLEDGEMENTS

The authors would like to thank NPMASS for the establishment of the National MEMS Design Centre (NMDC) at Lakireddy Bali Reddy Engineering College. The authors would also like to thank the Director and Management of the college for providing the necessary facilities to carry out this work.

REFERENCES
[1] E. A. Avallone and T. Baumeister III, "Marks' Standard Handbook for Engineers," McGraw-Hill, 9th Edition, 1987, pp. 3-71.
[2] F. J. Moody, "Introduction to Unsteady Thermofluid Mechanics," John Wiley and Sons, 1990, Chapter 9, p. 405.
[3] A. S. Tijsseling, "Fluid-structure interaction in liquid-filled pipe systems: A review," Journal of Fluids and Structures, 1996, (10), pp. 109-146.
[4] Algirdas Kaliatka, Eugenijus Uspuras and Mindaugas Vaisnoras, "Analysis of Water Hammer Phenomena in RBMK-1500 Reactor Main Circulation Circuit," International Conference on Nuclear Energy for New Europe 2006, Portoroz, Slovenia, September 18-21, 2006.
[5] M. Giot, H. M. Prasser, A. Dudlik, G. Ezsol, M. Habip, H. Lemonnier, I. Tiselj, F. Castrillo, W. Van Hove, R. Perezagua and S. Potapov, "Two-phase flow water hammer transients and induced loads on materials and structures of nuclear power plants (WAHALoads)," FISA-2001 EU Research in Reactor Safety, Luxembourg, November 2001, pp. 12-15.
[6] P. Griffith, "Screening Reactor Steam/Water Piping Systems for Water Hammer," Report prepared for the U.S. Nuclear Regulatory Commission, NUREG/CR-6519, 1997.
[7] M. Giot and J. M. Seynhaeve, "Two-Phase Flow Water Hammer Transients: towards the WAHA code," Proc. Int. Conf. Nuclear Energy for New Europe '03, Portoroz, Slovenia, Sept. 8-11, 2003, Paper 202, 8 p.
[8] D. Chungcharoen, "Genesis of Korotkoff sounds," Am. J. Physiol., 1964, 207, pp. 190-194.
[9] J. Allen, T. Gehrke, J. O'Sullivan, S. T. King and A. Murray, "Characterization of the Korotkoff sounds using joint time-frequency analysis," Physiol. Meas., 2004, (25), pp. 107-117.
[10] H. S. Chang and H. Nakagawa, "Hypothesis on the pathophysiology of syringomyelia based on simulation of cerebrospinal fluid dynamics," Journal of Neurology, Neurosurgery and Psychiatry, 2003, (74), pp. 344-347.
[11] N. H. Fletcher, "Autonomous vibration of simple pressure-controlled valves in gas flows," J. Acoust. Soc. Am., 1993, 93 (4), pp. 2172-2180.
[12] K. Ishizaka, M. Matsudaira and T. Kaneko, "Input acoustic-impedance measurement of the subglottal system," J. Acoust. Soc. Am., 1976, 60 (1), pp. 190-197.
[13] G. C. Burnett, "Method and apparatus for voiced speech excitation function determination and non-acoustic assisted feature extraction," U.S. Patent 20020099541 A1, 2002.
[14] N. Joukowsky, "Uber den hydraulischen Stoss in Wasserleitungsrohren," Memoires de l'Academie Imperiale des Sciences de St. Petersbourg, Series 8, 9 (1900).
[15] F. D'Souza and R. Oldenburger, "Dynamic response of fluid lines," ASME Journal of Basic Engineering, 1964, (86), pp. 589-598.
[16] D. J. Wood, "A study of the response of coupled liquid flow-structural systems subjected to periodic disturbances," ASME Journal of Basic Engineering, 1968, (90), pp. 532-540.
[17] J. Donea, A. Huerta, J.-Ph. Ponthot and A. Rodriguez-Ferran, "Arbitrary Lagrangian-Eulerian Methods," Universite de Liege, Liege, Belgium.
[18] M. S. Ghidaoui, M. Zhao, D. A. McInnis and D. H. Axworthy, "A Review of Water Hammer Theory and Practice," Applied Mechanics Reviews, ASME, 2005.
[19] A. S. Tijsseling, "Exact Solution of Linear Hyperbolic Four-Equation Systems in Axial Liquid-Pipe Vibration," Journal of Fluids and Structures, vol. 18, pp. 179-196, 2003.
[20] Model library path: pipe_flow_module/verification_models/water_hammer_verification, http://www.comsol.co.in/showroom/gallery/12683/

FIGURES AND TABLES
CAPTION FOR FIGURES
FIGURE 1. Pipe connected to control valve at the end with water inlet from reservoir.
FIGURE 2. Multiphysics modelling and simulation software-COMSOL
FIGURE 3. Single pipe line with closing valve at the output.
FIGURE 4. Three pipe line intersection in a network.
FIGURE 5. Pressure distribution at T=0.07s for single pipe line geometry.
FIGURE 6. Pressure distribution at T=0.07s for three pipe line intersection geometry.
FIGURE 7. Velocity variation at T=0.23s for single pipe.
FIGURE 8. Velocity variation at T=0.23s for three pipe line intersection geometry.
FIGURE 9. Excess pressure history measured at the pressure sensor for single pipe line geometry.
FIGURE 10. Excess pressure history measured at the pressure sensor for three pipe line geometry.
FIGURE 11. Excess pressure at the valve (green line) & Predicted water hammer amplitude (Blue line) for single pipe line.
FIGURE 12. Excess pressure at the valve (green line) & Predicted water hammer amplitude (Blue line) for three pipe lines.
FIGURE 13. Excess pressure distribution along the pipe for t= 30 s for single pipeline geometry.
FIGURE 14. Excess pressure distribution along the pipe for t= 30 s for three pipeline geometry.
CAPTION FOR TABLE
Table 1. Pressure distribution and velocity variation values for single pipe line & three pipe lines geometry.


Figure 5: Pressure distribution at T=0.07s for single pipe line geometry.

Figure 6: Pressure distribution at T=0.07s for three pipe line intersection geometry.

Figure 7: Velocity variation at T=0.23s for single pipe line geometry.

Figure 8: Velocity variation at T=0.23s for three pipe line intersection geometry.


Figure 9. Excess pressure history measured at the pressure sensor for single pipe line geometry.

Figure 10. Excess pressure history measured at the pressure sensor for three pipe line geometry.

Figure 11. Excess pressure at the valve (green line) & predicted water hammer amplitude (blue line) for single pipe.

Figure 12. Excess pressure at the valve (green line) & predicted water hammer amplitude (blue line) for three pipe lines.


Figure 13. Excess pressure distribution along the pipe for t= 30 s for single pipeline geometry.

Figure 14. Excess pressure distribution along the pipe for t= 30 s for three pipe line intersection geometry.


Deblurring of Noisy or Blurred Image by Using Kernel Estimation Algorithm

Joseph Anandaraj.S1, R.Deepa2, C.Helen Prema3
2 PG Student - M.Tech. VLSI Design & Embedded System Engineering
1,3 Assistant Professor
Dept. of Electronics and Communication Engineering
Dr. M.G.R. Educational and Research Institute University
Maduravoyal, Chennai, Tamil Nadu, India.

Abstract: Photos taken under dim lighting conditions with a hand-held camera are often blurry or noisy. If the camera is set to a long exposure time, the image is blurred due to camera shake, while the image will be dark and noisy if it is taken with a short exposure time and a high camera gain. By combining the information extracted from both a blurred and a noisy image, this paper shows how to produce a high-quality image that cannot be obtained by simply denoising the noisy image or deblurring the blurred image alone. The aim is image deblurring with the help of the noisy image. First, both images are used to estimate an accurate blur kernel, which is otherwise difficult to obtain from a single blurred image. Second, using both images, a residual deconvolution is proposed to reduce the ringing artifacts inherent to image deconvolution. Third, the remaining ringing artifacts in smooth image regions are further suppressed by a gain-controlled deconvolution process. We demonstrate the effectiveness of our approach using a number of indoor and outdoor images taken by hand-held cameras in low-light environments, together with some applications.
Keywords: Matlab, deconvolution, kernel estimation algorithm, deblurring and denoising process, iterative method.
I. INTRODUCTION
Capturing satisfactory images under low-light conditions using a hand-held camera can be a frustrating experience: often the captured images are blurry or noisy. The brightness of the image can be increased in three ways, through the shutter, aperture and ISO settings. First, the shutter speed can be reduced; but if it falls below the safe shutter speed (roughly the reciprocal of the focal length of the lens, in seconds), camera shake will result in a blurred image. Second, a large aperture can be used, but a large aperture reduces the depth of field, and the range of apertures in a consumer-level camera is very limited. Third, a high ISO can be used; however, a high ISO image is very noisy because the noise is amplified as the camera's gain increases. For taking a sharp image in a dim lighting environment, the best settings are the safe shutter speed, the largest aperture and the highest ISO. Even with this combination, the captured image may still be dark and very noisy. A flash can be used to avoid this, but unfortunately the flash introduces artifacts such as shadows and specularities, and it is not effective for distant objects.
In this paper, a novel approach produces a high-quality image by combining two degraded images. One is a blurred image taken with a slow shutter speed and low ISO

settings. With enough light, it has the correct color, intensity and a high signal-to-noise ratio (SNR), though it is blurry due to camera shake. The other is an underexposed and noisy image taken with a high ISO setting and a fast shutter speed. It is sharp but very noisy due to the high camera gain and insufficient exposure, and its colors can also be partially lost due to low contrast.
Recovering a high-quality image from a very noisy image is not an easy task, because fine image details and textures are concealed in the noise, and the denoising process cannot separate signal from noise completely. Meanwhile, deblurring from a single blurred image is a challenging blind deconvolution problem: both blur-kernel estimation and image deconvolution are highly under-constrained. Moreover, unpleasant artifacts (e.g., ringing) from image deconvolution occur in the reconstructed image even when a perfect kernel is used.
We treat this difficult image reconstruction problem as an image deblurring problem that uses a pair of blurred and noisy images. Like most image deblurring approaches, we assume that the image blur can be described well by a single blur kernel caused by camera shake and that the scene is static. The problem then reduces to two non-blind problems: non-blind kernel estimation and non-blind image deconvolution. In kernel estimation, a very

accurate initial kernel can be recovered from the blurred image by exploiting the large-scale, sharp image structures in the noisy image. The proposed kernel estimation algorithm is able to handle larger kernels than those recovered from a single blurred image. A residual deconvolution approach then reduces the ringing artifacts from the image deconvolution, and a gain-controlled deconvolution further suppresses the ringing artifacts in smooth image regions. All three steps - kernel estimation, residual deconvolution and gain-controlled deconvolution - use the information in both images. The final reconstructed image is sharper than the blurred image and clearer than the noisy image.
II. PREVIOUS WORKS
2.1 Deblurring of a single image:
Image deblurring can be categorized into two types: blind deconvolution and non-blind deconvolution. The former is more difficult since the blur kernel is unknown. As the literature on image deblurring demonstrates, the real kernel caused by camera shake is complex, beyond the simple parametric forms (e.g., single-direction motion or a Gaussian) assumed in earlier approaches; natural image statistics together with a sophisticated variational Bayes inference algorithm have been used to estimate the kernel, and the image is then reconstructed using a standard non-blind deconvolution algorithm. Very nice results are obtained when the kernel is small (e.g., 30x30 pixels or fewer); kernel estimation for a large blur is, however, inaccurate and unreliable using a single image. Even with a known kernel, non-blind deconvolution is still under-constrained. Reconstruction artifacts, e.g., ringing effects or color speckles, are inevitable because of the high-frequency loss in the blurred image. The errors due to sensor noise and quantization of the image/kernel are also amplified in the deconvolution process.
For example, more iterations of the Richardson-Lucy (RL) algorithm [H. Richardson 1972] produce more ringing artifacts. We present an adaptively accelerated Lucy-Richardson (AALR) method for the restoration of an image from its blurred and noisy version. The conventional Lucy-Richardson (LR) method is nonlinear and its convergence is therefore very slow. The LR method is accelerated by applying an exponent to the correction ratio of LR; this exponent is computed adaptively in each iteration using first-order derivatives of the deblurred images from the previous two iterations. Using this exponent, the AALR improves speed in the first stages and ensures stability in the later stages of the iteration. An expression for estimating the acceleration step size in the AALR method is derived, and the super-resolution and noise-amplification characteristics of the proposed method are investigated analytically. Our proposed AALR method gives better results in terms of lower root mean square error (RMSE) and higher signal-to-noise ratio (SNR), in approximately 43% fewer iterations than required by the LR method. Moreover, the AALR method followed by wavelet-domain denoising yields a better result than recently published state-of-the-art methods. In our approach, we significantly reduce the artifacts in non-blind deconvolution by taking advantage of the noisy image.
Recently, spatially variant kernel estimation has also been proposed in [Bardsley et al. 2006]. In [Levin 2006], the image is segmented into several layers with different kernels; the kernel in each layer is uni-directional and the layer motion velocity is constant. Hardware-based solutions to reduce image blur include lens stabilization and sensor stabilization. Both techniques physically move an element of the lens, or the sensor, to counterbalance the camera shake. Typically, the captured image can be as sharp as if it were taken with a shutter speed 2-3 stops faster.
2.2 Denoising of a single image:
Using two images for image deblurring or enhancement has been explored before. This paper shows the superiority of our approach in image quality compared with previous two-image approaches. These approaches are also practical despite requiring two images: we have found that the motion between the blurred and noisy images, when they are taken in quick succession, is mainly a translation. This is significant because the kernel estimation is independent of the translation, which only results in an offset of the kernel. Other denoising approaches include anisotropic diffusion [Perona and Malik 1990], PDE-based methods [Rudin et al. 1992; Tschumperle and Deriche 2005], fields of experts [Roth and Black 2005], and nonlocal methods [Buades et al. 2005].
2.3 Deblurring and denoising of multiple images:
Deblurring and denoising can benefit from multiple images. Images with different blurring directions [Bascle et al. 1996; Rav-Acha and Peleg 2000; Rav-Acha and Peleg 2005] can be used for kernel estimation. In [Liu and Gamal 2001], a CMOS sensor captures multiple high-speed frames within a normal exposure time, and pixels with motion are replaced with pixels from one of the high-speed frames. Raskar et al. [2006] proposed a fluttered-shutter camera which opens and closes the shutter during a normal exposure time following a pseudo-random sequence; our approach, in contrast, needs no such special hardware. Another related work [Jia et al. 2004] also uses a pair of images,

where the colors of the blurred image are transferred into the noisy image without a kernel estimation process. The works most closely related to this approach are [Lim and Silverstein 2006] and [Lu Yuan and Jian Sun 2007], which also make use of a short-exposure image to help estimate the kernel and perform the deconvolution. However, our proposed technique obtains a much more accurate kernel and produces an almost artifact-free image through a de-ringing approach in the deconvolution.
III. PROBLEM FORMULATION
We take a pair of images: a blurred image B taken with a slow shutter speed and low ISO, and a noisy image N taken with a fast shutter speed and high ISO. The noisy image is usually underexposed and has a very low SNR, since camera noise depends on the image intensity level [Liu et al. 2006]. Moreover, the noise in the high ISO image is larger than that in the low ISO image, since the noise is amplified by the camera gain. But the noisy image is sharp because it uses a fast shutter speed, above the safe shutter speed. We pre-multiply the noisy image by the ratio (ISO_B * t_B) / (ISO_N * t_N) to compensate for the exposure difference between the blurred and noisy images, where t is the exposure time. We perform the multiplication in irradiance space and then go back to image space if the camera response curve [Debevec and Malik 1997] is known; otherwise, a gamma curve (gamma = 2.0) is used as an approximation.
3.1 PROCESS:
The block diagram of the kernel estimation process is shown in fig. 1.

Fig. 1 Block diagram of the kernel estimation algorithm

The goal is to reconstruct a high-quality image I using the input images B and N:

B = I ⊗ K                                            (1)

where K is the blur kernel and ⊗ is the convolution operator. For the noisy image N, we compute a denoised image N_D [Portilla et al. 2003]. N_D loses some fine details in the denoising process, but preserves the large-scale, sharp structures. We represent the lost detail layer as a residual image ΔI:

I = N_D + ΔI                                         (2)
Our first important observation is that the denoised image N_D is a very good initial approximation to I for the purpose of kernel estimation from equation (1). The residual image ΔI is relatively small with respect to N_D, and the power spectrum of the image I lies mainly in the denoised image N_D. Moreover, the large-scale, sharp image structures in N_D make important contributions to the kernel estimation. As will be shown in our experiments on synthetic and real images, accurate kernels can be obtained using B and N_D in non-blind convolution. Once K is estimated, we could again use equation (1) to non-blindly deconvolve I, but this unfortunately produces significant artifacts, e.g., ringing effects. Instead of recovering I directly, we propose to first recover the residual image ΔI from the blurred image B. By combining equations (1) and (2), the residual image can be reconstructed from a residual deconvolution:

ΔB = ΔI ⊗ K                                          (3)

where ΔB = B − N_D ⊗ K is the residual blurred image.
Our second observation is that the ringing artifacts from the residual deconvolution of ΔI (equation (3)) are smaller than those from the deconvolution of I (equation (1)), because ΔB has a much smaller magnitude than B after being offset by N_D ⊗ K (4). The denoised image N_D also provides a crucial gain signal to control the deconvolution process so that ringing artifacts can be suppressed, especially in smooth image regions; although some fine details are inevitably suppressed by the gain-controlled deconvolution, fine-scale image details can be added back to its result afterwards. The three steps above - kernel estimation, residual deconvolution, and de-ringing using gain-controlled deconvolution - are iterated to refine the estimated blur kernel K and the deconvolved image I.
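The following is a minimal numeric sketch of equations (1)-(3) in Python. It assumes a generic Richardson-Lucy loop as the non-blind solver and uses synthetic stand-ins for the sharp image I, the kernel K and the denoised image N_D; the variable names and parameter values are illustrative, not taken from the paper.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(data, psf, n_iter=30):
    # a plain Richardson-Lucy loop, used here as the non-blind deconvolution solver
    est = np.full_like(data, data.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(est, psf, mode="same")
        est = est * fftconvolve(data / (reblurred + 1e-12), psf_flip, mode="same")
    return est

# synthetic stand-ins: a sharp scene I, a box blur K, and a lightly noisy N_D
sharp = np.zeros((64, 64)); sharp[24:44, 20:40] = 1.0
psf = np.ones((5, 5)) / 25.0
blurred = fftconvolve(sharp, psf, mode="same")                              # B = I (x) K
rng = np.random.default_rng(0)
denoised = np.clip(sharp + 0.05 * rng.standard_normal(sharp.shape), 0, 1)  # stand-in for N_D

# residual deconvolution, equation (3): deconvolve the small residual, not B itself
delta_b = blurred - fftconvolve(denoised, psf, mode="same")                 # delta_B = B - N_D (x) K
offset = -delta_b.min() + 1e-3                                              # RL needs non-negative data
delta_i = richardson_lucy(delta_b + offset, psf) - offset                   # recover delta_I
restored = denoised + delta_i                                               # I ~= N_D + delta_I
print(float(np.abs(restored - sharp).mean()),                               # diagnostic: error of restored
      float(np.abs(denoised - sharp).mean()))                               # vs. error of the denoised stand-in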
IV. KERNEL ESTIMATION ALGORITHM

In this section, we show that a simple constrained least-squares optimization is able to produce a very good initial kernel.
4.1 Iterative kernel estimation:
The goal of kernel estimation is to find the blur kernel K from B = I ⊗ K with the initialization I = N_D.
Page 34

www. ijraset.com
SJ Impact Factor-3.995

Special Issue-1, October 2014


ISSN: 2321-9653

International Journal for Research in Applied Science & Engineering


Technology(IJRASET)
In vector-matrix form, this is b = Ak, where b and k are the vector forms of B and K, and A is the matrix form of I. The kernel k can be computed in the linear least-squares sense. To stabilize the solution, we use Tikhonov regularization with a positive scalar λ by solving min_k ||Ak − b||² + λ²||k||². The solution is given in closed form by (A^T A + λ² I)k = A^T b if there are no other constraints on the kernel k. However, a real blur kernel has to be non-negative and must preserve energy, so the optimal kernel is obtained from the following constrained optimization, whose iterative solution is shown as a flowchart in fig. 2:

min_k ||Ak − b||² + λ²||k||²,   subject to k_i ≥ 0 and Σ_i k_i = 1.
We adopt the Landweber method [Engl et al. 2000] to iteratively update the kernel as follows:
1. Initialize k^0 = δ, the delta function.
2. Update k^(n+1) = k^n + β(A^T b − (A^T A + λ² I)k^n).
3. Set k_i^(n+1) = 0 if k_i^(n+1) < 0, then normalize k_i^(n+1) = k_i^(n+1) / Σ_i k_i^(n+1).
Here β is a scalar that controls the convergence. The iteration stops when the change between two steps is sufficiently small. We typically run about 20 to 30 iterations with β = 1.0. The algorithm is fast using the FFT, taking about 8 to 12 seconds for a 64x64 kernel and an 800x600 image.
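The Python sketch below mirrors this projected Landweber iteration on synthetic data. The forward operator A k is realized as a 'valid' correlation of a guide image (standing in for N_D) with the kernel, so its adjoint is also a correlation; unlike the text, which uses β = 1.0, the step size here is scaled by a power-iteration estimate of ||A^T A|| purely to keep the toy example numerically stable.

import numpy as np
from scipy.signal import correlate

def estimate_kernel(blurred, guide, ksize=15, lam=0.1, n_iter=200):
    # projected Landweber iteration for min_k ||A k - b||^2 + lam^2 ||k||^2, k >= 0, sum(k) = 1
    k = np.zeros((ksize, ksize))
    k[ksize // 2, ksize // 2] = 1.0                           # step 1: k0 = delta function

    A = lambda ker: correlate(guide, ker, mode="valid")       # A k
    At = lambda img: correlate(guide, img, mode="valid")      # A^T r

    # estimate ||A^T A|| with a few power iterations (step-size normalization, an assumption)
    v = np.random.default_rng(0).random((ksize, ksize))
    for _ in range(10):
        v = At(A(v)); v /= np.linalg.norm(v)
    step = 1.0 / float(np.sum(v * At(A(v))))

    for _ in range(n_iter):
        grad = At(blurred - A(k)) - (lam ** 2) * k            # A^T b - (A^T A + lam^2 I) k
        k = k + step * grad                                   # step 2: Landweber update
        k = np.clip(k, 0.0, None)                             # step 3: non-negativity ...
        if k.sum() > 0:
            k /= k.sum()                                      # ... and energy normalization
    return k

# synthetic check: a box scene blurred by a centered horizontal motion kernel
sharp = np.zeros((80, 80)); sharp[20:60, 30:50] = 1.0
true_k = np.zeros((15, 15)); true_k[7, 4:11] = 1.0 / 7.0
blurred = correlate(sharp, true_k, mode="valid")              # forward blur (correlation form)
k_est = estimate_kernel(blurred, sharp)
print(round(float(np.abs(k_est - true_k).sum()), 3))          # error of the recovered kernel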
4.2 Maximum thresholding in scale space:
The iterative algorithm above can be implemented in scale space to help the solution escape local minima. A straightforward method is to use the kernel estimated at the current level to initialize the next finer level. However, we have found that such initialization is insufficient to control noise in the kernel estimation: noise or errors at coarse levels may be propagated and amplified to fine levels. To suppress noise in the estimate of the kernel, the global shape of the kernel at a fine level should be similar to its shape at the coarser level. To achieve this, we propose a hysteresis thresholding [Canny 1986] in scale space. At each level, a kernel mask M is defined by thresholding the kernel values: M_i = 1 if k_i > t * k_max, where t is a threshold and k_max is the maximum of all kernel values. We compute two masks, M_low and M_high, by setting two thresholds t_low and t_high; M_low is larger and contains M_high. After kernel estimation, we set all elements of K^l outside the mask M_high to zero to reduce the noise at level l. Then, at the next finer level l+1, we set all elements of K^(l+1) outside the up-sampled mask of M_low to zero to further reduce noise. This hysteresis thresholding is performed from coarse to fine. The pyramids are constructed using a down-sampling factor of 1/√2 until the kernel size at the coarsest level reaches 9x9. We typically choose t_low = 0.03 and t_high = 0.05.
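A small Python sketch of the hysteresis masks described above, applied to a toy kernel; the toy kernel and its noise level are invented for illustration only.

import numpy as np

def hysteresis_masks(kernel, t_low=0.03, t_high=0.05):
    # M_i = 1 where k_i > t * k_max; the loose mask M_low contains the strict mask M_high
    k_max = kernel.max()
    return kernel > t_low * k_max, kernel > t_high * k_max

def apply_level(kernel, m_high, m_low_upsampled=None):
    # keep only the M_high support of the current level; when the coarser level's
    # up-sampled M_low is supplied, additionally zero everything outside it
    k = np.where(m_high, kernel, 0.0)
    if m_low_upsampled is not None:
        k = np.where(m_low_upsampled, k, 0.0)
    s = k.sum()
    return k / s if s > 0 else k

rng = np.random.default_rng(0)
k = 0.002 * rng.random((9, 9)); k[4, 2:7] += 0.2; k /= k.sum()   # noisy horizontal streak
m_low, m_high = hysteresis_masks(k)
print(apply_level(k, m_high).round(3))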

Y
N

V. IMPLEMENTATION DETAILS

(k)=*conv

Fig.2 Flowchart of an kernel estimation algorithm


The solution is given by (ATA+2I)k=AT b in closed-form if
there are no other constraints on the kernel k. However, some

5.1 Image acquisition:
In practice, one image must be taken soon after the other to minimize misalignment between the two images; the noisy and blurred input pair is shown in fig. 3(a). There are two options for capturing such image pairs very quickly. First, two

successive shots with different camera settings are triggered by a laptop computer connected to the camera. This frees the user from changing camera settings between the two shots. Second, the exposure bracketing built into many DSLR cameras can be used: in this mode, two successive shots can be taken with different shutter speeds by pressing the shutter only once.
Using these two options, the time interval between the two shots can be very small, typically only 1/5 second, which is a small fraction of the typical shutter speed (> 1 second) of the blurred image. The motion between two such shots is mainly a small translation if we assume that the blurred image can be modeled by a single blur kernel, i.e., that the dominant motion is translation. Because a translation only results in an offset of the kernel, it is unnecessary to align the two images. The user can also change the camera settings manually between the two shots. In this case, we have found that the dominant motions between the two shots are translation and in-plane rotation. To correct in-plane rotation, we simply draw two corresponding lines in the blurred/noisy images. In the blurred image, the line can be specified along a straight object boundary or by connecting two corner features. The noisy image is then rotated around its image center such that the two lines become virtually parallel. If an advanced exposure bracketing mode allowing more control is built into future cameras, this manual alignment will become unnecessary.
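The manual correction described here reduces to measuring the angle between the two user-drawn lines and rotating the noisy image about its centre. A minimal sketch follows; the line-endpoint format is an assumption for illustration, not the paper's interface:

```python
import numpy as np
from scipy.ndimage import rotate

def align_in_plane(noisy, line_blurred, line_noisy):
    """Rotate the noisy image about its centre so the two drawn lines become parallel.

    Each line is given as ((x0, y0), (x1, y1)) in its own image."""
    def angle(line):
        (x0, y0), (x1, y1) = line
        return np.degrees(np.arctan2(y1 - y0, x1 - x0))
    dtheta = angle(line_blurred) - angle(line_noisy)
    # rotate() turns the array about its centre; keep the original frame size.
    return rotate(noisy, dtheta, reshape=False, order=1, mode='nearest')
```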

Fig. 3(a)  Original noisy and blurred image

Fig. 3(b)  Image denoised by a filter

5.2 Image denoising:

For the noisy image N, we apply a wavelet-based denoising algorithm [Portilla et al. 2003] using its Matlab code. The algorithm is one of the state-of-the-art techniques and is comparable to several commercial denoising packages. We also experimented with bilateral filtering, but found it hard to achieve a good balance between removing noise and preserving details, even with careful parameter tuning, as is clearly shown in fig. 3(b).

Fig. 3(c)  Blurred image used for the deblurring process
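The paper relies on the Matlab implementation of Portilla et al.; as a rough, hedged stand-in, a generic wavelet shrinkage such as the one in scikit-image performs the same kind of wavelet-domain denoising, although it is not the same algorithm:

```python
from skimage.restoration import denoise_wavelet, estimate_sigma

def denoise_nd(noisy):
    """Wavelet-domain denoising of a grayscale, float image N -> ND (stand-in only)."""
    sigma = estimate_sigma(noisy)                # rough estimate of the noise level
    return denoise_wavelet(noisy, sigma=sigma,   # soft-threshold the wavelet coefficients
                           mode='soft', rescale_sigma=True)
```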

VI. SIMULATION RESULTS

This approach was applied to a variety of blurred/noisy image pairs captured in low-light environments using a compact camera (Canon S60, 5M pixels) and a DSLR camera (Canon 20D, 8M pixels).
6.1 Comparison:
This approach is compared with denoising [Portilla et al. 2003] and the standard RL algorithm. Figure 8, from left to right, shows the blurred image, the noisy image (enhanced), the denoised image, the standard RL result (using our estimated kernel), and our result. The noise parameter (standard deviation) in the denoising algorithm is tuned to achieve the best visual balance between noise removal and detail preservation in the denoised result. Because the noisy image is scaled up from a very dark, low-contrast capture, partial color information is also lost. Our approach recovers the correct colors through image deblurring, whereas the standard RL deconvolution result exhibits unpleasant ringing artifacts compared with the denoised image in fig. 3(c).


Fig. 3(d) Deblurred and denoised image

6.2 Large noise:
For a blurred/noisy pair captured by the compact camera, the noisy image contains very strong noise. The estimated initial kernel and the kernel refined by the iterative optimization are shown; the refined kernel has a sharper and sparser shape than the initial one.
6.3 Large kernel:
Compared with the state-of-the-art single-image kernel estimation approach [Fergus et al. 2006], in which the largest kernel is 30 pixels, this approach using an image pair significantly extends the degree of blur that can be handled.

Fig. 4  Graph of the estimated output

Fig. 5  Application in medical imaging

Fig. 6  Application in iris recognition
6.4 Low noise and kernel:
In a moderately dim lighting environment, the captured input images have small noise and blur. This is the typical case assumed in Jia's approach [2004], which is a color-transfer based algorithm. The third and fourth columns are the color-transferred result [Jia et al. 2004] and the histogram-equalization result from the blurred image to the denoised image. The colors cannot be accurately transferred because both approaches use global mappings. The shutter speed and ISO settings are able to reduce the exposure (shutter speed x ISO) by about 10 stops. The final deblurred and denoised image is estimated as shown in fig. 3(d).
Images play an important role in research and technology, but the main drawback of digital images is the presence of noise and degradation during their acquisition or transmission. One of the important image processing techniques is image restoration, which aims at improving the quality of an image by removing defects. It is widely used in various fields of application, such as medical imaging, astronomical imaging, remote sensing and some commercial purposes, as shown in figs. (5), (6), (7) and (8).
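Since each stop halves the product of shutter speed and ISO, a ten-stop reduction corresponds to roughly a thousand-fold shorter effective exposure, which is what makes the second (sharp) shot feasible; as a quick check:

```python
stops = 10
exposure_ratio = 2 ** stops   # each stop is a factor of 2 in (shutter speed x ISO)
print(exposure_ratio)         # 1024, i.e. about a 1000x reduction in effective exposure
```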


Fig. 7  Application in astronomy

Fig. 8  Application for commercial purposes

VII. CONCLUSION
This paper proposed an image deblurring approach using a pair of blurred/noisy images. The approach takes advantage of both images to produce a high-quality reconstructed image. By formulating the image deblurring problem with two images, an iterative deconvolution algorithm has been developed which can estimate a very good initial kernel and significantly reduce deconvolution artifacts. No special hardware is required; the proposed approach uses off-the-shelf, hand-held cameras.
Limitations remain, however. The approach shares the common limitation of most image deblurring techniques in assuming a single, spatially invariant blur kernel. For a spatially variant kernel, it is possible to locally estimate kernels for different parts of the image and blend the deconvolution results. Most significantly, this approach requires two images; the ability to capture such pairs will eventually move into camera firmware, thereby making two-shot capture easier and faster.
In the future, we plan to extend our approach to other image deblurring applications, such as deblurring video sequences or out-of-focus deblurring. Our techniques can also be applied in a hybrid image system [Ben-Ezra and Nayar 2003] or combined with coded exposure photography [Raskar et al. 2006].
REFERENCES
[1] Li Xu, Shicheng Zheng, Jiaya Jia, The Chinese University of Hong Kong; Unnatural L0 Sparse Representation for Natural Image Deblurring.
[2] Lin Zhong, Sunghyun Cho, Dimitris Metaxas, Sylvain Paris, Jue Wang, Rutgers University; Handling Noise in Single Image Deblurring using Directional Filters.
[3] Esmaeil Faramarzi, Dinesh Rajan, and Marc P. Christensen; Unified Blind Method for Multi-Image Super-Resolution and Single/Multi-Image Blur Deconvolution.
[4] Lu Yuan, Jian Sun, Long Quan, The Hong Kong University of Science and Technology / Microsoft Research Asia; Image Deblurring with Blurred/Noisy Image Pairs.
[5] Jiaya Jia, Department of Computer Science and Engineering, The Chinese University of Hong Kong; Single Image Motion Deblurring Using Transparency.
[6] James G. Nagy, Katrina Palmer, Lisa Perrone; Iterative Methods for Image Deblurring.
[7] J. Telleen, A. Sullivan, J. Yee, O. Wang, P. Gunawardane, I. Collins, J. Davis, University of California, Santa Cruz; Synthetic Shutter Speed Imaging.


Critical Failure Analysis of Caustic Slurry Pump


S. Ganesh Kumar1, K. Gowtham2, G. Terence Anto3
Dept. of Mechanical Engineering,
SNS College of Engineering, Coimbatore
Abstract: This project is to design the impeller of the turbine for a centrifugal caustic slurry pump to increase its efficiency, and to show the merits of the design parameters (a six-blade turbine and a change of impeller material) compared with the old turbine material (stainless steel 2324). An investigation into the use of new materials is required. In the present work the impeller is designed with two different materials, and an attempt is made to investigate the effect of temperature on the impeller. By identifying the true design features, extended service life and long-term stability are assured. A transient thermal analysis has been carried out to investigate the maximum heat flux of the impeller. An attempt is also made to suggest the best material for the impeller by comparing the results obtained for the two materials, Inconel alloy 783 and Inconel alloy 740. Based on the results, the best material is recommended for the impeller.
Key words: Pump, Turbine, Material, FEM.
I. INTRODUCTION
Centrifugal pumps are a class of machinery intended to increase the energy of the working fluid. In a turbocharger this is accomplished by increasing the pressure of the intake air, allowing more fuel to be burned. In the late 19th century, Rudolf Diesel and Gottlieb Daimler experimented with pre-compressing air to increase power output and fuel efficiency. The first exhaust gas turbocharger was completed in 1925 by the Swiss engineer Alfred Buchi, who introduced a prototype that increased power by a reported 76%. The idea was not widely accepted at that time. However, in the last few decades it has become essential in almost all diesel engines, with the exception of very small ones, and its limited use in gasoline engines has also resulted in a substantial boost in power output and efficiency. The overall design, as in other turbomachines, involves several analyses: mechanical, aerodynamic, thermal, and acoustic. Manufacturers and researchers still seek ways to improve their designs while governed by the constraints of cost and manufacturing capability. At first, designers simply attempted to develop the conceptual designs into reliable products for end users.

Englberger L, Streich M, Tevaearai HT, Carrel TP, Interact Cardiovasc Thorac Surg. 2008 Feb 26 (4): Researchers in Switzerland sought to determine consensus regarding anticoagulation strategies during OPCAB through a questionnaire survey of European cardiothoracic surgeons. Survey questions included the volume of OPCAB procedures performed, use of antiplatelet therapy, heparinization during the intra- and perioperative periods, and general management techniques (ACT limits, protamine reversal, and use of antifibrinolytics). Of 750 surveys distributed, researchers obtained a sample size of 325 (43.7% participation). There was significant variation in anticoagulation strategies. While 78% of participants used low or high molecular weight heparin for thrombosis prophylaxis, strategies for heparin unit dosage ranged from 70 U/kg to 500 U/kg. Variation was also seen in target ACT, with 24% of respondents requiring a value of 200 s, 18% requiring 250 s, and 26% requiring 300 s. Whereas 91% of respondents used protamine for heparin reversal, 52% used full dose reversal (1 mg per 100 U) while others used less. Antifibrinolytics were used by 40% of respondents, and 70% reported using cell saving devices. Respondents also differed in opinions of bleeding and risk for early graft occlusion with OPCAB. Fifty-

six percent of respondents thought average bleeding was not reduced in OPCAB compared to CPB, while 34% thought the OPCAB technique a risk factor for early graft occlusion.

Sylvia Hurtado, Kevin Eagan, Bryce Hughes, Higher Education Research Institute, UCLA (8): Establishes that most of the growth in new jobs will require science and technology skills. Those groups that are most underrepresented in S&E are also the fastest growing in the general population (National Academies, 2011, p. 3). In an effort to achieve long-term parity in a diverse workforce, they recommend a near-term, reasonable goal of improving institutional efforts to double the number of underrepresented minorities receiving undergraduate STEM degrees. Increasing the retention of STEM majors from 40% to 50% would, alone, generate three-quarters of the targeted 1 million additional STEM degrees over the next decade. Retaining more students in STEM majors is the lowest-cost, fastest policy option for providing the STEM professionals that the nation needs. Changing productivity levels means changing practices and mindsets, from priming the sieve to priming the pump, or talent development.

G. Agrati and A. Piva, Weir Gabbioneta, Sesto S. Giovanni, Italy (3): Multistage horizontal boiler feed pumps are designed and built in two different configurations: with equidirectional or with opposite impellers. The study is carried out from a hydraulic and structural point of view. Particular attention is addressed to the axial load balance and to the lateral dynamic analysis, with new and worn clearance conditions. A complete calculation of the rotor dynamic behaviour in both configurations has been performed using the finite element method. The model of the shaft has been meshed using beam elements, while linearised coefficients have been evaluated in order to simulate the stiffness and damping of sleeve bearings, impeller wear rings, balancing drums and interstage seals. The undamped critical speed map, damped mode shapes and Campbell diagrams are presented and discussed. Calculation results are confirmed by experimental measurements carried out on an opposite-impeller multistage pump, where non-contacting probes have been installed near the sleeve bearing locations, and an order tracking method has been applied during start-up and coast-down transients.

The modifications incorporated in the pump include enlargement of the flow passages to accommodate bigger solid particles, a robust impeller with a smaller number of vanes, special seals and a proper material of construction to ensure longer life. These have to be operated with relatively wide clearance at the impeller-casing contacts to minimize choking and localized wear. These modifications increase the hydraulic losses in the pump and deteriorate the pump performance. The present study is concerned with the evaluation of the performance characteristics of a centrifugal slurry pump when handling bottom fly ash at 30% concentration of bottom fly ash slurry at different speeds: 1000, 1150, 1300 and 1450 rpm. From the experimental evaluation it is concluded that the parameters defined for head and capacity of conventional pumps are also applicable for slurry pumps with water, despite the constructional differences. From the bottom ash characteristics it is also observed that, with the addition of fly ash in the bottom ash, the developed head decreases with increasing flow rate, which decreases the power consumption for the transportation of the bottom ash in pipelines.

II. METHODOLOGY
3. FAILURE SURVEYING
4. MATERIAL SELECTION
5. MODEL GENERATE
6. THERMAL DISTRIBUTION
7. RESULT

III. FAILURE SURVEYING
(a) While analyzing the slurry pump impeller design, the areas in which most failures occur were considered, so that the process could be rectified and the problem solved.
(b) The failures occur as three problems:
(i) Wear plate material problem
(ii) Design changes
(iii) Material changes
In this type of centrifugal pump, the component that supports the impeller and transmits the actuating force of the equipment at higher flow-rate efficiency is called the wear plate. Over its service life, the wear plate suffers corrosion and wear.
3.1 Dimension Changes
(a) When the system is operated at higher capacity, a high temperature is maintained.
(b) The problem is noted in the impeller blade area, so the blade perimeter is changed and the complete impeller design is recalculated, the model being created using the SolidWorks software.
3.2 Material Changes
At present SS 2324 is used as the impeller material. This project considers two alternative materials, Inconel alloy 740 and Inconel alloy 783, which are analysed by the FEA method using transient thermal analysis.

IV. SELECTION OF MATERIAL
The materials selected for this work, Inconel alloy 740 and Inconel alloy 783, are high-strength alloys whose major alloying elements are iron and silicon, with very good heat flux, and they are available as steel plate, sheet, coil, flat bar, round bar, strip steel, wire and all kinds of forgings.
4.1 Properties of Inconel alloys 740 & 783
Inconel alloys 740 & 783 have a range of useful properties:
Corrosion resistance
Elastic modulus
Tensile strength
4.2 Applications of Inconel alloys 740 & 783
The applications for Inconel alloys 740 & 783 are: steel plate, sheet, coil, flat bar, round bar, strip steel, wire and all kinds of forgings.

V. MODEL GENERATE
Several software packages are available for creating the 3D model of the slurry pump impeller design; some of them are:
1. SOLIDWORKS
2. PRO/ENGINEER
3. CATIA
4. UNIGRAPHICS
5. INVENTOR
Here SolidWorks was chosen as the modeling software because of the following advantages:
1. It is feature-based modeling software
2. Associativity
3. Parametric design
4. Design intent

Figure 5.1  Flow diagram for impeller design

VI. THERMAL DISTRIBUTION IN IMPELLER


Fig. 6.1  Heat flux distribution, Inconel alloy 783
Type of analysis = Transient
Time taken = 60 seconds
Minimum heat flux = 1.0236 x 10^-9 W/mm2
Maximum heat flux = 1.1343 W/mm2

Fig. 6.2  Heat flux distribution, Inconel alloy 740
Type of analysis = Transient
Time taken = 60 seconds
Minimum heat flux = 3.1436 x 10^-9 W/mm2
Maximum heat flux = 6.6625 W/mm2

Fig. 6.3  Heat flux distribution, Stainless Steel 2324 (actual material)
Type of analysis = Transient
Time taken = 60 seconds
Minimum heat flux = 1.1283 x 10^-9 W/mm2
Maximum heat flux = 1.805 W/mm2

The graphical view of the results is considered below.

Fig. 6.4  Graphical view of maximum heat flux

VII. CONCLUSION
We carried out this project on a caustic slurry pump. In the company concerned, stainless steel 2324 is used as the impeller material in the pump. It has a tensile strength of 620-880 N/mm2 and a Young's modulus of 194 kN/mm2. Hence different studies were conducted to replace the impeller material of the pump. The
different materials were put forward to increase the efficiency. We analysed Inconel alloy 740, which has a tensile strength of 1158 N/mm2 and a Young's modulus of 218 kN/mm2. Since Inconel alloy 740 has a considerably higher thermal conductivity and maximum heat flux in our design analysis, we considered it the more suitable material when compared to stainless steel 2324. By using Inconel alloy 740, the corrosion rate will be reduced and there is also an increase in efficiency.
REFERENCES
[1] Baha Abulnaga (2004). Pumping Oilsand Froth. 21st International Pump Users Symposium, Baltimore, Maryland. Published by Texas A&M University, Texas, USA.
[2] V. S. Lobanoff, R. R. Ross (1985). Centrifugal Pumps: Design & Application. Gulf Publishing Company.
[3] Englberger L, Streich M, Tevaearai HT, Carrel TP. Interact Cardiovasc Thorac Surg. (2008) Feb 26 (3).
[4] G. Agrati and A. Piva, Weir Gabbioneta, Sesto S. Giovanni, Italy (1).
[5] Larry Bachus, Angle Custodio (2003). Know and Understand Centrifugal Pumps. Elsevier Ltd. ISBN 1856174093.
[6] Richards, John (1894). Centrifugal Pumps: an essay on their construction and operation, and some account of the origin and development in this and other countries. The Industrial Publishing Company. pp. 40-41.
[7] Shepard, Dennis G. (1956). Principles of Turbomachinery. McMillan. ISBN 0-471-85546-4. LCCN 56002849.
[8] Sylvia Hurtado, Kevin Eagan, Bryce Hughes, Higher Education Research Institute, UCLA (2).
[9] Warman Slurry Pumping Manual. Warman International Ltd., internal publication, (1981).


Investigation of Thermal Barrier Coating on I.C. Engine Piston

S. Lakshmanan1, G. Ranjith Babu2, S. Sabesh3, M. Manikandan4
1 Research Scholar, Anna University, Chennai
2,3,4 Assistant Professor, Department of Mechanical Engineering, Veltech Multi Tech Engineering College, Avadi

Abstract: Thermal Barrier Coatings (TBC) are used to achieve reduced heat rejection in engine cylinders. A TBC reduces the heat transfer to the water cooling jacket and the exhaust system and thus improves the mechanical efficiency. In this work, zirconia ceramic is coated on the I.C. engine piston using the plasma arc technique, and the performance characteristics and results are studied and tabulated.
I. INTRODUCTION
According to the first law of thermodynamics, energy is conserved, so reducing the heat flow to the cooling and exhaust systems leaves more of the fuel energy available as work. It is known that only about one third of the fuel energy is converted into useful work; theoretically, if the rejection of heat is reduced, the thermal efficiency is likely to increase to a considerable extent. The application of a TBC decreases the heat transfer to the cooling and exhaust systems, which ultimately results in higher gas and combustion chamber wall temperatures and reduces the level of smoke and hydrocarbon (HC) emissions.
In particular, durability concerns for the materials and components in the engine cylinders, which include the piston, rings, liner, and cylinder head, limit the allowable in-cylinder temperatures. The application of thin TBCs to the surfaces of these components enhances high-temperature durability by reducing the heat transfer and lowering the temperature of the underlying metal. In this article, the main emphasis is placed on investigating the effect of a TBC on the engine fuel consumption with the support of detailed sampling of the in-cylinder pressure. The optimization of the engine cycle and the exhaust waste heat recovery due to a possible increase in exhaust gas availability were not investigated in this study.
II. LITERATURE REVIEW
The selection of TBC materials is restricted by some basic requirements:
(1) High melting point,
(2) No phase transformation between room temperature and the operating temperature,
(3) Low thermal conductivity,
(4) Chemical inertness,
(5) Thermal expansion match with the metallic substrate,
(6) Good adherence to the metallic substrate, and
(7) Low sintering rate of the porous microstructure.
Among these properties, the thermal expansion coefficient and the thermal conductivity seem to be the most important.
III. MATERIALS
Zirconia PSZ grades are cream-coloured blends with approximately 10% MgO and are high in toughness, retaining this property to elevated temperatures. Although zirconia retains many properties, including corrosion resistance, at extremely high temperatures, it does exhibit structural changes that may limit its use to perhaps only 500 °C; it also becomes electrically conductive as this temperature is approached. Zirconia is commonly blended with MgO, CaO, or yttria (3 & 4) as a stabilizer in order to facilitate transformation toughening. This induces a partially cubic crystal structure instead of a fully tetragonal one during initial firing, which remains metastable during cooling. Upon impact, the tetragonal precipitates undergo a stress-induced phase transformation near an advancing crack tip. This action expands the structure as it absorbs a great deal of energy, and is the cause of the high toughness of this material. The transformation also occurs dramatically with elevated temperature, and this negatively affects strength, along with a 3-7% dimensional expansion. PSZ is therefore adopted in this work.

Zirconia ceramic is a ceramic material consisting of at least 90% zirconium dioxide (ZrO2). Zirconium oxide is produced from natural minerals such as baddeleyite (zirconium oxide) or zirconium silicate sand. Pure zirconia changes its crystal structure depending on the temperature: at temperatures below 2138 °F (1170 °C) zirconia exists in monoclinic form; at 2138 °F (1170 °C) the monoclinic structure transforms to the tetragonal form, which is stable up to 4300 °F (2370 °C); and the tetragonal crystal structure transforms to the cubic structure at 4300 °F (2370 °C). The structural transformations are accompanied by volume changes which may cause cracking, and structural failure of any ceramic coating, if cooling or heating is rapid and non-uniform. Additions of some oxides (MgO, CaO, Y2O3) to pure zirconia depress the allotropic transformations (crystal structure changes) and allow either the cubic or the tetragonal structure of the material to be stabilized at any temperature. The most popular stabilizing addition to zirconia is yttria (Y2O3), which is added and uniformly distributed in a proportion of 5-15%.

IV. EXPERIMENTAL SETUP AND OPERATION
A fully instrumented CI engine was mounted on a computer-controlled engine dynamometer. Table 1 tabulates the specifications of the engine, while the figure shows the schematic of the overall arrangement of the engine test bed. To appreciate the effect of a TBC on engine performance, in particular fuel consumption, obtaining engine indicator diagrams is necessary. A 10-mm water-cooled piezoelectric pressure transducer was used to measure the dynamic cylinder pressure. Unfortunately, a transducer of this size could not be directly mounted on the cylinder head because no free space is available for such an installation; to fix the transducer, an adapter mounting was fabricated.

Table 1  Engine specifications
Type: Horizontal 4-stroke single cylinder water-cooled diesel engine
Combustion chamber: Direct injection
Bore: 95 mm
Stroke: 95 mm
Displacement: 673.4 cc
Compression ratio: 18:1
Max. torque: 4.2 kg-m at 1900 rpm
Max. power: 13 HP at 2400 rpm
S.F.C: 192 g/HP/hr
Cooling system: Condenser type thermo-siphon cooling system
Lighting system: 12 volts / 35 watts
Std. pulley (dia.): 100 mm (optional 120 mm)
To draw the pressurized gas out of the combustion chamber, a 1.3-mm through-hole was drilled into the third cylinder at the rear of the cylinder head (Fig. 2), the only place suitable for mounting the adapter and bypassing the water jacket of the cylinder head. In addition to pressure measurement, a crank shaft encoder was used to trigger the acquisition of the pressure signal and also to provide crank position information. The shaft encoder has a resolution of 0.1° crank angle (CA); however, the data acquisition was set at a sampling rate of 0.2° CA. In this experiment, a non-dispersive infrared (NDIR) analyser and a flame ionization detector (FID) measured the concentrations of carbon monoxide and unburned

hydrocarbons (HCs), respectively. The TBC was examined with scanning electron microscopy (SEM) after the tests had been conducted, and its chemical composition was analysed with an energy dispersive X-ray (EDX) unit.
A. Plasma spraying technique:
Among the various ceramic materials, partially stabilized zirconia (PSZ) has excellent toughness, hot strength and thermal shock resistance, low thermal conductivity, and a thermal expansion coefficient close to those of steel and cast iron. PSZ has been widely used as a thermal barrier coating in the combustion chambers of diesel engines.(6) Hence, in the present work, PSZ was chosen as the material for the thermal barrier coating on the piston crown. In the present investigation,(5) the piston crown was coated with the PSZ ceramic material using a plasma spraying technique. Plasma spraying is a thermal spray process that uses an inert plasma stream of high velocity to melt and propel the coating material onto the substrate.(1)
V. PERFORMANCE AND CHARACTERISTICS
The figures compare the results obtained from the baseline and TBC piston tests. In general, the TBC piston tests showed lower exhaust gas temperatures, which, combined with the results shown in the figure, positively indicated that the performance of the engine would be improved.(2)

Baseline piston test results:
Swept volume (cu.m) | Speed (rpm) | Voltage (V) | Current (A) | Power (kW) | FC (s/5cc) | FC (g/hr) | FC (kg/s) | SFC (g/kW.hr) | BMEP (bar) | CV (kJ/kg) | Fuel energy (kW) | Efficiency (%)
0.000673 | 1500 | 230 | 0 | 0 | 28.13 | 537.5044 | 0.000149 | - | 0 | 42500 | 6.345539 | 0
0.000673 | 1500 | 230 | 2 | 0.541176 | 23.38 | 646.7066 | 0.000180 | 1195.001 | 0.642937 | 42500 | 7.634731 | 7.088351
0.000673 | 1500 | 230 | 4 | 1.082353 | 18.57 | 814.2165 | 0.000226 | 752.2652 | 1.285873 | 42500 | 9.612278 | 11.26011
0.000673 | 1500 | 230 | 6 | 1.623529 | 17.31 | 873.4835 | 0.000243 | 538.0152 | 1.92881 | 42500 | 10.31196 | 15.74414
0.000673 | 1500 | 230 | 8 | 2.164706 | 15.87 | 952.741 | 0.000265 | 440.1249 | 2.571747 | 42500 | 11.24764 | 19.24587
0.000673 | 1500 | 230 | 10 | 2.705882 | 12.31 | 1228.27 | 0.000341 | 453.9258 | 3.214683 | 42500 | 14.50041 | 18.66073
0.000673 | 1500 | 230 | 12 | 3.247059 | 11.86 | 1274.874 | 0.000354 | 392.6241 | 3.85762 | 42500 | 15.05059 | 21.5743

A. THERMAL EFFICIENCY AND FUEL CONSUMPTION:
The rejection of heat flow to the water cooling jackets and the exhaust system is reduced, which ensures better combustion in the coated engine than in the baseline engine. The decrease in the fuel consumption level also indicates a better thermal efficiency in the coated I.C. engine.

Fig.  Brake mean effective pressure vs brake thermal efficiency, BTE (baseline) and BTE (coated)

In the emission measurements, the tailpipe uHC and CO concentrations were recorded. It was discovered that the CO did not vary much in either the baseline or the TBC test; the variations were more or less within the resolution of the NDIR analyzer, which was 0.1 vol.% concentration,(21) whereas the resolution of the FID used was 1 ppm. Figure 6 compares the brake specific fuel consumption between the baseline and TBC piston tests. The results show that, in general, the fuel consumption was lower in the TBC piston tests for the same operating condition, with an improvement of up to 6% at lower engine power. The self-optimized cycle efficiency due to the altered ignition characteristics in the TBC piston engine outweighed the slightly reduced combustion efficiency, giving an overall improvement in thermal efficiency as a whole. The level of improvement that has been predicted ranges from 2 to 12%, attributed to the insulation of the in-cylinder components.

BASELINE ENGINE VS COATED ENGINE
TBC-coated piston test results:
Swept volume (cu.m) | Speed (rpm) | Voltage (V) | Current (A) | Power (kW) | FC (s/5cc) | FC (g/hr) | FC (kg/s) | SFC (g/kW.hr) | BMEP (bar) | CV (kJ/kg) | Fuel energy (kW) | Efficiency (%)
0.000673 | 1500 | 230 | 0 | 0 | 24.86 | 608.206 | 0.000169 | - | 0 | 42500 | 7.180209 | 0
0.000673 | 1500 | 230 | 2 | 0.541176 | 22.85 | 661.7068 | 0.000184 | 1222.719 | 0.642937 | 42500 | 7.811816 | 6.927665
0.000673 | 1500 | 230 | 4 | 1.082353 | 20.12 | 751.4911 | 0.000209 | 694.3124 | 1.285873 | 42500 | 8.871769 | 12.19997
0.000673 | 1500 | 230 | 6 | 1.623529 | 18.38 | 822.6333 | 0.000229 | 506.6944 | 1.92881 | 42500 | 9.711643 | 16.71735
0.000673 | 1500 | 230 | 8 | 2.164706 | 15.37 | 983.7345 | 0.000273 | 454.4426 | 2.571747 | 42500 | 11.61353 | 18.63951
0.000673 | 1500 | 230 | 10 | 2.705882 | 13.27 | 1139.412 | 0.000317 | 421.0871 | 3.214683 | 42500 | 13.45139 | 20.116
0.000673 | 1500 | 230 | 12 | 3.247059 | 13.13 | 1151.561 | 0.00032 | 354.6475 | 3.85762 | 42500 | 13.59482 | 23.88453


Fig.  Brake power (kW) vs fuel consumption (kg/s), baseline and coated

EXHAUST EMISSIONS:
A. Hydrocarbons:
The level of emission of unburned hydrocarbons (UHC) is considerably decreased due to the reduction of heat flow to the water cooling jackets and the exhaust system. However, the high gas temperature and high combustion wall temperature also contribute to combustion of the lubricating oil, which ultimately leads to emission of unburned hydrocarbons.

Fig.  Load (A) vs HC (ppm), baseline and coated

B. Carbon monoxide (CO):
The higher temperatures, both in the gases and at the combustion chamber walls of the LHR engine, assist in permitting the oxidation of CO. The higher temperature causes more complete combustion of the carbon, which results in a reduction of the CO emission.

Fig.  Load (A) vs CO (% by volume), baseline and coated

C. Nitrogen oxides:
NOx is formed by chain reactions involving nitrogen and oxygen in the air. These reactions are highly temperature dependent. Since diesel engines always operate with excess air, NOx emissions are mainly a function of gas temperature and residence time. Most of the earlier investigations show that NOx emission from LHR engines is generally higher than that in water-cooled engines; they attribute this to the higher combustion temperature and longer combustion duration. One reference reports an increase in the LHR engine NOx emissions and concludes that diffusion burning is the controlling factor for the production of NOx. An almost equal number of investigations report a declining trend in the level of NOx emission; one reference indicates a reduction in the NOx level and attributes this to the shortening of the ignition delay, which decreases the proportion of premixed combustion.

Fig.  Load (A) vs NOx (ppm), baseline and final (coated)

A few drawbacks of TBCs:
During operation TBCs are exposed to various thermal and mechanical loads such as thermal cycling, high and low cycle fatigue, hot corrosion and high temperature erosion. Currently, because of reliability problems, the thickness of TBCs is limited, in most applications, to 500 µm. Increasing the coating thickness increases the risk of coating failure and leads to a reduced coating lifetime. The failure mechanisms that cause TTBC coating spallation differ to some degree from those of the traditional thinner coatings. A major reason for traditional TBC failure and coating spallation in gas turbines is typically bond coat oxidation: when the thickness of the thermally grown oxide (TGO) exceeds a certain limit, it induces the critical stress for coating failure. Thicker coatings have higher temperature gradients through the coating and thus have higher internal stresses.
Although the coefficient of thermal expansion (CTE) of 8Y2O3-ZrO2 is close to that of the substrate material, the CTE difference between the substrate and the coating induces stresses at high temperatures at the coating interface. The strain tolerance of a TTBC has to be managed by controlling the coating microstructure.
Use of thicker coatings generally leads to higher coating surface temperatures, which can be detrimental if

certain limits are exceeded. In the long run, the phase structure of yttria-stabilized zirconia (8Y2O3-ZrO2) is not stable above 1250 °C. Also, the strain tolerance of the coating can be lost rapidly by sintering if too high a surface temperature is allowed.
VI. CONCLUSIONS
The results showed an increase in brake thermal efficiency and a decrease in specific fuel consumption for the low heat rejection engine with the thermally coated piston compared to the standard engine. There was an increase in the NOx and O2 emissions for the thermal barrier coated engine; however, the CO and HC emissions decreased for the thermally coated piston engine compared to the standard engine. The following conclusions can be drawn.
The TBC, using PSZ, applied to the combustion chamber of the internal combustion engine showed some improvement in fuel economy, with a maximum of up to 4% at low engine power.
The peak cylinder pressures were increased by a magnitude of eight to ten bars in the TBC piston engine, in particular at high engine power outputs, though the exhaust gas temperatures were generally lower, indicating good gas expansion in the power stroke.
The unburned hydrocarbon concentrations were increased most seriously at low engine speed and/or low engine power output with the TBC piston engine. The authors suspect that this could be due to the porous quenching effect of the rough TBC piston crowns, where oxidation of hydrocarbons could not be achieved by the combustion air.
Sampling of the cylinder pressures showed that the ignition point of the TBC piston engine advanced slightly relative to the baseline engine, indicating an improvement in ignitability and heat release before top dead center, which caused the peak cylinder pressure to rise.
REFERENCES
[1] Krzysztof Z. Mendera, Effects of Plasma Sprayed Zirconia Coatings on Diesel Engine Heat Release. Journal of KONES, Internal Combustion Engines, Vol. 7, No. 1-2, 2000.
[2] Pankaj N. Shrirao, Anand N. Pawar. Evaluation of Performance and Emission Characteristics of Turbocharged Diesel Engine with Mullite as Thermal Barrier Coating. International Journal of Engineering and Technology, Vol. 3 (3), 2011, 256-262.
[3] Pankaj N. Shrirao, Anand N. Pawar. An Overview on Thermal Barrier Coating (TBC) Materials and Its Effect on Engine Performance and Emission. International Journal of Applied Research in Mechanical Engineering (IJARME), Volume 1, Issue 2, 2011.
[4] H. Samadi, T. W. Coyle. Alternative Thermal Barrier Coatings for Diesel Engines.
[5] S. H. Chan and K. A. Khor, The Effect of Thermal Barrier Coated Piston Crown on Engine Characteristics. Submitted 14 July 1998, in revised form 7 June 1999.
[6] V. Ganesan, Internal Combustion Engines, 4th Edition, Tata McGraw Hill Education Private Limited.
[7] S. PalDey, S. C. Deevi, Mater. Sci. Eng. A 342 (1-2) (2003) 58-79.
[8] J. G. Han, J. S. Yoon, H. J. Kim, K. Song, Surf. Coat. Technol. 86-87 (1996) 82-87.
[9] K. L. Lin, M. Y. Hwang, C. D. Wu, Mater. Chem. Phys. 46 (1996) 77-83.
[10] T. Bjork, R. Westergard, S. Hogmark, J. Bergstrom, P. Hedenqvist, Wear 225-229 (1999) 1123-1130.
[11] J. G. Han, K. H. Nam, I. S. Choi, Wear 214 (1998) 91-97.
[12] R. D. James, D. L. Paisley, K. A. Gruss, S. Parthasarthi, B. R. Tittmann, Y. Horie, R. F. Davis, Mater. Res. Soc. Symp. Proc. 410 (1996) 377-382.
[13] D.-Y. Wang, C.-L. Chang, K.-W. Wong, Y.-W. Li, W.-Y. Ho, Surf. Coat. Technol. 120-121 (1999) 388-394.
[14] A. Kimura, H. Hasegawa, K. Yamada, T. Suzuki, J. Mater. Sci. Lett. 19 (2000) 601-602.
[15] T. Suzuki, D. Huang, Y. Ikuhara, Surf. Coat. Technol. 107 (1998) 41-47.
[16] T. Ikeda, H. Satoh, Thin Solid Films 195 (1991) 99-110.
[17] Y. Tanaka, T. M. Gur, M. Kelly, S. B. Hagstrom, T. Ikeda, K. Wakihira, H. Satoh, J. Vac. Sci. Technol. A 10 (4) (1992) 1749-1756.
[18] J. R. Roos, J. P. Celis, E. Vancoille, H. Veltrop, S. Boelens, F. Jungblut, J. Ebberink, H. Homberg, Thin Solid Films 193-194 (1990) 547-556.
[19] Taymaz, I.; Cakir, K.; Mimaroglu, A. Experimental study of effective efficiency in a ceramic coated diesel engine. Journal of Surface and Coatings Technology (2005).


Innovative Brick Material

M. Scinduja1, S. Nathiya1, C. V. Shudesamithronn1, M. Harshavarthana Balaji1, T. Sarathivelan1, S. Jaya Pradeep2
1 Assistant Professor, 2 U.G. Student,
Department of Civil Engineering, Knowledge Institute of Technology, Kakkapalayam, Salem - 637 504, Tamil Nadu, India
I. INTRODUCTION
Since a large demand has been placed on the building material industry, especially in the last decade, owing to the increasing population, which causes a chronic shortage of building materials, civil engineers have been challenged to convert industrial wastes into useful building and construction materials. This experimental study investigates the potential use of waste paper for producing a low-cost and lightweight composite brick as a building material. These alternative bricks were made with papercrete.

II. OBJECTIVES
The major objective of the project is to replace the costly and scarce conventional building bricks by innovative and alternative building bricks which satisfy the following characteristics:
Cost effective
Environmentally friendly
Less weight
Inflammable
Less water absorption
Easily available
The main objective of this project is to optimize the papercrete mix with desirable properties which satisfy the above-mentioned needs.

III. MATERIALS USED
In this project waste materials were utilized to produce building bricks. The following materials were used in this investigation.

CEMENT: Cement is one of the binding materials in this project and is an important building material in today's construction world. 53 grade Ordinary Portland Cement (OPC) conforming to IS: 8112 was used.

Properties of cement
Description of test | Test result obtained | Requirement of IS: 8112-1989
Initial setting time | 65 minutes | Min. 30 minutes
Final setting time | 270 minutes | Max. 600 minutes
Fineness | 412.92 m2/kg | Min. 225 m2/kg

Fig.  Cement

GROUND GRANULATED BLAST FURNACE SLAG (GGBS): Ground granulated blast furnace slag (GGBS) is a by-product obtained during the manufacturing process of pig iron in the blast furnace. This process produces a glassy, homogeneous, non-crystalline material that has cementitious properties. The GGBS powder was collected from Quality Polytech, Mangalore. It is off-white in colour by

appearance. The specific gravity is 3.09. The GGBS powder is shown in the figure.

Fig.  GGBS

QUARRY DUST: Getting good quarry dust free from organic impurities and salts is very difficult nowadays. The quarry dust added to the mix should be of uniform size, i.e., all the quarry dust particles should be fine. The quarry dust obtained from a local resource was used in the concrete to cast the test bricks. The physical and chemical properties of the quarry dust, obtained by testing the samples as per Indian Standards, are listed in the table.

Properties of quarry dust
Property | Quarry dust | Natural sand
Specific gravity | 2.54-2.60 | 2.60
Bulk relative density (kg/m3) | 1720-1810 | 1460
Absorption (%) | 1.20-1.50 | Nil
Moisture content (%) | Nil | 1.50
Fine particles less than 0.075 mm (%) | 12-15 | 06
Sieve analysis | Zone II | Zone II

Fig.  Quarry dust

PAPER: Paper is principally wood cellulose. Cellulose is a natural polymer, and the figure shows the links of the cellulose bonds. The cellulose chain bristles with polar -OH groups. These groups form hydrogen bonds with -OH groups on adjacent chains, bundling the chains together. The chains also pack regularly in places to form hard, stable crystalline regions that give the bundled chains even more stability and strength.

Fig.  Cellulose hydrogen bonds

The figure shows the network of cellulose fibers and the smaller offshoots from the fibers, called fibrils. The fibers and fibrils form a network matrix, which becomes coated with Portland cement. When these networks of fibers and fibrils dry, they intertwine and cling together with the power of the hydrogen bond. Coating this fiber with Portland cement creates a cement matrix which encases the fibers for extra strength. Of course, paper has more in it than cellulose. Raw cellulose has a comparatively rough texture; clay or rice husk ash is added to make the cellulose very smooth.

Fig.  Microscopic view of cellulose

Adding more sand or glass to the mix results in a denser, stronger, more flame-retardant material, but adds weight and reduces the R-value.

Heavy mixes with added sand, glass, etc., increase strength and resistance to abrasion, but also reduce flexibility somewhat, add weight and may reduce the R-value. So the trick is finding the best mix for the application. The mould was collected from ACC brick suppliers in the size of 230 mm length, 110 mm width and 80 mm depth. The papers which were collected cannot be used directly; they should be made into paper pulp before mixing with the other ingredients.

WATER: Water is an important ingredient of papercrete as it actively participates in the chemical reaction with cement. It should be free from organic matter, and the pH value should be between 6 and 7.

Fig.  Materials used

WATERPROOFING COMPOUND FOR CONCRETE AND PLASTER:
Dr. Fixit Pidiproof LW+ is a specially formulated integral liquid waterproofing compound composed of surface-active plasticizing agents, polymers and additives. It is used as an additive for cement concrete, mortar and plasters. It makes concrete cohesive and prevents segregation.
Features & benefits:
Corrosion resistant - makes concrete more cohesive, hence protects steel better against corrosion.
Compatibility - being a liquid, it is easily dispersible and compatible with concrete/mortar mixes.
Permeability - it reduces the permeability of water into concrete.
Strength - the setting time and compressive strength of the concrete remain within the specification limits.
Shrinkage - reduces shrinkage crack development in plaster and concrete.
Workability - improves the workability of freshly mixed cement concrete.
Durability - increases durability by improving the waterproofing of concrete.

Fig.  Waterproofing compound

MODIFIER CUM BONDING COMPOUND: Dr. Fixit Super Latex is a highly potent and versatile SBR-based liquid for high-performance applications in waterproofing and repairs.
Features & benefits:
Excellent coverage - 70-80 sq. ft per kg in 2 coats.
Less material wastage - material does not fall back or rebound.
Highly cost effective due to better coverage and lesser wastage.
High bonding strength.
Prevents leakages and dampness.
Enhances strength and provides durability.

Fig.  Modifier cum bonding compound

IV. MATERIAL CHARACTERISTICS OF BRICKS
Bricks are obtained by moulding clay in a rectangular block of uniform size and then drying and burning the blocks. As the bricks are of uniform size, they can be properly arranged and, further, as they are lightweight, no lifting appliance is required for them. The common brick is one of the oldest building materials and it is extensively used at present as a leading material in construction. In India, the process of brick making has not changed for many centuries except for some minor refinements. There has been hardly any effort in our country to improve the brick-making process for enhancing the quality of bricks. A brick is generally subjected to the following tests to find out its suitability for construction work.

ABSORPTION: A brick is taken and weighed dry. It is then immersed in water for a period of 24 hours. It is weighed again, and the difference in weight indicates the amount of water absorbed by the brick. This should not, in any case, exceed 20% of the weight of the dry brick.

CRUSHING STRENGTH: The crushing strength of a brick is found by placing it in a compression-testing machine and compressing it till it breaks. As per BIS: 1077-1957, the minimum crushing strength of a brick is 3.50 N/mm2. Bricks with a crushing strength of 7-14 N/mm2 are graded as A, and those above 14 N/mm2 are graded as AA.

HARDNESS: In this test, a scratch is made on the brick surface with the help of a finger nail. If no impression is left on the surface, the brick is treated as sufficiently hard.

PRESENCE OF SOLUBLE SALTS: Soluble salts, if present in the brick, will cause efflorescence on the surface of the brick. For finding out the presence of soluble salts in a brick, it is immersed in water for 24 hours. It is then taken out and allowed to dry in sunshade. The absence of grey or white deposits on its surface indicates the absence of soluble salts. If the white deposit covers about 10% of the surface, the efflorescence is said to be slight; it is considered moderate when the white deposit covers about 50% of the surface. If grey or white deposits are found on more than 50% of the surface, the efflorescence becomes heavy, and it is treated as serious when such deposits are converted into a powdery mass.

SHAPE AND SIZE: In this test, a brick is closely inspected. It should be of standard size and its shape should be truly rectangular with sharp edges. For this purpose, 20 bricks of standard size (190 mm x 90 mm x 90 mm) are selected at random and stacked lengthwise, along the width and along the height. For good quality bricks, the results should be within the following permissible limits:
Length: 3680 mm to 3920 mm
Width: 1740 mm to 1860 mm
Height: 1740 mm to 1860 mm

SOUNDNESS: In this test, two bricks are taken and struck against each other. The bricks should not break, and a clear ringing sound should be produced.

STRUCTURE: A brick is broken and its structure examined. It should be homogeneous, compact and free from defects such as holes, lumps, etc.

PAPERCRETE: Papercrete is a tricky term. The name seems to imply a mix of paper and concrete, hence "papercrete". But more accurately, only the Portland cement part of concrete is used in the mix - if it is used at all. Arguably, it could have been called "paperment". Papercrete may be mixed in many ways. Different types of papercrete contain 50-80% of
waste paper. Up to now there are no hard and fast rules, but recommended standards will undoubtedly be established in future.
The basic constituents are waste paper: nearly any kind of paper, board, glossy magazine stock, advertising brochures, junk mail or just about any other type of mixed-grade paper is acceptable. Some types of paper work better than others, but all types work; newsprint is the best. Waterproofed paper and cardboard, such as butcher paper, beer cartons etc., are hard to break down in water. Catalogues, magazines and other publications are fine in and of themselves, but some have a stringy, rubbery, sticky spine, which is also water resistant. Breaking down this kind of material in the mixing process cannot be done very well, and small fragments and strings of these materials are almost always in the final mix. When using papercrete containing the unwanted material in a finish, such as stucco or plastering, the unwanted fragments sometimes show up on the surface, but this is not a serious problem.
In the optimization work, admixtures such as Conplast WP90 and Dr. Fixit 105 waterproofer and a polymer such as Nitrobond SBR are used as the water-repellent agents.
Papercrete's additives can be:
Cement
GGBS
Quarry dust
Paper
Papercrete has the following derivatives:
Fibrous concrete
Padobe
Fidobe

FIBROUS CONCRETE
Fibrous concrete is a mixture of paper, Portland cement and water. There are no harmful by-products or excessive energy use in the production of papercrete. While it can be argued that Portland cement is not environmentally friendly, it is not used in all types of papercrete, and when it is used it represents a fairly small percentage of the cured material by volume. One of the most advantageous properties of papercrete is the way the paper fibers hold the Portland cement, or perhaps the way the Portland cement adheres to the paper fibers: when the water added to the Portland cement drains from the mix, it comes out almost clear. There is no messy, eco-unfriendly cement sediment left on the ground or running into waterways, and papercrete can be produced using solar energy - the only power needed is for mixing and pumping water. Its R-value is 2.0-3.0 per inch. Since the walls in a one- or two-storey house will be 12-16 inches thick, the long-term energy saving of building with papercrete will be a bonanza for the home owner and the environment.

PADOBE
Padobe has no Portland cement. It is a mix of paper, water and earth with clay. Here clay is the binding material: instead of cement, earth is used in this type of brick. The earth should have a clay content of more than 30%. With a regular brick, if the clay content is too high the brick may crack while drying, but adding paper fiber to the earth mix strengthens the drying block and gives some flexibility that helps to prevent cracking.

FIDOBE
Fidobe is like padobe, but it may contain other fibrous materials.

ECO FRIENDLY
Phenomenal growth in the construction industry depends upon the depletable resources of the country.

Production of building materials leads to irreversible environmental impacts. Using eco-friendly materials is the best way to build an eco-friendly building. "Eco-friendly" describes a product that has been designed to do the least possible damage to the environment.

V. EXPERIMENTAL PROCEDURE
MANUFACTURING OF BRICKS: There were no clear past details about this project and there is no fixed procedure for casting the bricks, so the procedure given below was developed by ourselves; the equipment used in this project was chosen for convenience only.

MOULD PREPARATION: After collecting all the materials, a mould was prepared. A typical mould is shown in the figure below. This mould was non-water-absorbing, of size 230 mm length, 110 mm width and 80 mm depth. The shorter sides of the mould project slightly to serve as handles, and the joints were made without any hole or gap to avoid leakage.

Fig.  Mould

5.3 PULP GENERATION: The papers which were collected cannot be used directly; they should be made into paper pulp before mixing with the other ingredients. The following are the steps involved in the generation of the pulp.
First, the pins, threads and other materials in the papers were removed.
Then the papers were torn into small pieces.
Then a 200 litre water tank was taken, and 2/3 of it was filled with water.
Then the small pieces of paper were immersed in the water tank. The paper pieces were immersed individually, not in a bulky manner, in order to make the pieces completely wet. Before immersing them in the water, the papers were weighed. The figure shows the papers being immersed in the water tank.

Fig.  Immersed paper

Fig.  Paper pulp

The papers were kept in the tank for 2 to 3 days, or until the papers degraded into a paste-like form. Then the paper was taken out of the water and taken to the mixer machine to make it into a paper pulp. The pulp generating process was tedious and time consuming; these procedures were followed for lab purposes only. For mass production, tow mixers are recommended to reduce the cost. Tow mixers have sharp blades and can be operated mechanically or electrically.
Fig. Mixing

PAPERCRETE MIX RATIO
Trial Mix
The exact mix proportion was not known, so trial proportions were used in this project. Weigh batching was carried out in this project, so the materials were measured in kilograms. According to the particular proportion, the materials were measured first and kept separately. This was done just before the mixing started.

Table: PAPERCRETE MIX RATIO
S.No.  Identification Mark   GGBS / Quarry dust (% of weight of cement)   Paper (% of weight of cement)   Fixit 101 / Fixit 302 Super Latex
1.     P1                    20%                                          20%                             50 ml
2.     P2                    30%                                          20%                             50 ml
3.     P3                    50%                                          20%                             50 ml

MIXING: After all the ingredients were ready, the mixing was done. In this project, mixing was done manually. The mixing processes of fibrous concrete bricks and padobe bricks are different, and the process followed is given below.
Gloves, shoes and masks were worn before the mixing.
Then a non-water-absorbing, smooth surface was prepared for mixing. Water was sprinkled over that surface, and the mixing place was selected near the casting place.
First the ingredients like quarry dust/GGBS were placed.
Then cement was placed over that ingredient.
These two were dry mixed thoroughly with a shovel until a uniform colour was formed.
Then the paper pulp, which was in a wet condition, was placed separately. The paper pulp should contain less water, so the excess water was squeezed out.
The already mixed cement and GGBS/quarry dust was placed over the paper pulp and mixed thoroughly to get a uniform mix.
No further water was added separately unless it was essential; the water in the pulp was utilized for mixing the papercrete.
After the mix, the required amount of papercrete was taken to the site and the remaining amount was kept free from evaporation.
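As a rough aid for scaling the trial mixes in the table above, the following minimal Python sketch (our own illustration, not part of the original procedure) converts the proportions into batch quantities; the interpretation that the percentages are taken by weight of cement and that the 50 ml Fixit dose is per batch is our reading of the table.

# Minimal batching sketch for the trial mixes listed above.
# Assumptions (ours): percentages are by weight of cement; 50 ml Fixit per batch.
TRIAL_MIXES = {          # identification mark -> (GGBS %, paper %)
    "P1": (20, 20),
    "P2": (30, 20),
    "P3": (50, 20),
}
def batch_quantities(mark, cement_kg=1.0):
    """Return the material quantities for one batch of the given trial mix."""
    ggbs_pct, paper_pct = TRIAL_MIXES[mark]
    return {
        "cement_kg": cement_kg,
        "ggbs_or_quarry_dust_kg": cement_kg * ggbs_pct / 100.0,
        "paper_pulp_kg_dry": cement_kg * paper_pct / 100.0,
        "fixit_super_latex_ml": 50.0,
    }
for mark in TRIAL_MIXES:
    print(mark, batch_quantities(mark, cement_kg=10.0))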

CASTING OF BRICKS
After mixing, the mix should be placed in the mould within 30 minutes. So two moulds were used at a time to make the process faster. The bricks were moulded manually by hand on a table. The following are the steps involved in moulding:
The mould was placed over a table.
A lump of mix was taken and placed in the mould.
The extra or surplus mix was removed either by a wooden strike, a metal strike or a frame with wire.
The cast papercrete bricks were dried for 14 days.
Fig. Manufacturing of Brick

VI. RESULT & DISCUSSION
After casting the bricks, they were analyzed for use as bricks. Various tests were carried out to check the properties of the bricks, and the results of the tests were compared with the existing standard results. The following tests were carried out to check the strength of the bricks.

WEIGHT
Table: Weight of Papercrete Bricks
S.No.  Identification Mark   % of GGBS   Dry Weight (kg)
1.     P1                    20          1.773
2.     P2                    30          1.842
3.     P3                    50          1.862
Fig. Weight of bricks (P1, P2, P3)

Ordinary conventional bricks weigh 3 to 3.5 kg, but the fibrous concrete and padobe bricks weigh 1 to 2 kg; the maximum weight is less than 2 kg. In the above proportions, the GGBS-based bricks weigh
only about 1/3rd of a conventional brick, and sand-based bricks weigh only about 2/3rd of a conventional brick. So these bricks are light weight, and they will also reduce the total cost of construction due to the reduction in dead load.

WATER ABSORPTION TEST: Dry the specimen in a ventilated oven at a temperature of 105°C to 115°C till it attains substantially constant mass. Cool the specimen to room temperature and obtain its weight (M1); a specimen too warm to touch shall not be used for this purpose. Immerse the completely dried specimen in clean water at a temperature of 27±2°C for 24 hours. Remove the specimen, wipe out any traces of water with a damp cloth, and weigh the specimen after it has been removed from the water (M2).

Table: Water Absorption Test of Papercrete Bricks
Trial Mix                                  20%      30%      50%
Water absorption result in % (24 hours)    40.11%   33.85%   23.74%
Fig. Water absorption test for Trial Mix (bar chart of 24-hour water absorption for G-20%, G-30% and G-50%)
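The table values follow from the usual water absorption relation in terms of the dry mass M1 and 24-hour wet mass M2 defined above (the formula itself is a standard one we add here for clarity; the sample masses in the example are hypothetical):

# Water absorption (%) from the dry mass M1 and the 24-hour wet mass M2.
def water_absorption_percent(m1_kg, m2_kg):
    return (m2_kg - m1_kg) / m1_kg * 100.0
# Hypothetical example: a 1.84 kg dry brick weighing 2.46 kg after 24 hours
# of immersion has absorbed about 33.7% of its dry mass.
print(round(water_absorption_percent(1.84, 2.46), 2))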

COMPRESSION TEST
Fig. Compression test
The test was carried out with a compression testing machine. This test was carried out on the 14th day from the date of casting the papercrete bricks. While testing the papercrete bricks, great care must be taken, because a papercrete brick never fails catastrophically; it just compresses like squeezed rubber. So the load was applied up to half compression. When a papercrete brick failed at the higher load, the structure did not fully collapse; only the outer faces cracked and peeled out. The papercrete bricks show elastic behaviour and less brittleness.
The following steps were followed for compression testing:
First, the irregularities in the surface were removed.
The brick was placed centrally on the bottom plate of the universal testing machine.
Then the upper plate of the universal testing machine was lowered down until the brick was held tightly without any movement.
Then the load was applied axially at a uniform rate.
This load was applied till half of the brick (i.e. up to half compression).
Three bricks from the same proportion were tested every time.

The compressive strength was calculated by the formula:
Compressive strength = load / surface area

Table: Compressive Strength of Papercrete Bricks
Trial Mix                                      20%   30%   50%
Best compressive strength in N/mm² (14 days)   5.9   7.5   8.7
Fig. Compression test for Trial Mix (bar chart of 14-day compressive strength for P1, P2 and P3)
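As a quick check of the formula, the failure load implied by the reported strengths can be back-calculated; the assumption that the load acts on the full 230 mm x 110 mm bed face of the mould-sized brick is ours, used only for illustration.

# Back-calculate the failure load implied by the reported strengths,
# assuming the load acts on the full 230 mm x 110 mm bed face (our assumption).
BED_FACE_MM2 = 230 * 110          # 25,300 mm^2
for mark, strength_n_per_mm2 in [("P1", 5.9), ("P2", 7.5), ("P3", 8.7)]:
    load_kn = strength_n_per_mm2 * BED_FACE_MM2 / 1000.0
    print(mark, round(load_kn, 1), "kN")   # P3 works out to about 220 kN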

HARDNESS TEST
In this test, a scratch was made on the brick surfaces. This test was carried out for all three proportions of brick. When the scratch was made with a finger nail on the bricks, only a very light impression was left on the fibrous concrete brick surface. So this test shows that fibrous concrete bricks are sufficiently hard.

PRESENCE OF SOLUBLE SALTS
Soluble salts, if present in bricks, will cause efflorescence on the surface of the bricks. This test was carried out to find out the presence of soluble salts in a brick. In this test the fibrous concrete bricks were immersed in water for 24 hours. Then the bricks were taken out and allowed to dry in shade. There was no grey or white deposit on the brick surface. This shows that the bricks are free from soluble salts.
Fig. Brick after testing

SOUNDNESS TEST
In this test two bricks from the same proportion were taken and struck against each other. The bricks were not broken and a clear ringing sound was produced. So the bricks are good.

STRUCTURE TEST
In this test, the bricks were broken and the structures of the bricks were examined to check whether they were free from any defects such as holes, lumps, etc. In this test a fibrous concrete brick was cut into equal parts. The fibrous concrete brick piece structure was homogeneous, compact and free from defects, and the brick pieces look like a cake piece.
Fig. Inner structure of fibrous concrete brick

NAILING
Fig. Nail in the brick
Fibrous concrete bricks are less hard when compared to conventional bricks, so this test was carried out to find out whether the bricks hold a nail or not. A nail was hammered into the brick and a screw was also screwed into the brick. In these two cases (Fig. 6.8), the fibrous concrete brick did not hold nails any better than drywall, but screws worked well and held a considerable weight. So screws are the anchors of choice for fibrous concrete bricks.

CUTTING AND GLUE
Fig. Brick pieces
On site, a lot of bricks are wasted during cutting alone; the labourers are not able to cut the bricks exactly as they need. But fibrous concrete bricks can be cut into exactly two pieces (Fig. 6.9.1) by using conventional saw blades, so we can get any shape and size of fibrous concrete brick. Many cut bricks are wasted nowadays, but two fibrous concrete brick pieces can be held together by putting a medium amount of glue on the bottom piece. This will not come apart (Fig. 6.9.2, joined brick pieces). This would seem to indicate that papercrete could be used in applications calling for quick assembly, by cutting the pieces to size in advance and letting the user simply glue them together.

PLUMBING AND ELECTRICAL

Fig. Hole in the brick

Installing plumbing lines requires cutting holes and channels in papercrete, and this was very easy in fibrous concrete bricks. Electrical runs were cut with a circular saw or chain saw. To make holes for outlets, horizontal and vertical slits were cut with a circular saw; then the unwanted pieces were removed with a screwdriver. Outlet boxes can be angle-screwed directly into the papercrete. Home fires start where the wiring enters the outlet boxes, so nonflammable mortar should be put behind the outlet boxes for safety. Once the electrical wiring and outlets are installed and tested, the channels for the electrical runs are filled with papercrete.
Fig. Channel in the brick

FIRE
A brick which is used for construction should not be flammable in an open flame, so this test was carried out for the bricks. This test was carried out only for the fibrous concrete bricks and not for the padobe bricks, because a padobe brick has already been heated in a kiln at high temperature and so it won't burn. The following are the steps involved in this test:
First, the brick was wiped with cloths and all foreign matter was removed.
Then the flammable sticks were fired. After that, the bricks were held on the flame for five minutes.
After five minutes the firing was stopped and the bricks were observed.
Fig. Fire test
From the above test, it was observed that the fibrous concrete bricks did not burn with an open flame. They smouldered like charcoal, but these bricks would be reduced to ashes after burning for several hours. If interior plaster and exterior stucco are provided on the fibrous concrete bricks, the bricks won't burn. The only weak point is inside the block, near electrical outlets, switches and other places where wires pass through walls, into boxes, etc. Properly wired places never cause fire. If we apply the plaster without any hole or leakage on the bricks, they won't burn or smoulder inside, because there will be a lack of oxygen for burning.

VII. CONCLUSIONS
From the above experimental studies we can conclude that:
Papercrete bricks are suitable for non-load bearing walls only.
The weight of this brick is 1/3rd to 2/5th less than that of a conventional clay brick.
These bricks are not suitable for water logging and external walls. They can be used in inner partition walls.
Due to the lower weight of these bricks, the total dead load of the building will be reduced.
Since these bricks are relatively light weight and more flexible, they are a potentially ideal material for earthquake prone areas.
Papercrete brick does not expand or contract, so sheets of glass or glass block can be embedded in and trimmed with papercrete.

The papercrete bricks are good sound absorbers, since paper is used in these bricks; so these bricks can be used in auditoriums.
Since waste materials are used, it will reduce landfills and pollution.
Using papercrete bricks in a building, the total cost will be reduced by 20% to 50%.


Nano materials filled Polymers for reducing the thermal Peak temperature in a vehicle
Sidharth Radhakrishnan 1, Sudhirnaath 2
1,2 UG Student, Dept. of Mechanical Engineering, RMK Engineering College, Chennai.

I. INTRODUCTION
There is an increasing demand for fuel nowadays and it is soon expected that there will be an acute shortage of the fuel that we are using at present. Hence there is a need to optimize fuel usage.
Almost 10% of the fuel in a vehicle is used for maintaining the temperature within it for the comfort of the passengers, and the main factor that influences this is the air conditioner in the vehicle. In this work, a carbon nano tube is blasted with graphite vapours, forming a chicken-wire structure. It is then condensed with a polymer, which brings out the required behaviour of the material, i.e. it can act either as a sun-proof sheet or as a normal transparent sheet (allowing sunlight to pass through) according to the requirement. This sheet can be stuck to the window pane. A voltage of 5 V is then applied to change its behaviour from a sun-proof sheet to a normal sheet and vice versa. This eliminates the peak thermal temperature attained in the vehicle when parked, and hence the work load of the AC is sharply reduced.
II. FABRICATION OF CNT
Carbon nano tubes are tubular fibrous structures composed entirely of graphitic carbon planes. The carbon-carbon double bonds form a hexagon shape within the lamellar graphite planes that resembles common chicken wire. The orientation of the graphite planes parallel to the fiber axis, along with the seamless nature of the tube structure, enables their extreme mechanical properties. This can be done by ball milling or by the normal chemical vapor deposition (CVD) method. The tube is then condensed with a polymer such as an SMP (shape memory polymer) to get the required property. Large quantities of SWNTs can be synthesized by catalytic decomposition of methane over well-dispersed metal particles supported on MgO at 1000 °C. The SWNTs thus produced can be separated easily from the support by a simple acidic treatment to obtain a product with high yields (70-80%) of SWNTs. Because the typical synthesis time is 10 min, 1 g of SWNTs can be synthesized per day by this method. The SWNTs are characterized by high-resolution transmission electron microscopy and by Raman spectroscopy, showing the quality and the quantity of the products. The catalytic decomposition method was suitable for scaling up and for achieving a "controlled production" of SWNT. By this we imply the ability to control the selectivity towards SWNT by changing catalyst parameters and operating conditions, combined with the ability to obtain a reliable quantitative measurement of the amount of SWNT produced. The CVD processes offer the best approach to the manufacturing of larger SWNT quantities, with perhaps the most scalable being the CoMoCAT process, which uses a fluidized bed reactor (similar to those used in petroleum refining, albeit on a much smaller scale).

Fig. 1 SCALABLE PROCESS
An illustration of a fluidized bed reactor which is able to scale up the generation of SWNTs using the CoMoCAT process. In this CoMoCAT method, SWNT are grown by CO disproportionation (decomposition into C and CO2) at 700-950 °C in a flow of pure CO at a total pressure that typically ranges from 1 to 10 atm. A process was developed that is able to grow significant amounts of SWNT in less than one hour, keeping the selectivity towards SWNT better than 90 percent. We discovered a synergistic effect between Co and Mo that is
critical for the performance of the catalyst. The catalyst is effective when both metals are simultaneously present on a silica support with a low Co:Mo ratio; when they are separated, they are unselective. Fig. 2 shows the selective synthesis of a SWNT using the CoMoCAT method.
Fig 2 SWNT
Fig. 3 Diameter Distribution from Fluorescence Analysis
The histogram for the SG 65 material shows the very narrow distribution of SWNT diameters possible with the CoMoCAT process: 90% of the tubes have a diameter between 0.72 and 0.92 nm, and 52% of the tubes are of (6,5) chirality.
Two of the unique characteristics of the CoMoCAT process are that it is readily scalable and that its intrinsic high selectivity is preserved as the reactor size is scaled up. These characteristics give the SWNT product of the CoMoCAT process the dual benefit of low cost and high product quality. This supported-catalyst approach also offers the unique ability to provide a substantial degree of chirality control during synthesis.
SMPs can retain two or sometimes three shapes, and the transition between those is induced by temperature. In addition to temperature change, the shape change of SMPs can also be triggered by an electric or magnetic field, light or solution. As with polymers in general, SMPs cover a wide property range from stable to biodegradable, from soft to hard, and from elastic to rigid, depending on the structural units that constitute the SMP. SMPs include thermoplastic and thermoset (covalently cross-linked) polymeric materials. SMPs are known to be able to store up to three different shapes in memory and have demonstrated recoverable strains of above 800%.
2.1 Electro-active SMPs
This type of SMP is used in this project. The use of electricity to activate the shape-memory effect of polymers is desirable for applications where it would not be possible to use heat, and is another active area of research. Some current efforts use conducting SMP composites with carbon nanotubes, short carbon fibers (SCFs), carbon black or metallic Ni powder. These conducting SMPs are produced by chemically surface-modifying multi-walled carbon nanotubes (MWNTs) in a mixed solvent of nitric acid and sulfuric acid, with the purpose of improving the interfacial bonding between the polymers and the conductive fillers. The shape-memory effect in these types of SMPs has been shown to depend on the filler content and the degree of surface modification of the MWNTs, with the surface-modified versions exhibiting good energy conversion efficiency and improved mechanical properties. Another technique being investigated involves the use of surface-modified super-paramagnetic nanoparticles; when introduced into the polymer matrix, remote actuation of shape transitions is possible.
2.1.1 Synthesis of shape memory polymers
Preparation of polylactide-based urethane:
(i) Materials required: l-lactide, 1,4-butanediol (BDO), stannous octoate (Sn(Oct)2), hexamethylene diisocyanate (HDI), toluene (dried over Na wire and distilled before use) and ethyl acetate (dried over CaH2 before use).
(ii) Preparation of poly(l-lactide) diol (HO-PLA-OH): l-lactide was recrystallized in ethyl acetate three times. It was then added to a glass container which had been flame-dried and equipped with a magnetic stirring bar. A toluene solution of 1,4-butanediol (BDO) and
Sn(Oct)2 (0.3% of the BDO, mol/mol) was then transferred. An equal amount of toluene was then injected into the container. The reaction vessel was immersed in a thermostatic oil bath maintained at 125 °C for 24 h. The reaction product was precipitated into ethanol, filtered and dried at 40 °C in vacuum for 48 h.
(iii) Preparation of poly(l-lactide) polyurethane (PLAU): A certain amount of the above-prepared poly(l-lactide) diol (PLA diol) was dissolved in double the volume of toluene and heated at 75 °C for 20 min. Sn(Oct)2 (1% of the PLA diol, mol/mol) in dried toluene and a given amount of hexamethylene diisocyanate (HDI) were added to the solution. After stirring for 10 min at 75 °C, 1,4-butanediol (BDO), the mole number of which was equal to the molar difference between HDI and the PLA diol, was added and the reaction mixture was stirred for another 6 h. The polymer was isolated by dissolving the reaction mixture in chloroform followed by precipitation in ethanol.
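The BDO dosing rule in step (iii) (mole number equal to the molar difference between HDI and the PLA diol) can be written out as a small Python sketch; the sample masses and the PLA diol molar mass used below are hypothetical assumptions for illustration, not values reported here.

# n_BDO = n_HDI - n_PLA_diol, as stated in the preparation above.
# Input masses and the PLA diol molar mass are hypothetical.
M_HDI = 168.2        # g/mol, hexamethylene diisocyanate
M_BDO = 90.12        # g/mol, 1,4-butanediol
M_PLA_DIOL = 2000.0  # g/mol, assumed number-average molar mass of the PLA diol
def bdo_mass_needed(pla_diol_g, hdi_g):
    n_pla = pla_diol_g / M_PLA_DIOL
    n_hdi = hdi_g / M_HDI
    return (n_hdi - n_pla) * M_BDO   # grams of BDO to add
print(round(bdo_mass_needed(pla_diol_g=20.0, hdi_g=3.36), 2), "g of BDO")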

Fig 4 Deformation under loading and unloading
Fig. 5 Carbon nano tube chicken-wired structure

III. ADVANTAGES
According to published statistics, there is a consumption of about 40 billion liters of gasoline per year for the usage of air conditioners alone, assuming 80% of vehicles use AC. Even an increase of 0.4 km/liter will save around $6 billion annually. The results of a study show that the fuel consumption of test vehicles with air conditioning systems in operation increases with rising ambient air temperature and humidity, reaching a value of about 18 percent on a typical Swiss summer day with an air temperature of 27 degrees and relative humidity of about 60%.
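To make the 18 percent figure above concrete, a small illustrative calculation (ours; the trip length and baseline fuel economy are hypothetical assumptions) shows the extra fuel an AC-loaded trip would burn:

# Extra fuel burned when AC raises consumption by 18%, as in the study quoted above.
# Trip length and baseline economy are hypothetical assumptions.
TRIP_KM = 100.0
BASELINE_KM_PER_L = 10.0
AC_PENALTY = 0.18                      # +18% consumption on a hot, humid day
base_litres = TRIP_KM / BASELINE_KM_PER_L          # 10.0 L without AC
with_ac_litres = base_litres * (1.0 + AC_PENALTY)  # 11.8 L with AC
print(round(with_ac_litres - base_litres, 2), "extra litres per trip")  # 1.8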

IV. CONCLUSION
If the use of CNT sheets in the window pane is followed, then around 20 billion liters of fuel can be saved in a year approximately, and the efficiency of the vehicle will increase by 5-7% from the normal value.



RSVP Protocol Used in Real Time Application Networks
Dr. S. Ravi 1, A. Ramasubba Reddy 2, Dr. V. Jeyalakshmi 3
2 PG Student, M.Tech. VLSI and Embedded System
1,3 Professor
Dept. of Electronics and Communication Engineering
Dr. MGR Educational and Research Institute, University
Chennai, Tamil Nadu, India.
Abstract: RSVP is a receiver-oriented reservation protocol and an Internet standard approved by the Internet Engineering Task Force (IETF). The goal of the Resource Reservation Protocol (RSVP) is to establish Quality of Service information within routers and host computers of the Internet. High speed networks support the use of dedicated resources through the Resource Reservation Protocol (RSVP). With RSVP, the network resources are reserved and released, thereby providing a mechanism to achieve a good quality of service (QoS). These requests to reserve a path are transmitted in the network between the data senders and receivers. This paper provides an analysis of the RSVP protocol used in peer-to-peer networks, where each system works simultaneously as client and server. The experimentation, for audio and video conferencing applications in various scenarios, is implemented in OPNET software. The RSVP protocol reduces the packet end-to-end delay.
Keywords: RSVP, QoS, OPNET.
I. INTRODUCTION
Resource Reservation Protocol (RSVP) is a receiver-oriented resource reservation setup protocol designed for the Integrated Services Internet. RSVP has a number of attributes that made it be adopted as an Internet standard approved by the Internet Engineering Task Force (IETF) [1]. These attributes include scalability, robustness, flexibility, dynamic group membership, stability for multicast sessions, support for heterogeneous receivers, and a variety of reservation styles. However, RSVP, designed for fixed networks, has been facing a great challenge due to the participation of mobile hosts.
An internetwork [2] is a collection of individual networks, connected by intermediate networking devices, that functions as a single large network. Internetworking refers to the industry, products, and procedures that meet the challenge of creating and administering internetworks. Fig. 1 illustrates some different kinds of network technologies that can be interconnected by routers and other networking devices to create an internetwork. Implementing a functional internetwork is no simple task.
Figure 1: Internetwork using different Network Technologies
In this paper, we perform a comparative analysis of the working of the RSVP protocol in conjunction with multimedia applications including audio and video conferencing. We use a peer-to-peer based network in which each system acts as a client and a server. The reservation messages are generated by the hosts and, depending upon the flow of data, some of the requests are accepted. Consequent to the reservation of network bandwidth, the network performance of the considered application improves. For the analysis of the RSVP protocol, we use the metrics of the RSVP control traffic generated and the packet end-to-end delay. Our simulation has been performed using OPNET IT Guru Academic Edition v9.1 (OPNET, 2011).
Using RSVP, the request to reserve the resources is generated by a host in the form of a message and sent to another receiver host that in turn responds with another message. When
a router receives the message, it may decide to reserve the resources and communicate with other routers in order to handle the packets effectively. The reservation of resources such as communication bandwidth for a data flow ensures efficient delivery of data for that particular flow, thereby improving the performance of the running application.
II. RESOURCE RESERVATION PROTOCOL OVERVIEW
The Resource Reservation Protocol (RSVP) is a
Transport Layer protocol designed to reserve resources across a
network for an integrated services Internet. RSVP operates over
an IPv4 or IPv6 Internet Layer and provides receiver initiated
setup of resource reservations for multicast or unicast data flows
with scaling and robustness. It does not transport application
data but is similar to a control protocol, like Internet Control
Message Protocol (ICMP) or Internet Group Management
Protocol (IGMP). RSVP is described in RFC 2205. RSVP can
be used by either hosts or routers to request or deliver specific
levels of quality of service (QoS) for application data streams or
flows. RSVP defines how applications place reservations and
how they can relinquish the reserved resources once the need for
them has ended.
RSVP reservation requests are defined in terms of a
filter specification (filter spec) and a flow specification (flow
spec) [3]. A filter spec is used to identify the data flow that is to
receive the QoS specified in a flow specification. A flow spec
defines the desired QoS in terms of a service class, which
comprises a Reservation Specification (RSpec), and a Traffic
Specification (TSpec). A RSpec defines the reservation
characteristics (i.e. the desired QoS) of the flow, for example,
the service rate the application requests. A TSpec defines the
traffic characteristics of the flow, for example, the peak data
rate. RSVP uses several messages in order to create, maintain
and release state information for a session between one or more
senders and one or more receivers as shown in Figure 2.
Path Setup: In RSVP, reservation requests travel from receivers
to the senders. Thus they flow in the opposite direction to the
user data flow for which such reservations are being requested.
Path messages are used by the sender to set up a route to be
followed by the reservation requests.

Path Error: A node that detects an error in a Path message,


generates and sends a PathErr message upstream towards the
sender that created the error.
Path Release: RSVP tear down messages are intended to speed
up the removal of path and reservation state information from
the nodes.
Reservation Setup: Resv messages carry reservation requests
(e.g. for bandwidth and buffers) used to set up reservation state
information in the nodes of the route established by the path setup message. They travel upstream from the receiver(s) to the
sender [4].

Figure 2: RSVP messages.


Reservation Refresh: A reservation refresh is the result of either
a reservation state refresh timeout or a receiver request to
modify the reservation. Like path states, reservation states need
to be refreshed.
Reservation Release: ResvTear messages travel from the receiver(s) to the sender and remove any reservation state information associated with the receiver's data flow.
Reservation Error: If a node detects an error in a Resv message,
it sends a ResvErr message downstream to the receiver that
generated the failed Resv message. Processing ResvErr
messages does not result in the removal of any reservation state.
Reservation Confirmation: Optionally, a receiver may ask for
confirmation of its reservation. A ResvConf message is used to
notify the receiver that the reservation request was successful. In
the simplest case, a ResvConf message is generated by the
sender.
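The message exchange described above can be illustrated with a small, simplified Python sketch (our own illustration, not OPNET code and not a full RFC 2205 implementation): a Path message travels downstream installing path state, and a Resv message retraces that path upstream installing reservation state. The router names and the flowspec value are placeholders.

# Simplified illustration of RSVP Path/Resv state installation along a chain of routers.
class Router:
    def __init__(self, name):
        self.name = name
        self.path_state = {}   # session -> previous hop
        self.resv_state = {}   # session -> reserved bytes/sec
def send_path(routers, session, sender):
    """Path message travels downstream, recording the previous hop at each node."""
    prev_hop = sender
    for r in routers:
        r.path_state[session] = prev_hop
        prev_hop = r.name
def send_resv(routers, session, flowspec_bytes_per_sec):
    """Resv message travels upstream along the recorded path, reserving resources."""
    for r in reversed(routers):
        r.resv_state[session] = flowspec_bytes_per_sec
    # In real RSVP a ResvConf may then be returned to the receiver.
chain = [Router("west_router"), Router("core"), Router("east_router")]
send_path(chain, session="voice_call_1", sender="VOIP_caller")
send_resv(chain, session="voice_call_1", flowspec_bytes_per_sec=50_000)
for r in chain:
    print(r.name, r.path_state, r.resv_state)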
Design goals of RSVP are [5]
Accommodate heterogeneous receivers.
Adapt to changing multicast group membership.
Allow receivers to switch channels.

Adapt to changes in the underlying multicast and unicast routes.
Exploit the different resource needs of different applications in order to use network resources efficiently.
Make the design modular to accommodate heterogeneous underlying technologies.
Control protocol overhead so that it doesn't grow linearly (or worse) with the number of participants.

Types of Real Time Applications
Real-time communication, which generally means audio and/or video, may be divided into playback applications and interactive applications. For interactive applications, the end-to-end delay is significant; e.g. for Internet phone it should not exceed 0.3 s [6][7]. For playback applications, where the communication is only in one direction, delay as such is not critical, but jitter may be. [8] classifies real-time applications into rigid and adaptive applications. Rigid applications have a fixed playback point. Adaptive applications move the playback point so that the signal is replayed as soon as possible while the data loss rate is acceptable. Thus, adaptive playback applications work well on moderately loaded datagram networks. The bandwidth requirement may not be fixed, and some "rate-adaptive" playback applications may change their coding scheme according to the network service available.
Quality of Service means providing consistent, predictable data delivery service during periods of congestion. Some of the characteristics that qualify a Quality of Service are:

Minimizing delivery delay.


Minimizing delay variations.
Providing consistent data throughput capacity.
III. PRESENT WORK
The objective of this experimentation is to evaluate the Resource ReSerVation Protocol (RSVP) as a part of the Integrated Services approach to providing Quality of Service (QoS) to individual applications or flows.
Two approaches have been developed to provide a range of QoS: Integrated Services and Differentiated Services. RSVP follows the Integrated Services approach, where QoS is provided to individual applications or flows. The Differentiated Services approach provides QoS to large classes of data or aggregated traffic.
Before applying the RSVP protocol, we first have to configure the queuing network of that application. Queuing schemes [9] provide predictable network service by providing dedicated bandwidth, controlled jitter and latency, and improved packet loss characteristics. Each of the following schemes requires customized configuration of the output interface queues. The queuing schemes are:
First In First Out (FIFO)
Priority Queuing (PQ)
Custom Queuing (CQ)
Weighted Fair Queuing (WFQ)
In this application we have used only Weighted Fair Queuing (WFQ). The queuing model diagram of RSVP is shown in Fig. 3.
In order to evaluate the performance of the RSVP protocol, we used two different logical scenarios in the OPNET IT Guru Academic Edition software. Both scenarios contain hosts (workstations) together with routers using the Open Shortest Path First (OSPF) routing protocol (IETF, 1998-b). The two applications considered for experimentation are audio and video conferencing, with a single application running at a time in a physical scenario. Each physical scenario is further duplicated to represent the scenario with and without RSVP-based communication.
Router1 and Router2 are the nodes which represent the two branches of an organization. In this scenario, users of these two branches communicate with each other. Those users are provided with VOIP, FTP and video applications. Router1 connects three users and Router2 connects two users along with one server. This server is used to save data and can be used by both Router1 users and Router2 users for storing data. Data travelling along the network is also stored in this FTP server temporarily until the data reaches the destination, which is helpful when there is data loss during transmission, since the nodes can then retrieve it. The server plays a main role in the configuration process, providing the applications and maintaining the quality of the network.
The following describes the network layer of scenario 2. In this scenario we add another two hosts or workstations: the VOIP_RSVP server caller and the VOIP_RSVP server called.

The VOIP_RSVP server caller is connected to the west router and, similarly, the VOIP_RSVP server called is connected to the east router.
The voice application uses G.711 transmission between peers, whereas the video conferencing application transmits 10 frames per second, with each frame containing 128*120 pixels. We use the shared explicit mode of reservation style, which allows multiple senders to share the same reservation. The flow specification is set to 50,000 bytes/sec and the buffer size to 10,000 bytes, whereas 75% is allowed as the reservable bandwidth at each router and host. As shown in Fig. 3, the (logical) scenario 1 contains two hosts, both of which are workstations acting as peers since they transmit and receive data simultaneously. The hosts are connected using a core network of routers. These routers are of type ethernet4_slip8_gtwy_adv and are interconnected following the mesh topology. As shown in Fig. 4, the (logical) scenario 2 contains additional hosts, all of which are workstations acting as peers, in contrast to scenario 1.

Figure 3: Queuing model network, scenario 1
Figure 4: RSVP model network, scenario 2
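As a sanity check on the settings above (our own arithmetic; the assumption that the G.711 payload is carried at its nominal codec rate, ignoring packet headers and the video payload, is ours), the voice stream fits comfortably inside the configured flow specification:

# G.711 voice versus the 50,000 bytes/sec flow specification used above.
# Header overhead and video payload size are not modelled here.
G711_BITS_PER_SEC = 64_000            # nominal G.711 codec rate
FLOWSPEC_BYTES_PER_SEC = 50_000
voice_bytes_per_sec = G711_BITS_PER_SEC / 8          # 8,000 bytes/sec
headroom = FLOWSPEC_BYTES_PER_SEC - voice_bytes_per_sec
print(voice_bytes_per_sec, "bytes/sec of voice,", headroom, "bytes/sec headroom")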

IV. SIMULATION AND RESULTS
The following are the graphs of traffic received and sent in both scenario 1 and scenario 2. These two traffic values must be the same for a network to be efficient. Fig. 5 shows the queuing results: video conferencing traffic received (bytes/sec), voice packet delay variation, and voice packet end-to-end delay (sec). Packet delay variation is the variance among end-to-end delays for voice packets received by a node. Packet end-to-end delay for a voice packet is measured from the time it is created to the time it is received. Figs. 6 and 7 show the voice packet end-to-end delay of RSVP and the voice packet delay variation.
Figure 5: Queuing model of RSVP, i.e. voice packet delay and voice packet end-to-end delay.
Figure 6: Time average of packet end-to-end delay of RSVP at the voice calling party.
Figure 7: Voice packet delay variation.

V. CONCLUSION
This paper presents a performance analysis of the RSVP protocol. We simulate two logical scenarios while incorporating the voice and video applications. The scenarios differ in the number of hosts among which the communication takes place. We use the peer-to-peer model for network communication. The RSVP protocol is evaluated in terms of the metrics of the control traffic sent and the packet end-to-end delay.
For both the voice application and the video application, a large amount of RSVP control traffic is sent only if the amount of data being transmitted conforms to the flow specification given for RSVP. For scenarios with a small number of hosts, a large amount of data meets the requirement, thereby generating a large amount of RSVP control traffic. RSVP therefore reserves the resources and allows dedicated communication. Consequently, the communication performance improves as the packet end-to-end delay decreases. In contrast, for scenarios with a large amount of data, the RSVP protocol is unable to perform well and the delay increases for the voice application.

REFERENCES
[1] M. A. Khan, G. A. Mallah and A. Karim, "Analysis of Resource Reservation Protocol (RSVP) for P2P Networks".
[2] Vikas Gupta and Baldev Raj, "Optimization of Real-Time Application Network Using RSVP", ISSN: 2231-6612, Oct. 2013.
[3] R. Braden et al., "Resource ReSerVation Protocol (RSVP) -- Version 1: Functional Specification", RFC 2205, IETF, September 1997.
[4] Maria E. Villapol and Jonathan Billington, "A Coloured Petri Net Approach to Formalising and Analysing the Resource Reservation Protocol".
[5] Lixia Zhang, Stephen Deering, Deborah Estrin, Scott Shenker and Daniel Zappala, "RSVP: A New Resource Reservation Protocol", IEEE, September 1993.
[6] Jan Lucenius, Research Scientist, VTT Information Technology, "The Application of RSVP".
[7] Ursula Schwantag, "An Analysis of the Applicability of RSVP", Diploma Thesis, University of Oregon and University of Karlsruhe, June 1997.
[8] R. Braden, D. Clark and S. Shenker, "Integrated Services in the Internet Architecture: An Overview", RFC 1633 (Informational), Internet Engineering Task Force, June 1994.
[9] Salil Bhalla, Kulwinder Singh Monga and Rahul Malhotra, "Optimization of Computer Networks Using QoS".


Database Intrusion Detection Using Role Based Access Control System
Mrs. Antony Vigil 1, Mrinalini Shridhar 2, R Oviya 3
1,2,3 Assistant Professor, Student, SRM University

Abstract- In this paper, we propose a different approach to database intrusion detection (IDS). The Database Management System (DBMS) has become a key component of the information system (IS), storing valuable information of the system, and we are urged to protect it to the fullest without losing any bit of information. Intrusion detection, which gathers and analyses information from the system, is one of the methods which protects the database most fully, with all sorts of rules. In this paper, we move to the Role Based Access Control (RBAC) system, which controls the administered databases for finding out sensitive attributes of the system dynamically. Role Based Access Control is a method to restrict system access by authorized and unauthorized people directly; the access is based on the roles of the individual users within the organization. Important roles, like administrators, access sensitive attributes, and if their audit logs are mined, then some useful information regarding the attributes can be obtained. This will help to decide the sensitivity of the attributes. Since the existing models of database intrusion detection have proposed a lot of rules, it is time to change the system to protect it more evidently with fewer rules and regulations, which would be useful for detecting all sorts of transactions.
Keywords: Database intrusion detection, Role based access control system, Administered database, Audit logs, Sensitive attributes.
I. INTRODUCTION
In past years, Database Management Systems (DBMS) have become an indispensable part of the life of the organizations and the users using them. Hence it has been the primary priority to safeguard the DBMS, no matter how easy or difficult it is. The motive of the research was first based on these ideas of protecting the DBMS and preventing the leakage of data. In the past years, authentication of user privileges, auditing, encryption and many other methods have been used to protect the data and the system. Amending all the above methods, newer methods have come up to protect the same for daily operations and decision making in organizations. A database is a group or collection of data which may contain valuable and sensitive information about the institution and organization, and which is accessed by the people of the organization internally and externally every day.
Any leak of information in these systems will devastate the whole database system and the data, leading to a great loss. Hence the data need to be protected and secured. The recent models for protection of DBMS were the dynamic threshold method and the data mining method of intrusion detection. Intrusion detection is a process which analyses unauthorized access and malicious behaviour and finds intrusion behaviours and attempts by detecting the state and activity of an operating system, to provide an effective means for intrusion defence. In this paper, we will see how RBAC helps us to protect the database along with intrusion detection using limited rules.
RBAC (Role Based Access Control), also known as role based security, is a method to restrict the access of one user, or of many users, depending on the roles of the users. The roles are prioritized; for example, administrators access sensitive attributes, and the DBMS and its attributes can be used accordingly. RBAC is a rich technology for authenticating privileges and controlling access to information and data. It makes the administration of security much easier and simpler, though the process may be tedious and a little vast. Adding newer applications inside the secured system is much easier with the different access control mechanisms, and extracting data from the protected information system is easy only for an authorized person. Regarding the sensitivity of the attributes, we will have to refine the audit log to extract the data attributes.
In the past few years, the computer crime and security surveys conducted by the Computer Security Institute (CSI) have seen a lot of drastic improvement in both aspects, but there need to be a lot of adjustments in the rules given by each model. We are in the scope of improving the database system and protecting it. In 2005, about 45% of the inquired entities reported increased unauthorized access to information due to poor system management. In 2007, financial application fraud was the leading cause, and it was found to be

double compared to the previous year; also, 59% of the respondents outlined insider abuse as the security problem. In the 2013 survey the numbers had dropped and security was much better than in the past few years. The statistics are: the percentage of threats due to insiders dropped to 20%, and the financial fraud which was a cause before was eliminated in the following years. Now only 10-20% are reported as unauthorized users. This shows that database security has been improving day by day, and research is conducted every time a model is proposed, before implementing it in action.

II. ROLE BASED ACCESS CONTROL MODEL
FIG 1. MODEL OF RBAC (users, task/job, access, sessions and constraints)
The Role based access control model proposes three relationships between the given attributes. They are:
a) USER-JOB: which defines the relationship between the user and the task defined in that system.
b) JOB-ACCESS: which defines the relationship between the job or task of the person and the access to that particular work.
c) JOB-JOB: which defines the job-to-job relationship between the users.
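These relationships can be made concrete with a small Python sketch (our own illustration; the role and permission names are hypothetical, not taken from this paper): users are assigned jobs/roles, jobs are granted access rights, and an access check walks the user -> job -> access chain.

# Minimal RBAC sketch: user -> job (role) -> access (permission).
USER_JOB = {
    "alice": {"accounts_administrator"},
    "bob": {"staff"},
}
JOB_ACCESS = {
    "accounts_administrator": {"read_salary", "update_salary", "read_staff"},
    "staff": {"read_staff"},
}
def has_access(user, permission):
    """USER-JOB and JOB-ACCESS relationships combined into one check."""
    return any(permission in JOB_ACCESS.get(job, set())
               for job in USER_JOB.get(user, set()))
print(has_access("alice", "update_salary"))   # True
print(has_access("bob", "update_salary"))     # False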

Now, defining each attribute of the model: the users of an organization represent an organizer or an agent of that field. The task or job represents the responsibility or the functioning of the user within the organization. The access represents the approval or permission for that particular task or event of the organization. The sessions box represents the overall relationship between the user and the task and the contribution both have in the RBAC model; it does not point towards the access field, as the access field is directed only by the task the user performs. Constraints represent the limitations or boundary of each entity of the data; that is, the user, the task or job and the access, as well as the relationships between them, are restricted. The sessions represent the divide and rule mechanism of the RBAC model. Fig. 2 and Fig. 1 are interlinked processes, and each step of the data flow diagram implements the following attributes of the user.
FIG 2. DATA FLOW DIAGRAM (user -> authorization access -> data, applying the principle of minimal authority, the divide and rule method and data abstraction to keep the data secured)

III. RELATED WORKS
RBAC supports three well known principles, and hence we work out our plan in three steps:
1. Principle of minimal authority
2. Divide and rule method of duties
3. Data abstraction

The sensitivity of an attribute is based on the database application. We have to divide the attributes into three divisions so as to protect the attributes according to the sensitivity, or position order, they hold. Sensitivity refers to the order in which a data item has to be protected. If the data are least sensitive, we can give them minimal protection; if the data are highly sensitive in the attribute set, we need to protect them to the fullest. In some schemas we are not able to tell whether the data is sensitive or not. To give a clear picture of the attributes, we have taken the teacher's salary database schema (Table 1).

TABLE 1. TEACHER'S SALARY DATABASE SCHEMA
Table Name    Attribute Name
STAFF         Name [i], Staff-id [j], Address [d], Phone no [a]
ACCOUNT       Account-id [b], Staff-id [c], Status [g], Month [e], Year [f], Amount [h]
SALARY TYPE   Salary-type [k], LOP and Deductions [l]

TABLE 2. TYPES OF SENSITIVE ATTRIBUTES
Sensitivity          Attributes             Weight
Light sensitivity    a, b, c, d, i, j, k    I
Medium sensitivity   e, f                   II
High sensitivity     g, h, l                III

The sensitivity of the attributes can also be given by the entity-relationship (E-R) model. But in relation to the RBAC model, an administrator is required to control the database for its sensitivity. The E-R model is a perception of the real world; it is the diagrammatic representation of how the attributes are considered, and the * marks represent whether the attributes are sensitive or not. The model represents a collection of entities or data and their contribution to the system. To maintain the account and the staff system we need a main administrator; hence the RBAC system is proposed in this E-R model. Thus the E-R model is modified as:
STAFF + ACCOUNT + SALARY TYPE = ACCOUNTS ADMINISTRATOR
FIG 3. RBAC MODEL USING E-R MODEL (STAFF entity with Name*, Address*, Staff id*; ACCOUNT entity with Acc id*, Status**, Amt deposit***; related by GETS and controlled by the Accounts Administrator)
a) Principle of minimal authority: Also known as the principle of least privilege, this means that access to the information system or its resources is granted to every user or module only for its own legitimate purpose. In simple words, an authorized user can access the information system or resource only for their own privileged purpose. Privilege/authority refers to the right a user has, or the granting of access to the user to use a particular system. For example, a user defined in a domain can access only that domain and its attributes. A person accessing a bank account can go through only their own bank procedures and account; the system does not grant permission to access other accounts. Similarly, an admin user accessing a computer can go into only the admin user account; all other password-protected accounts are blocked for the admin user.
b) Divide and rule method of duties: it can also be termed the separation of duties among the users. It helps the task to be
completed faster. A mutual exclusive role is achieved to
complete a particular set of task. RBAC brings this advantage of
time management. the database is secured as well as the data are
given to the authorized people easily with security.
c)Data abstraction- Data abstraction is a simple concept of
accessing the data whenever we want to but with the permission
of authorized people. It has different modes to it.
i)Public Mode- The access to the data by any user of
the domain, but limited to a particular organization. This
requires a common security where only the users of the
organization can access it.
ii)Private Mode- The access of the data is limited only
to the key user of that particular search of interest. That is only
the accountants can handle the accounts of the organization and
hence access to that particular class is given only to that
particular user. A manager accessing the accounts of the
company will be denied from accessing it.
iii)Protected Mode- The user in that particular domain
and the senior user that is one or maximum two users who has to
write to access that domain can access it with ease. Example
only the accountants and the chief of the company can check the
accounts of hat particular institution and make changes in that.
The others have no right to access these without their
permission. For the others the domain remains in blocked state.
IV. IMPLEMENTATION
RBAC is a complex system that involves a strategic process
prepared by an expertise. RBAC is best implemented by
applying a structured and detailed procedure. The use of divide
and rule method is very essential to implement these process.
Each task or step is broken down into sub tasks for the work and
implementation to be easier and more efficient. The steps
involved are:
DEVELOP PLANS

COMPILE

DEFINE ROLES

FIG 4.PROCESS
a) Develop plans: To make the best use of RBAC, we develop a plan for how the RBAC system will work in an organization or for the security of a project's data. For example, to extract the maximum security from RBAC, a development plan covering the project, deadline, budget, etc. should be prepared.
b) Compile: This step involves collecting and putting together all data, files, projects, etc. so as to identify the level of security needed. The sensitivity of the attributes should be determined so that the system can be segregated and compiled to provide the highest security possible.
c) Define roles: As discussed, operation of the database system is best carried out only by the key user or the important user of the organization or system. Hence a particular role is assigned to that person so that the software can access the data easily and implement any proper change within the system.
d) Analyze: This is the main step for any kind of system in formulating RBAC. It brings about the betterment of the system so that the next stage of implementation is easier to execute. Any changes needed in the system should be made at this stage so that no further disputes arise later.
e) Integrate: Before any problem such as a system failure occurs, each application's security system is transferred to a centralized security system so as to provide secured company-wide information access. This is the last stage at which changes are made to the process.
f) Implement: Finally, whatever has been executed through the preceding steps is put into operation without any errors or problems. These are the best ways to protect data from an external user.
Thus, refining the system and protecting it according to the steps above gives a better result. The principle of divide and rule is always followed in RBAC and is the key principle of the system.
V. PROCESS USING A FORMULA

Each datum is part of a streamlined flow of information guarded by security. The following syntax, together with the formula, helps

in securing the data. This formula was earlier implemented in web-based technology; it is now applied to the database to ensure its safety.
P1 => | (staff)P | name(P).X | staff id(P).X | phone(P).X
P2 => | (account)P | account id(P).X | amount deposited(P).X | status(P).Y
P  => | P1 || P2

Syntax:
P => 0          no process
| P | P         composition of processes
| O(P).X        output value of the process; X is the outcome
| I(P).Y        input value, i.e. input obtained from the user of the process; Y is the input variable
| !!I(P).Y      repetition of the input variables

To indicate the sensitivity of the attributes, capitalization is used to mark the most sensitive attributes in the given set of data or in the formula generated at the end of the process. The least sensitive attributes are written in lower-case letters, and the moderately sensitive ones may be written in italics. The terms inside brackets represent secured data: they mark the start of the process and indicate that the data must be fully protected. Hence the sensitivity of an attribute cannot be determined in the middle of the process. The same process, with the sensitivity denoted, is represented as:

P => | {(staff)P | name(P).X | staff id(P).X | phone(P).X} || {(account)P | account id(P).X | AMOUNT DEPOSITED(P).X | STATUS(P).Y}
The other way is:

P1 => | (staff)P | name(P).X | staff id(P).X | phone(P).X
P2 => | (account)P | account id(P).X | AMOUNT DEPOSITED(P).X | STATUS(P).Y
P  => | P1 || P2

P => run the process
| D(P)          main data or attributes
| read(P)       read the data or attributes
| change(P)     change the data or attributes
P => enable(R).D    gives permission to R to access a datum
P => disable(R).D   gives permission to R to disable the data, or to remove or stop the process up to R

For the E-R diagram of the process above, describing the interaction between the staff salary and the account, we can create a formula based on the process:

P => | {(staff)P | name(P).X | staff id(P).X | phone(P).X} || {(account)P | account id(P).X | amount deposited(P).X | status(P).Y}

The other way of representing it is to split the process, as shown earlier. Here staff and account carry no sensitivity and are just attributes of the system, whereas the other attributes - name, staff id, phone, account id and amount deposited - are the inputs, and the output is the status. The two processes can be divided and later combined to form a single equation. If the process needs to be changed or read, or any other kind of operation is required, it can be done with the given formula, which is useful for later runs.

Thus this formula makes it easy to generate and secure large sets of data: even a small change in the capitalization, the attributes, the brackets or any other part of the syntax generates an error in the system that would spoil the whole set of data. This is shown here for a small set of data, but the approach can be extended to a very large one. An outsider seeing this would not understand the type or importance of the data and would therefore hesitate to meddle with it.
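The claim that any small change in capitalization or syntax should raise an error can be illustrated with a short sketch. This is only an illustration under assumed conventions (a case-sensitive digest of the process expression and upper-case marking of high-sensitivity attributes); it is not the paper's implementation.

```python
import hashlib

# Process expression from the text; capitalization marks the high-sensitivity attributes.
P = ("| {(staff)P | name(P).X | staff id(P).X | phone(P).X} "
     "|| {(account)P | account id(P).X | AMOUNT DEPOSITED(P).X | STATUS(P).Y}")

def fingerprint(expr: str) -> str:
    # Case-sensitive digest: even a change in capitalization alters the fingerprint.
    return hashlib.sha256(expr.encode()).hexdigest()

def sensitivity(attribute: str) -> str:
    # Hypothetical mapping of the typographic convention used in the formula.
    return "high" if attribute.isupper() else "low"

REFERENCE = fingerprint(P)

def validate(expr: str) -> None:
    if fingerprint(expr) != REFERENCE:
        raise ValueError("process expression altered - data set rejected")

validate(P)                                           # unchanged expression passes
print(sensitivity("STATUS"), sensitivity("name"))     # high low
```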
VI. CONCLUSION
An intrusion detection mechanism helps to secure the data in an organization. In this paper we have discussed in detail how a database can be secured using a Role Based Access Control system. The key benefits of RBAC are high efficiency and low maintenance cost for any type of organization, big or small. An RBAC system can also be designed and used to improve operational performance and strategic business value; it can streamline and automate business procedures, thus providing better and faster benefits to the user. It also helps to maintain the privacy and confidentiality of the employees in any organization. We can therefore conclude that protecting key business processes is the main aim of an RBAC system in database intrusion detection.



Surface Roughness analysis of drilling on GFRP composites by experimental investigation and predictive modeling
Prasanna Ragothaman1, Harihar Karthikkeyan2
1,2 Department of Mechanical Engineering, Sri Ramakrishna Engineering College, Coimbatore, India

Abstract: Glass Fiber Reinforced Plastic (GFRP) composites have found increasing application in recent years due to their enhanced structural, mechanical and thermal properties. Drilling of holes in GFRP is almost unavoidable in fabrication, and the heterogeneous nature of this kind of material complicates machining operations. Nevertheless, drilling is a common machining practice for the assembly of components, and the quality of the holes produced in GFRP material is severely affected by surface roughness, circularity and delamination. The objective of this study is to apply full factorial design, ANOVA and a fuzzy logic model to achieve improved hole quality, considering minimum surface roughness, through proper selection of the drilling parameters. The regression method is employed in the experimental investigation and mathematical modelling of drilling of GFRP material using HSS drill bits, and the fuzzy logic model is used for validation of the mathematical model.
Index terms: GFRP, ANOVA, Fuzzy logic, aircraft fuselage, Full factorial method, Drilling, Surface Roughness.
I. INTRODUCTION
Glass Fiber Reinforced Plastics (GFRP) are widely used in the automotive and machine tool industries, aerospace components and sporting equipment [1] because of their particular mechanical and physical properties such as high specific strength and high specific stiffness. An aircraft fuselage structure requires around 100,000 holes for joining purposes [2, 3], and about 60% of the rejections in the aircraft industry are due to defects in the holes [4]. Many of these problems arise from the use of non-optimal cutting tool designs, rapid tool wear and unsuitable cutting parameters [5, 6]. Among the defects caused by drilling with tool wear, delamination appears to be the most critical [7]. The surface finish of the work piece is an important attribute of hole quality in any drilling operation, and many factors affect the surface finish during machining. Many theoretical models have concluded that the effect of spindle speed on surface roughness is minimal [8]; in practice, however, spindle speed has been found to be an important factor [9]. The quality of drilled surfaces depends on the cutting parameters and tool wear, while changing the cutting parameters in turn causes tool wear [10]. Researchers have attempted to model surface roughness prediction using multiple regression, mathematical modeling based on the physics of the process, and fuzzy logic [11]. Machining operations being highly nonlinear in nature, soft computing techniques have been found to be very effective for modeling [12]. The influence of process parameters such as spindle speed, lubrication and feed rate on surface finish has been investigated experimentally for metal matrix composites, with experiments conducted according to a full factorial design. The percentage contribution of the most influential factors can be determined using analysis of variance (ANOVA), a statistical tool used in design of experiments [13, 14]. Fuzzy logic is a mathematical formalism for representing human knowledge involving vague concepts and a natural yet effective method for systematically formulating cost effective solutions to complex problems [15]. A model has been developed for surface roughness in drilling of GFRP composites using fuzzy logic [16]. The primary objective of this study is to quantify the influence of the process input parameters on surface roughness by formulating a mathematical model and validating it using a fuzzy logic model.
II. DESIGN OF EXPERIMENT
Design of experiments is the design of all information-gathering exercises where variation is present, whether or not it is under the full control of the experimenter. The cutting speed, feed rate and thickness of the GFRP plate are the three parameters under investigation in the present study. A full factorial experimental design, with a total of 27 holes drilled into the GFRP specimen, is used to investigate the hole quality in terms of surface roughness. The full factorial design is the most efficient way of conducting the experiment for three factors with each factor at three levels; the number of experiments is N = (levels)^(factors) = 3^3 = 27.
Table 1: Assignment of levels for the process parameters

Factors                        | Level 1 | Level 2 | Level 3
Speed, s (rpm)                 | 280     | 900     | 1800
Feed, f (mm/rev)               | 0.18    | 0.71    | 1.40
GFRP plate thickness, t (mm)   | 5       | 10      | 15
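For reference, the 27 runs of the full factorial design follow directly from the three levels in Table 1. A minimal sketch (the run ordering is illustrative, not the authors' experimental sequence):

```python
from itertools import product

speeds = [280, 900, 1800]        # rpm
feeds = [0.18, 0.71, 1.40]       # mm/rev
thicknesses = [5, 10, 15]        # mm

runs = list(product(thicknesses, speeds, feeds))
assert len(runs) == 3 ** 3 == 27          # N = levels^factors
for i, (t, s, f) in enumerate(runs, 1):
    print(f"{i:2d}: t = {t} mm, s = {s} rpm, f = {f} mm/rev")
```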

III. SPECIMEN PREPARATION
The glass fiber reinforced composite used was fabricated using the hand lay-up technique [12]. The composition is glass fibers (fiber length 20-30 mm) reinforced with isophthalic resin with 30% reinforcement. The material was fabricated and then cut into pieces of 22 cm x 11 cm for all three plate thicknesses (Fig. 1).

Fig 1: Fabricated GFRP plate

A. Methodology
Experiments were carried out on a high-speed radial drilling machine using an HSS drill of 10 mm diameter, according to the full factorial design. The full factorial design provides a powerful and efficient method for designing processes that operate consistently and optimally over a variety of conditions. The selected levels of the process parameters are given in Table 1, and Fig. 3 shows a photographic view of the experimental setup. The hole quality characteristic, surface roughness, was measured using a roughness tester [Mitutoyo TR-200]; Fig. 2 shows the measurement of the hole quality characteristics using the roughness tester. The point angle was measured before every drill for all 27 experiments using a digital profile projector [OPTOMECH, 10x magnification].

Fig 2: Surface Roughness Tester

Fig 3: Experimental Setup

IV. RESULTS AND DISCUSSION
A. ANOVA
The analysis of variance is extensively used to analyze experimental results. ANOVA tests the significance of the difference between two or more groups. The normal probability plot shows whether all the points lie close to the straight (main) line, and the versus-fits plot shows how far the residuals deviate from the normal distribution. An interaction occurs when the change in response from one level of a factor to another differs from the change in response at the same two levels of a second factor. A main effect is present when different levels of an input affect the response directly.

B. ANOVA FOR SURFACE ROUGHNESS
Fig. 4 shows that all the points lie close to the regression line, which implies that the data are fairly normal with no deviation from normality; the histogram shows the skewness. Equation (1) indicates that feed has the largest effect on Ra. The main effects plot for surface roughness is shown in Fig. 5: Ra decreases with low cutting speed and low feed rate for the 15 mm plate, and the initial point angle (without wear in the drill bit) has less effect on Ra. Table 2 shows the analysis of variance of the second-order model with a 95% confidence interval for the surface roughness experiments. Parameter A gives a 44.2% contribution to Ra.
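A regression/ANOVA table of the kind summarized in Table 2 can be reproduced from the 27 measured runs with standard statistical software. The sketch below uses Python's statsmodels as an illustration; the CSV file name and column names are assumptions, and the paper's own analysis was not necessarily produced this way.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Assumed file holding the 27 runs of Table 3: columns s (rpm), f (mm/rev), t (mm), Ra (micron).
df = pd.read_csv("gfrp_runs.csv")

model = ols("Ra ~ s + f + t", data=df).fit()   # linear terms as in Eq. (1); higher-order terms could be added
print(model.summary())                          # coefficient table, S, R-Sq, R-Sq(adj)
print(anova_lm(model))                          # ANOVA table: DF, sum of squares, mean square, F, P
```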


Fig.4 Residual plot for Ra

C. Mathematical model for Surface Roughness
The model was based on the Box-Behnken design method. The developed second-order mathematical model for surface roughness is:

Surface Roughness, Ra = 4.87 - 0.00086 (s) + 2.49 (f) + 0.249 (t)    ...(1)
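Equation (1) can be evaluated directly. As a quick check, the short sketch below (an illustration only) reproduces the predicted output listed for the first run of Table 3.

```python
def predict_ra(s_rpm: float, f_mm_rev: float, t_mm: float) -> float:
    """Surface roughness Ra (micron) from the regression model of Eq. (1)."""
    return 4.87 - 0.00086 * s_rpm + 2.49 * f_mm_rev + 0.249 * t_mm

# First run of Table 3: t = 5 mm, s = 280 rpm, f = 0.18 mm/rev
print(round(predict_ra(280, 0.18, 5), 4))   # 6.3224, matching the predicted output for run 1
```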
Table 2: Analysis of variance for surface roughness

Predictor    | Coef     | SE Coef | T     | P
Constant     | 4.873    | 2.352   | 2.07  | 0.050
s (rpm)      | -8.56e-4 | 1.09e-3 | -0.78 | 0.442
f (mm/rev)   | 2.494    | 1.367   | 1.49  | 0.151
t (mm)       | 0.2487   | 0.1673  | 2.07  | 0.017

S = 3.54808   R-Sq = 21.1%   R-Sq(adj) = 10.8%

Source          | DF | SS     | MS    | F    | P
Regression      | 3  | 77.44  | 25.81 | 2.05 | 0.135
Residual Error  | 23 | 289.54 | 12.59 |      |
Total           | 26 | 366.99 |       |      |

Fig. 6 shows that a high feed rate and low speed have less effect on Ra while drilling the 5 mm thick plate, that drilling the 10 mm plate at low speed and low feed rate minimizes the surface roughness, and that for the 15 mm plate high speed and high feed rate have less effect on Ra. Fig. 5 shows that as the point angle decreases the surface roughness increases; decreasing the point angle accompanies tool wear. Fig. 7 shows the predicted and measured hole characteristics at different drilling process parameter conditions; the measured values and the values predicted by the developed regression model follow a similar trend.

Fig 5: Main Effects plot for Ra

Fig 6: Interaction plot for Ra

V. FUZZY LOGIC MODEL
Fuzzy logic refers to a logical system that generalizes the classical two-valued logic for reasoning under uncertainty. It is a system of computing and approximate reasoning based on a collection of theories and technologies that employ fuzzy sets, which are classes of objects without sharp boundaries. Fuzzy logic is well suited to capturing ambiguity in the input, and it has become popular in recent years because it makes it possible to add human expertise to the process. Even in cases where the nonlinear model and all the parameters of a process are known, a fuzzy system may still be used.

A. Development of the fuzzy logic model
The surface roughness and circularity error in drilling of GFRP are assumed to be functions of three input variables, viz. plate thickness, spindle speed and feed rate. The fuzzy logic prediction model is developed using the Fuzzy Logic Toolbox available in Matlab version 7.10 (R2010a). In this work a Mamdani-type Fuzzy Inference System (FIS) is used for modeling. The steps followed in developing the fuzzy logic model are described below.

B. Fuzzification of I/O variables
The input and output variables are fuzzified into different fuzzy sets. The triangular membership function is used because it is simple yet computationally efficient, easy to use, and requires only three parameters to define. The input variables plate thickness [5-15 mm], spindle speed [280-1800 rpm] and

feed rate [0.18-1.4 mm/rev] are fuzzified into three fuzzy sets each, viz. Low (L), Medium (M) and High (H), as shown in Fig. 11 (a, b, c). The output variables, i.e. the surface roughness and circularity error, are divided into nine fuzzy sets, Very Very Low (VVL), Very Low (VL), Low (L), Medium1 (M1), Medium2 (M2), Medium3 (M3), High (H), Very High (VH) and Very Very High (VVH), as shown in Fig. 11 (d), to increase the resolution and accuracy of the prediction.

C. Evaluation of IF-THEN rules
Since the three input variables are fuzzified into three fuzzy sets each, the size of the rule base becomes 27 (3x3x3). For generating the fuzzy rules, the level of each variable having the largest membership grade for a particular fuzzy set is considered. With the appropriate level of all the input variables representing the corresponding fuzzy set, the surface roughness values of the 27 data sets are used to build the fuzzy rule base. Since all parts of the antecedent are required to obtain the response value, the AND (min) operator is used to combine the antecedent parts of each rule, and the min implication method is used to correlate the rule consequent with its antecedent. For example, the first rule of the FIS can be written as

Rule 1: If Thickness is Low and Speed is Low and Feed rate is Low, then Surface Roughness is Very Very Low (VVL).

D. Aggregation of rules
The aggregation of all the rule outputs is implemented using the max method, the method most commonly used for combining the effect of all the rules. In this method the output of each rule is combined into a single fuzzy set: the firing strength of each rule clips its output membership function, and the aggregate returns the highest value of the membership functions of all the rules.

E. Defuzzification
The aggregate output of all the rules, which is in the form of a fuzzy set, is converted into a numerical value (crisp number) that represents the response variable for the given data set. In the present work the centroid defuzzification method is used for this purpose; it is the most popular method in fuzzy logic applications, is based on the centroid calculation, and returns the center of the area under the curve. The predicted values of surface roughness are compared with the experimental output, the prediction model output and the fuzzy output. The comparison of the prediction performance of the fuzzy logic output and the prediction model output with the experimental results is given in Table 3.

Fig. 9: Correlation between experimental Ra, predicted Ra and fuzzified Ra, plotted against hole number
Fig. 9 indicates that the outputs from the experiments, the prediction model and the fuzzy model are in good correlation with each other.
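The fuzzification, min-AND rule evaluation, min implication, max aggregation and centroid defuzzification steps described above can be sketched in a few lines of Python. The membership-function breakpoints and the second and third rules below are illustrative assumptions (only Rule 1 is quoted from the paper), so this is a sketch of the method rather than a reproduction of the authors' Matlab FIS.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with breakpoints a < b < c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Input fuzzy sets L/M/H over the ranges used in the paper (edge breakpoints assumed).
IN_SETS = {
    "t": {"L": (0, 5, 10),      "M": (5, 10, 15),        "H": (10, 15, 20)},
    "s": {"L": (0, 280, 900),   "M": (280, 900, 1800),   "H": (900, 1800, 2700)},
    "f": {"L": (0, 0.18, 0.71), "M": (0.18, 0.71, 1.40), "H": (0.71, 1.40, 2.1)},
}

# Nine output sets for Ra on an assumed 0-20 micron universe.
ra = np.linspace(0.0, 20.0, 401)
labels = ["VVL", "VL", "L", "M1", "M2", "M3", "H", "VH", "VVH"]
OUT_SETS = {lab: tri(ra, c - 3, c, c + 3) for lab, c in zip(labels, np.linspace(1, 19, 9))}

# A fragment of the 27-rule base; Rule 1 is the one quoted in the text, the others are placeholders.
RULES = [({"t": "L", "s": "L", "f": "L"}, "VVL"),
         ({"t": "L", "s": "L", "f": "M"}, "M2"),
         ({"t": "H", "s": "M", "f": "H"}, "VVH")]

def predict_ra(t, s, f):
    x = {"t": t, "s": s, "f": f}
    aggregate = np.zeros_like(ra)
    for antecedent, consequent in RULES:
        # AND (min) over the antecedent memberships, then min implication on the consequent set.
        strength = min(float(tri(np.array([x[v]]), *IN_SETS[v][lab])[0]) for v, lab in antecedent.items())
        aggregate = np.maximum(aggregate, np.minimum(strength, OUT_SETS[consequent]))  # max aggregation
    if aggregate.sum() == 0.0:
        return None
    return float((ra * aggregate).sum() / aggregate.sum())  # centroid defuzzification

print(predict_ra(5, 280, 0.18))   # low Ra predicted for the low/low/low combination (Rule 1)
```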

Table 3: Surface roughness values - experimental output, predicted output and fuzzy output

S.No | t (mm) | s (rpm) | f (mm/rev) | Point angle  | Experimental Ra (µm) | Predicted Ra (µm) | Fuzzy Ra (µm)
1    | 5      | 280     | 0.18       | 107°32'07"   | 3.19  | 6.3224  | 2.15
2    | 5      | 280     | 0.71       | 107°29'47"   | 11.98 | 7.6421  | 10.00
3    | 5      | 280     | 1.40       | 107°25'32"   | 6.49  | 9.3602  | 5.75
4    | 5      | 900     | 0.18       | 107°20'44"   | 4.09  | 5.7892  | 3.62
5    | 5      | 900     | 0.71       | 107°17'58"   | 9.16  | 7.1089  | 7.87
6    | 5      | 900     | 1.40       | 107°15'20"   | 8.79  | 8.8270  | 7.87
7    | 5      | 1800    | 0.18       | 107°19'31"   | 12.96 | 5.0152  | 12.1
8    | 5      | 1800    | 0.71       | 107°12'25"   | 7.27  | 6.3349  | 5.75
9    | 5      | 1800    | 1.40       | 107°08'28"   | 3.33  | 8.0530  | 2.50
10   | 10     | 280     | 0.18       | 107°02'57"   | 5.64  | 7.5674  | 3.62
11   | 10     | 280     | 0.71       | 106°47'11"   | 8.98  | 8.8871  | 7.87
12   | 10     | 280     | 1.40       | 106°39'47"   | 11.42 | 10.6052 | 10.00
13   | 10     | 900     | 0.18       | 106°30'09"   | 1.76  | 7.0342  | 2.19
14   | 10     | 900     | 0.71       | 106°28'42"   | 7.12  | 8.3539  | 5.75
15   | 10     | 900     | 1.40       | 106°24'50"   | 8.58  | 10.0720 | 7.87
16   | 10     | 1800    | 0.18       | 106°21'20"   | 7.72  | 6.2602  | 5.75
17   | 10     | 1800    | 0.71       | 106°19'26"   | 10.07 | 7.5799  | 10.00
18   | 10     | 1800    | 1.40       | 106°16'32"   | 8.97  | 9.2980  | 7.87
19   | 15     | 280     | 0.18       | 106°32'07"   | 8.37  | 8.8124  | 7.87
20   | 15     | 280     | 0.71       | 105°58'10"   | 10.48 | 10.1321 | 10.00
21   | 15     | 280     | 1.40       | 105°52'37"   | 15.75 | 11.8502 | 14.30
22   | 15     | 900     | 0.18       | 105°44'12"   | 5.43  | 8.2792  | 3.62
23   | 15     | 900     | 0.71       | 105°36'04"   | 18.25 | 9.5989  | 16.4
24   | 15     | 900     | 1.40       | 105°14'39"   | 11.43 | 11.3170 | 10.00
25   | 15     | 1800    | 0.18       | 105°22'42"   | 4.74  | 7.5052  | 3.62
26   | 15     | 1800    | 0.71       | 105°08'35"   | 6.64  | 8.8249  | 5.75
27   | 15     | 1800    | 1.40       | 104°58'49"   | 8.55  | 10.5430 | 7.87

The variation of surface roughness with different combinations of the input variables is studied using the output surfaces of the FIS. Figs. 10, 11 and 12 show the functional dependence of surface roughness (Ra) on spindle speed, plate thickness and feed rate. It can be observed that the surface roughness increases with an increase in plate thickness, spindle speed or feed rate, and that the surface roughness decreases for small plate thickness, medium spindle speed and small feed rate.

Fig 10: Surface roughness vs Speed
Fig 11: Surface roughness vs Thickness
Fig 12: Surface roughness vs Feed rate

VI. CONCLUSION
This experimental investigation presents the surface roughness characteristics of drilling on GFRP composites. A simple regression prediction model was developed as a function of the process variables, and the following conclusions were drawn:
1) Surface roughness was analyzed as a function of the process input variables, and validation was done with a developed fuzzy rule based model. The results obtained from the experiments, the prediction model and the fuzzy model are in good correlation with each other.
2) From the analysis of variance and from the fuzzy model, the results indicate that a low feed rate, high spindle speed and 5 mm GFRP plate thickness give better surface roughness.
3) It was observed that the surface roughness increases with decreasing point angle.
4) Further investigations are needed to enhance the hole quality characteristics by considering different tool materials and tool diameters, machine vibration, etc. during drilling of GFRP composites.

REFERENCES
[1] Park, J. N., Cho, G. J.A Study of the Cutting


Characteristics of the Glass Fiber Reinforced Plastics by Drill
Tools, International Journal of Precision Engineering and
Manufacturing, vol. 8 (2007) 11-15
[2] VijayanKrishnaraj, Member, IAENG, Effects of Drill
Points on Glass Fiber Reinforced Plastic Composite While
Drilling at High Speed, Proceedings of the World Congress on
Engineering 2008 Vol II, WCEE 2008, July 2-4(2008)
London.
[3] Sonbatry El, Khashaba U.A, Machaly T, Factors affecting
the machinability of GFRP/epoxy composites, Comp
Structures, 63 (2004) 329-338.
[4] Montgomery, D.C.,. Design and Analysis of Experiments:
RSM and Designs. John Wiley and Sons. New York, USA,
2005.
[5] Konig W, Wulf Ch, Gra P and Willercheid H, Machining of fiber reinforced plastics, Annals CIRP, 34 (2) (1985) 537-548.
[6] Komaduri R, Machining of fiber-reinforced Composites,
Mechanical Engineering, 115 (4), (1993) 58-66.
[7] A.M. Abrao et al., Drilling of fiber reinforced plastics: A review, Journal of Materials Processing Technology 186 (2007) 1-7.
[8] Abrao A M, Faria PE, Campus Rubio J., C., Reis P,
PauloDavim J Drilling of fiber reinforced plastics: A Review.
J Materl. Process Technology 186 (2007)
[9] CaprinoG, Tagliaferi V Damage development in drilling
glass fiber reinforced plastics. Int J Mach tools Manuf (6):
(1995) 817-829.
[10]Hocheng, H.and H. Puw. On drilling characteristic fiber
reinforced Thermoset &Thermoplastics. Int J Mach tools
Manuf ,32 (1992)583-592.

[11]M.Chandrasekaran, M.Muralidhar, C.M.Krishna and
U.S.Dixit, Application of soft computing techniques in
machining performance prediction and optimization:a
literature
review,Int J Adv Manuf Technol,Vol.46(2010)
445-464.
[12]M.Chandrasekaran and D.Devarasiddappa ,Development
of Predictive Model For Surface Roughness in End Milling
of Al-SiC Metal matrix Composites using Fuzzy logic,
Engineering and Technology 68 (2012) 1271-1276
[13] C.Y.Hsu,C.S.Chen,C.C.Tsao,Free abrasive wire saw
machining of ceramics, Int J Adv
Manuf Technology
40 (2009) 503-511.
[14] Bala Murugan Gopalsamy,Biswanath Mondal,Sukamal
Ghosh,Optimisation of machining
parameters for hard
machining:grey relational theory approach and ANOVA,The
International journal of Advanced Manufacturing Technology
45 (2009) 1068-1086.
[15] Vikram Banerjee et al., Design space exploration of Mamdani and Sugeno inference systems for fuzzy logic based illumination controller, International Journal of VLSI and Embedded Systems - IJVES (2012) 97-101.
[16]B.latha and B.S.Senthilkumar, Modeling and Analysis of
Surface Roughness Parameters in Drilling GFRP Composites
Using Fuzzy Logic, Materials and Manufacturing Processes
25 (8) (2010) 817-827.


Analysis of Output DC Current Injection in 100kW Grid Connected VACON 8000 Solar Inverter
Sneha Sunny George1, Robins Anto2, Sreenath B3
1 PG Scholar, Dept of EEE, 2 Head of Department, 3 Asst Professor, Dept of EEE
1,3 Amal Jyothi College of Engineering, Kanjirappally, Kottayam
2 Mar Baselios Christian College of Engineering and Technology, Kuttikanam, Peermade
Abstract Solar energy technologies have gained much importance in the recent scenario due to their ability to produce
clean, reliable, useful power. Grid connected Photovoltaic system requires conversion from DC to AC to harness the useful
energy produced. A Photovoltaic inverter directly connected to the grid can cause, besides the generation of several current
harmonics, a DC current component injection. Excessive DC current injection into the AC network can result in problems
such as increased corrosion in underground equipment and transformer saturation. The paper aims at evaluating the output
DC-current injection in grid connected inverter used for a 100kW solar power plant installed at Amal Jyothi College of
Engineering, Koovapally, through experimental analysis.
Keywords: Grid connected inverter, DC offset current
I. INTRODUCTION

With energy crisis concerns growing day by day, the potential of solar energy as a sustainable energy source is gaining much recognition. Solar energy adds flexibility to the energy resource mix by decreasing the dependence on fossil fuels, but the greatest barriers to technological expansion in this field are the cost of the devices used for converting the sun's radiant energy into useful electrical energy, limited space and limited energy. Even though there has been a massive downward trend in the price of PV modules, the price of grid connected inverters still remains high, thereby increasing the overall cost. The efficiency of the plant plays a crucial role in the profit obtained from the sustainable energy resources being harnessed. The major benefit of designing a reliable, stable, efficient and lower cost photovoltaic power electronics system is the availability of reliable, quality power without relying on the utility grid; it also avoids major investment in transmission and distribution. To the nation, the major benefit lies in the fact that it reduces greenhouse gas emissions while responding to increasing energy demands by establishing a new, high-profile industry. It is therefore necessary to minimize the losses and improve the efficiency of the power electronic devices used. The use of multilevel inverters has increased the quality of the waveforms and thereby the efficiency of the system; H-bridge multilevel inverters are particularly suitable for renewable energy harvesting due to the presence of separate DC sources.
II. GRID CONNECTED INVERTER AND DC INJECTIONS

Grid connected inverters are used to convert the DC power obtained from the panels into AC power for further utilization; they feed solar electricity directly to the grid. As such a system has no battery component, its cost is low. The main quality requirements and factors affecting these power converters are the total harmonic distortion (THD) level, DC current injection, power factor, impulse withstand rating (or BIL), high frequency noise / electromagnetic compatibility (EMC), and voltage fluctuations and flicker of the inverter system. Inverters connecting a PV system and the public grid are therefore purposefully designed, allowing energy transfers to and from the public grid [1-3].
Due to the approximately short-circuit characteristics of the AC network, a small DC voltage component accidentally produced by a grid connected inverter can create large DC current injections. If output transformers are not used, these inverters must prevent excessive DC current injection, which may cause detrimental effects on the network components, in particular the network transformers, which can saturate, resulting in irritant tripping. This may also increase the losses and reduce the lifetime of the transformers, if they are not tripped. Moreover, the existence of the DC current component can induce metering errors and malfunction of protection relays, and can adversely affect the overall functioning of the solar power plant.
Therefore, there are stringent regulations in many countries to protect the network from large DC current injection. Since most Indian standards published by the BIS are aligned with IEC standards, a DC injection limit of up to 1% is being proposed by the BIS in the Indian standard, in keeping with IEC 61727. The H-bridge or multilevel inverter eliminates the DC component of the current by adding switches on the DC side to clamp the voltage during the zero-voltage periods; this method can also be applied by clamping on the AC side. Neither method can guarantee elimination of the DC component, as the unbalance

due to the forward voltages of the power electronic switches and the PWM control cannot be removed [4-6].
III. EXPERIMENTAL RESULTS

The main objective of this work is to conduct an analysis of the output DC injection in the grid connected 100 kW VACON 8000 solar inverter installed at the 100 kW solar plant at Amal Jyothi College of Engineering, Kanjirappally, India. A technical analysis of the plant is done to evaluate the effect of environmental and climatic conditions on the performance of the system, and the analysis also evaluates the effect of variations in the operating conditions on the DC offset. The International Energy Agency (IEA) Photovoltaic Power Systems Programme describes, in IEC Standard 61724, the parameters used to assess the performance of solar PV systems.

Fig 1: Block diagram of the 100 kW solar plant

The block diagram shown above represents the entire grid connected solar power plant installed at Amal Jyothi College of Engineering. 100 kW of solar panels are used to trap solar radiation, and the energy obtained is converted to a useful AC supply using a VACON built inverter. The energy harnessed from the sun is used to meet the requirements of an entire 7-storey building block; when the energy harnessed is not sufficient, the required amount is taken from the KSEB grid.
Since Kerala lies in the equatorial region, it has high solar insolation and temperature. The normal ambient temperature varies from 23-33 degC. The variation in solar insolation and temperature affects the PV panel performance: a rise in temperature results in degradation of the efficiency and power output of the solar panel. The solar insolation falling on the earth's atmosphere consists of direct or beam radiation, diffused radiation and albedo or reflected radiation. On bright sunshine days the beam and albedo radiation are greater, whereas on cloudier days the diffused components are larger. The data collection was made for sunny and rainy months, i.e. in the months of February, March, April and May. From the analysis of the available data, it is found that the panel behaves differently for varying insolation, temperature, etc.
From the experimental data collected, the efficiency of the inverter reached up to 87% during high radiation periods; the inverter never operates near its full capacity, and the average DC to AC conversion is below 90%. The inverter efficiency declines only very slowly after the peak value is reached. The PV system at its best operates in the 20 to 40% range of rated output and hence operates in the 87 to 91% efficiency range during the sunniest periods. Since the inverter is kept in a mechanical room under the roof, the temperature differences were not as drastic as they would be for inverters located outside.

Fig 2: Solar insolation vs time on a rainy day (a) and a clear day (b)

From the analysis, on a clear day with bright sunshine the panel receives a daily average solar radiation of about 4.56 kW/m2 to 5.24 kW/m2, while during rainy days the solar radiation is about 3.13 kW/m2 to 4.3 kW/m2. The collected data can thus be classified into two groups, the first being the high solar radiation group available from January to mid April. From online satellite data, the average solar insolation available over the region is in the range of 5 to 6 kW/m2 for sunny hours and around 4 kW/m2 for rainy hours. The plots describe the solar radiation for sunny and rainy days.
The main sources of DC injections are power supply,
computer, network faults, geomagnetic phenomenon,
cycloconverters, lighting circuits/ dimmers, embedded
generators, AC and DC drives and photovoltaic grid inverters.
Due to approximately short circuit characteristics of an AC
network under a DC voltage excitation, a little DC voltage
component that can be accidentally produced by the inverter
will produce large DC current injection. This causes
detrimental effects on the network components, in particular
the network transformers which can saturate, resulting in
irritant tripping. This may also increase the losses in and
reduce the lifetime of the transformers, if not tripped.

Moreover, the existence of the DC current component can induce metering errors and malfunction of protection relays. The effect of DC currents on the accuracy of domestic electricity watt-hour meters is an issue related both to the type of meter used and to its method of connection to the supply network; as a consequence, the effect of DC components acting upon watt-hour meters merits further investigation, best undertaken by direct testing owing to the reluctance of manufacturers to discuss meter operation. There are thus stringent regulations in many countries to protect the network from large DC current injection. The solar insolation vs output DC injection graph for a period of 3 months is shown in the following graph.

Fig 4: Solar insolation vs DC offset on a rainy day and a clear day

The VACON 8000 SOLAR inverters use special digital control techniques to limit the DC offset in the output obtained from the inverter. When the solar insolation is below the required value (i.e. the DC output voltage of the solar panels is less than 340 V) or when a fault occurs, the inverter shuts down and restarts only 5 minutes after the fault condition is cleared; this results in an erroneous value (357.67%) of DC offset measured using the FLUKE power analyzing meter. During the March to mid-April 2014 period the DC offset varies between 0.04% (during high solar insolation) and 0.19% (during low solar insolation), and during the mid-April to May 2014 period the DC offset varies between 0.03% during high solar insolation and 0.15% during low solar insolation. The maximum allowed DC offset in India is 1% of the output obtained.
On analytic calculation of the PV inverter efficiency over the course of the experiment conducted between March and May, the efficiency of the 100 kW inverter on a sunny/clear day was found to be 82.6%; similarly, for a rainy day the average efficiency of the inverter is calculated to be 80.23%. This can drop further if the solar insolation drops considerably. The daily average PV inverter output generation and efficiency are about 360 kWh/day at 82.26% efficiency for a clear sky and about 200 kWh/day at 80.23% efficiency for a rainy day.

Fig 3: Efficiency vs output power on a rainy day and a clear day

The plots show the relation between the DC inverter input power and the inverter efficiency for a clear day and a rainy day. The inverter is found to have a serious defect of frequent shutdown if the voltage drops to a low value, even when the drop lasts only a few seconds. The estimated fault is due to an error in one of the parameters coded in the software installed in the inverter; this defect must be corrected, otherwise it will result in complete failure of the system. Solar insolation (irradiance) was measured using a pyranometer at intervals of 30 minutes, along with the other details collected. The total energy harnessed by the installed inverter up to 16th May 2014 is 123992 kWh.
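The two quantities evaluated above - conversion efficiency from the 30-minute logged samples, and the DC offset as a percentage of the output current checked against the 1% limit proposed in line with IEC 61727 - reduce to simple arithmetic. The sketch below is illustrative only; the function, argument names and example values are assumptions, not the instrumentation or software actually used at the plant.

```python
def inverter_metrics(dc_power_w, ac_power_w, ac_current_a, dc_component_a,
                     interval_h=0.5, dc_limit_pct=1.0):
    """Daily efficiency, energy and DC-injection check from logged 30-minute samples."""
    e_dc = sum(dc_power_w) * interval_h / 1000.0          # kWh into the inverter
    e_ac = sum(ac_power_w) * interval_h / 1000.0          # kWh delivered to the grid
    efficiency_pct = 100.0 * e_ac / e_dc if e_dc else 0.0
    # DC offset expressed as a percentage of the output current for each sample.
    offsets = [100.0 * dc / ac for dc, ac in zip(dc_component_a, ac_current_a) if ac]
    worst_offset = max(offsets) if offsets else 0.0
    return {"energy_kwh": round(e_ac, 1),
            "efficiency_pct": round(efficiency_pct, 2),
            "worst_dc_offset_pct": round(worst_offset, 3),
            "within_limit": worst_offset <= dc_limit_pct}

# Example with made-up sample values for a short stretch of a sunny morning.
print(inverter_metrics(dc_power_w=[40e3, 60e3, 80e3],
                       ac_power_w=[34e3, 52e3, 70e3],
                       ac_current_a=[49.0, 75.0, 101.0],
                       dc_component_a=[0.03, 0.05, 0.08]))
```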
IV. CONCLUSION

The overall system efficiency can be affected if DC current injections are not limited to the values specified by the Indian and IEEE standards. The control technique installed in the 100 kW inverter limits the DC injection to the standard values unless the inverter trips. The performance evaluation conducted shows the relation of solar insolation to the DC offset and of output power to the efficiency of the system.
REFERENCES
[1] Yanqing Li, Cheng Chen, Qing Xie, Research of An
Improved Grid-connected PV Generation Inverter Control
System, 2010 International Conference on Power System
Technology, pp.1-6, 0ctober 2010
[2] E. Koutroulis, F. Blaabjerg, Methodology for the optimal
design of transformerless grid-connected PV inverters, IET
Power Electron., Vol. 5, Iss. 8, pp. 14911499, 2012, June
2012
[3] Angelina Tomova, TU Sofia, Grid connected pv inverter
topologies: an overview, Phd Seminar, DERlab Young
researchers, Glasgow, UK, April, 2011
[4] Berba. F, Atkinson David, Armstrong. M, Minimisation of
DC current component in transformerless Grid-connected PV
inverter application, Environment and Electrical Engineering

(EEEIC), 2011 10th International Conference, pp.1-4 May
2011
[5] V. Salas, E. Olas , M. Alonso , F. Chenlo and A. Barrado,
DC current injection into the network from PV grid
inverters, Photovoltaic Energy Conversion, Conference
Record of the 2006 IEEE 4th World Conference, pp. 2371
2374, May 2006
[6] Ashraf Ahmed, Ran Li, Precise Detection and Elimination
of Grid Injected DC from Single Phase Inverters,
International Journal of Precision Engineering and
Manufacturing, Vol. 13, No. 8, pp. 1341-1347, August
2012
[7] H. Wilk, D. Ruoss, and P. Toggweiler, "Report - Innovative
electrical concepts," International Energy Agency
Photovoltaic Power Systems, IEA PVPS, www.ieapvps.org, 2002
[8] Soeren Baekhoej Kjaer, John K. Pedersen, Frede Blaabjerg,
A Review of Single-Phase Grid-Connected Inverters for
Photovoltaic Modules IEEE Trans. On Ind. Appl., Vol.
41, No. 5, Page(S): 1292-1306, September-October 2005
[9] Claude Morris, Report on , Grid-connected
Transformerless Single-phase Photovoltaic Inverters: An
Evaluation on DC Current Injection and PV Array Voltage
Fluctuation, Murdoch University perth Westen AustraliaSchool of Engineering and Energy, 2009
[10] Guo Xiaoqiang,Wu weiyang, Gu Herong and San
Guocheng, DC injection control for grid connected
inverter based on virtual capacitor concept, International
Conference on Electrical Machines and Systems, 2008.
ICEMS 2008, Page(s):2327 2330, October 2008
[11] R Sharma, Removal Of Dc Offset Current from
Transformerless Pv Inverters Connected to Utility,
Australasia Universities Power and Control Engineering
Conference Proceedings, page(s): 136-144, October 1992
[12] Kitamura A, Yamamoto F, Matsuda H, Akhmad K, Hamakawa Yoshihiro, Test results on DC injection phenomenon of grid-connected PV system at Rokko test center, Photovoltaic Specialists Conference, 1996, Conference Record of the Twenty Fifth IEEE, pp. 1377-1379, 1996.
[13] H. Wilk, D. Ruoss, and P. Toggweiler, "Report Innovative electrical concepts," International Energy
Agency Photovoltaic Power Systems, IEA PVPS ,2002,
www.iea-pvps.org
[14] Yogi Goswami, Frank Kreith, Jan F.Krieder , Principles of
Solar Engineering, Second edition, 1999.
[15] Singla and Vijay Kumar Garg, "Modeling of solar
photovoltaic module & effect of insolation variation using
matlab/simulink," International Journal of Advanced
Engineering Technology, Vol IV Issue III Article 2 pp.0509 July-Sept,2013

[16] Photovoltaic Inverter, www.pvresources.com/balance of system/inverters.aspx
[17] Report on "CP4742 Grid Connected Renewable Energy
System Technical Guidelines," pp. 1-18, October 2009
[18] Matthew Armstrong, "Auto-Calibrating DC Link Current
Sensing Technique for Transformerless, Grid Connected,
H-Bridge Inverter Systems," IEEE Transaction on power
electronics Vol. 21 No. 5, 2006.
[19] V. Salas, M. Alonso and F. Chenlo, "Overview of the
legislation of DC injection in the network for low voltage
small grid-connected PV systems in Spain and other
countries", Renewable and Sustainable Energy Reviews,
vol. 12, pp. 575-583, 2008.
[20] Carrasco, J. M., Franquelo, L. G., and Alfonso, N. M.,
"Power Electronic systems for the grid integration of
renewable energy sources: A survey," IEEE Transactions
on Industrial Electronics, Vol. 53, No. 4, pp. 1002-1016,
2006.
[21] Report on "An investigation of dc injection levels into low
voltage ac powersystems" Distributed generation coordinating group, june 2005
[22] Instruction book for 10-200kW Standalone Drivers,
Vacon 8000 Solar
[23] Guidlines for measurement, data exchange and analysis of
PV system performance
[24] Frank Vignola, Fotis Mavromatakis and Jim Krumsick, "Performance of PV inverters", www.solardat.uoregon.edu/download/papers/performance of inverters.pdf
[25] S. Pless, M. Deru, P. Torcellini, and S. Hayter, "Procedure for Measuring and Reporting the Performance of Photovoltaic Systems in Buildings," Technical Report NREL/TP-550-38603, October 2005
[26] G.Chicco, R.Napoli and F.Spertino, "Experimental
evaluation of the performance of grid-connected
photovoltaic systems",
Proc. IEEE Melecon
2004,Dubrovnik, Croatia 3, PP. 1011-1016 May 12-15,
2004.
[27] B. Marion, J. Adelstein, and K. Boyle, H. Hayden, B.
Hammond, T. Fletcher, B. Canada, and D. Narang, D.
Shugar, H.Wenger, A. Kimber, and L. Mitchell and G.
Rich and T. Townsend (2005), "Performance Parameters
for Grid- Connected PV Systems, 31st IEEE
Photovoltaics Specialists Conference and Exhibition Lake
Buena Vista, Florida, January 3-7, 2005
[28] Fluke 434-II/435-II/437-II Three Phase Energy and Power
Quality Analyzer User Manual, Fluke Corporation,
January 2012.
[29] IEC standards for solar photovoltaic systems, http://www.pvresources.com

Page 88

www. ijraset.com
SJ Impact Factor-3.995

Special Issue-1, October 2014


ISSN: 2321-9653

International Journal for Research in Applied Science & Engineering


Technology(IJRASET)

Study and Implementation of 3 Stage Quantum Cryptography in Optical Networks
T. Godhavari1, Libi Balakrishnan2, Srikrishnan3
2 PG Student, M.Tech. Communication Systems
1,3 Assistant Professor
Dept. of Electronics and Communication Engineering
Dr. MGR Educational and Research Institute University, Chennai, Tamil Nadu, India.
Abstract: This paper presents a quantum protocol based on public key cryptography for secure transmission of data over a
public channel. The security of the protocol derives from the fact that Alice and Bob each use secret keys in the multiple
exchange of the qubit. Unlike the BB84 protocol and its many variants where the qubits are transmitted in only one
direction and classical information exchanged thereafter, the communication in the proposed protocol remains quantum in
each stage. In the BB84 protocol, each transmitted qubit is in one of four different states; in the proposed protocol, the
transmitted qubit can be in any arbitrary state. Disparate and heterogeneous networks will be a growing reality in the future.
Additionally, some of the regulatory, national interest, and security requirements might force a geographic boundary
between networks.
Keywords: Optical Network, OBS, WDM, Quantum computing
I. INTRODUCTION

The Internet is rapidly becoming a network of networks, a logical outcome of the growth of a global information economy in which geographically or functionally distinct networks owned by distinct entities cooperate to provide high speed, high performance and cost effective service, on demand, to their customers. The highest level of interconnection is obtained at the optical level. Optical switching technologies can be categorized into optical circuit switching, optical packet switching and optical burst switching (OBS). Optical circuit switching, also known as lambda switching, can only switch at the wavelength level and is not suitable for bursty Internet traffic. Optical packet switching, which can switch at the packet level with fine granularity, is not practical in the foreseeable future; the two main obstacles are the lack of random access optical buffers and the optical synchronization of the packet header and payload.
OBS is considered the most promising form of optical switching technology. OBS can provide a cost effective means of interconnecting heterogeneous networks regardless of the lower-level protocols used in those networks. For example, an OBS network is able to transport 10 Gigabit per second Ethernet traffic between two sub-networks without the need to interpret lower level protocols, or to make two geographically distant wireless networks act as an integrated whole without protocol translations.

Unfortunately, OBS networks suffer from security vulnerabilities. Although IPSec can be used to secure IP networks, OBS networks can provide security services to traffic that does not necessarily have an IP layer, as illustrated in Figure 1.

Fig.1 Illustration of OBS network


This will likely be the case for the majority of traffic
served by the OBS layer. For example, native Ethernet traffic
can be transported directly over OBS networks. There is no
single security measure that can accommodate the security
needs of different modalities of traffic that interface with the
OBS networks. It is clear that the security of communication
within the OBS network has to be sufficiently addressed in

Page 89

www. ijraset.com
SJ Impact Factor-3.995

Special Issue-1, October 2014


ISSN: 2321-9653

International Journal for Research in Applied Science & Engineering


Technology(IJRASET)
order for OBS to fulfil its promise. In addition, as computing power increases in the future, classical cryptography and key management schemes based on computational complexity become increasingly susceptible to brute force and cryptanalytic attacks.
On the other hand, quantum cryptography uses quantum mechanics to provide security that is theoretically unbreakable. Given the optical modality of all information within the OBS network, introducing quantum cryptography in OBS networks appears to be a natural choice. Since the OBS network itself maintains a one-to-one correspondence between a header and its associated burst, the same relationship could be exploited to tie the same key to the header and the burst. The quantum-based methodology will allow a secure distribution of keys which could potentially be used to encrypt and decrypt each burst with a unique key. However, it must be stressed that classical cryptography and key distribution schemes will co-exist with quantum-based techniques for a long time. Therefore, we propose an integrated security framework for OBS networks which exploits the strengths of both classical and quantum cryptography schemes and allows a seamless migration to quantum techniques as the technology evolves. In addition, by embedding security components in the native OBS router architecture and incorporating quantum techniques for key distribution, the proposed approach can achieve a robust level of security while combining the strengths of both quantum and classical technologies. The integrated framework will make it possible to offer different levels of security for different applications.
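As background for the quantum key exchange referred to above, the three-stage protocol named in the title can be illustrated with commuting single-qubit rotations: Alice and Bob each apply and later remove a secret transformation, so the qubit travels in an arbitrary intermediate state rather than one of four fixed states. The sketch below is a numerical illustration under this standard formulation (ideal, noise-free channel; it is not the authors' optical implementation).

```python
import numpy as np

def R(theta):
    """Rotation in the |0>,|1> plane; all such rotations commute with one another."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def three_stage_exchange(bit, theta_a, theta_b):
    psi = np.array([1.0, 0.0]) if bit == 0 else np.array([0.0, 1.0])
    psi = R(theta_a) @ psi        # stage 1: Alice locks the qubit with her secret rotation
    psi = R(theta_b) @ psi        # stage 2: Bob adds his secret rotation and returns the qubit
    psi = R(-theta_a) @ psi       # stage 3: Alice removes her rotation and sends it back
    psi = R(-theta_b) @ psi       # Bob removes his rotation and measures
    return int(np.argmax(psi ** 2))   # ideal measurement in the computational basis

rng = np.random.default_rng(7)
theta_a, theta_b = rng.uniform(0, np.pi, size=2)
assert all(three_stage_exchange(b, theta_a, theta_b) == b for b in (0, 1))
print("bit recovered correctly with secret angles", round(theta_a, 3), round(theta_b, 3))
```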
The proposed security architecture is also consistent with the potential use of quantum data encryption in the future, as one can envisage the possibility of using a quantum technique for encryption, such as a Vernam cipher, to make the encryption theoretically unbreakable.
The rest of the paper is organized as follows. Section 2 provides the background of OBS networks. In Section 3, we describe security vulnerabilities in OBS networks, discuss the embedded security services that secure the OBS networks, and propose the integrated secure OBS router architecture which allows both classical and quantum cryptography techniques.
II. OPTICAL BURST SWITCHING (OBS) BACKGROUND

In OBS networks, data are aggregated into variable size data bursts and are transported directly over wavelength division multiplexing (WDM) links. A burst header is generated for each data burst and is sent on a separate control channel ahead of the data burst. The OBS routers set up a light path for the duration of the data burst according to the information carried in the burst header. Data bursts can stay in the optical domain and pass through OBS routers transparently, which eliminates the need for optical buffers in such networks. In addition, since burst headers and data bursts are sent on separate WDM channels, there is no stringent synchronization requirement. Figure 1 illustrates an OBS network interconnecting heterogeneous networks. OBS ingress edge routers are responsible for assembling packets into data bursts according to the egress edge router addresses and possibly quality-of-service (QoS) levels. A burst is formed when it either reaches the pre-defined maximum burst size or the burst assembly time reaches the timeout value; adaptive burst assembly schemes can be used as well. Once a burst is formed, the ingress edge router generates a burst header which is sent on a separate control channel. The burst header specifies the length of the burst and the offset time between the burst header and the data burst. The data burst is then launched on one of the WDM data channels. When the burst header reaches an OBS core router, it is converted to an electronic signal and processed electronically. Since burst headers carry complete information about their data bursts, the OBS core router can make efficient scheduling decisions in selecting the outgoing WDM channels for data bursts by simply processing burst headers. If at least one outgoing WDM channel is available for the duration of the burst, a channel is selected to carry the data burst; otherwise, the data burst is dropped. Before the data burst reaches the OBS core router, the optical interconnects in the core router are configured to route the optical data burst to the desired output channel. The data burst can thus traverse the OBS core network transparently as an optical entity without encountering O/E/O conversion. When data bursts reach the egress edge router, they are disassembled back into packets and forwarded to the proper network interfaces.
Note that burst assembly/disassembly functionality is only provided at OBS edge routers; there is no burst reassembly in the OBS core network. There is a one-to-one correspondence between the burst header and its associated burst. Burst headers are responsible for setting up the optical data paths for their data bursts, and data bursts simply follow the light paths set up by the burst headers and are transparent to OBS core routers.
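The assembly rule described above - a burst is closed either when it reaches the pre-defined maximum size or when the assembly timer expires, and a header carrying the burst length and offset time is generated - can be sketched as follows. The class, field names and numeric defaults are illustrative assumptions, not part of an actual OBS edge router implementation.

```python
import time
from dataclasses import dataclass, field

@dataclass
class BurstAssembler:
    """Toy ingress-edge burst assembler: flush on maximum burst size or on timeout."""
    max_burst_bytes: int = 64_000          # pre-defined maximum burst size (illustrative)
    timeout_s: float = 0.005               # burst assembly timeout (illustrative)
    offset_time_us: int = 50               # offset between header and burst (illustrative)
    packets: list = field(default_factory=list)
    size: int = 0
    started: float = 0.0

    def add(self, packet: bytes):
        if not self.packets:
            self.started = time.monotonic()
        self.packets.append(packet)
        self.size += len(packet)
        if self.size >= self.max_burst_bytes or time.monotonic() - self.started >= self.timeout_s:
            return self.flush()
        return None                         # burst still being assembled

    def flush(self):
        burst = b"".join(self.packets)
        # The header (sent on a separate control channel ahead of the burst)
        # carries the burst length and the offset time.
        header = {"length": len(burst), "offset_time_us": self.offset_time_us}
        self.packets, self.size = [], 0
        return header, burst
```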
III. PROPOSED EMBEDDED SECURITY SERVICES AND INTEGRATED SECURE OBS ROUTER ARCHITECTURE
3.1 Security vulnerabilities in OBS networks
OBS networks show great promise in providing cost effective interconnection solutions for the ever growing Internet. However, the OBS network is not free of security concerns. In
Page 90

www. ijraset.com
SJ Impact Factor-3.995

Special Issue-1, October 2014


ISSN: 2321-9653

International Journal for Research in Applied Science & Engineering


Technology(IJRASET)
this section, the need to bring security measures to OBS networks is discussed.
Orphan bursts: The burst header is responsible for making the WDM channel reservation for its corresponding burst. If the scheduling request is rejected at one of the OBS core routers, no valid optical path is set up for the arriving burst. Since the burst has already been launched, it will arrive at the input of the core router in any case. At this point the burst is no longer connected with its header and becomes an orphan burst, as shown in Figure 2(a). As a result, orphan data bursts can be tapped off by an undesirable party, compromising their security.
Redirection of data bursts: The one-to-one correspondence between the burst header and its associated burst is implied by the offset time carried in the burst header. This one-to-one correspondence can be violated by injecting a malicious header corresponding to the same burst, as shown in Figure 2(b). As a result, the route and the destination of the burst can be altered by the malicious header, even though a legitimate path has been set up by the authentic header.
Replay: A replay attack can be launched by capturing a legitimate but expired burst and transmitting it at a later time, or by injecting an expired burst header to cause the optical burst to circulate in the OBS network, delaying its delivery to the final destination.

Fig. 2. (a) Example of an orphan burst; (b) example of violation of the one-to-one correspondence in a redirected burst.

Denial of service: OBS core routers make scheduling decisions based on the availability of their outgoing WDM channels. When a burst is scheduled, the core router marks the WDM channel busy for the duration of the burst; when no idle WDM channel can be found for an upcoming burst, the burst is discarded. Note that all scheduling decisions are made by processing the burst information carried in burst headers on the fly. The OBS core routers have no ability to verify whether the scheduled optical burst indeed arrived at the designated time. This can be exploited to launch a denial-of-service attack by simply injecting malicious burst headers, causing the core routers to mark WDM channels busy and thus blocking real traffic passing through the OBS network. As we can see, an OBS network is under severe security threats, and effective security measures must be implemented in order to make the OBS network a viable solution for the future Internet.

3.2 Embedded Security Services
In this section, we propose to embed security services which integrate classical and quantum cryptography into the OBS network architecture, as opposed to adding them as a layer on top of it.

End-to-end data burst confidentiality: In OBS networks, data bursts assembled at the ingress edge router stay in the optical domain in the OBS core network and are only disassembled at the egress edge router. Since data bursts switch transparently across the OBS core routers, the end-to-end confidentiality of data bursts within the OBS domain can be provided by encrypting data bursts at the ingress edge router and decrypting them at the egress edge router. An effective encryption scheme for securing data bursts can be implemented using the advanced encryption standard (AES), since it can function at high speed while also providing a high degree of cryptographic strength. The keys can be transferred using either classical techniques or quantum-based key distribution schemes.
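As a rough illustration of this service, the sketch below encrypts a burst at the ingress and decrypts it at the egress using AES. The choice of the third-party Python `cryptography` package, the GCM mode, and the burst identifier used as associated data are our own assumptions for the example; the text above only specifies AES with classically or QKD-distributed keys.

```python
# Illustrative sketch only: end-to-end burst confidentiality using AES-GCM
# (library and mode are our assumptions). Key distribution is out of scope here
# and would be handled by the classical or QKD mechanisms described in the text.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_burst(key: bytes, burst: bytes, burst_id: bytes) -> bytes:
    """Ingress edge router: encrypt an assembled data burst."""
    nonce = os.urandom(12)                      # unique per burst
    ct = AESGCM(key).encrypt(nonce, burst, burst_id)
    return nonce + ct                           # nonce travels with the ciphertext

def decrypt_burst(key: bytes, blob: bytes, burst_id: bytes) -> bytes:
    """Egress edge router: decrypt before disassembly into packets."""
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, burst_id)

if __name__ == "__main__":
    session_key = AESGCM.generate_key(bit_length=256)   # e.g. agreed via QKD
    burst = b"aggregated packets ..."
    sealed = encrypt_burst(session_key, burst, b"burst-42")
    assert decrypt_burst(session_key, sealed, b"burst-42") == burst
```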
Per-hop burst header authentication: Unlike data bursts,
which retain optical modality in the core OBS network, burst
headers are converted back to an electronic form and are
processed at every OBS core
router along the path. Therefore, per hop burst header
authentication is needed to ensure that no malicious headers
are injected into the network. Authenticating burst headers at
each hop can mitigate several active attacks such as
misdirection of data bursts, replay, and
denial of service.
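A minimal sketch of such per-hop authentication, using only the Python standard library, is given below. The header fields shown and the way hop keys are shared are illustrative assumptions rather than the paper's exact header format.

```python
# Minimal sketch of per-hop burst header authentication with an HMAC tag.
import hmac, hashlib

def tag_header(hop_key: bytes, header_fields: bytes) -> bytes:
    """Previous hop: append an authentication tag to the burst header."""
    return hmac.new(hop_key, header_fields, hashlib.sha256).digest()

def verify_header(hop_key: bytes, header_fields: bytes, tag: bytes) -> bool:
    """Core router: authenticate the header before scheduling the burst."""
    expected = hmac.new(hop_key, header_fields, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    key = b"per-hop-shared-key"                        # distributed by the key manager
    header = b"dst=egress-7;offset=25us;len=1500000"   # hypothetical header fields
    tag = tag_header(key, header)
    assert verify_header(key, header, tag)             # legitimate header accepted
    assert not verify_header(key, header + b"X", tag)  # altered/malicious header rejected
```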

Page 91

www. ijraset.com
SJ Impact Factor-3.995

Special Issue-1, October 2014


ISSN: 2321-9653

International Journal for Research in Applied Science & Engineering


Technology(IJRASET)
Burst integrity with burst retransmission: In OBS networks,
when there is no outgoing WDM channel available, the burst
will be dropped. In order to ensure the integrity of burst
transmission, we propose to implement the following
mechanism. In case a burst
is dropped due to lack of WDM resources, the burst integrity
service will trigger burst drop notification with optional burst
retransmission at the ingress edge router. The burst integrity service also ensures that no injection or replay occurs during burst transportation. Such a service depends on direct access to the burst transmission control and can only be implemented as an embedded service.
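The sketch below illustrates this burst-integrity mechanism at the ingress edge router. The class name and its in-memory buffer are hypothetical, used only to make the control flow concrete.

```python
# Sketch of the burst-integrity idea: the ingress edge router keeps each transmitted
# burst until delivery is confirmed and retransmits it when a drop notification arrives.
class BurstIntegrityControl:
    def __init__(self, send_burst):
        self.send_burst = send_burst      # callback into the burst transmitter
        self.pending = {}                 # burst_id -> retained burst payload

    def on_transmit(self, burst_id, burst):
        self.pending[burst_id] = burst    # retain a copy for possible retransmission
        self.send_burst(burst_id, burst)

    def on_drop_notification(self, burst_id, retransmit=True):
        burst = self.pending.get(burst_id)
        if burst is not None and retransmit:
            self.send_burst(burst_id, burst)   # optional retransmission
        return burst is not None

    def on_delivered(self, burst_id):
        self.pending.pop(burst_id, None)  # delivered end-to-end; stop tracking
```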
Integrated classical and quantum cryptography:
Classical cryptography relies on the assumption that
performing certain mathematical functions is intrinsically hard
using available computing resources. However, as computing
power will inevitably increase in the future, such an
assumption is increasingly questionable. In contrast, quantum cryptography, or quantum key distribution (QKD), which is built upon the principles of quantum mechanics, is theoretically unbreakable, since observing the state of a transmitted photon corrupts that state. However, quantum cryptography still
faces technical challenges and will not completely replace
classical cryptography in the near future. Therefore, we
propose to provide a security framework which entails both
classical and quantum components.
3.3 Integrated Classical And Quantum Cryptography Services
Supervisory security protocol: The supervisory protocol
manages security in the OBS network on a per user basis.
Specifically, it assigns keys to users and stores their hash
values and sets up the sequence that needs to be followed to
authenticate the users by password authenticated key
exchange (PAKE) or some other procedure. Once the users have identified themselves for a session, a session key is generated either by classical or QKD techniques, for different levels of security guarantees. Such a service affects the burst assembly process and has to be implemented as an embedded service in the OBS network architecture. The supervisory security protocol is essential for the prevention of man-in-the-middle attacks.
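The following simplified sketch shows the per-user flavour of this idea: a stored password hash is verified and a session key is derived from it. It is not a full PAKE; a real deployment would use an actual PAKE exchange or QKD as described above, and the key-derivation scheme shown is our own illustrative choice.

```python
# Simplified per-user supervisory sketch (not a real PAKE): enroll a user, verify a
# password, and derive a per-session key. Standard library only.
import os, hmac, hashlib

def enroll(password: str):
    salt = os.urandom(16)
    pwd_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, pwd_hash                     # stored by the supervisory protocol

def authenticate(password: str, salt: bytes, stored_hash: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_hash)

def derive_session_key(stored_hash: bytes, session_nonce: bytes) -> bytes:
    # Classical session key; a QKD-derived key could be used instead for higher assurance.
    return hmac.new(stored_hash, b"session" + session_nonce, hashlib.sha256).digest()

if __name__ == "__main__":
    salt, ph = enroll("user-secret")
    assert authenticate("user-secret", salt, ph)
    key = derive_session_key(ph, os.urandom(16))
    print(len(key), "byte session key established")
```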
3.4 Integrated Secure OBS Router Architecture
In this section, we show how to embed the proposed
security services as part of the native OBS network
architecture. The integrated router architecture to support both
classical and quantum cryptography is also presented.
Q-channel for quantum key distribution: The proposed realization of QKD in OBS networks is as follows. As mentioned earlier, OBS preserves the photonic modality of information within its domain. We additionally introduce the constraint of optical passivity within the OBS boundary, specifically so far as the channel that carries the quantum key information (called the Q-channel in this paper) is concerned; the key photons travel on the Q-channel on an end-to-end basis. Since WDM technology is used for interconnecting the edge and the core routers, one (or several) of these channels (wavelengths) would carry the photons whose polarization would convey information regarding the key. Figure 3 shows the creation of a Q-channel between a pair of edge routers. The support for Q-channels in OBS routers is further explained below.

Fig. 3. Creating a Q-channel between edge routers.
Secure edge router architecture:
The OBS edge router aggregates traffic into bursts
based on destination edge router addresses, and possibly QoS
parameters. The basic operation of an edge router can be
found in Reference [10]. We extend the basic OBS edge
router architecture to support embedded OBS security services
as shown in Figure 4. In the ingress direction, the assembled bursts and their corresponding headers are encrypted before transmission onto the optical link. In the egress direction, the received burst headers are authenticated before their corresponding bursts are decrypted
and disassembled. The key management functions include
both classical and quantum components. The classical key
distribution protocol uses the control channel, while the QKD
is via Q-channels. The burst integrity control interacts with the
burst assembly process in the burst transmitter and retransmits
bursts as necessary.
Secure core router architecture:
OBS core routers electronically process the burst
headers sent on the control channel while allowing optical
bursts to pass transparently [10]. The integrated secure OBS
core router architecture shown in Figure 5 supports Q-channels for QKD, as well as classical key distribution
protocols. The key manager in the core router architecture is
for burst header authentication, and is transparent to the burst
encryption key exchanged on an end-to-end basis. The burst
scheduling process is only executed when the burst header is

authenticated. When bursts cannot be scheduled due to lack of
available outgoing WDM channels, the burst scheduling
process interacts with the burst integrity control unit to inform
the ingress router, and trigger burst retransmission.
High performance electronics such as field programmable
gate arrays (FPGAs) can be used to implement the proposed
embedded security services in the secure edge and core
routers, in much the same way as the burst assembly and burst
scheduling blocks are implemented.

Fig. 4. Integrated secure OBS edge router architecture

IV. QUANTUM CRYPTOGRAPHY FOR ENHANCED SECURITY
4.1. Quantum Cryptography Background
It is proven that, should the length of a random key equal the length of the message (in other words, if the rate at which the key can be transported equals the data speed), the encryption performed on the message through a simple technique such as the exclusive-OR operation will lead to a theoretically unbreakable cipher. Since there is no secure way of sending the random key over a public channel, the use of quantum cryptography can be envisaged as matching the performance of the theoretically unbreakable cipher. The first quantum-based scheme for exchanging secure keys was proposed by Bennett and Brassard in 1984 and is called the BB84 protocol, which is the most popular QKD method. QKD is effective because of the no-cloning theorem: identical copies of an arbitrary unknown quantum state cannot be created. The BB84 protocol and its variants use qubits (quantum bits) in one pass, and this is followed by two additional passes of classical data transmission. If Eve tries to differentiate between two non-orthogonal states, she cannot achieve an information gain without collapsing the state of at least one of them. Proofs of the security of quantum cryptography, practical issues, and optical implementations have been discussed in the literature, as has the issue of using attenuated lasers rather than single-photon sources. In short, quantum cryptography is ideally suited for OBS since it is fundamentally based on the quantum properties of a photon. Besides leading to a theoretically unbreakable encryption scheme, quantum-based encryption technology is well matched for use in an end-to-end photonic environment, which the OBS environment typifies.
4.2. BB84 Quantum Cryptography Protocol and Siphoning
Attacks
We first describe how the BB84 quantum cryptography protocol works. Unlike classical states, a quantum state is a superposition of several mutually exclusive component states. The weights of the component states are complex, and their squared magnitudes represent the probabilities of obtaining the corresponding component states. A two-component quantum state X, or qubit, can be written as |X> = a|0> + b|1>, where |a|^2 + |b|^2 = 1. Suppose Alice and Bob each have two polarizers, one with 0/90 degrees and one with 45/135 degrees. If Alice and Bob use the same basis frames, then they can communicate different binary states with each transmission. The two bases may be represented graphically as + and x, respectively.
We assume that Alice sends the string 0101100 using the two bases. Since Bob does not know the bases used by Alice, he chooses random bases as shown in Figure 6(b) and makes measurements. Bob sends the chosen basis vectors to Alice, who can now determine which measurement bases chosen by Bob were correct; this is communicated by Alice to Bob through a classical communication channel. Bob discards the un-matched bits, and the resultant bits form the raw key. Since only the polarizers at locations 1, 3, 4, 6, 7 correspond to the choices made by Alice, Bob obtains the raw key sub-string 00100.
The steps of BB84 protocol are summarized as follows:
Step 1: Alice randomly chooses polarizers to generate photons
and sends them to Bob.
Step 2: Bob receives those photons with randomly chosen
polarizers.
Step 3: Alice and Bob match their bases and discard the data
for un-matched polarizers.
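The three steps can be simulated in a few lines of Python (idealized: no channel noise and no eavesdropper); the basis symbols '+' and 'x' follow the convention used above.

```python
# Idealized simulation of the three BB84 steps (standard library only).
import random

def bb84_sift(alice_bits):
    n = len(alice_bits)
    alice_bases = [random.choice("+x") for _ in range(n)]   # Step 1: random encoding bases
    bob_bases   = [random.choice("+x") for _ in range(n)]   # Step 2: random measurement bases
    # With a matching basis Bob reads Alice's bit; otherwise his outcome is random.
    bob_bits = [b if ab == bb else random.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Step 3: compare bases over the classical channel and keep only matching positions.
    return [bit for bit, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]

if __name__ == "__main__":
    print("raw key:", bb84_sift([0, 1, 0, 1, 1, 0, 0]))     # the string 0101100 from the text
```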
However, BB84 is susceptible to siphoning attacks.
The unconditional security of BB84 and its variants can only
be guaranteed if one's light source emits nothing but single
photons. Since this is not possible with current light sources,
eavesdropping attacks are possible.

In particular, the eavesdropper siphons off individual photons
and measures them to determine what the legitimate receiver
has obtained. To reduce the probability that pulses will
contain multiple photons, current implementations of BB84
and its variants limit the intensity of each pulse and reduce the
bit rate at which they are sent. But the weaker a pulse is, the
less distance it can travel, and a slower bit rate reduces the
speed at which keys can be distributed. The problem of
siphoning attack plagues all variants of the BB84 protocol
and, therefore, it is essential to have a new quantum
cryptography protocol where the siphoned photons do not
reveal any information about the transmitted bit.

4.3. 3-Stage Quantum Cryptography Protocol for Secure Optical Burst Switching
Quantum cryptography allows one to go beyond the classical
paradigm and, therefore, overcome the fundamental
limitations that the classical techniques suffer from. However,
it also faces new challenges related to performance in the
presence of noise and certain limitations of the single-photon
generators. Our proposed integrated secure OBS architecture
is fully compatible with the well-known BB84 protocol.
However, to deal with the technical challenge of siphoning
attack on the practical multi-photon sources in the BB84
protocol, we propose to use a new 3-stage quantum
cryptography protocol for the secure OBS framework. Unlike
BB84 and its variants, the 3-stage quantum cryptography
protocol is immune to siphoning attacks and therefore,
multiple photons can be safely used in the quantum key
communication.
The 3-stage quantum cryptography protocol is based on random rotations, which can better protect duplicate copies of the photons than the non-single-qubit transmissions of the BB84 protocol. This also means that the new protocol can use attenuated pulse lasers
rather than single-photon sources in the quantum key
exchange, which will potentially extend the transmission
distance. The 3-stage quantum cryptography protocol for
security services in OBS is described as follows.
Consider transferring state X from Alice to Bob. The
state X is one of two orthogonal states and it may represent 0
and 1 by prior agreement of the parties. To transmit the
quantum cryptographic key, Alice and
Bob apply secret transformations UA and UB that are
commutative. The protocol can be summarized as follows:
Step 1: Alice applies a unitary transformation UA on quantum
information X and sends the qubits to Bob.
Step 2: Bob applies UB on the received qubits UA(X), which
gives UBUA(X) and sends it back to Alice.
Step 3: Alice applies UA† (the transpose of the complex conjugate of UA) on the received qubits to get UA†UBUA(X) = UA†UAUB(X) = UB(X) (since UA and UB are commutative) and sends it back to Bob. Bob then applies UB† on UB(X) to get the quantum information X.
The use of random transformations, which Alice and Bob can change from one qubit to another, guarantees that, from the perspective of the eavesdropper, the probabilities of collapsing into the |0> and |1> states are equal, which is desirable for cryptographic security. An example of the proposed protocol is illustrated in Figure 7. As we can see, while the actual quantum state of X is never exposed on the link, Bob is able to restore X and receives key 0 successfully.

Fig. 7. Illustration of the recommended quantum cryptography protocol for security services in OBS networks.
The commutativity of the rotation operator

R(θ) = [[cos θ, -sin θ], [sin θ, cos θ]]

is clear from the relation

R(θ) R(φ) = [[cos(θ+φ), -sin(θ+φ)], [sin(θ+φ), cos(θ+φ)]] = R(θ+φ) = R(φ) R(θ).
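A small NumPy sketch (NumPy is assumed; the angles are arbitrary illustrative values) confirms this commutativity numerically and walks through the three passes of the protocol.

```python
# Numerical sketch of the 3-stage exchange with rotation operators.
# For real rotations the conjugate transpose (dagger) is simply the transpose.
import numpy as np

def R(theta):
    """Rotation operator R(theta) used as the secret transformation."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

ket0 = np.array([1.0, 0.0])            # key bit 0 encoded as |0>
theta_A, theta_B = 0.7, 1.9            # secret angles, changed from qubit to qubit

pass1 = R(theta_A) @ ket0              # Alice -> Bob:  U_A(X)
pass2 = R(theta_B) @ pass1             # Bob -> Alice:  U_B U_A(X)
pass3 = R(theta_A).T @ pass2           # Alice -> Bob:  U_A^dagger U_B U_A(X) = U_B(X)
recovered = R(theta_B).T @ pass3       # Bob applies U_B^dagger and recovers X

assert np.allclose(R(theta_A) @ R(theta_B), R(theta_B) @ R(theta_A))   # rotations commute
assert np.allclose(recovered, ket0)    # X is recovered, never sent in the clear
print("Bob recovers key bit 0 with probability", round(float(recovered @ ket0) ** 2, 6))
```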
Unlike the BB84 protocol, which is vulnerable to siphoning of photons in an attenuated pulsed laser system, the proposed 3-stage protocol is immune to such an attack, since the actual quantum state of the key is never revealed in the communication. This property is of significant importance in terms of using quantum cryptography in a practical network environment where an optical path can potentially be extended beyond trusted routers.

V. IMPLEMENTATION
In this section, we discuss the implementation aspects of the protocol and the practical realization of the rotation operators, which are crucial to providing secure data transfers. The section also highlights the use of transformations that apply to multiple qubits simultaneously. One possible implementation is to apply Pauli transformations; they are convenient to use and entail lower precision requirements. The only condition for applying any new transformation in the operation of the three-stage protocol is that the transformations should map into the |0> and |1> states with equal probability, so that the requirement for cryptographic security remains intact. The simplest group consists of the basic single-qubit operators X, Y and Z:

X = [[0, 1], [1, 0]],   Y = [[0, -i], [i, 0]],   Z = [[1, 0], [0, -1]]

VI.
QCAD is a Windows-based environment for quantum computing simulation which helps in designing circuits and simulating them. QCAD can simulate the designed circuits and show the results (the states of the qubits).
Here Alice sends four qubits, 0110, to Bob. Alice sends a 0 bit with 0-degree polarization and a 1 bit with 90-degree polarization. After that, each bit undergoes a Pauli transformation (X), and then the status is measured. Bob uses the same polarization and transformation on each bit. After measuring the status, it is found that both measured values are the same. The output obtained after doing this in QCAD is shown below.
The same qubits were then sent with a different polarization for each bit and the qubit status was measured; in this case the measured values are different. The output is shown below.
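Since the QCAD screenshots are not reproduced here, the following NumPy sketch mimics the same two experiments: with the same polarization and Pauli-X transformation on both sides the measured bits equal the transmitted bits, while a mismatched polarization (basis) no longer gives a deterministic outcome.

```python
# Illustrative NumPy re-creation of the QCAD experiment described above.
import numpy as np

X = np.array([[0, 1], [1, 0]])                       # Pauli-X transformation
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)         # stand-in for a different polarization basis

def encode(bit):
    # 0 -> |0> (0-degree polarization), 1 -> |1> (90-degree polarization)
    return np.array([1.0, 0.0]) if bit == 0 else np.array([0.0, 1.0])

def measure(state):
    # Deterministic read-out in the same-polarization (computational) basis
    return int(np.argmax(np.abs(state) ** 2))

bits = [0, 1, 1, 0]
received = [measure(X @ (X @ encode(b))) for b in bits]   # Bob applies the same Pauli-X again
print("sent:", bits, "measured with same polarization/transformation:", received)

# With a different polarization the outcome is no longer deterministic:
probs = np.abs(H @ (X @ (X @ encode(0)))) ** 2
print("outcome probabilities in a mismatched basis:", probs.round(2))    # [0.5 0.5]
```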

CONCLUSION

Practical implementations of the BB84 protocol are not secure in the presence of Eve. In contrast, the proposed implementation of the three-stage protocol allows multiple photons to be used in the secure communication, even in the presence of Eve. This paper has proposed an approach to embed a security framework in the native OBS network architecture, providing a means to secure the future Internet from the ground up. The proposed embedded security architecture allows the most suitable classical and quantum cryptography techniques to be deployed, making it possible to offer robust security. While the proposed integrated security framework is fully compatible with the well-known BB84 quantum cryptography protocol, we recommend a new 3-stage quantum cryptography protocol based on random rotations of the polarization vector for the OBS security framework. Compared to the BB84 protocol, the 3-stage quantum cryptography protocol for security services in OBS networks has the following advantages: (1) it does not require single-photon sources as the BB84 protocol does (practical photon sources produce many photons, some of which may be siphoned off to break the protocol); instead, multiple photons can be used in communication, increasing potential transmission distances and reducing the protocol's sensitivity to noise; (2) while the BB84 protocol has one pass of quantum communication followed by two passes of communication through classical channels, all three passes of communication in the new protocol are quantum, providing more security; (3) the new protocol never reveals the actual quantum state of the key on the communication link, allowing the protocol to be extended beyond trusted routers.

REFERENCES
[1] Farahmand F, Jue J. Supporting QoS with look-ahead window contention resolution in optical burst switched networks. Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM), San Francisco, CA, December 2003; 2699-2703.
[2] Qiao C, Wei W, Liu X. Extending generalized multiprotocol label switching (GMPLS) for polymorphous, agile, and transparent optical networks (PATON). IEEE Communications Magazine 2006; 44(12): 104-114.
[3] Phuritatkul J, Ji Y, Zhang Y. Blocking probability of a preemption-based bandwidth-allocation scheme for service differentiation in OBS networks. IEEE/OSA Journal of Lightwave Technology 2006.
[4] Chen Y, Turner J, Mo P. Optimal burst scheduling in optical burst switched networks. IEEE/OSA Journal of Lightwave Technology 2007.
[5] O'Mahony MJ, Politi C, Klonidis D, Nejabati R, Simeonidou D. Future optical networks. IEEE/OSA Journal of Lightwave Technology 2006; 24: 4684-4696.
[6] Chen Y, Verma PK. Secure optical burst switching (S-OBS): framework and research directions. IEEE Communications Magazine 2008; 46(8): 40-45.
[7] Stallings W. Cryptography and Network Security: Principles and Practice (4th edn). Prentice Hall: NJ, 2006.
[8] Chen Y, Turner J, Zhai Z. Design and implementation of an ultra fast pipelined wavelength scheduler for optical burst switching. Photonic Network Communications 2007; 14: 317-326.
[9] Wang L, Chen Y, Thaker M. Virtual burst assembly at ingress edge routers: a solution to out-of-order delivery in optical burst switching (OBS) networks. Proceedings of the IEEE.
[10] Devetak I, Winter A. Relating quantum privacy and quantum coherence: an operational approach. Physical Review Letters 2004.
[11] Kak S. A three-stage quantum cryptography protocol. Foundations of Physics Letters 2006; 19: 293-296.
[12] Chen Y, Verma PK. Embedded security framework for integrated classical and quantum cryptography services in optical burst switching networks. Security and Communication Networks 2009.
[13] Mandal S, Sluss J. Implementation of secure quantum protocol using multiple photons for communication, 2012.

Evasive Security Using ACLs on Threads
Using firewall-like rules to prevent malware
Saifur Rahman Mohsin
Executive
Raaj Construction
Trichy, Tamil Nadu, India
Abstract This document describes a new architecture for security systems which greatly improves system performance and at the same time enforces veritable security. Conventional anti-virus systems that exist in the market today consume a massive amount of memory (both physical and permanent) and cannot be relied upon by power users who need to tap into the full potential of their devices. Also, these systems cannot be employed on thin clients, which have limitations in the quality and capacity of their hardware. The document describes a better architecture for the detection and prevention of malware with an escalated improvement in the overall performance of the system. It also discusses new concepts that may be ported into existing security systems to improve their efficiency.
Index Terms: Computer Security, Just-in-time Detection, Malware Prevention, Security Architecture
I. INTRODUCTION
Antivirus systems have been in use for a long time now, since the number of viruses has been increasing over the past few years. Different anti-virus systems provide a variety of distinctive features that offer to make systems highly secure from malware. However, the fundamental way they function is always the same. Conventional anti-viruses scan recursively, burrowing into file systems as well as connected peripherals, identifying files that contain malware signatures in either plain-text or obfuscated form. The problem is that this process consumes a great deal of energy and also requires a bulk of memory in order to function. These systems also need to be updated frequently with the definitions (i.e., the signatures) of new malware, which is created every hour.
This paper describes a more efficient architecture to overcome the problems of the existing conventional systems. This is done by looking at malware in its most primitive form to understand the basic process that any malware involves itself in, from hooking itself into the operating system to executing stealthily. We also look at how anti-virus systems detect these malicious files, so that we can create a more intuitive and real-time system that handles malware effectively before it attacks the system.
II. HOW VIRUSES WORK
Malware comes in many different forms, ranging from common viruses to malicious scripts, worms, Trojans, rootkits, etc. A virus is anything that can cause havoc to the confidentiality, integrity or availability of a system. Regardless of whether a given malware is a virus, worm, or Trojan, it always requires a thread to execute. Like any program, it consists of several code statements and also has a single point of entry for execution. This means that, like any program, a virus requests the operating system for memory and processor cycles (as shown in Fig. 1) in order to execute. It also has an entry in the process table and is associated with a process ID (pid) and a handle ID.

Fig. 1. How a process begins to execute

Most viruses are targeted towards a certain kind of resource. The intended resource may range from private information and confidential data to illegal use of computational power, or even serving ads (in the case of adware). The code that is written in a virus program must be well hidden from the anti-virus system as well as from other programmers who might reverse engineer the code for their own purposes. A good virus is therefore encoded in a format that makes it hard for anti-virus systems to detect and makes reverse engineering infeasible. This process is known as obfuscation, and most good viruses in existence today are well obfuscated.

In addition to obfuscation, most viruses are attached to legitimate files so that they are better concealed. There is a higher chance of a virus being installed as a sub-component of an infected program rather than as an individual program itself. This ensures that the virus program does not appear to be malicious, thereby preventing anti-virus (AV) systems from flagging it. Such a program, which is highly invisible to an AV, is considered to be Fully UnDetectable (FUD). There are several FUD viruses in existence today. These are no longer just

targeted towards computers but are aimed towards national
infrastructure like power grids, nuclear plants, automation
plants, etc. Therefore, the conventional AV systems do not
provide the necessary security that is required and hence we
must look for better solutions.
Our goal is to develop such a system, which detects and informs the end user about the existence of new threads (or processes) that the user never intended to execute. A user may decide whether he really wants to execute a given process, and the system can mark these choices as trusted processes (in a similar fashion to how websites are marked as trusted by firewalls and browsers). Everything else can be blocked (this will include the threads created by malicious programs), as it is an unnecessary overhead for the system to execute these processes, regardless of whether they are malicious or not.

III. HOW ANTI-VIRUS SOFTWARE WORKS


There are several AVs available in the market today. A lot of them offer special features which give them a certain edge over other systems. Regardless of these features, a typical AV system consists of a few common components: a database of virus signatures, a scanner, and a few auxiliary modules that make it easy for the user to customize the way the AV system behaves. A definition repository (a file or database) contains several code signatures that may be used to identify whether a file is infected or infectious. These signatures need to be updated frequently, since new viruses are manufactured every hour. This becomes an overhead for the user, who must update the system every now and then. Also, the AV system tends to become larger and larger as time progresses due to the vastness of the definition file and the enormity of the new signatures that keep getting added. To complicate this further, the AV system itself contains a scanner module, which sweeps through the file system, testing each file for plain-text or obfuscated forms of the virus signatures. The entire search process requires a substantial amount of memory and slows down the overall performance of the system. It has been observed that a system is much faster when an AV system is not present than otherwise.
Thus, it is possible to conclude that an anti-virus system which uses the conventional architecture cannot be a real-time system, and its success rate and efficiency depend on the number of virus signatures present as well as the kinds of obfuscation algorithms that it can detect. Our goal is to describe a better architecture that avoids this overhead and also achieves a higher success rate. Such a system is described in the next section.
IV. THE IMPROVED ARCHITECTURE
It has already been described in Section II that any malware requires a slot in the process table in order to run and therefore has an entry point. Exploiting this fact, we can design an architecture that ensures that each process that starts is filtered using a set of rules that can be pre-defined as well as user-defined. This is very similar to the system used by firewalls, where a list is used to decide whether a website/hostname must be blocked or allowed. Such a list specifies the limits of access and is known as an Access Control List (ACL). We implement this concept in anti-virus detection systems to realize a more effective architecture.
The efficiency of this approach lies in the fact that there is no scanning mechanism that searches for viruses. Instead, detection is done at the moment a program requests the OS to provide resources for it (as shown in Fig. 2). This means that we are performing Just-In-Time Detection (JIT-D), which is why this architecture is more effective than the existing ones in use today.

Fig. 2. How the proposed system intervenes process startup

In order to increase the efficiency further, we need not rescan a process if it has already been scanned. However, this poses a risk, because a user may replace a clean file with a malicious one. For this reason, we take the hash of the file contents and store it in the ACL along with the process name. There is no need to store the path, as the hash ensures that a particular file is always uniquely identified. Every time the process starts, our system checks that the present hash is identical to the hash stored in the ACL. If so, it will not scan the file; if not, it scans the file, as it has been altered. This technique is highly effective because it is extremely difficult even for a skilled programmer to write a malware program whose hash value exactly coincides with that of an existing program.
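A minimal Python sketch of this hash check is shown below (the authors' actual implementation is in C#; the dictionary-based ACL and the scanner callback are illustrative assumptions).

```python
# Sketch of the ACL/hash check: skip rescanning a process whose executable is unchanged.
import hashlib

def file_hash(path: str) -> str:
    h = hashlib.sha1()                     # SHA-1 / MD5, as mentioned in Section V
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check_process(acl: dict, name: str, exe_path: str, scanner) -> bool:
    """Return True if the process may run, False if it must be blocked."""
    entry = acl.get(name)
    if entry and entry["hash"] == file_hash(exe_path):
        return entry["allowed"]            # unchanged since the last decision: no scan
    verdict = scanner(exe_path)            # new or altered file: scan it now
    acl[name] = {"hash": file_hash(exe_path), "allowed": verdict}
    return verdict
```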
V. ACCESS CONTROL LISTS
ACLs have been in use in operating systems for decades. They are generally used to specify file rights, i.e., who has the authority to access a particular file. An ACL may be a simple text file that contains mappings of process names along with their permissions and hashes.
When a process starts, it is first checked whether an entry for the process exists in the ACL. If it does, it is then checked, using the computed hash, whether the file has been modified. If the hash does not match, the anti-malware system is invoked to scan the initiated process and determine whether it is safe to execute or not.
We use a standard hash such as MD5 or SHA-1 to uniquely identify the file, because some files change when a user updates or replaces them. The architecture explained here strongly enforces that files /

programs need not be reprocessed unless their data has been modified. Hence, by using a computed hash we can easily detect changes.

Fig. 3. How the proposed system blocks malicious processes

An ACL therefore contains program names, their computed hashes and their access levels. If, upon execution of a new process, the process exists in the ACL and is marked as a blocked process, then the system prevents the execution of the process (see Fig. 3), and this prevents malicious code from infecting the system.
VI. JUST-IN-TIME DETECTION
It has already been explained in Section III that most AVs are quite slow. They not only take a lot of time to run but also consume a large amount of physical memory. This causes a lot of lag in other programs and slows down the overall performance of the system.
By detecting processes exactly at the time of their execution, we remove the need for a scanner to traverse the hard drives and connected peripherals. This enables the program to run at a very low memory cost and high efficiency, with only momentary bursts of memory usage when a process is caught. We call this technique Just-in-Time Detection simply because it detects just before the time of execution. This means that a malicious file is perfectly allowed to sit on a hard drive, as long as it is not executed and therefore not harmful. The moment it is executed, our system kicks in to prevent it from causing harm to the system.
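The sketch below shows the just-in-time idea as a simple polling loop; a production system would hook process creation in the OS instead. The third-party psutil package is assumed here for process enumeration, and the check_process callable is the ACL/hash routine sketched in Section IV.

```python
# Illustrative polling loop for just-in-time detection (psutil assumed).
import time
import psutil

def jit_monitor(acl, scanner, check_process, poll_interval=0.5):
    known = {p.pid for p in psutil.process_iter()}
    while True:
        for proc in psutil.process_iter(["pid", "name", "exe"]):
            if proc.info["pid"] in known or not proc.info["exe"]:
                continue
            known.add(proc.info["pid"])
            try:
                proc.suspend()                       # hold the new thread/process
                if check_process(acl, proc.info["name"], proc.info["exe"], scanner):
                    proc.resume()                    # trusted: let it run
                else:
                    proc.kill()                      # blocked by the ACL
            except psutil.Error:
                pass                                 # process exited or access denied
        time.sleep(poll_interval)
```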
VII. IMPLEMENTATION
Using C#, an implementation was made using the Diagnostics namespace in the .NET Framework to intercept process initiation. By suspending the thread until the processing completes, it was possible to prevent the system from getting infected during the processing time itself. It was also identified that this architecture is self-preserving, as it prevents even the ACLs from being directly modified by a process. The implementation was targeted at the Windows operating system, as it is the most widely used system and has more viruses compared to other systems. An analysis was also carried out (as shown in Fig. 4), which determined that our system is highly efficient in memory consumption, as its footprint was negligible compared to other AV systems or even other programs. Hence, it was determined that this architecture is the best approach for malware prevention, as it saves a lot of resources.

Fig. 4. Memory usage statistics of the proposed system against other applications: a browser and another AV system



Sinter Coolers
Ramireddy. Pavankalyan Reddy1, Telukutla. Harika Sivani2
Dept. of Electrical and Electronics Engineering
Lakkireddy Balireddy College of Engineering
Mylavaram, Krishna District
Andhra Pradesh, India
Abstract At present, distributed generation (DG) is a research focus all over the world. As a kind of DG system, a cogeneration system utilizing waste heat from the sintering-cooling process plays an important role in modern iron and steel enterprises.
I. INTRODUCTION

One of the most frequently discussed and worrying issues nowadays is global warming, i.e., the increase of the earth's average temperature due to greenhouse gases, which trap heat that would otherwise escape from the earth. Recent studies, however, indicate that waste heat produced by industries (large-scale industries such as steel-making plants, oil refineries, etc.) is now deteriorating the environment even more rapidly than the greenhouse gases mentioned above. We therefore convert the waste heat produced by steel-making industries into electricity in order to reduce this heat, even if only by a small amount. Most of our steel plants now use sinter plants and sinter coolers to convert iron into steel, and these produce exhaust steam in large quantities.
II. SINTER PLANT
Sintering is an agglomeration process of fine mineral
particles into a porous mass by incipient fusion caused by heat
produced by combustion within the mass itself. Iron ore fines,
coke breeze, limestone and dolomite along with recycled
metallurgical wastes are converted into agglomerated mass at
the Sinter Plant, which forms 70-80% of iron bearing charge
in the Blast Furnace. The vertical speed of sintering depends
on the suction that is created under the grate. At VSP, two
exhausters are provided for each machine to create a suction
of 1500 mm water column under the grate.
There are several types of sintering machines, classified by their construction and working: a) belt type, b) stepping type, c) air draft type, d) box type, and so on.
Smelting is a term related to metallurgy, and blast furnaces are used for smelting. Blast furnaces are known by different names in different contexts: bloomeries for iron, blowing houses for tin, smelt mills for lead, and sinter plants for base metals such as steel, copper and iron.
Iron ore fines cannot be charged directly into a blast furnace. In the early 20th century, sinter technology was developed for converting ore fines into a lumpy material chargeable in blast furnaces. Though it took time to gain acceptance in the iron-making domain, it now plays an important role in generating steel and in utilizing the metallurgical waste generated in steel plants to enhance blast furnace operation.
III. WASTE HEAT RECOVERY IN SINTER PLANT
In a sinter plant, sensible heat can be recovered both from the exhaust gases of the sinter machine and from the off-air of the sinter cooler. Heat recovery can take different forms.
Hot air streams from both the sinter machine and the sinter cooler can be used for the generation of steam by installing recovery boilers. This steam can be used to generate power or can be used as process steam. For increased heat recovery efficiency, the high-temperature exhaust section should be separated from the low-temperature exhaust section, and heat should be recovered only from the high-temperature section.
Sinter machine exhaust gas can be recirculated to the sinter machine, either after passing through a heat recovery boiler or without it.
Heat recovered from the sinter cooler can be recirculated to the sinter machine, or it can be used to preheat the combustion air in the ignition hood or the raw mix fed to the sinter machine. It can also be used to produce hot water for district heating.
A. Features

- Waste gas heat of a sintering plant is recovered as steam or electric energy. The heat recovery efficiency is 60% for waste gas from the cooler and 34% for waste gas from the sintering machine proper.
- Waste gas heat recovery from the sintering machine proper also leads to a reduction in coke consumption.
- Applicable whether the cooler is of a circular type or a linear type.
- CO2 emissions can be reduced, leading to a possibility of employing this system in a CDM project.

IV. SINTER PLANT COOLER WASTE HEAT RECOVERY SYSTEM

Fig. 1 Block diagram of sinter cooler plant

This is a system for recovering the sinter cooler's high-temperature exhaust gas as steam, which can be used for power generation. Furthermore, reuse of the exhaust heat as the thermal source for sintered ore production will improve the productivity of the sinter machines.
The main principle involved in this system is converting heat into steam and then using the normal generation process, in which the turbine rotates and supplies mechanical energy as input to the generator in order to obtain electricity as output. The system recovers sensible heat from the hot air emitted by the cooling process of two sinter coolers located downstream of two sinter machines. The heat is captured by heat recovery hoods and then routed to a heat recovery boiler to generate superheated steam, which is converted to electricity by a turbine connected to a generator.
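As a rough worked example of this recovery chain (all numbers below are assumed for illustration and are not plant data from this paper), the sensible heat in the cooler off-air and the corresponding electric output can be estimated as follows.

```python
# Illustrative estimate: sensible heat in the cooler off-air -> steam -> electricity.
m_dot  = 200.0    # hot-air mass flow captured by the hoods, kg/s (assumed)
cp_air = 1.005    # specific heat of air, kJ/(kg.K)
t_hot  = 350.0    # off-air temperature entering the recovery boiler, deg C (assumed)
t_out  = 150.0    # temperature leaving the boiler, deg C (assumed)
eta    = 0.20     # overall boiler + turbine + generator conversion efficiency (assumed)

q_recovered = m_dot * cp_air * (t_hot - t_out)   # sensible heat recovered, kW
p_electric  = eta * q_recovered                  # electric output, kW

print(f"recovered heat ~ {q_recovered:.0f} kW, electric output ~ {p_electric:.0f} kW")
```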
V. RECOVERY OF WASTE HEAT EMITTED BY THE COOLING PROCESS INTO STEAM
This system works much like a heat recovery ventilator: whenever the heat recovery hoods take heat from the sinter cooler, it is fed directly to the boiler, where the high temperature converts water into steam. The water tube boiler has water in its tubes that is heated by this hot recovered air and converted into steam. The steam drives the turbine, which supplies mechanical energy as input to the generator, and the generator produces electricity.

Fig. 2 Recovery of waste heat emitted

VI. ADVANTAGES AND DISADVANTAGES
A. Advantages
1) Reduction in pollution: A number of toxic combustible wastes, such as carbon monoxide gas, sour gas, carbon black off-gases, oil sludge, acrylonitrile and other plastic chemicals, would otherwise be released to the atmosphere; burning them in incinerators serves a dual purpose, i.e., it recovers heat and reduces environmental pollution levels.
2) Reduction in equipment sizes: Waste heat recovery
reduces the fuel consumption, which leads to reduction in the
flue gas produced. This results in reduction in equipment sizes
of all flue gas handling equipment such as fans, stacks, ducts,
burners, etc.
3) Reduction in auxiliary energy consumption: Reduction
in equipment sizes gives additional benefits in the form of
reduction in auxiliary energy consumption like electricity for
fans, pumps etc.
Recovery of waste heat has a direct effect on the efficiency
of the process. This is reflected by reduction in the utility
consumption & costs, and process cost.
B. Disadvantages
1) Capital cost: The capital cost to implement a waste heat
recovery system may outweigh the benefit gained in heat
recovered. It is necessary to put a cost to the heat being offset.
2) Quality of heat: Often waste heat is of low quality
(temperature). It can be difficult to efficiently utilize the
quantity of low quality heat contained in a waste heat medium.
Heat exchangers tend to be larger to recover significant
quantities which increases capital cost.
3) Maintenance of Equipment: Additional equipment
requires additional maintenance cost.
CONCLUSION

To meet the increasing world demand for energy, the rate
of depletion of non-renewable energy sources must be reduced
while developing alternative renewable sources. This can be
achieved by increasing the overall thermal efficiency of
conventional power plants. One way to do this is by waste
heat recovery. Most of the techniques currently available
recover waste heat in the form of thermal energy which is then
converted to electricity in a conventional steam power plant.
Another approach which has received little attention so far is
direct conversion of thermal waste energy into electricity.
In this article, a configuration of a waste heat recovery system is described. We studied the composition and characteristics of waste heat resources and identified a typical process of energy recovery, conversion and utilization. Waste heat contributes almost 20% to global warming, and the maximum amount of this heat comes from large-scale industries and power plants. About 70% of our steel plants contain sinter plants, and the proposed circulation system is for waste heat that has so far been emitted only to the atmosphere. The system is expected to promote energy efficiency by utilizing waste heat, thereby reducing CO2 emissions. It will also enhance the environmental benefits, because the cooling air is used in a closed cycle without emitting a high concentration of dust into the atmosphere, and power shortages can also be alleviated.


Alternate Energy In The Tractions
L. Saravanan1, P. Siva Sankari2, J. P. Anish Jobin3
1,2,3 Department of Electrical & Electronics Engineering, Jayaram College of Engineering & Technology, Tamil Nadu, India

Abstract: The fuel requirements of the world are increasing at an alarming rate, and demand has been running ahead of supply. As population and development activities increase, the requirement for fuel will also increase. We therefore need to look for alternatives to conventional sources, and the best alternatives are the non-conventional, or renewable, sources of energy. In this paper, the alternate energy potential in tractions (AET) is investigated. The innovative method of generating wind energy on a fast-moving traction is one of the best methods, and it eliminates many of the problems facing railways today.
Key Words: Fuel demand, Population growth, Conventional sources, Non-conventional sources, AET, Wind energy in traction.
I. INTRODUCTION
The wind is a free, clean, and inexhaustible energy source. Generating wind energy on a fast-moving traction is one of the new innovations in wind power production and also provides alternate energy to the traction. Placing a wind turbine on the traction is an efficient method of producing energy, which can also be used for other purposes. The main aim of this innovation is to provide a method and a system for generating electricity by using the high wind pressure generated by moving vehicles, using a free, renewable input, namely air. Wind energy is cheap, non-polluting, and capable of providing enough electricity.
II. WIND ENERGY
The wind has been used to power sailing ships for
many centuries. Many countries owed their prosperity to their
skill in sailing. The New World was explored by wind powered
ships. Indeed, wind was almost the only source of power for
ships until Watt invented the steam engine in the 18th Century.
Denmark was the first country to use the wind for generation of
electricity. The Danes were using a 23 m diameter wind turbine
in 1890 to generate electricity. By 1910, several hundred units
with capacities of 5 to 25 kW were in operation in Denmark.
Other countries also continued wind research for a longer period
of time.

The worldwide wind capacity reached 282,275 megawatts, of which 44,609 megawatts were added in 2012, more than ever before. Wind power showed a growth rate of 19.2%, the lowest rate in more than a decade. Altogether, 100 countries and regions used wind power for electricity generation. For many years, the wind industry has been driven by the Big Five markets: China, USA, Germany, Spain, and India. These countries have represented the largest share of wind power during the last few decades.
The available potential of wind energy in India is 45,000 megawatts, out of which 1,367 MW had been exploited up to August 2012. The chart below lists the top five wind energy generating countries in the world. Wind power is one of the most efficient alternate energy sources.
turbine technology over the last decade with many new
companies joining the fray. Wind turbines have become larger,
efficiencies and availabilities have improved and wind farm
concept has become popular. The economics of wind energy is
already strong, despite the relative immaturity of the industry.
The downward trend in wind energy costs is predicted to
continue.
As the world market in wind turbines continues to
boom, wind turbine prices will continue to fall. India now ranks
as a "wind superpower" having a net potential of about 45000
MW only from 13 identified states.
III. WIND ENERGY IN TRACTION
In this method, the wind turbines are placed on the sides of the traction. In this way we can eliminate the aerodynamic problems caused in the traction. When the train runs along the rail, the wind turbines start to rotate due to the kinetic energy of the wind.

Chart -1: Top Wind generation countries


Figure -1: Wind Turbine positioned on sides of the traction

The speed of rotation of the wind turbine depends on the speed of the train. When the train moves at high speed, the wind also crosses the turbine blades at high speed and hence produces a large power output. However, very high train speeds are a danger to the wind turbines because they can damage them. The wind turbines are therefore covered by a protective shield, and the wind speed is continuously measured by a propeller-type or cup-type wind-speed sensor. The opposing force produced in this method is greater than in a normal diesel-engine train; it can be reduced by making some modifications to the traction design, such as using a limited number of blocks for passengers and using lightweight metals for construction. The amount of power produced depends on the wind turbine capacity, the train speed and some other factors. Because of this method, some modifications are also needed in the areas near the train routes.

Figure -2: Wind Turbine positioned above roof of the traction

The size of the system depends on how one plans to use the power that is generated. Small wind turbines can range in size from 20 watts to 100 kilowatts (kW), with 20-500 watt systems being used to charge batteries and larger systems of 5 to 15 kW. Normally, wind systems consist of a rotor or blades, a generator mounted on a frame, a tower, the necessary wiring and the balance-of-system components: controllers, inverters, and possibly batteries. Through the spinning blades, the rotor traps the kinetic energy of the wind and converts it into rotary motion to drive the generator, which produces electricity. In this method, however, the tower is not necessary. The diameter of the rotor and the maximum wind speed determine the amount of power that can be produced.
In Figure 2 above, the windmills are placed above the traction. This is another method of producing wind energy in traction: small windmills are placed above the roof of the traction. Since the electric train runs over railroad tracks, this alternative form of wind energy produced by the train is very unique. If the wind is properly directed towards the wind turbine blades, optimum electricity may be generated. The desired direction of wind is obtained by a means for channeling the wind in the direction of the wind turbine.

IV. AERODYNAMICS
Aerodynamics is the science and study of the physical laws of the behavior of objects in an air flow and of the forces that are produced by air flows. The shape of the aerodynamic profile is decisive for blade performance. Even minor alterations in the shape of the profile can greatly alter the power curve and noise level. Therefore, a blade designer does not merely sit down and outline the shape when designing a new blade.
The aerodynamic profile is formed with a rear side that is much more curved than the front side facing the wind.

Figure -3: Wind flow across blade
Two portions of air molecules side by side in the air flow moving towards the profile at point A will separate and pass around the profile, and will once again be side by side at point B after passing the profile's trailing edge. As the rear side is more curved than the front side on a wind turbine blade, the air flowing over the rear side has to travel a longer distance from point A to B than the air flowing over the front side. Therefore, the air flow over the rear side must have a higher velocity if these two different portions of air are to be reunited at point B. The greater velocity produces a pressure drop on the rear side of the blade, and it is this pressure drop that produces the lift. The highest speed is obtained at the rounded front edge of the blade.
V. POWER PRODUCTION
5.1 Wind energy into electric power

Atmospheric pressure differences accelerate the air and impart kinetic energy to it. Wind energy conversion machines (WEC) convert wind energy into electrical or mechanical forms.

K.E. = (1/2) (mass) (velocity)^2

Power = K.E./time = (1/2) (mass/time) (velocity)^2, and mass/time = (density) (area) (velocity), so

Power = (1/2) (density) (area) (velocity)^3 = ρAV^3 / 2

Example: V = 10 m/s, A = (2 m)^2 = 4 m^2, ρ = 1.2 kg/m^3, giving P = 2400 W (theoretical).

Figure -4: Traction facing aerodynamic problems

A wind turbine positioned above the traction reduces the speed of the train; this is overcome by placing the turbines on the sides of the traction.
5.2 Power production


A train moving at 125 mph would generate a wind speed equivalent to 60 feet/second. Wind blowing at such a speed will let a normal wind power generator harness about 3500 W of power. If a train about 656 feet long, running at a pace of 187 mph, moves along a 0.62 mile railway track in about 18 seconds, the power generated in this small period by the turbines laid along the track will be 2.6 kW. The kinetic energy of the wind is the source of the driving force of a wind turbine. That kinetic energy can be expressed by the formula

E = f . m_spec . v^3

where E is the kinetic energy, m_spec is the specific mass (weight) of the air, v is the velocity of the moving air (the wind), and f is a calculating factor without any physical meaning.
The power in the wind is proportional to:
a) the area of the windmill being swept by the wind,
b) the cube of the wind speed, and
c) the air density, which varies with altitude.
The formula used for calculating the power in the wind is shown below:

Power = (density of air x swept area x velocity cubed) / 2
P = (1/2) ρ A V^3

where P is the power in watts (W), ρ is the air density in kilograms per cubic meter (kg/m^3), A is the swept rotor area in square meters (m^2), and V is the wind speed in meters per second (m/s).
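A quick check of this formula in Python reproduces the worked example of Section 5.1 (ρ = 1.2 kg/m^3, A = 4 m^2, V = 10 m/s giving 2400 W).

```python
# Check of the power formula used above: P = 0.5 * rho * A * V^3.
def wind_power(rho, area, velocity):
    """Theoretical power in the wind passing through the swept area, in watts."""
    return 0.5 * rho * area * velocity ** 3

print(wind_power(1.2, 4.0, 10.0))   # 2400.0 W, matching the example in Section 5.1
# A real turbine extracts only a fraction of this theoretical value.
```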
VI. PROBLEMS TO FACE
The major problem in this system is that some alterations have to be made in the train design; the train may not be able to move over overpasses or bridges, because the mass of the windmill produces an opposing force on the train. Similarly, the train is subject to some aerodynamic problems.

VII. GROWTH OF WIND ENERGY


These methods are being implemented to produce energy and introduce a new way of producing clean energy on trains. There are 14,300 trains operating daily on 63,000 route kilometers of railway in India. This technique would be capable of producing 1,481,000 megawatts (MW) of power in India alone, but some changes are needed in Indian rail routes and tractions.

Chart -2: Growth of wind power production

It will require a long time to achieve the range of power production specified above.
VIII. CONCLUSIONS
By implementing AET we are able to produce an alternate source of energy; thus we are not only finding a new source of energy but also a way to protect the natural world from fossil fuel pollution. Using this new concept, we can expect a greener and pollution-free tomorrow. The whole project demands that wind energy be regarded not only as a means of supplying power to consumers but also as an alternative fuel in transportation. This project is still in the research stage, but if

it is well implemented, it will be a very big milestone in engineering history.



A REVIEW ON MODIFIED ANTI-FORENSIC TECHNIQUE FOR REMOVING DETECTABLE TRACES FROM DIGITAL IMAGES
M. Gowtham Raju1, N. Pushpalatha2
1M.Tech (DECS) Student, 2Assistant Professor
Department of ECE, Annamacharya Institute of Technology and Sciences (AITS), Tirupati, India-517520
mgr434@gmail.com, pushpalatha_nainaru@rediffmail.com


Abstract: The increasing attractiveness of, and reliance on, digital photography has given rise to new authenticity issues in the field of image forensics. There are many advantages to using digital images. Digital cameras produce immediate images, allowing the photographer to view them and decide at once whether the photographs are sufficient, without the postponement of waiting for film and prints to be processed. Digital photography does not require external developing or reproduction, and digital images are easily stored. No conventional "original image" is produced, as with a traditional camera, so when forensic researchers analyze the images they do not have access to an original image for comparison. Fraud through conventional photography is relatively difficult and requires technical expertise, whereas a significant feature of digital photography is the ease, and the reduced cost, of altering the image. Manipulation of digital images is simpler: with some fundamental software, a digitally recorded image can easily be edited. Most alterations involve borrowing, cloning, removal or switching of parts of a digital image. A number of techniques are available to verify the authenticity of images, but the amount of image tampering is also increasing, so forensic researchers need to find new techniques to detect the tampering. For this purpose they have to study new anti-forensic techniques and find solutions to them. In this paper a new anti-forensic technique is considered which is capable of removing the evidence of compression and filtering. It does so by adding a specially designed noise, called tailored noise, to the image after processing. This method can be used to cover the history of processing and, in addition, to remove the signature traces of filtering.

Keywords: Digital forensics, JPEG compression, image coefficients, image history, filtering, quantization, DCT coefficients.

Introduction
Digital images have become very popular for transferring visual information, and there are many advantages to using these images instead of traditional camera film. Digital cameras produce instant images which can be viewed without the delay of waiting for film processing; they do not require external development, and the images can be stored easily. Images can be processed in different ways: in some cases they are processed as JPEG images, and in other cases in bitmap format. When an image is used in bitmap format, it may have to be used without any information about its past processing; to learn about that past processing it is desirable to know the artifacts of the image, and forensic techniques are capable of finding this earlier processing information. Forensic researchers therefore need to examine the authenticity of images to determine how much trust can be placed in these techniques, and also to find their drawbacks. A person with good knowledge of image processing can perform undetectable manipulation, so it is also desirable to find the weaknesses of these techniques. For this purpose, research has to develop both forensic and anti-forensic techniques to understand those weaknesses.
Consider a situation in which someone has already tried to remove the artifacts of compression. Forensic experts can detect this using existing techniques such as quantization estimation, which is useful when the analyst receives the compression details and the quantization table used for processing and compression. Some existing techniques, such as detection of the blocking signature and estimation of the quantization table, expose mismatches and forgeries in JPEG blocks by finding the evidence of compression. To probe the limits of image forensics, research therefore has to develop tools that are capable of fooling the existing methodologies. Even though the existing methods have advantages, they have some limitations too: their main drawback is that they do not account for the risk that a new technique may be designed and used to conceal the traces of manipulation. As mentioned earlier, it may be possible for an image forger to generate undetectable compression and other image forgeries. A modified anti-forensic technique is therefore presented which is capable of hiding the traces of earlier processing, including both compression and filtering. The concept is that adding specially designed noise to the image blocks will help to hide the proof of tampering.

1. RELATED TO PROJECT WORK:

1.1. ANTI FORENSIC OF DIGITAL IMAGE


COMPRESSION:
As society has become increasingly reliant upon digital images to communicate visual information, a number of forensic techniques have been developed. Among the most successful of these are techniques that make use of an image's compression history and its associated compression fingerprints. Anti-forensic techniques are capable of fooling forensic algorithms; this work represents a set of anti-forensic techniques designed to remove forensically significant indicators of compression from an image. The technique first estimates the distribution of the image transform coefficients before compression and then adds anti-forensic dither to the transform coefficients of the compressed image so that their distribution matches the estimated one. This framework of anti-forensic techniques is specifically targeted at erasing the fingerprints left by both JPEG and wavelet-based coders.
1.1.1. ANTI-FORENSIC FRAMEWORK:
All image compression techniques considered here are sub-band coders, which are themselves a subset of transform coders. Transform coders operate by applying a mathematical transform to a signal and then compressing the transform coefficients; sub-band coders are transform coders that decompose the signal into different frequency bands. A two-dimensional invertible transform, such as the DCT, is applied either to the image as a whole or to an image that has been segmented into a series of disjoint sets. Each quantized transform coefficient value can be directly related to its corresponding original transform coefficient value by equations (1) and (2); if the segment length is equal to the length of the quantization interval, the probability that the quantized coefficient value is qk is given by equation (2). Each quantized transform coefficient is then anti-forensically modified by adding specially designed noise, which we refer to as anti-forensic dither, to its value according to the equation Z = Y + D, where the anti-forensic dither's distribution is given by the formula P(D = d) of equation (3).
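A minimal Python sketch of the Z = Y + D idea described above. Note the assumption: the paper's dither D follows the conditional distribution of its equation (3), derived from a Laplacian coefficient model, which is not reproduced in this excerpt; the uniform dither used here is a deliberate simplification purely to illustrate how dithering moves quantized values off the quantization grid.

import numpy as np

# Illustrative sketch only: quantize coefficients with step Q, then add a dither D
# so that Z = Y + D no longer sits exactly on multiples of Q.
# NOTE: the reviewed method draws D from a Laplacian-based conditional distribution
# (its equation (3)); a uniform dither over (-Q/2, Q/2] is used here only as a stand-in.
rng = np.random.default_rng(0)

Q = 8.0                                   # assumed quantization step
x = rng.laplace(scale=12.0, size=10_000)  # toy "original" AC coefficients
y = Q * np.round(x / Q)                   # quantized coefficients (comb-shaped histogram)
d = rng.uniform(-Q / 2, Q / 2, size=y.shape)
z = y + d                                 # dithered coefficients (smoothed histogram)

print("fraction of y exactly on the quantization grid:", np.mean(y % Q == 0))
print("fraction of z exactly on the quantization grid:", np.mean(z % Q == 0))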

1.1.2. JPEG ANTI-FORENSICS:
We give a brief overview of JPEG compression and then present the anti-forensic technique designed to remove compression fingerprints from the DCT coefficients of a JPEG compressed image. For a grayscale image, JPEG compression begins by segmenting the image into a series of non-overlapping 8×8 pixel blocks and then computing the two-dimensional DCT of each block. Each coefficient value is divided by its corresponding entry in a predetermined quantization matrix, and the resulting value is rounded to the nearest integer. For a color image, the image is first transformed from the RGB to the YCbCr color space; after this has been performed, compression continues as if each color layer were an independent grayscale image.
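A short sketch of the per-block pipeline just described (8×8 block, 2-D DCT, divide by a quantization matrix, round). The DCT basis is built directly from its definition so only NumPy is needed; the flat Q = 16 matrix is an illustrative stand-in, not a real JPEG quantization table.

import numpy as np

# Sketch of JPEG-style per-block quantization: 2-D DCT of an 8x8 block,
# divide by a quantization matrix, round to integers.
N = 8

def dct_matrix(n: int = N) -> np.ndarray:
    """Orthonormal DCT-II basis matrix built from its definition."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def quantize_block(block: np.ndarray, q_table: np.ndarray) -> np.ndarray:
    """Forward 2-D DCT of one 8x8 block followed by quantization."""
    c = dct_matrix()
    coeffs = c @ (block - 128.0) @ c.T      # level shift then separable 2-D DCT
    return np.round(coeffs / q_table)       # quantized DCT coefficients

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(N, N)).astype(float)  # toy grayscale block
q_table = np.full((N, N), 16.0)             # stand-in table, not a real JPEG one
print(quantize_block(block, q_table))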
1.1.3. DCT Coefficient Quantization Fingerprint Removal:
If the image was divided into segments during compression, another compression fingerprint may arise because of the lossy coding of each segment. Following the anti-forensic framework outlined above, we begin by modeling the distribution of coefficient values within a particular AC sub-band using the Laplace distribution (4). Using this model and the quantization rule described above, the coefficient values of an AC sub-band of DCT coefficients within a JPEG compressed image will be distributed according to the discrete Laplace distribution P(Y = y) of equation (5), which places probability mass only at the values y = kQi,j.

Fig 1: Anti-forensics of digital image compression


Fig2: Histogram of perturbed DCT coefficient values from


a DCT sub band in which all coefficients were quantized to
zero during JPEG compression.
Wavelet-Based Compression Overview:
Though several wavelet-based image compression techniques exist, such as SPIHT, EZW and, most popularly, JPEG 2000, they all operate in a similar fashion and leave behind similar compression fingerprints. JPEG 2000 begins compression by first segmenting an image into fixed-size, non-overlapping rectangular blocks known as tiles, while the other techniques operate on the image as a whole. The two-dimensional DWT of the image, or of each image tile, is then computed, yielding sub-bands of wavelet coefficients. Because these sub-bands correspond to either high- or low-frequency DWT coefficients in each spatial dimension, the four sub-bands are referred to using the notation LL, LH, HL, and HH. Although the different techniques achieve lossy compression through different processes, they each introduce DWT coefficient quantization fingerprints into an image: the quantization and dequantization process causes the DWT coefficients of a compressed image to cluster at multiples of their respective sub-band step sizes.

Fig 3: Top: Histogram of wavelet coefficients from an uncompressed image. Bottom: Wavelet coefficients from the same image after SPIHT compression.

As a result, only the n most significant bits of each DWT coefficient are retained. This is equivalent to applying a quantization rule in which X is a DWT coefficient from the uncompressed image and Y is the corresponding DWT coefficient in its SPIHT compressed counterpart.
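A small sketch of the "retain only the n most significant bits" rule applied to a toy array of DWT-like coefficients. Assumptions: the 12-bit depth, the value of n and the coefficient values are illustrative, and no actual wavelet transform is performed here.

import numpy as np

# Sketch of the quantization rule described above: keep only the n most
# significant bits of each (integer-valued) coefficient magnitude.
def keep_msbs(coeffs: np.ndarray, n_bits: int, total_bits: int = 12) -> np.ndarray:
    shift = max(total_bits - n_bits, 0)
    mags = np.abs(coeffs).astype(np.int64)
    quantized = (mags >> shift) << shift          # zero out the discarded low bits
    return np.sign(coeffs) * quantized

x = np.array([-1735, -90, 0, 37, 512, 2047])      # toy "uncompressed" coefficients X
y = keep_msbs(x, n_bits=4)                        # SPIHT-style compressed counterparts Y
print(y)                                          # values snap to coarse multiples of 256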

Fig 4: Top: JPEG compressed image at a given quality factor. Bottom: Anti-forensically modified version of the same image.
2. UNDETECTABLE IMAGE TAMPERING THROUGH JPEG COMPRESSION
A number of digital image forensic techniques have been developed which are capable of identifying an image's origin, tracing its processing history, and detecting image forgeries. Though these techniques are capable of identifying standard image manipulations, they do not address the possibility that anti-forensic operations may be designed and used to hide evidence of image tampering. We propose an anti-forensic operation capable of removing blocking artifacts from a previously JPEG compressed image. We show that, with the help of this operation along with another anti-forensic operation, we are able to fool forensic

methods designed to detect evidence of JPEG compression in decoded images, determine an image's origin, detect double JPEG compression, and identify cut-and-paste image forgeries.
Digital image forgery has resulted in an environment where the authenticity of digital images cannot be trusted. Many digital forensic techniques rely on detecting artifacts left in an image by JPEG compression. Because most digital cameras make use of proprietary quantization tables, an image's compression history can be used to help identify the camera used to capture it. Although these techniques are quite adept at detecting standard image manipulations, they do not account for the possibility that anti-forensic operations designed to hide traces of image manipulation may be applied to an image. Recent work has shown that such operations can be constructed to successfully fool existing image forensic techniques.
Background:
When an image is subjected to JPEG compression, it is first segmented into 8×8 pixel blocks. The DCT of each block is computed, and the resulting set of DCT coefficients is quantized by dividing each coefficient by its corresponding entry in a quantization table and then rounding the result to the nearest integer. The set of quantized coefficients is then read into a single bit stream and losslessly encoded. Decompression begins by decoding the bit stream of quantized DCT coefficients and reforming them into a set of 8×8 pixel blocks. As a result, two forensically significant artifacts are left in an image by JPEG compression: DCT coefficient quantization artifacts and blocking artifacts. Blocking artifacts are the discontinuities which occur across 8×8 pixel block boundaries because of JPEG's lossy nature; the anti-forensic technique is capable of removing the DCT coefficient artifacts from a previously compressed image.

A measure of blocking artifact strength is obtained by calculating the difference between the histograms of the Z and Z′ values, denoted by HI and HII respectively, using the equation K = |HI(Z = n) − HII(Z′ = n)|. Values of K lying above a fixed detection threshold indicate the presence of blocking artifacts.
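A hedged sketch of such a blockiness measure. Assumptions: the exact definitions of Z and Z′ are not given in this excerpt, so the sketch uses one common formulation (a 2×2 pixel-difference statistic computed wholly inside 8×8 blocks versus across block boundaries) and aggregates the per-bin |HI − HII| values into a single number for simplicity; all names are illustrative.

import numpy as np

# Blocking-artifact measure in the spirit of the K equation above.
def block_stat(img: np.ndarray, r: int, c: int) -> np.ndarray:
    """|A - B - C + D| for 2x2 neighbourhoods anchored at offsets (r, c) of each 8x8 block."""
    a = img[r::8, c::8]
    b = img[r::8, c + 1::8]
    cc = img[r + 1::8, c::8]
    d = img[r + 1::8, c + 1::8]
    h = min(x.shape[0] for x in (a, b, cc, d))
    w = min(x.shape[1] for x in (a, b, cc, d))
    return np.abs(a[:h, :w] - b[:h, :w] - cc[:h, :w] + d[:h, :w]).ravel()

def blockiness(img: np.ndarray, bins: int = 64) -> float:
    z_in = block_stat(img, 3, 3)     # statistic computed wholly inside each block
    z_cross = block_stat(img, 7, 7)  # statistic straddling block boundaries
    h1, edges = np.histogram(z_in, bins=bins, range=(0, 255), density=True)
    h2, _ = np.histogram(z_cross, bins=edges, density=True)
    return float(np.sum(np.abs(h1 - h2)))  # aggregate of the per-bin |HI - HII| values

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
print("K for a random (artifact-free) image:", blockiness(img))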

Fig 5: Histogram of DCT coefficients from an image before compression (top left), after JPEG compression (top right), and after the addition of anti-forensic dither to the coefficients of the JPEG compressed image.
2.2. IMAGE TAMPERING THROUGH ANTI-FORENSICS:
We show that anti-forensic dither and the proposed anti-forensic deblocking operation can be used to deceive several existing image forensic algorithms that rely on detecting JPEG compression artifacts.

2.1. ANTI-FORENSIC DEBLOCKING OPERATION
JPEG blocking artifacts must be removed from an image after anti-forensic dither has been applied to its DCT coefficients. Although a number of deblocking algorithms have been proposed since the introduction of JPEG compression, not all of them are suited to anti-forensic purposes: to be successful, the operation must remove all visual and statistical traces of blocking artifacts. We found that lightly smoothing the image and then adding low-power white Gaussian noise is able to remove the statistical traces of JPEG blocking artifacts without causing the image's DCT coefficient distribution to deviate from the Laplace distribution; the anti-forensically deblocked image is obtained by applying this smoothing-and-noise operation.
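A minimal sketch of the "light smoothing followed by low-power white Gaussian noise" idea, assuming a 3×3 moving-average window and an illustrative noise standard deviation (neither value is specified in this excerpt); it is not the paper's exact deblocking equation.

import numpy as np

# Lightly smooth the decompressed image, then add low-power white Gaussian noise.
def light_smooth(img: np.ndarray) -> np.ndarray:
    """3x3 moving-average filter with edge replication."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out += padded[1 + dr:1 + dr + img.shape[0], 1 + dc:1 + dc + img.shape[1]]
    return out / 9.0

def antiforensic_deblock(img: np.ndarray, noise_std: float = 2.0, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    smoothed = light_smooth(img.astype(float))
    noisy = smoothed + rng.normal(0.0, noise_std, size=img.shape)
    return np.clip(np.round(noisy), 0, 255)

# Toy "blocky" image: horizontal steps every four pixels stand in for block edges.
img = np.tile(np.arange(0, 256, 16, dtype=float).repeat(4), (64, 1))[:, :64]
print(antiforensic_deblock(img).shape)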

2.3. Hiding Traces of Double JPEG Compression:
An image forger may wish to remove the evidence of recompressing a previously JPEG compressed image. Such a forger wishes to alter a previously compressed image and then save the altered image as a JPEG. Several methods have been proposed to detect the recompression of a JPEG compressed image, commonly known as double JPEG compression.

2.4. Falsifying an Image's Origin:
In some scenarios, an image forger may wish to falsify the origin of a digital image. Simply altering the metadata tags associated with an image's originating device is insufficient to accomplish this, because several origin-identifying features are intrinsically contained within a digital image. Applying anti-forensic dither to an image's DCT coefficients and then re-compressing the image using the quantization tables associated with another device makes it possible to insert the quantization signature of a different camera into the image, while preventing the occurrence of double JPEG compression artifacts that might alert forensic investigators to such a forgery.

3. PROPOSED METHOD:
To the best of our knowledge, interest in the field of anti-forensics has increased. Most anti-forensic methods aim to determine the process by which the image compression took place; such methods include JPEG detection and quantization table estimation. In this method of anti-forensics, the JPEG compression history of an image also yields information about the camera used to produce the image.

Fig 6: Result of the proposed anti-forensic deblocking algorithm applied to a typical image after it has been JPEG compressed using a quality factor of 90 (far left), 70 (center left), 30 (center right), and 10 (far right), followed by the addition of anti-forensic dither to its DCT coefficients.

It can also be used to discover forged areas within a picture. In the case of image compression, this technique has also been developed for use as evidence of image manipulation; in this anti-forensic technique, therefore, the traces left by compression and other processing are discussed.


4. CONCLUSION:
Of the two existing methods reviewed above, the first, the anti-forensic method for digital image compression, addresses the fact that society has become increasingly reliant upon digital images to communicate, and is aimed at fooling forensic algorithms. This technique is designed to remove forensically significant indicators of compression from an image. A framework is first developed to design anti-forensic techniques that remove compression fingerprints from image transform coefficients: anti-forensic dither is added to the transform coefficients of a compressed image so that their distribution matches the estimated pre-compression one. Using this framework, the techniques are specifically targeted at erasing the compression fingerprints left by both JPEG and wavelet-based coders, and they are capable of removing forensically detectable traces of image compression without significantly impacting an image's visual quality.
The second method, undetectable image tampering through JPEG compression anti-forensics, addresses digital forensic techniques that are capable of identifying an image's origin. These techniques are capable of identifying standard image manipulations.

Fig 7: Histogram of (3, 3) DCT coefficients from an image JPEG compressed once using a quality factor of 85 (left), after being double JPEG compressed using quality factors of 75 followed by 85 (center), and after being JPEG compressed using a quality factor of 75, followed by the application of anti-forensic dither, then recompressed using a quality factor of 85 (right).

This anti-forensic technique is capable of removing

blocking artifacts from a previously JPEG compressed image. With this method we are able to fool forensic methods designed to detect evidence of JPEG compression in decoded images and to determine an image's origin.
Comparing the two existing methods above, the anti-forensic method of removing detectable traces from digital images is the more advanced technique: it increases the attractiveness of, and trust in, digital images, and it is capable of removing the evidence of compression and filtering from a digital image's processing history. By adding tailored noise during image processing we can find out where an image has been tampered with or compressed and whether it is fake or original; this can be used in medical department cases as well as in police department cases. This method can be used to cover the history of processing, and it can also be used to remove the signature traces of filtering.



UNDER-WATER WELDING USING ROBOTIC TECHNOLOGY
V.Prasanth1, S.Sukesh Kumar2, Dr.R.Gnanaguru3
1 Final year Mechanical, Narasus Sarathy Institute of Technology,Salem.
Email: prasanthvbala@gmail.com
2 Final year Mechanical, Narasus Sarathy Institute of Technology,Salem.
Email: sukeshkumaran007@gmail.com
3 Professor & Head / Mechanical, Narasus Sarathy Institute of Technology,Salem.
Email: nsithodmech@gmail.com

ABSTRACT

In some intricate situations, the Robot based


welding processes have replaced human welders.
During the last few years, the automation of welding
process for pipe structures have gained significant
momentum with an objective to improve the
productivity and accuracy in the areas involving
marine applications, etc. Various research studies in
the welding environment have shown that
productivity improvement is a major thrust area of
welding industry. The welders in today's world are under tremendous pressure to meet two major challenges: higher weld quality and reduced manufacturing cost.

Welding in offshore and marine application is


an area of research and understanding, where many
problems are still unsolved. The important application
of the off shore welding is the ship building and pipeline
construction. Since underwater welding is done at elevated pressure, care must be taken to ensure the welder's safety. Hence, robotic technology is
recommended to overcome the problem relating to the
life threat of the welders. In this paper, a brief
description of the robot, designed for the underwater
welding is made. The problems in underwater welding
have also been discussed in context to the existing
welding techniques. Finally, the scope of further
research has been recommended.


Keywords: Underwater Welding, Welders Safety,


Robotic Technology.

I. INTRODUCTION
The recent developments in the manufacturing world
have led to a revolutionary change in the design and
development of various systems. Developments in
welding technology are one of such changes.
Welding processes have been used extensively as a
joining technique, used in design and fabrication of
various structures like naval ships, airplanes,
automobiles, bridges, pressure vessels, etc. Welding
has emerged as a better option in contrast to other
joining techniques in terms of joint efficiency,
mechanical properties with a greater application
impact.



Another emerging area in marine application welding systems is the underwater welding technique. Underwater Welding (UW) has been in use for over five decades; however, its use has not reached a significant level in the welding environment due to a number of factors. The underwater welding process came into existence with the development of water-proof electrodes in the 1940s (Keats, 2005). It is the process of welding at elevated pressures, normally under water. It may be carried out in the water itself or dry, inside a specially constructed positive-pressure envelope, thereby providing a special environment. Underwater welding is known as hyperbaric welding when carried out in a dry environment, and as underwater (wet) welding when carried out in a wet environment. The application areas of this welding technique include a wide variety of structures, such as the repair of ships, offshore oil platforms, and pipelines. Steel is the most common material welded (Cary and Helzer, 2005). Various researchers have defined the concept of underwater welding in different ways (Haddad and Farmer, 1985; Oates, 1996; Schmidt, 1996; Khanna, 2004; and Cary and Helzer, 2005).


II. IMPORTANCE OF UNDERWATER WELDING IN MARINE APPLICATIONS
In practice, the use of underwater wet welding for offshore repairs has been limited mainly because of porosity and low toughness in the resulting welds. With appropriate consumable design, however, it is possible to reduce porosity and to enhance weld metal toughness through microstructural refinement. Hence, welding in offshore and marine applications is an important area of research that needs considerable attention and understanding, as many problems are still unsolved. In the present review, a brief understanding of the problems in underwater welding is discussed in the context of the existing welding techniques. A detailed description of a few advanced welding techniques is also given. Finally, the scope of further research is recommended.

Special precautions should be taken in producing the underwater arc, to protect it from the surrounding water. Wet welding does not need any complicated experimental set-up; it is economical and can be applied immediately in case of emergency or accident, as it does not require the water to be evacuated. However, difficulties in the welding operation due to the lack of visibility in water, the presence of sea currents, ground swells in shallow water and inferior weld quality (increased porosity, reduced ductility, greater hardness in the heat-affected zone, and hydrogen pick-up from the environment) are the notable disadvantages of the wet welding technique.

III. CLASSIFICATION OF UNDERWATER


WELDING
Underwater welding may be divided into two main
types, wet and dry welding (Oates, 1996).There are
many welding types in each case. Wet type is
considered here in this robotic welding.

IV. WET WELDING
This type of welding process is carried out at ambient water pressure, in which there exists a relationship between the welder and the diver in the water. It is carried out by means of a water-proof stick electrode, with no physical barrier between the water and the welding arc (Oates, 1996). In the wet welding technique, complex structures may also be welded (Oates, 1996; Shida et al., 1997; Khanna, 2004; and Kruusing, 2004). The most commonly used wet welding techniques are the Shielded Metal Arc Welding (SMAW) process and the Flux Cored Arc Welding (FCAW) process; the latter also includes self-shielded flux-cored arc welding. From an economics point of view, the wet welding technique with coated electrodes is considered. The cooling rate in wet welds is much higher than in those obtained in dry welding; in the temperature range from 800 to 500 °C it can change from 415 to 56 °C/s (Steen, 1991). Underwater wet welds are also known to contain high amounts of porosity. Porosity may be formed by molecular hydrogen, carbon monoxide or water vapour (Irie et al., 1997; Cavaliere et al., 2006; and Cavaliere et al., 2008). Pores are present to some extent in all wet welds. The main factors affecting this phenomenon are water depth, electrode covering and arc stability (Shida et al., 1997; Irie et al., 1997; Cavaliere et al., 2006; and Cavaliere et al., 2008).

V. SHIELDED METAL ARC WELDING
Shielded Metal Arc Welding (SMAW) is among the most widely used welding processes. During the process, the flux covering the electrode melts, forming the gas and slag that shield the arc and the molten weld pool. Figure 1 shows the schematic of the shielded metal arc welding process. The slag must be chipped off the weld bead after welding. The flux also provides a method of adding scavengers, deoxidizers, and alloying elements to the weld metal. For underwater wet welding with the SMAW technique, direct current is used and the polarity is usually straight (Khanna, 2004). The electrodes are usually waterproofed. Furthermore, the flux coating causes the generation of a bubble during welding, which displaces water from the welding arc and weld pool area. Hence, the flux composition



and depth of flux coating should be optimized to
ensure adequate protection. Electrodes for shielded
metal arc welding are classified by AWS as E6013
and E7014 (Khanna, 2004). Versatility, simple
experiment set-up, economy in operation and finished
product quality are notable advantages of the
technique. However, during welding, all electrical
leads, lighting gear, electrode holder, gloves, etc.,
must be fully insulated and in good condition. Ferrite
electrodes with a coating based on iron oxide should
be used as they resist hydrogen cracking. Flux-cored arc welding is another technique, which has not yet competed with SMAW because of reported excessive porosity and problems with the underwater wire feeding system (Oates, 1996).

VII. PARTS OF UNDERWATER ROBOTIC


WELDING
The robot that is designed for the
underwater welding is based on the submarine
design. The main parts of the robot are:

Propeller
Electromagnetic Wheels
Welding Rod holder and Rod
Stepper Motors
ATmega 16 Microcontroller
Camera
Lights

a.) Propeller
A propeller is a mechanical device for
propelling a boat or aircraft, consisting of a revolving
shaft with two or more broad, angled blades attached
to it. There are 4 propellers used in this robot. Two
propellers face the front side and the other two
propellers face the top side. When the propellers in
the front rotate clockwise the robot moves forward
and vice versa for anticlockwise direction, and when
the propellers at the top rotate clockwise the robot
moves up and vice versa for the anticlockwise
direction. The sideways movement, i.e. right to left, is done by deflecting the propeller arm.

Fig1: Schematic of shielded metal arc welding


VI. ROBOTICS
Robot is a machine capable of carrying out a
complex series of actions automatically, especially
one programmable by a computer. The word robot
was introduced to the public by the Czech interwar
writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots), published in 1920 (Zunt, Dominik, 2007). The play begins in a factory that uses
a chemical substitute for protoplasm to manufacture
living, simplified people called robots.

Fig2: NX CAD model of propeller


b.) Electromagnetic Wheels
An electromagnet is a soft metal core made
into a magnet by the passage of electric current
through a coil surrounding it. The DC current supply
is given to the wheels to magnetize and stick firmly
on the metal surface.




Fig 3: NX CAD model of electromagnetic wheels


Fig 4: Stepper Motor
c.) Welding Rod Holder and Rod
The positive side of the DC power supply is given to the welding rod holder, whereas the metal to be welded is given the negative charge. The rod used here is a consumable rod, flux-coated with an insulating material suitable for underwater welding.

e.) ATmega 16 Microcontroller

An ATmega 16 microcontroller is used in this


type of the robot. A microcontroller (sometimes
abbreviated µC, uC or MCU) is a small computer on
a single integrated circuit containing a processor
core, memory, and programmable input/output
peripherals. Program memory in the form of NOR
flash or OTP ROM is also often included on chip, as
well as a typically small amount of RAM.
Microcontrollers are designed for embedded
applications, in contrast to the microprocessors used
in personal computers or other general purpose
applications.

Microcontrollers are used in automatically controlled products and devices, such as automobile engine control systems, implantable medical devices, remote controls, office machines, appliances, power tools, toys and other embedded systems. By reducing the size and cost compared to a design that uses a separate microprocessor, memory, and input/output devices, microcontrollers make it economical to digitally control even more devices and processes.

d.) Stepper Motor


A stepper motor (or step motor) is a
brushless DC electric motor that divides a full
rotation into a number of equal steps. The motor's
position can then be commanded to move and hold at
one of these steps without any feedback sensor (an
open-loop controller), as long as the motor is
carefully sized to the application. The stepper motor
is connected to the H-Bridge and is interfaced to the
microcontroller.


There are four main types of stepper motors: (Liptak,


Bela G., 2005)

Permanent magnet stepper


Hybrid synchronous stepper
Variable reluctance stepper
Lavet type stepping motor

Permanent magnet motors use a permanent magnet


(PM) in the rotor and operate on the attraction or
repulsion between the rotor PM and the stator
electromagnets. Variable reluctance (VR) motors
have a plain iron rotor and operate based on the
principle that minimum reluctance occurs with
minimum gap, hence the rotor points are attracted
toward the stator magnet poles. Hybrid stepper
motors are named because they use a combination of

Fig 5: ATMEGA 16 Microcontroller



f.) Camera
Camera is a device for recording visual
images in the form of photographs, film, or video
signals. The camera here converts the image to the
co-ordinate system. This co-ordinate system helps to
find the weld area in this context. The camera here is
supported by the focus light system on either side.

Fig 6: NX CAD model of camera

g.) Lights
Light is the natural agent that stimulates sight and makes things visible. The light here supports the camera in generating a better picture.

VIII. OPERATION OF THE UNDER-WATER ROBOT WELD
The operation of the robot is controlled by the microcontroller, which gets its input from the computer system. A wired interface is used because it is not feasible to construct an efficient wireless link in an environment with such elevated pressure. The electromagnetic wheels and propellers are actuated by the stepper motors, which are interfaced to the microcontroller using H-bridges. The movement of the robot is achieved using the propellers, while the electromagnetic wheels are used to stick to the surface. The camera helps to find the area of the weld and converts it into the coordinate system; this coordinate system helps to monitor the weld and make the weld accurately on the surface. The figure below is the NX CAD model of the robot.

Fig 7: NX CAD model of robot

CONCLUSION
Working safety is an important factor for welds made underwater, and the threat to the welder comes from various aspects during the weld. This model of a robot can be a solution to the problems related to underwater welding. There is also considerable future scope for this system.

REFERENCES
[1] Anand and Khajuria(2013), welding processes in
marine applications: a review, International Journal
of mechanical engineering research and robotics,
Vol.2, Jan2013, ISSN 2278-0149


[2] Cary H B and Helzer S C (2005), Modern


Welding Technology, Upper Saddle River, Pearson
Education, New Jersey.
[3] Cavaliere P, Campanile G, Panella F and
Squillace A (2006), Effect of Welding Parameters
on Mechanical and Microstructural Properties of
AA6056 Joints Produced by Friction Stir Welding,
J. Mater. Process. Technol., Vol. 180, pp. 263-270.
[4] Cavaliere P, Squillace A and Panella F (2008),
Effect of Welding Parameters on Mechanical and
Microstructural Properties of AA6082 Joints
Produced by Friction Stir Welding, J. Mater.
Process. Technol., Vol. 200, pp. 364-372.



[5] Haddad G N and Farmer A J (1985), Weld. J.,
Vol. 64, No. 12, pp. 339-342.
[6] Irie T, Ono Y, Matsushita H et al. (1997),
Proceedings of 16th OMAE, pp. 43-50.
[7] Keats D J (2005), Underwater Wet WeldingA
Welders Mate, Speciality Welds Ltd.
[8] Khanna O P (2004), A Textbook of Welding
Technology, Dhanpat Rai Publications (P) Ltd., New
Delhi, India.
[9] Kruusing A (2004), Optics and Lasers in
Engineering, Vol. 41, pp. 329-352.
[10] Liptak, Bela G. (2005). Instrument Engineers'
Handbook: Process Control and Optimization. CRC
Press. p. 2464. ISBN 978-0-8493-1081-2.
[11] Oates W A (Ed.) (1996), Welding Handbook,
Vol. 3, American Welding Society, Miami, USA.
[12] Schmidt H-P (1996), IEEE Transactions on
Plasma Science, Vol. 24, pp. 1229-1238.
[13] Shida T, Hirokawa M and Sato S (1997),
Welding Research Abroad, Vol. 43, No. 5, p. 36.
[14] Steen W M (Ed.) (1991), Laser Material
Processing, Springer Verlag, New York.
[15] Zunt, Dominik. "Who did actually invent the
word "robot" and what does it mean?". The Karel
Čapek website. Retrieved 2007-09-11.



ELECTRIC ROAD CLEANER
AVINASH PRABU, S.KAUSHIK, PRIYANKA PRAKASH, HARITHA HARIDASAN, S.BALAMURUGAN,
S.VIGNESH
Avinashprabu1993@gmail.com,sudarshan.ampli@gmail.com,priyankasun@gmail.com,
harithaharidasan15@gmail.com, balasekar@ymail.com,vigneshsanman@gmail.com
Ph.no:9941777029, 9841350409
PRINCE SHRI VENKATESHWARA PADMAVATHY ENGG COLLEGE, PONMAR

ABSTRACT
Cleanliness is the most important aspect of every proper civilization. In this paper we look into the use of electric vehicles to maintain a city's cleanliness. We focus on the use of DC motors to create a vehicle that can be effectively used to maintain road cleanliness. This paper is aimed at designing a vehicle that can both maintain operational efficiency and stick to its task. The system comprises DC motor systems and separate speed and current control circuits. Although conventional sweeping vehicles already exist, they are designed to operate on a small scale and use IC engines, which are proving to be more and more expensive and polluting with increasing fuel costs and increasing global warming. The whole system comprises three separate DC motor systems, which control the vacuum system, the sweeping system and the overall propulsion system of the vehicle. The amount of carbon dioxide emitted is only 12.6 g/km, while it is 60 to 130 g/km for internal combustion engine cars.

Keywords: Brushless DC motor (BLDC), motor circuit (MC), sweeper circuit (SC), vacuum circuit (VC), ultrasonic sensor circuit (USC), Internal Combustion (IC), Battery Electric Vehicle (BEV).

I. INTRODUCTION
With increasing vehicular traffic and road usage levels, it is necessary to develop a machine that can effectively deal with road-related rubbish. The electric road cleaner aims to utilize the high torque that a DC motor can provide for vehicular propulsion and, at the same time, sweep and vacuum the road, thereby combining under one unit the work conventionally done by different systems.

II. SYSTEM DESIGN

PROPULSION SYSTEM
This is the system used to propel the entire vehicle. It comprises a single DC motor complete with a controller circuit that automatically varies the speed of the motor in accordance with the control inputs.

VACUUM SYSTEM
It is the heart of the cleaning process and
is used to suck in all the rubbish present on
the road that is suitably swept in by the
sweeper circuit.
SWEEPER SYSTEM
It is used to suitably push the rubbish towards the vacuum suction point. It consists of two DC motors coupled to four sweepers, along with a stepper motor, all of which work together as a unit.


SENSOR SYSTEM
The sweeping system is connected to a
mount that can be raised or lowered
depending on the gradient of the road and
obstacles. The sensor and sweeper systems
work in tandem.


STEPPER MOTOR SYSTEM


It is used to lower or raise the sweepers
according to the sensor inputs. It is
controlled by a microcontroller and rotates
180 deg on each command pulse.

III. PROPULSION SYSTEM
The above shows the basic block diagram of a BEV. The battery acts as the power source, and the controller modulates the power supplied to the motor in accordance with the control inputs.

1. BATTERY SYSTEM
In place of an internal combustion engine, the proposed vehicle has a bank of batteries, the battery system. The battery system is composed of two subsystems: one for the propulsion motor and one for the sweeping and vacuuming systems.

Propulsion Battery:
This battery system is used for powering the main motor that drives the vehicle. The batteries produce an output of 120 V and comprise five 24 V batteries connected in series; there are 6 rows of such batteries, and the switch-over between the rows is done by a microcontroller.

Sub-system Battery:
This is used to run the vacuum and sweeping sub-systems. It produces an output of 12 V and comprises five 12 V batteries connected in parallel; again, the discharging is controlled by a dedicated microcontroller.

2. MOTOR
The motor used is a BLDC motor. It can produce 10 HP of output power with an input voltage of 120 V. The current rating is 70 A and the maximum speed is 3450 rpm. The motor gets its input from the controller and is directly coupled to the wheels.

3. CONTROLLER
The controller used in this vehicle is a DC-DC controller manufactured under the name CURTIS 1231C-86XX. The above shows




the block diagram of the DC-DC controller; it works akin to a chopper but has inner feedback and reference loops. The pedal press is converted into a voltage level by a suitable throttle potentiometer, which is given as the reference input to the controller. It has five terminals, namely:

B+: Positive terminal of the battery pack.

B-: Negative terminal of the battery pack.

M-: Motor ground connection.

A2: Armature winding ground.

A1 of the motor is connected to the battery positive, while S2 is connected to A2 and S1 is connected to M-. KSI is a switch that connects the reference voltage to the controller.

Controller characteristics:
Voltage (V): 96-144
Current (A): 500
2 min rating (A): 500
5 min rating (A): 375
1 hour rating (A): 225

IV. SWEEPER SYSTEM


The sweeping system is a set of four
rubber sweepers that act to push the dust
inwards to the vacuum system. The
sweepers are actuated by two motors and
are connected via a gear so as to ensure
that enough torque can be produced for the
operation. The motors are rated at 0.5HP
and have can run over a range of input
voltages with the preferred one being 24V.


The above shows the block diagram of the


sweeping system, the control input dictates
the output speed of the DC motor thereby
controlling the speed of the sweepers as
needed.

Output torque calculation:
P = (2 × π × N × T)/60
where P is the rated motor output power (20 HP), T is the output torque and N is the operating speed of the motor. Re-arranging,
T = (60 × P)/(2 × π × N) = (60 × 20 × 765)/(2 × 3.1415 × 3000) ≈ 50 Nm.
Gear torque = motor torque × (input speed/output speed) × gear efficiency.
With input speed = 3000 rpm, output speed = 1000 rpm and gear efficiency = 0.85:
Gear torque = 50 × (3000/1000) × 0.85 = 127.5 Nm.
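A small Python sketch re-doing the two calculations above with the same figures used in the text (20 HP converted at 765 W per HP, 3000 rpm motor speed, 3:1 gear ratio, 0.85 efficiency); it is only a numerical check, not part of the controller design.

import math

# Motor output torque from P = 2*pi*N*T/60, then
# gear output torque = motor torque * (input speed / output speed) * efficiency.
def motor_torque(power_w: float, speed_rpm: float) -> float:
    return 60.0 * power_w / (2.0 * math.pi * speed_rpm)

def gear_torque(t_motor: float, n_in_rpm: float, n_out_rpm: float, efficiency: float) -> float:
    return t_motor * (n_in_rpm / n_out_rpm) * efficiency

p_w = 20 * 765.0                       # rated power, using the W-per-HP figure from the text
t_m = motor_torque(p_w, 3000.0)        # ~48.7 Nm, rounded to 50 Nm in the text
print(f"motor torque ~ {t_m:.1f} Nm")
print(f"gear torque  ~ {gear_torque(50.0, 3000.0, 1000.0, 0.85):.1f} Nm")  # 127.5 Nm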

1. CHOPPER CIRCUIT

The chopper circuit is a buck-boost converter that is used to vary the output



voltage in accordance with the inputs from
the main control panel. By selecting the
appropriate option, the control unit can be
used to operate the chopper in buck/boost
mode thereby varying the output speed of
the motor.
1.1. OPERATION AS BUCK CONVERTER
In this mode Tr2 is turned off, and Tr1 is switched on and off by a high-frequency square wave from the control unit. When the gate of Tr1 is high, current flows through L, charging its magnetic field, charging C and supplying the load. The Schottky diode D1 is turned off due to the positive voltage on its cathode.

1.2. OPERATION AS A BOOST CONVERTER
In boost converter mode, Tr1 is turned on continually and the high-frequency square wave is applied to the gate of Tr2. During the on periods, when Tr2 is conducting, the input current flows through the inductor L and, via Tr2, directly back to the supply negative terminal, charging up the magnetic field around L. Whilst this is happening, D2 cannot conduct, as its anode is being held at ground potential by the heavily conducting Tr2. For the duration of the on period, the load is supplied entirely by the charge on the capacitor C, built up on previous oscillator cycles. The gradual discharge of C during the on period (and its subsequent recharging) accounts for the amount of high-frequency ripple on the output voltage, which is at a potential of approximately VS + VL.

2. CONTROL CIRCUIT
The PWM wave is a square wave with varying duty cycle; the IC is a six-Schmitt-trigger inverter circuit that produces a high-frequency, variable-duty-cycle square output. This acts as the gate signal to the switches, aiding in the conduction.
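A hedged sketch of the ideal steady-state relations for the two chopper modes described above (buck: Vout = D·Vin; boost: Vout = Vin/(1−D)); losses, ripple and component values are ignored, and the 24 V input is an illustrative choice, so this only shows how the PWM duty cycle from the control circuit changes the voltage seen by the motor.

# Ideal, lossless steady-state relations for the buck and boost modes described above.
def buck_vout(vin: float, duty: float) -> float:
    return duty * vin

def boost_vout(vin: float, duty: float) -> float:
    return vin / (1.0 - duty)

VIN = 24.0  # illustrative sub-system supply voltage
for d in (0.25, 0.50, 0.75):
    print(f"D = {d:.2f}: buck -> {buck_vout(VIN, d):5.1f} V, "
          f"boost -> {boost_vout(VIN, d):5.1f} V")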




V. VACUUM SYSTEM
The vacuum system is used to suck in the dust present on the road that is swept toward it by the sweeping system. It comprises two different vacuum pumps, each of which operates individually. The diagram above shows the block diagram of the two individual vacuum systems.

Vacuum-1:
This is a universal-motor-driven vacuum creator used to suck in the dust irrespective of the season and road conditions. The universal motor is coupled to the fan and is driven by a chopper which is fed from a lead-acid battery source. The chopper used is a boost chopper.

Boost Chopper:
The diagram above shows the boost chopper used; by varying the firing pulse of the thyristor switch, we can control the output voltage and thus the speed of the motor.

Vacuum-2:
The second vacuum system acts as a suction/blower device depending on the season. In summer, when the moisture on the roads is minimal, the setup is used as a vacuum device. During rainy seasons, the setup funnels hot air from the main motor toward the ground, thus drying up moisture, enabling easier vacuuming and preventing sand from sticking to the ground. This is also a universal motor coupled to the fan and controlled by a chopper, and the field has a reversal circuit used to control the direction of rotation.

The block diagram of the reversal system shows that the motor has two windings, one for forward operation and the other for reverse operation; by using a DPDT switch to switch between the windings, the direction of operation of the universal motor can be reversed.

Universal motor:
Universal motors can rotate at speeds of up to 20,000 rpm and are used for low-torque applications. The motor used in the vacuum system is a 1400 W, 12 V DC motor.



Calculation of the radius of the impeller:
Suction pressure = (1/2) × ρ × V², where ρ is the density of air (1.225 kg/m³), the required suction pressure is 30 kPa, and V is the velocity of the fan blade. Re-arranging,
V = sqrt((2 × suction pressure)/ρ) = sqrt((2 × 30 × 10³)/1.225) = sqrt(48,979.592) = 221.31 m/s.
Also, V = (R × W × 2 × π)/60, where R is the radius of the fan blade and W is the speed in rpm. Re-arranging,
R = (60 × V)/(W × 2 × π) = (221.31 × 60)/(10000 × 2 × 3.1415) = 13278.6/62830 = 0.211 m.
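A small Python sketch re-computing the impeller sizing above (blade-tip velocity for the target suction pressure, then the radius at a given rotational speed), using the same 30 kPa, 1.225 kg/m³ and 10,000 rpm figures; it is only a numerical check.

import math

def tip_velocity(suction_pressure_pa: float, rho: float = 1.225) -> float:
    """Blade-tip velocity required for a given dynamic suction pressure."""
    return math.sqrt(2.0 * suction_pressure_pa / rho)

def blade_radius(velocity_m_s: float, speed_rpm: float) -> float:
    """Radius giving that tip velocity at the stated rotational speed."""
    return 60.0 * velocity_m_s / (2.0 * math.pi * speed_rpm)

v = tip_velocity(30e3)                 # ~221.3 m/s
r = blade_radius(v, 10_000.0)          # ~0.211 m
print(f"tip velocity ~ {v:.2f} m/s, blade radius ~ {r:.3f} m")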

VI. SWEEPER PROTECTION SYSTEM:

1. PING SENSOR:
The ping sensor is an ultrasonic sensor which measures the distance between the vehicle and any obstacle. If the distance is less than 2 m, the sweeper system will be raised by the stepper motor.
The ultrasonic ping sensor provides an easy method of distance measurement, and interfacing it to a microcontroller is simple. A single I/O pin is used to trigger an ultrasonic burst (well above human hearing) and then "listen" for the echo return pulse. The sensor measures the time required for the echo to return and reports this value to the microcontroller as a variable-width pulse via the same I/O pin.
The sensor provides precise, non-contact distance measurements within a 2 cm to 3 m range. Ultrasonic measurements work in any lighting condition, making this a good choice to supplement infrared object detectors. Simple pulse-in/pulse-out communication requires just one I/O pin, and a burst indicator LED shows when a measurement is in progress.
The 5V pin of the PING sensor is connected to the 5V pin on the Arduino, the GND pin is connected to the GND pin, and the SIG (signal) pin is connected to digital pin 7 on the Arduino. The sensor output is read by the Arduino board, which converts the signals from the sensor into digital values that act as input to the microcontroller. The microcontroller reads the output of the Arduino and rotates the stepper motor accordingly. If the distance measured by the sensor is less than or equal to 2 m, the input to the controller will be 1; otherwise the input will be 0.
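A host-side Python simulation of the decision rule just described (echo time converted to distance, and an obstacle within 2 m producing a "raise" command). On the actual vehicle this logic runs on the Arduino/microcontroller pair, and the 343 m/s speed of sound and the function names here are illustrative assumptions.

# Simulation of the sweeper-protection decision rule, not firmware for the vehicle.
SPEED_OF_SOUND_M_S = 343.0   # assumed speed of sound in air
RAISE_THRESHOLD_M = 2.0      # distance at which the sweepers are raised

def echo_time_to_distance(echo_time_s: float) -> float:
    """Round-trip echo time -> one-way distance in metres."""
    return SPEED_OF_SOUND_M_S * echo_time_s / 2.0

def sweeper_command(echo_time_s: float) -> int:
    """Return 1 (raise sweepers) when the obstacle is within the threshold, else 0."""
    return 1 if echo_time_to_distance(echo_time_s) <= RAISE_THRESHOLD_M else 0

for t in (0.002, 0.010, 0.020):  # simulated round-trip echo times in seconds
    print(f"echo {t*1000:.0f} ms -> distance {echo_time_to_distance(t):.2f} m, "
          f"command {sweeper_command(t)}")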

VII. ADVANTAGES
The principal advantages of this system are its large adaptability and relative cost efficiency. The absence of regular fuelling reduces fuel costs, and designing the system around a single electric energy source means that there is no need for separate oil fuel for the vehicle and additional electricity for the other systems.


VIII. CONCLUSION
The proposed paper deals with the use of an electric vehicle for sweeping and vacuuming purposes. An electric vehicle has the dual advantage of being both relatively eco-friendly and much more cost-efficient. Instead of having individual electric systems for the sweeper and cleaner while operating the vehicle on a combustion engine, the system's use of pure electric propulsion means the unit can be designed as a whole, and the entire vehicle can be modified and redesigned as needed for operation. The vehicle can save costs on fuel in the long run, and electricity remains the only future resource for mankind. This vehicle can also reduce road pollution and thereby help reduce the number of accidents that occur due to improper road maintenance.

2. FLOW CHART FOR STEPPER MOTOR CONTROL:

3. ALGORITHM:
Step 1: Start the process.
Step 2: Send trigger signals to the microcontroller.
Step 3: Read the output of the sensor.
Step 4(A): If the output of the sensor is 0, go to Step 4(B); else go to Step 5.
Step 5: Initialize the data pointer.
Step 6: Push data 0C to the higher byte of the data pointer and 00 to the lower byte.
Step 7: Move the data to R0.
Step 8: Initialize the output port and enable it.
Step 9: Move the data in R0 to output port C0.
Step 10: Give a delay of 5 seconds.
Step 11: Go to Step 1.