International Journal of Engineering Research and General Science Volume 2, Issue 3, April-May 2014

ISSN 2091-2730


Table of Contents
Topics                              Page no.
Chief Editor Board                  3-4
Message From Associate Editor       5
Research Papers Collection          6-244
























CHIEF EDITOR BOARD
1. Dr Gokarna Shrestha, Professor, Tribhuwan University, Nepal
2. Dr Chandrasekhar Putcha, Outstanding Professor, University Of California, USA
3. Dr Shashi Kumar Gupta, Professor, IIT Roorkee, India
4. Dr K R K Prasad, K.L.University, Professor Dean, India
5. Dr Kenneth Derucher, Professor and Former Dean, California State University,Chico, USA
6. Dr Azim Houshyar, Professor, Western Michigan University, Kalamazoo, Michigan, USA
7. Dr Sunil Saigal, Distinguished Professor, New Jersey Institute of Technology, Newark, USA
8. Dr Hota GangaRao, Distinguished Professor and Director, Center for Integration of Composites into
Infrastructure, West Virginia University, Morgantown, WV, USA
9. Dr Bilal M. Ayyub, Professor and Director, Center for Technology and Systems Management,
University of Maryland College Park, Maryland, USA
10. Dr Sarâh BENZIANE, University Of Oran, Associate Professor, Algeria
11. Dr Mohamed Syed Fofanah, Head, Department of Industrial Technology & Director of Studies, Njala
University, Sierra Leone
12. Dr Radhakrishna Gopala Pillai, Honorary Professor, Institute of Medical Sciences, Kyrgyzstan
13. Dr P.V.Chalapati, Professor, K.L.University, India
14. Dr Ajaya Bhattarai, Tribhuwan University, Professor, Nepal
ASSOCIATE EDITOR IN CHIEF
1. Er. Pragyan Bhattarai, Research Engineer and Program Co-ordinator, Nepal
ADVISORY EDITORS
1. Mr Leela Mani Poudyal, Chief Secretary, Government of Nepal, Nepal
2. Mr Sukdev Bhattarai Khatry, Secretary, Central Government, Nepal
3. Mr Janak Shah, Secretary, Central Government, Nepal

4. Mr Mohodatta Timilsina, Executive Secretary, Central Government, Nepal
5. Dr. Manjusha Kulkarni, Asso. Professor, Pune University, India
6. Er. Ranipet Hafeez Basha (Phd Scholar), Vice President, Basha Research Corporation, Kumamoto, Japan
Technical Members
1. Miss Rekha Ghimire, Research Microbiologist, Nepal section representative, Nepal
2. Er. A.V. A Bharat Kumar, Research Engineer, India section representative and program co-ordinator, India
3. Er. Amir Juma, Research Engineer, Uganda section representative and program co-ordinator, Uganda
4. Er. Maharshi Bhaswant, Research Scholar (University of Southern Queensland), Research Biologist, Australia



















Message from the Associate Editor-in-Chief
Let me first of all take this opportunity to wish all our readers a very happy, peaceful and
prosperous year ahead.
This is the third issue of the second volume of the International Journal of Engineering Research
and General Science. A total of 58 research articles are published, and I sincerely hope that each
one of them provides significant stimulation to a reasonable segment of our community of
readers.
In this issue we have focused mainly on upcoming technology and research, and we welcome more research-oriented
ideas in our forthcoming issues.
The authors' response for this issue was truly inspiring. We received many more papers, from more countries, than for
the previous issue, but our technical team and editorial members accepted only a small number of them for
publication. We have provided editorial feedback for every paper, rejected as well as accepted, so that authors can work
on the weaknesses and we may accept their papers in the near future. We apologise for the inconvenience caused to the rejected
authors, but I hope our feedback helps them discover new horizons for their research work.
I would like to take this opportunity to thank every writer for their contribution, and to thank the entire
International Journal of Engineering Research and General Science (IJERGS) technical team and editorial members for their
hard work towards the development of research worldwide through IJERGS.
Last, but not least, my special thanks and gratitude go to all our fellow friends and supporters; your help is
greatly appreciated. I hope our readers will find our papers educational as well as engaging. Our team has done a good
job; however, this issue may still have some shortcomings, and constructive suggestions for further
improvement are warmly welcomed.



Er. Pragyan Bhattarai,
Associate Editor-in-Chief, P&R,
International Journal of Engineering Research and General Science
E-mail -Pragyan@ijergs.org
Contact no- +9779841549341





Design of Magnetic Levitation Assisted Landing and Take-off Mechanism of
Aircraft using Hammock Concept
Kumar Poudel
Hindustan Institute of Technology, Department of Aeronautical Engineering, Coimbatore
Email- kumarpoudelkx27@gmail.com

ABSTRACT – For safe and efficient landing and take-off of aircraft, magnetic levitation assisted take-off and landing (TOL) could turn
out to be the best alternative to the conventional landing gear system. In this paper, the design and working principle of a magnetic
levitation assisted TOL mechanism using a hammock concept is proposed. The hammocks used in this concept are slings made of high
strength fibre and steel cables of the kind often used in the construction of bridges. The hammock is attached to a sledge on which the
aircraft is placed during TOL operation. The sledge is also provided with wheels and can be detached from the hammock for ground
operations such as taxiing and hangar operations. There is provision for joining two sledges together in order to increase the length of
the sledge for larger aircraft. Tracks based on the principle of electrodynamic suspension are used to drive the hammock and sledge
unit during TOL operation, with electricity as the power source.
Keywords: Magnetic levitation, take-off and landing, Halbach arrays, hammocks, sledge, steel cables, barricade.
INTRODUCTION
A magnetic levitation system uses magnetic force to levitate the aircraft on a rail and to accelerate it during take-off; during landing,
the same system can be used to decelerate the aircraft. When an aircraft takes off or lands, an excessive amount of impact force,
vibration and shock is produced. In the conventional system, hydraulic shock absorbers are used for this purpose; they account for
nearly 7% of the total aircraft weight and require a complex hydraulic mechanism. In a magnetic levitation system, hammocks can
instead be used for this purpose: they reduce weight and are good absorbers of shock, vibration and impact force.
A typical hammock is a sling made of fabric, rope or netting, suspended between two points and used for swinging, sleeping or
resting. It normally consists of one or more cloth panels, or a woven network of twine or thin rope, stretched with ropes between two
firm anchor points such as trees or posts.
On aircraft carriers, an emergency recovery system called the barricade is widely used. It consists of upper and lower loading straps
joined together to arrest the motion of the aircraft, and it resembles a hammock. Similarly, bridges are constructed with high strength
suspended cables which hold the entire weight of the bridge and its payload. Thus, a magnetic levitation assisted sledge mechanism
with a hammock for TOL operation could be a reliable and cost effective mechanism.
The magnetic levitation system consists of a special arrangement of permanent magnets which augments the magnetic field on one
side of the array while cancelling the field on the other side to nearly zero. This special arrangement is known as a Halbach array; the
concept was developed by Klaus Halbach of the Lawrence Berkeley National Laboratory in the 1980s for use in particle accelerators.

Figure 1 shows a linear Halbach array:


Fig 1. Linear Halbach arrays

Methodology

A. Aircraft

For this project the conventional landing gear system has to be removed and the belly of the aircraft redesigned, since the
aircraft will be carried on the sledge.

B. Basic design concept of magnetic levitation assisted sledge with hammock

The main components here are the sledge, the hammock and the electromagnetic rail. The length and other specifications of the sledge,
hammock and rail can be varied according to factors such as the length and weight of the aircraft. For this project only the
basic conditions of the TOL mechanism are considered, and the lengths and other specifications used here are assumptions. The basic
design of the mechanism is represented schematically in figure 2.










[Fig. 2: Basic design concept of magnetic levitation assisted sledge with hammock. Labelled components: high strength suspension
cables, sledge frame, hammock, separation block of the sledge from the hammock (on each side), the sledge on which the aircraft is
placed (at the aircraft centre of gravity, usually the belly), and electromagnetic levitation tracks on either side.]

C. Designing the sledge

The sledge is the main portion on which the entire aircraft is supported. Hence the sledge is provided with the following elements:

On the starboard and port sides of the sledge, a latching mechanism is provided to attach and detach the sledge to and from the
hammock.
Similarly, at the forward and aft ends of the sledge, a similar latching mechanism is provided to attach and detach one sledge to
another, so that the length of the sledge can be increased or decreased depending upon the size of the aircraft.


The midsection of the sledge is provided with hydraulic actuators so that it can be moved horizontally and vertically to increase
precision, while the frame of the sledge remains stationary.
Various electrical sensors are implemented to monitor the functioning and safety condition of the sledge and the other mechanisms.
Electric motor driven wheels are provided on the sledge system so that the aircraft can be moved from the track to the hangar and
for performing ground operations.
The wheels are of the retractable type, because fixed wheels increase drag.

The 3D view of the sledge with its components is shown in figure 3.


Fig 3. 3D view of sledge



D. Aerodynamics of sledge

In order to reduce drag and excess noise, an aerodynamic cowling should be designed. It could be made retractable during
landing, because on landing extra drag is desirable for its braking effect.
E. Designing sledge slot

The sledge slot is essentially another sledge, added to the main sledge to accommodate the TOL operation of larger aircraft. Each
sledge is self-contained, with all components; the sledges are similar but slotted according to length and breadth.
F. Designing the hammock

Barricades are a good example for the structural concept of the hammock. The hammock will be constructed from fibre
and cables having high tensile strength and good stiffness, such as those used in the construction of bridges. A bunch of high
strength steel cables is combined to form a suspension cable so that a fail safe design is achieved, i.e. the whole system is not
affected by the failure of a single cable in the bunch (see the sizing sketch below). Figure 4 illustrates the fail safe cable design and
figure 5 shows a commercially available steel cable.








High strength steel cables





Fig 4. Suspension cable design for hammock




Fig 5. Commercially used steel cable
Working Principle

The working principle can be described through the operational states of the components used in this mechanism.
Magnetic levitation track
The magnetic levitation track provides levitation and traction to the entire setup. The levitation (Inductrack) is built from a series of
Halbach arrays, which can produce a flux density of more than 1 T. At the operating speed of the sledge, the levitation force of the
Inductrack acts like a stiff spring, so a clearance of more than 2 cm between the sledge and the track can be maintained. Since no
friction force acts on the system, the sledge can be accelerated to its maximum speed, which is the speed required for the aircraft to
produce lift (a rough estimate is sketched below). The lift produced by the wing then takes the aircraft off, and the sledge finally
detaches from the aircraft.
Hammock
Here the hammocks act as the shock absorbing agents; they can be regarded as a replacement for the hydraulic system, although
some parts in this project are still provided with hydraulics for safety and efficiency. The hammocks are the slings connecting the
track to the sledge. The sledge can be detached from the hammocks with the help of attach/detach hinges. Hammocks play a vital
role during the landing operation.
Sledge
The main function of the sledge is to hold the aircraft. The provision for moving the sledge horizontally and vertically with respect
to the sledge frame enables precision landing and take-off and also plays a vital role when landing in gusty winds. Depending on the
total length of the aircraft, additional sledge slots can be attached or detached, much like coupling and uncoupling train compartments.
Attach/detach hinges
Hinges on the starboard and port sides serve the hammock attachment; the hinges at the forward and aft ends provide for the
addition of sledge slots.
Figures 6 and 7 show the flowcharts for the take-off and landing operations.

















Fig 6. Flow chart of take-off operation: the sledge carries the aircraft from the hangar → the sledge is attached to the hammock →
passengers on board, ready for take-off → the electromagnetic levitation mechanism accelerates the aircraft to take-off speed →
the sledge detaches from the aircraft and the aircraft takes off.

Fig 7. Flow chart of landing operation: landing approach → the aircraft lands on the sledge and the Inductrack decelerates →
the sledge detaches from the hammock → passenger arrival station → aircraft to hangar.

CONCLUSIONS
This paper presents the design of a hammock concept with magnetic levitation assistance for the TOL operation of aircraft. This
method could increase the fuel efficiency of aircraft: since take-off and landing are powered from a ground based source, smaller
engines can be used; removal of the conventional landing gear could reduce aircraft weight by about 7%; and the lower noise
production means airports could be built nearer to cities.
Finally, I conclude that this method is a very cost effective one, because it uses less hydraulic machinery and can reduce runway
length. It is a strong alternative to the conventional TOL mechanism and could help the aviation industry go green.
ACKNOWLEDGMENT
I would like to convey my thanks to my parents, to those who encouraged me, to the Hindustan institutions, to the faculty and staff
of the Hindustan Institute of Technology, and to all my friends.

REFERENCES:
[1] Richard F. Post, "Magnetic Levitation for Moving Objects", U.S. Patent No. 5,722,326.
[2] GABRIEL "out of the box" project, "Possible Solutions to Take-off and Land an Aircraft", version 1.3, GA No. FP7-284884.
[3] Barricade: http://en.wikipedia.org/wiki/Arresting_gear; Hammock: http://en.wikipedia.org/wiki/Hammock; Halbach arrays:
http://en.wikipedia.org/wiki/Halbach_array.
[4] Klaus Halbach, "Application of permanent magnets in accelerators and electron storage rings", Journal of Applied Physics,
Volume 57, p. 3605, 1985.
[5] David Pope, "Halbach Arrays Enter the Maglev Race", The Industrial Physicist, pp. 12-13.











Generation of Alternative Process Plans using TLBO Algorithm
Sreenivasulu Reddy. A, Sreenath K, Abdul Shafi M
Department of Mechanical Engineering, S V University College of Engineering, Tirupati-517502, India
Email- seetharamadasubcm@gmail.com

ABSTRACT – A Computer Aided Process Planning (CAPP) system is an important production activity in the manufacturing
industry, generating process plans that contain the information required for machining: operations, machining parameters (speeds,
feeds and depths of cut), machine tools, setups, cutting tools and accessories for producing a part as per a given part drawing. In this
context, an AI based meta-heuristic algorithm, Teaching-Learning Based Optimization (TLBO), which is modelled on the natural
teaching-learning process of a classroom, is used to solve the process planning problem and generate optimum process plans that
minimize operation sequence cost and machining time.
Keywords: CAPP, TLBO, Optimized solution, Alternative process plans, Teacher phase, Learner phase.
INTRODUCTION
Computer aided process planning (CAPP) deals with the selection of the machining operation sequence for a given
drawing and the determination of the conditions to produce the part [9]. It includes the design data, selection of machining processes,
selection of machine tools, sequence of operations, setups, processing times and related costs. It also explores operational details such
as the sequence of operations, speeds, feeds, depths of cut, material removal rates and job routes [10]. Required inputs to the planning
scheme include geometric features, dimensional sizes, tolerances and work materials. These inputs are analyzed and evaluated in order
to select an appropriate operation sequence based upon the available machinery and workstations. The generation of consistent and
accurate process plans therefore requires the establishment and maintenance of standard databases together with effective and
efficient Artificial Intelligence (AI) heuristic algorithms; Genetic Algorithms (GA), Simulated Annealing (SA), Ant Colony
Optimization (ACO) and the TLBO algorithm are used to solve these problems.
LITERATURE REVIEW
Over the last three decades many evolutionary and heuristic algorithms have been applied to process planning
problems. Usher and Sharma (1994) identified several feasibility constraints that affect the sequencing of machining
operations; these constraints are processed sequentially based on the precedence relations of the design features. Usher and Bowden
(1996) proposed an application of a genetic algorithm (GA) for finding near-optimal solutions. In 2002, Li et al. developed a hybrid
GA and SA approach to solve these problems for prismatic parts. Gopal Krishna and Mallikarjuna Rao (2006) and Sreeramulu et al.
(2012) presented the meta-heuristic Ant Colony Optimization (ACO) algorithm as a global search technique for quick identification
of the operation sequence. TLBO is a more recently developed algorithm, introduced by Rao et al. (2011) and based on the natural
teaching-learning process of a classroom; it does not require any algorithm-specific control parameters. The same authors (2013)
applied TLBO to job shop scheduling problems to minimize the makespan. All evolutionary algorithms require common controlling
parameters such as population size and number of generations; in addition, many require their own algorithm-specific parameters,
for example mutation and crossover rates in GA, or the inertia weight in PSO.
TEACHING-LEARNING-BASED OPTIMIZATION ALGORITHM
In the TLBO algorithm, the teacher and the learners are the two vital components. The algorithm describes two basic modes of
learning: through the teacher (the teacher phase) and through interaction with the other learners (the learner phase). The teacher is
considered a highly learned person who trains learners so that they can achieve better results in terms of marks or grades; learners
also learn from interaction among themselves, which further improves their results. TLBO is a population based method: a group of
learners is considered the population, the different design variables are the subjects offered to the learners, and a learner's result is
analogous to the fitness value of the optimization problem. The best solution in the entire population is considered the teacher. The
TLBO algorithm works in two phases, namely the teacher phase and the learner phase.




Teacher Phase
The teacher phase is the first phase of the TLBO algorithm. In this phase the teacher tries to improve the mean of the class. A good
teacher is one who brings his or her learners up to his or her own level of knowledge; in practice this is not possible, and a teacher
can only move the mean of a class up to some extent, depending on the capability of the class. This is a random process depending on
many factors. First, generate a random population according to the population size and number of generations [6].

Calculate the mean of the population for each subject (design variable), M_D = [m_1, m_2, ..., m_D]. The best solution acts as the
teacher for that iteration: X_teacher = X such that f(X) is minimum. The teacher tries to shift the mean M_D towards X_teacher,
which acts as the new mean for the iteration, so M_new,D = X_teacher,D.

The difference between the two means is expressed as

Difference_D = r_i (M_new,D - T_F M_D)    (1)

where r_i is a random number in the range [0, 1] and the teaching factor T_F takes the value 1 or 2. The obtained difference is
added to the current solution to update its value:

X_new,D = X_old,D + Difference_D    (2)

Accept X_new if it gives a better function value.
Learner Phase
A learner interacts randomly with other learners to enhance his or her knowledge [4]. Randomly select two learners X_i and X_j
and update:

X'_new,D = X_old,D + r_i (X_i - X_j)   if f(X_i) < f(X_j)
X'_new,D = X_old,D + r_i (X_j - X_i)   if f(X_i) > f(X_j)

Termination criterion: stop if the maximum generation number is reached; otherwise repeat from the teacher phase. A compact
sketch of both phases follows.
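The following minimal Python/NumPy sketch puts the teacher and learner phases together for a generic continuous minimisation
problem. The sphere objective, population size, bounds and generation count are stand-in assumptions, not part of the paper's
process planning model.

import numpy as np

rng = np.random.default_rng(0)

def tlbo(f, lo, hi, pop_size=20, generations=100):
    dim = lo.size
    X = rng.uniform(lo, hi, size=(pop_size, dim))      # learners (candidate solutions)
    fit = np.array([f(x) for x in X])
    for _ in range(generations):
        # Teacher phase: shift everyone towards the best learner (the teacher).
        teacher = X[np.argmin(fit)]
        TF = rng.integers(1, 3)                        # teaching factor, 1 or 2
        diff = rng.random(dim) * (teacher - TF * X.mean(axis=0))   # Eq. (1)
        for i in range(pop_size):
            new = np.clip(X[i] + diff, lo, hi)         # Eq. (2)
            f_new = f(new)
            if f_new < fit[i]:                         # greedy acceptance
                X[i], fit[i] = new, f_new
        # Learner phase: learn from a randomly chosen other learner.
        for i in range(pop_size):
            j = (i + int(rng.integers(1, pop_size))) % pop_size   # guarantees j != i
            r = rng.random(dim)
            step = r * (X[i] - X[j]) if fit[i] < fit[j] else r * (X[j] - X[i])
            new = np.clip(X[i] + step, lo, hi)
            f_new = f(new)
            if f_new < fit[i]:
                X[i], fit[i] = new, f_new
    best = int(np.argmin(fit))
    return X[best], fit[best]

sphere = lambda x: float(np.sum(x ** 2))               # stand-in objective
x_best, f_best = tlbo(sphere, np.full(5, -10.0), np.full(5, 10.0))
print(x_best, f_best)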
PROCESS PLANNING METHODOLOGY
In this algorithm the operation sequences are treated as learners and the operations act as subjects. Operation
sequences are generated randomly according to the procedure of the algorithm; the time and cost of the generated sequences are
calculated and the best one is identified as the teacher. In the teacher phase the solutions are updated (from equation 2) and the time
and cost are recalculated. The flow chart of the TLBO algorithm is shown in figure 3.

The operation sequences are generated to develop a feasible and optimal sequence of operations for a part based on
the technical requirements, including the part specifications in the design, the given manufacturing resources, and objectives related
to cost or time. The following formulas are used to calculate the total time and manufacturing costs [8].

1. Machine cost (MC): MC is the total cost of the machines used in a process plan and can be computed as

MC = sum_{i=1}^{n} Machine[Oper[i].Mac_id].Cost * (machining time of Oper[i])

where Oper[i] is operation i, Mac_id identifies the machine used for the operation, and the machine cost index (MCI) is its cost rate.

2. Tool cost (TC): TC is the total cost of the cutting tools used in a process plan and can be computed as

TC = sum_{i=1}^{n} Tool[Oper[i].Tool_id].Cost * (machining time of Oper[i])

where Tool_id identifies the tool used for the operation and the tool cost index (TCI) is its cost rate.

3. Number of set-up changes (NSC), number of set-ups (NS) and set-up cost (SC): a set-up change occurs between consecutive
operations when the machine or the tool approach direction (TAD) changes,

NSC = sum_{i=1}^{n-1} Omega_2( Omega_1(Oper[i].Mac_id, Oper[i+1].Mac_id), Omega_1(Oper[i].TAD_id, Oper[i+1].TAD_id) )

where

Omega_1(X, Y) = 0 if X = Y, 1 otherwise
Omega_2(X, Y) = 0 if X = Y = 0, 1 otherwise

The corresponding NS and SC can be computed as

NS = 1 + NSC
SC = sum_{i=1}^{NS} SCI

where SCI is the set-up cost index.

4. Number of machine changes (NMC) and machine change cost (MCC): NMC and MCC can be computed as

NMC = sum_{i=1}^{n-1} Omega_1(Oper[i].Mac_id, Oper[i+1].Mac_id)
MCC = sum_{i=1}^{NMC} MCCI

where MCCI is the machine change cost index.

5. Number of tool changes (NTC) and tool change cost (TCC): a tool change occurs when the machine or the tool changes,

NTC = sum_{i=1}^{n-1} Omega_2( Omega_1(Oper[i].Mac_id, Oper[i+1].Mac_id), Omega_1(Oper[i].Tool_id, Oper[i+1].Tool_id) )
TCC = sum_{i=1}^{NTC} TCCI

where TCCI is the tool change cost index.

6. Total weighted cost (TWC):

TWC = MC + TC + SC + MCC + TCC
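A small Python sketch of this cost model for one candidate operation sequence is given below. The machine/tool/TAD assignments,
machining times and the cost indices (SCI, MCCI, TCCI) are illustrative assumptions; the Omega indicators are implemented with
min(1, a + b) for the "either one changes" condition.

SCI, MCCI, TCCI = 100.0, 150.0, 20.0   # assumed set-up / machine-change / tool-change cost indices

def omega(x, y):
    """Omega_1 indicator: 0 if no change (x == y), 1 if a change occurs."""
    return 0 if x == y else 1

def total_weighted_cost(ops):
    """ops: list of dicts with keys mac, tool, tad, mc_rate, tc_rate, time."""
    MC = sum(o["mc_rate"] * o["time"] for o in ops)                 # machine cost
    TC = sum(o["tc_rate"] * o["time"] for o in ops)                 # tool cost
    pairs = list(zip(ops, ops[1:]))
    # Omega_2 of two indicators is 1 unless both are 0, i.e. min(1, a + b).
    NSC = sum(min(1, omega(a["mac"], b["mac"]) + omega(a["tad"], b["tad"])) for a, b in pairs)
    NMC = sum(omega(a["mac"], b["mac"]) for a, b in pairs)
    NTC = sum(min(1, omega(a["mac"], b["mac"]) + omega(a["tool"], b["tool"])) for a, b in pairs)
    SC, MCC, TCC = (1 + NSC) * SCI, NMC * MCCI, NTC * TCCI          # NS = 1 + NSC
    return MC + TC + SC + MCC + TCC                                 # TWC

ops = [dict(mac=4,  tool=9,  tad=1, mc_rate=0.5, tc_rate=0.1, time=11.1),
       dict(mac=13, tool=15, tad=1, mc_rate=0.4, tc_rate=0.1, time=4.9),
       dict(mac=13, tool=15, tad=2, mc_rate=0.4, tc_rate=0.1, time=2.6)]
print(total_weighted_cost(ops))   # -> 480.41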



Case study

In this paper, process plans are generated for a prismatic part drawing based on manufacturing time and related cost. The
part details, costs, precedence relations and number of generations are given as input to the algorithm; the output contains the process
plans together with their costs, machining times and setups. The part drawing details are shown in Fig. 1 and Table 1 respectively.













Fig.1. Part Drawing Fig.2. Precedence relation of the part drawing
Operations Information
Table.1 Operations information for part drawing
F ID Feature Operations Dimensions
1. Surface Milling L=150,H=90,W=150
2. Pocket Shaping L=150,H=40,W=35
3. Pocket Shaping L=80,H=40,W=35
4. Pocket Shaping L=150,H=40,W=35
5. Pocket Shaping L=80,H=40,W=35
6. Hole Drilling D=16,H=30
7. Hole Drilling D=16,H=30
8. Hole Drilling D=16,H=30
9. Hole Drilling D=16,H=30
10. Hole Drilling D=16,H=30
11. Hole Drilling D=16,H=30
12. Hole Drilling D=16,H=30
13. Hole Drilling D=16,H=30
14. Hole Drilling D=60,H=11
15. Hole Drilling D=26,H=90

The precedence relations for the part drawing are shown in Fig. 2. These precedence relations are generated
according to standard rules; however, the user may choose the precedence relations according to the requirements and
available resources.




















[Fig. 3. Flow chart of the TLBO algorithm: Start → initialize the population, design variables and number of generations →
generate the plans randomly and find the objective function → calculate the mean of each design variable → identify the best
solution → calculate the difference mean and modify the solutions based on the best solution → find the objective function for the
modified solutions → if a new solution is better than the existing one, accept it, otherwise keep the previous solution → select any
two solutions X_i and X_j randomly and repeat the comparison → if the termination criterion is fulfilled, report the final solution.]

Table 2: Best two process plans for part drawing

Criterion 1: minimum cost
OPERATION ID       1 2 3 4 5 14 13 6 12 7 8 9 15 11 10
OPERATION TYPE     7 10 10 10 10 3 3 3 3 3 3 3 3 3 3
OPERATION NAME     Milling Shaping Shaping Shaping Shaping Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling
MACHINE ALLOCATED  4 13 13 13 13 4 8 8 3 8 3 4 9 7 8
TOOL ALLOCATED     9 15 15 15 15 4 4 4 4 4 6 5 4 4 5
SET UP ALLOCATED   2 6 6 6 6 6 1 1 1 1 6 6 6 6 6
Cost 558.17; total time 389.08; no. of setup changes 4; no. of tool changes 7; no. of m/c changes 11; raw material cost 2.97675; total cost 561.1465

Criterion 2: minimum time
OPERATION ID       1 2 3 4 5 14 15 6 12 7 8 9 10 11 13
OPERATION TYPE     7 10 10 10 10 3 3 3 3 3 3 3 3 3 3
OPERATION NAME     Milling Shaping Shaping Shaping Shaping Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling
MACHINE ALLOCATED  4 13 13 13 13 10 7 3 3 3 3 10 8 8 10
TOOL ALLOCATED     10 16 16 15 15 6 4 7 5 4 4 4 6 5 7
SET UP ALLOCATED   6 6 6 6 6 6 6 1 1 1 6 6 6 6 1
Cost 587.57; total time 384.08; no. of setup changes 4; no. of tool changes 11; no. of m/c changes 8; raw material cost 2.97675; total cost 590.54675

Table 3: Alternative five process plans for part drawing (Part No. 2)
PLAN1
OPERATION ID 1 2 3 4 5 14 13 6 12 7 8 9 15 11 10
OPERATION NAME Milling Shaping Shaping Shaping Shaping Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling
MACHINE ALLOCATED 4 13 13 13 13 4 8 8 3 8 3 4 9 7 8
TOOL ALLOCATED 9 15 15 15 15 4 4 4 4 4 6 5 4 4 5
SET UP ALLOCATED 2 6 6 6 6 6 1 1 1 1 6 6 6 6 6
OPERATION TIME 1111.33 493.92 263.42 493.92 263.42 9.96 74.08 74.08 74.08 74.08 74.08 74.08 666.8 74.08 74.08
PLAN2
OPERATION ID 1 2 3 4 5 14 13 6 12 15 8 9 10 11 7
OPERATION NAME Milling Shaping Shaping Shaping Shaping Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling
MACHINE ALLOCATED 4 13 13 13 13 3 10 10 8 8 9 4 4 3 8
TOOL ALLOCATED 10 16 16 15 15 7 4 7 6 6 7 4 7 7 5
SET UP ALLOCATED 6 6 6 6 6 6 6 6 6 1 1 6 6 6 1
OPERATION TIME 1111.33 493.92 263.42 493.92 263.42 9.96 74.08 74.08 74.08 666.8 74.08 74.08 74.08 74.08 74.08
PLAN3
OPERATION ID 1 2 3 4 5 14 15 6 12 7 8 9 10 11 13
OPERATION NAME Milling Shaping Shaping Shaping Shaping Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling
MACHINE ALLOCATED 4 13 13 13 13 10 7 3 3 3 3 10 8 8 10
TOOL ALLOCATED 10 16 16 15 15 6 4 7 5 4 4 4 6 5 7
SET UP ALLOCATED 6 6 6 6 6 6 6 1 1 1 6 6 6 6 1
OPERATION TIME 1111.33 493.92 263.42 493.92 263.42 9.96 666.8 74.08 74.08 74.08 74.08 74.08 74.08 74.08 74.08
PLAN4
OPERATION ID 1 2 3 4 5 14 13 6 12 7 8 9 10 15 11
OPERATION NAME Milling Shaping Shaping Shaping Shaping Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling
MACHINE ALLOCATED 10 13 13 13 13 7 7 9 10 3 3 8 3 8 8
TOOL ALLOCATED 12 16 15 16 16 4 6 6 6 4 7 6 4 5 5
SET UP ALLOCATED 6 6 5 5 5 6 1 1 1 1 1 1 1 6 6
OPERATION TIME 1111.33 493.92 263.42 493.92 263.42 9.96 74.08 74.08 74.08 74.08 74.08 74.08 74.08 666.8 74.08
PLAN5
OPERATION ID 1 2 3 4 5 14 13 6 12 7 8 9 10 11 15
OPERATION NAME Milling Shaping Shaping Shaping Shaping Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling
MACHINE ALLOCATED 4 13 13 13 13 9 9 7 9 3 3 7 8 3 10
TOOL ALLOCATED 11 16 16 16 16 7 6 4 6 6 4 5 5 5 4
SET UP ALLOCATED 6 6 6 5 5 6 6 6 1 1 1 1 1 1 1
OPERATION TIME 1111.33 493.92 263.42 493.92 263.42 9.96 74.08 74.08 74.08 74.08 74.08 74.08 74.08 666.8 74.08

CONCLUSION
In this paper the TLBO algorithm is used to solve the process planning problem based on the sequencing of machining operations.
The problem is modelled with manufacturing time and the associated cost as the objectives, and good results are obtained with the
TLBO algorithm.
REFERENCES:

[1] Bhaskara Reddy, S.V., Shunmugam, M.S., and Narendran, T.T., "Operation sequencing in CAPP using genetic algorithms",
International Journal of Production Research, vol. 37, no. 5, pp. 1063-1074, 1999.
[2] Gopal Krishna, A., and Mallikarjuna Rao, K., "Optimization of operations sequence in CAPP using an ant colony algorithm",
Advanced Manufacturing Technology, vol. 29, no. 1-2, pp. 159-164, 2006.
[3] Li, W.D., Ong, S.K., and Nee, A.Y.C., "Hybrid genetic algorithm and simulated annealing approach for the optimization of
process plans for prismatic parts", International Journal of Production Research, vol. 40, no. 8, pp. 1899-1922, 2002.
[4] Keesari, H.V., and Rao, R.V., "Optimization of job shop scheduling problems using teaching-learning-based optimization
algorithm", Operational Research Society of India, 2013.
[5] Nallakumarasamy, G., Srinivasan, P.S.S., Venkatesh Raja, K., and Malayalamurthi, R., "Optimization of operation
sequencing in CAPP using simulated annealing technique (SAT)", International Journal of Advanced Manufacturing
Technology, vol. 54, no. 5-8, pp. 721-728, 2011.
[6] Rao, R.V., Savsani, V.J., and Vakharia, D.P., "Teaching-learning-based optimization: an optimization method for continuous
non-linear large scale problems", Information Sciences, vol. 183, pp. 1-15, 2012.
[7] Sreenivasulu Reddy, A., "Generation of optimal process plan using Depth First Search (DFS) algorithm", Proceedings of the
IV National Conference on Trends in Mechanical Engineering, TIME'10, 30th December 2010, Kakatiya Institute of
Technology & Science, Warangal.
[8] Sreenivasulu Reddy, A., and Ravindranath, K., "Integration of process planning and scheduling activities using Petri nets",
International Journal of Multidisciplinary Research and Advances in Engineering (IJMRAE), ISSN 0975-7074, vol. 4,
no. III, pp. 387-402, July 2012.
[9] Sreeramulu, D., and Sudeep Kumar Singh, "Generation of optimum sequence of operations using ant colony algorithm",
International Journal of Advanced Operations Management, vol. 4, no. 4, 2012.
[10] Srinivas, P.S., Ramachandra Raju, V., and Rao, C.S.P., "Optimization of process planning and scheduling using ACO and
PSO algorithms", International Journal of Emerging Technology and Advanced Engineering, ISSN 2250-2459, vol. 2,
issue 10, October 2012.
[11] Usher, J.M., and Bowden, R.O., "The application of genetic algorithms to operation sequencing for use in computer-aided
process planning", Computers Ind. Engg., Vol. No. 4, pp. 999-1013, 1996.
[12] Usher, J.M., and Sharma, G., "Process planning in the face of constraints", Proc. Industrial Engineering and Management
Systems Conference, pp. 278-283, 1994.








A Strategical Description of Ripple Borrow Subtractor in Different Logic
Styles
T. Dineshkumar, M. Arunlakshman
Research Scholars (M.Tech), VLSI, Sathyabama University, Chennai, India
Email- arunlakshman@live.com

ABSTRACT – The demand for and popularity of portable electronics is driving designers to strive for small silicon area, higher speed,
low power dissipation and reliability. The 2-input AND, 2-input OR, 2-input XOR and the inverter are the basic building blocks of the
4-bit ripple borrow subtractor. This paper covers the design of the ripple borrow subtractor in cMOS logic, transmission gate logic and
pass transistor logic styles. The schematic design is then transferred to a prefabrication layout; the Microwind layout realizations of the
subtractor are simulated and the results are discussed. From the results obtained, cMOS logic, transmission gate logic and pass
transistor logic are compared and the most efficient logic style for the ripple borrow subtractor is identified.
Keywords — cMOS logic, transmission gate logic, pass transistor logic, full subtractor, ripple borrow subtractor.
INTRODUCTION
In this paper we present a brief review of the ripple borrow subtractor using cMOS, transmission gate and pass transistor logic
styles. The basic circuit diagram of the 1-bit full subtractor is described below along with its block diagram and truth table. A full
subtractor is a combinational circuit used to perform subtraction of three bits; it has three inputs, a (minuend), b (subtrahend) and
borrow_in (borrow from the previous stage), and two outputs, d (difference) and borrow_out (borrow).


Fig. 1. Gate level representation of full subtractor




Fig. 2. Truth table and block diagram of full subtractor


CIRCUIT TECHNIQUES

FULL SUBTRACTOR

The full subtractor circuit subtracts three one-bit binary numbers (A, B, borrow_in) and outputs two one-bit binary numbers: a
difference (D) and a borrow (borrow_out).

RIPPLE BORROW SUBTRACTOR
It is possible to create a logic circuit using multiple full subtractors to subtract N (in the present case 4) bit numbers. Each full
subtractor takes a borrow_in (borrow input), which is the borrow_out (borrow output) of the previous subtractor. This kind of
subtractor is called a ripple borrow subtractor, since each borrow bit ripples to the next full subtractor; a bit-level sketch follows
Fig. 3.



Fig. 3. Ripple borrow subtractor
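A bit-level Python sketch of the full subtractor equations of Fig. 1 (D = A xor B xor Bin, Bout = A'B + A'Bin + B.Bin) and of the
4-bit borrow ripple is given below; it models only the logic, not any particular transistor-level style.

def full_subtractor(a, b, bin_):
    """One stage of Fig. 1: returns (difference, borrow_out)."""
    d = a ^ b ^ bin_
    bout = ((a ^ 1) & b) | ((a ^ 1) & bin_) | (b & bin_)   # A'B + A'Bin + B.Bin
    return d, bout

def ripple_borrow_subtract(a, b, bits=4):
    """4-bit ripple borrow subtractor: the borrow ripples stage to stage."""
    borrow, diff = 0, 0
    for i in range(bits):
        d, borrow = full_subtractor((a >> i) & 1, (b >> i) & 1, borrow)
        diff |= d << i
    return diff, borrow            # (result, final borrow_out)

print(ripple_borrow_subtract(0b1001, 0b0011))   # 9 - 3 -> (6, 0)
print(ripple_borrow_subtract(0b0011, 0b1001))   # 3 - 9 -> (10, 1): two's complement of -6, borrow set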

4-BIT RIPPLE BORROW SUBTRACTOR USING cMOS CIRCUITS
cMOS is also referred to as complementary-symmetry metal oxide semiconductor (COS-MOS). The words "complementary-symmetry"
refer to the fact that the typical digital design style with CMOS uses complementary and symmetrical pairs of p-type and n-type metal
oxide semiconductor field effect transistors (MOSFETs) for logic functions. The circuit level description of the ripple borrow
subtractor in cMOS logic is given below.

Fig. 4. Ripple borrow subtractor in cMOS logic.

4-BIT RIPPLE BORROW SUBTRACTOR USING TRANSMISSION GATES

The CMOS transmission gate consists of two MOSFETs: one n-channel, responsible for correct transmission of logic low, and one
p-channel, responsible for correct transmission of logic high. The circuit level description of the ripple borrow subtractor in
transmission gate logic is given below.

Fig. 3. Ripple borrow subtractor in transmission gate logic.

4-BIT RIPPLE BORROW SUBTRACTOR USING PASS TRANSISTORS

We can view the complementary CMOS gate as switching the output pin to one of power or ground. A slightly more general gate
is obtained if we switch the output to one of power, ground, or any of the input signals. In such designs the MOSFET is considered
to be a pass transistor; when used as a pass transistor the device may conduct current in either direction. The circuit level description
of the ripple borrow subtractor in pass transistor logic is given below.

Fig. 4. Ripple borrow subtractor in pass transistor logic.


DESIGN AND LAYOUT ASPECTS

LAYOUT OF RIPPLE BORROW SUBTRACTOR USING cMOS LOGIC



Fig. 5. Layout of ripple borrow subtractor using cMOS logic

LAYOUT OF RIPPLE BORROW SUBTRACTOR USING TRANSMISSION GATE LOGIC



Fig. 6. Layout of ripple borrow subtractor using transmission gate logic

LAYOUT OF RIPPLE BORROW SUBTRACTOR USING PASS TRANSISTOR LOGIC

Fig. 7. Layout of ripple borrow subtractor using pass transistor logic
SIMULATION AND RESULTS
International Journal of Engineering Research and General Science Volume 2, Issue 3, April-May 2014
ISSN 2091-2730

25 www.ijergs.org


SIMULATION OF RIPPLE BORROW SUBTRACTOR USING cMOS LOGIC


Fig. 8. Simulation of ripple borrow subtractor using cMOS logic

SIMULATION OF RIPPLE BORROW SUBTRACTOR USING TRANSMISSION GATE LOGIC


Fig. 9. Simulation of ripple borrow subtractor using transmission gate logic

SIMULATION OF RIPPLE BORROW SUBTRACTOR USING PASS TRANSISTOR LOGIC



Fig. 10. Simulation of ripple borrow subtractor using pass transistor logic



POWER ANALYSIS

The table below shows the power consumption results for the 4-bit ripple borrow subtractor using cMOS circuits, transmission
gates and pass transistors; Fig. 11 represents the same results graphically.

Circuit              Power consumption
cMOS circuits        68.356 uW
Transmission gates   9.225 uW
Pass transistors     37.515 uW

Fig. 11. Power consumption.

CONCLUSION

In this paper, an attempt has been made to design the 2-input AND, 2-input OR and 2-input XOR gates, which are the basic building
blocks of the benchmark circuit, a 4-bit ripple borrow subtractor. The proposed circuits offer improved performance in power
dissipation. It can be concluded that, since the power dissipation of the transmission gate circuits is much less than that of the cMOS
and pass transistor circuits, the transmission gate implementation proves to be the most efficient of the three. The circuit and its VLSI
technology are very useful in applications related to rural development, as they consume little power and can therefore be used
efficiently in various technologies.

REFERENCES:

[1] Nilesh P. Bobade, "Design and Performance of CMOS Circuits in Microwind", IJCA, Jan. 2012, Wardha, M.S., India.
[2] S. Govindarajulu, T. Jayachandra Prasad, "Low-Power, High Performance Dual Threshold Voltage CMOS Domino Logic
Circuits", published in ICRAES, 8th & 9th Jan. 2010, pp. 109-117, KSR College of Engg., Tiruchengode, India.
[3] S. Govindarajulu, T. Jayachandra Prasad, "Considerations of Performance Factors in CMOS Designs", ICED 2008, Dec. 1-3,
Penang, Malaysia, IEEE Xplore.
[4] Gary K. Yeap, "Practical Low Power Digital VLSI Design".
[5] John P. Uyemura, "CMOS Logic Circuits Design".
[6] A. Anand Kumar, "Fundamentals of Digital Circuits".
[7] Sung-Mo Kang, Yusuf Leblebici, "CMOS Digital Integrated Circuits".
[8] Microwind and Dsch User's Manual, Toulouse, France.
[9] http://www.allaboutcircuits.com
[10] http://www.ptm.asu.edu
[11] http://vides.nanotcad.com/vides/
[12] http://en.wikipedia.org/wiki/Field-effect_transistor
[13] http://en.wikipedia.org/wiki/design-logics(electronics)
[14] http://en.wikipedia.org/wiki/MOSFET
[15] http://en.wikipedia.org/wiki/Transistor



Comparison of Forced Convective Heat Transfer Coefficient between Solid Pin
Fin and Perforated Pin Fin
Anusaya Salwe, Ashwin U. Bhagat, Mohitkumar G. Gabhane
Department of Mechanical Engineering, Manoharbhai Patel Institute of Engineering and Technology, Shahapur, Rashtrasant Tukdoji
Maharaj Nagpur University, Nagpur, Maharashtra, India
Email- mohitgabhane79@gmail.com

ABSTRACT – The rapid growth in high speed, multi-functional, miniaturized electronics demands ever more stringent thermal
management. The present work investigates the use of perforated pin fins to enhance the rate of heat transfer; in particular, the number
of horizontal perforations and their diameters on each pin fin are studied. Results show that heat transfer from a perforated pin fin is
greater than from a solid pin fin: the pressure drop with perforated pins is reduced compared with solid fins, and more surface area
becomes available, which enhances convective heat transfer.

Keywords: Heat Transfer, Extended Surface, Forced convection, Perforated Fin.
1. Introduction
An extended surface (fin) is used in a large number of applications to increase the heat transfer from surfaces. Typically, the fin
material has a high thermal conductivity. The fin is exposed to a flowing fluid, which cools or heats it, the high thermal conductivity
allowing more heat to be conducted from the wall through the fin. Fins are used to enhance convective heat transfer in a wide range
of engineering applications, and offer a practical means of achieving a large total heat transfer surface area without the use of an
excessive amount of primary surface area. Fins are commonly applied for heat management in electrical appliances such as computer
power supplies or substation transformers; other applications include IC engine cooling, such as the fins in a car radiator.
Heat sinks are employed to dissipate the thermal energy generated by electronic components in order to maintain a stable operating
temperature. A compact, efficient and easily fabricated heat sink is required. However, the design of a heat sink is strongly dependent
upon the need to balance thermal dissipation and the pressure drop across the system, so that the overall cost and efficiency may be
optimized. A familiar solution is to apply pin fins in a heat sink design.
This work considers the thermal dissipation performance of solid and perforated pin fin heat sinks subject to a horizontal impinging
flow. We conclude that the heat transfer and pressure coefficients for a cylindrical perforated pin fin are higher than those of a solid
pin fin. Fins are widely used in the trailing edges of gas-turbine blades, in electronic cooling and in the aerospace industry. The
relative fin height (H/d) affects the heat transfer of pin fins; other affecting factors include the velocity of the fluid flow, the thermal
properties of the fluid and the cross-sectional area of the fluid flow.
2. Experimental set-up
The experimental set-up consists of the following parts:
A. Main duct (cylindrical)
B. Heater unit
C. Central portion
D. Data unit

A. Main duct (cylindrical): a cylindrical channel constructed from 1 mm thick galvanized steel, with a diameter of 150 mm and a
length of 1200 mm; the perforated pin fin is attached at its middle. It is operated in forced draught mode by a 0.5 HP, 13000 rpm
blower.


B. Heater unit: the heater unit (test section) has a diameter of 160 mm and a width of 20 mm and is wound on the cylindrical fin
portion. The heating unit mainly consists of an electrical heater with an output power of 200 W at 220 V and a current of 10 A.

C. Central portion: the pin fin is attached at the central portion of the cylindrical duct, and a band heater is wound around this
portion to heat the pin fin.

D. Data unit: it consists of the indicating devices which display the readings taken by the various components, such as the sensors,
voltmeter and manometer. A temperature indicator shows the readings taken by the seven sensors in the range 0 °C to 450 °C; of
these, two give the inlet and outlet temperatures of the air, and three give the temperatures at the base, middle and tip of the fin.

One further sensor shows the temperature above the fin, and one gives the reading at the outlet.
The inlet flow rate of air is indicated by a velocity indicator using the manometer.




3. Experimental procedure

1) The blower and heater are started simultaneously.
2) After starting the blower, the pressure difference across the fins is noted using the manometer.
3) The atmospheric temperature is also recorded.
4) The voltage is set to different values, such as 90 V, 100 V, 120 V, 130 V and 140 V, and readings are taken for the solid pin
fin and for the single-hole, double-hole and three-hole pin fins.
5) The voltage, current and temperatures at the points where thermocouples are attached are noted down.
Readings at the same voltage are likewise observed and noted for the different pin fin sets (solid, single hole, double hole and
three holes).

Fig 1: pictorial view of the experiment


4. Nomenclature

Q      heat transfer
Qcon   heat transfer due to convection
Qrad   heat transfer due to radiation
h      heat transfer coefficient
As     surface area of fin
Tm     mean temperature
I      current (A)
D      diameter of duct
R      resistance

5. Governing Equations

The convective heat transfer rate from the electrically heated test surface is calculated using

Qconv = Qe - Qcond - Qrad    (1)

where Qconv is the heat transfer rate by convection, Qe is the electrical heat input, Qcond is the heat transfer rate by conduction
and Qrad is the heat transfer rate by radiation. Qe is calculated using

Qe = I^2 * R    (2)

where I is the current flowing through the heater and R is its resistance.

In similar studies, investigators have reported that the total heat loss through radiation from a comparable test surface is about 0.5%
of the total electrical heat input. The conductive heat losses through the sidewalls can be neglected in comparison to those through
the bottom surface of the test section. Using these findings, together with the facts that the walls of the test section are well insulated
and that the reading of the thermocouple placed at the inlet of the tunnel should be nearly equal to the ambient temperature, one can
assume with some confidence that the last two terms of Eq. (1) may be ignored.

The heat transfer from the test section by convection can be expressed as

Qconv = h_avg * As * (Tm1 - Tm2)    (3)

Hence, the average convective heat transfer coefficient h_avg can be deduced using

h_avg = Qconv / (As * (Tm1 - Tm2))    (4)

where As is the surface area of the fin, Tm1 is the mean temperature over the surface and Tm2 is the temperature outside the fins.

The friction factor, which measures the amount of friction from the pressure drop, is calculated using

f = dP / ((L / Dh) * (rho * V^2 / 2))    (5)
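The following Python sketch chains Eqs. (1)-(5) for a single test run. All readings (current, resistance, temperatures, fin area,
pressure drop in Pa) are illustrative assumptions, not measured values from this experiment.

def h_avg(Q_conv, A_s, T_m1, T_m2):
    """Average convective heat transfer coefficient, Eq. (4)."""
    return Q_conv / (A_s * (T_m1 - T_m2))

def friction_factor(dP, L, D_h, rho, V):
    """Friction factor from the pressure drop, Eq. (5)."""
    return dP / ((L / D_h) * rho * V ** 2 / 2)

I, R = 0.6, 250.0                  # assumed heater current (A) and resistance (ohm)
Q_e = I ** 2 * R                   # Eq. (2): 90 W electrical input
Q_conv = Q_e                       # Eq. (1) with Q_cond and Q_rad neglected, as argued above

A_s = 0.012                        # assumed fin surface area (m^2)
print(h_avg(Q_conv, A_s, T_m1=85.0, T_m2=40.0))                   # ~166.7 W/m^2 K
print(friction_factor(dP=25.0, L=1.2, D_h=0.15, rho=1.2, V=6.0))  # dP assumed in Pa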




Fig 2: solid fins; Fig 3: 1-hole fins; Fig 4: 2-hole fins; Fig 5: 3-hole fins

6. Observations

The observations, namely heat input Q (W), mean temperature over the fins Tm1 (°C), mean outside temperature Tm2 (°C),
temperature difference ∆T (°C), heat transfer coefficient h (W/mm² °C) and pressure drop ∆P (mm of water), were recorded and
calculated for the solid pin fin and for the 1-hole, 2-hole and 3-hole pin fins.


7. Results
The results show that heat transfer increases with an increasing number of perforations on the fins.

7.1. Pressure drop effect

Fig. 2, Fig. 3, Fig. 4 and Fig. 5 show how the fins are arranged in the circular duct. The friction factor f decreases with an increasing
number of perforations, as the perforations reduce the blockage effect. Since the number of perforations on a given pin is restricted,
f may be further reduced by increasing the perforation diameter. It is important to note that vertically perforated pins are critical for
heat sinks subject to impinging flow. Pins with horizontal and vertical perforations have lower f than pins without, and pins with
vertical perforations have the lowest f.

7.2. Heat transfer performance

More importantly, thermal dissipation is higher with perforated pin fins than with solid pins: the larger the number of perforations
on each pin fin, the better the dissipation. However, further increasing the perforation diameter reduces the heat transfer from the
base to the tip of the fin, owing to the decrease in the cross-sectional area of the pin available for heat conduction along its length;
the geometric sketch below illustrates this trade-off.
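An idealised geometric Python sketch of this trade-off follows: straight through-holes remove two openings from the pin's lateral
surface but add an internal bore, increasing total surface area while reducing the worst-case conduction cross-section. All
dimensions are illustrative assumptions, not the paper's test geometry.

import math

def pin_areas(D, L, d, n):
    """Cylindrical pin (diameter D, length L) with n straight through-holes of diameter d."""
    A_solid = math.pi * D * L + math.pi * D ** 2 / 4      # lateral surface + tip
    # Each hole removes two circular openings from the lateral surface and
    # adds an internal bore of length ~D (thin straight hole idealisation).
    A_perf = A_solid - 2 * n * math.pi * d ** 2 / 4 + n * math.pi * d * D
    A_cross_min = math.pi * D ** 2 / 4 - d * D            # worst-case conduction area
    return A_solid, A_perf, A_cross_min

A_s, A_p, A_c = pin_areas(D=0.012, L=0.08, d=0.003, n=3)
print(f"surface area: perforated {A_p * 1e4:.2f} cm^2 vs solid {A_s * 1e4:.2f} cm^2")
print(f"minimum conduction cross-section: {A_c * 1e6:.1f} mm^2")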




7.3.Heat Transfer Efficiency

It is found that the perforated pin fins have higher efficiency than the solid pin fins. The result shows that heat transfer
increases with number of perforations, when solid fins are compared with 3 holes pin fin it is find out that h increases with increasing
number of hole from no hole to 3 hole obtained successfully. Also the temperature difference decreases with increase of number of
perforation. This shows that low temperature difference leads to high heat transfer. The efficiency of the perforated pin fins are 15 to
17 % more than solid pin fins.


7.4. Conclusions
In this study, the overall heat transfer and friction factor for a heat exchanger equipped with cylindrical perforated pin fins were
investigated experimentally, and the effects of the flow and geometrical parameters on the heat transfer and friction characteristics
were determined:

a) ∆P across the pin fins becomes smaller with an increasing number of perforations and increasing perforation diameter. In all
cases, the perforated pin fin array performs better than the solid pins; hence, perforated pin fins require less pumping power than
solid pins for the same thermal performance.

b) The maximum h is obtained from the pin fin with 3 perforations of 3 mm horizontal diameter; it is approximately 10% higher
than that for the solid pins at Re_p = 11×10^3. More importantly, the thermal energy is dissipated at a smaller pressure drop.

c) Further increasing the perforation diameter leads to a reduction in thermal dissipation. This is due to the decrease in vertical heat
conduction along the perforated pin fins, as well as the reshaping of the wakes behind the pins induced by the perforations.
[Graph 1: temperature difference ∆T (°C) against power input for the solid and perforated (1-hole) fins. Graph 2: heat transfer
coefficient h (W/mm² °C) against power input (66.43, 85.49, 96.8, 114 and 133.9 W) for the different fins.]


REFERENCES:
[1] -Jinn Foo, Shung-Yuh Pui, Yin-Ling Lai, Swee-Boon Chin, SEGi Review, ISSN 1985-5672, Vol. 5, No. 1, July 2012.
[2] Abdullah H. AlEssa, Ayman M. Maqableh and Shatha Ammourah, "Enhancement of natural convection heat transfer from a
fin by rectangular perforations with aspect ratio of two", International Journal of Physical Sciences, Vol. 4 (10), pp. 540-547,
October 2009.
[3] Raaid R. Jassem, "Effect the form of perforation on the heat transfer in the perforated fins", SAVAP International,
ISSN 1985-5672, Vol. 5, No. 1, July 2012, pp. 29-40.

















Developments in Wall Climbing Robots: A Review
Raju D. Dethe, Dr. S.B. Jaju
Research Scholar (M.Tech), CAD/CAM, G.H. Raisoni College of Engineering, Nagpur
Professor, G.H. Raisoni College of Engineering, Nagpur

ABSTRACT – The purpose of wall climbing robots is to climb mainly on vertical surfaces such as walls. The robots are required to
have high maneuverability and robust, efficient attachment and detachment. Such a robot can automate tasks that are currently done
manually, with an added degree of human safety, in a cost effective manner. The robot can move in all four directions: forward,
backward, left and right. Other locomotion capabilities include linear movement, turning, lateral movement, and rotating and rolling
movements. Apart from a reliable attachment principle, the robot should have low self weight and high payload capacity, and its
design and control should allow it to be operated from any place; a wireless communication link is used for a high performance
robotic system. Regarding adhesion to the surface, the robots should be able to produce a secure gripping force and should adapt to
different surface environments, from steel, glass and ceramic to wood and concrete, with low energy consumption and cost. This
paper presents a survey of different proposed and adopted climbing robots based on recent technologies developed to fulfil these
objectives.
Keywords: robot, climbing, adhesion, suction, magnetic, electrostatic.
1 INTRODUCTION
Wall climbing robots (WCRs) are special mobile robots that can be used in a variety of applications, such as the inspection and
maintenance of the surfaces of sea vessels, oil tanks and the glass facades of high rise buildings. The need to increase operational
efficiency and to protect human health and safety in hazardous tasks makes the wall climbing robot a useful device. These systems
are mainly adopted where direct access by a human operator is very expensive, because of a hazardous environment or the need for
scaffolding. During navigation, wall climbing robots carry instruments, hence they should have the capability to bear a high payload
with low self weight. Researchers have developed various types of wall climbing robot models since the very first wall climbing
robot, dating back to the 1960s, developed by Nishi and based on a single vacuum suction cup. The two basic design factors for these
mobile robots are adhesion and locomotion. Based on locomotion, the robots can be differentiated into three types: crawler, wheeled
and legged. Although the crawler type is able to move relatively fast, it cannot be applied in rough environments. The legged type,
on the other hand, easily copes with obstacles found in the environment, but its speed is generally lower and it requires a complex
control system. Wheeled robots can achieve relatively high speeds, but they cannot be used on surfaces with larger obstructions.
Based on the adhesion method, the robots can be classified into magnetic, vacuum or suction, grasping gripper, electrostatic and
biologically inspired robots. Magnetic robots are heavy, due to the weight of the magnets, and can only be used on ferromagnetic
surfaces. Vacuum based robots are lightweight and easy to control, but they cannot be used on cracked surfaces because the
compressed air leaks. Biologically inspired robots are still at the development stage as newer materials are tested and improved.
The technology based on electrostatic adhesion, which is lightweight and highly flexible across different types of walls, is also still
developing.
2 CLIMBING ROBOTS DESIGN CONCEPT AND APPLICATIONS
The paper by Shigeo Hirose and Keisuke Arikawa describes two seemingly opposite design and control concepts based on coupled and decoupled actuation of robotic mechanisms. From the viewpoint of controllability, decoupled actuation is better than coupled actuation [5].

Manuel F. Silva's paper presents a survey of different technologies proposed and adopted for climbing robot adhesion to surfaces, focusing on the new technologies that have been developed recently to fulfill these objectives [15].

The paper by H.X. Zhang presents a novel modular caterpillar named ZC-I featuring a fast-building mechanical structure and a low frequency vibrating passive attachment principle [20].


Shanqiang Wu proposes a wireless distributed wall climbing robotic system for reconnaissance purposes [2].

A solution for the inspection of marine vessels is proposed in "Design and control of a lightweight magnetic climbing robot for vessel inspection" by Markus Eich and Thomas Vogele [4].

A paper by Hao Yang and Rong Liu proposes the vibration suction method (VSM), a new kind of suction strategy for wall climbing robots [6].

Stephen Paul Linder has designed a handhold based low cost robot to climb a near vertical indoor climbing wall using computer vision [10].

The paper by Jason Gu presents proposed research on a wall climbing robot with permanent magnetic tracks; the mechanical system architecture is described in the paper [11].

The inspection of large concrete walls with an autonomous system able to overcome small obstacles and cracks is described by K. Berns [16].

Gecko, a climbing robot for climbing vertical walls and ceilings, is presented by F. Cepolina [24].

"Climbing service robots for improving safety" by Bing L. Luk describes how to overcome the traditional manual inspection and maintenance of tall buildings, which normally requires scaffolding and gondolas in which human operators need to work in mid air in a life threatening environment [25].

Houxiang Zhang's paper describes three different kinds of robots for cleaning the curtain walls of high rise buildings [26].

Climbing robots are useful devices that can be adopted in a variety of applications such as Non-Destructive Evaluation (NDE), diagnosis in hazardous environments, welding, construction, cleaning and maintenance of high rise buildings, reconnaissance, and visual inspection of manmade structures. They are also used for inspection and maintenance of ground storage tanks and can be used in any type of surveying process, including inspection of marine vessels to detect damaged areas, cracks and corrosion on large cargo hold tanks and other parts of ships. Small sized wall climbing robots can be used for anti-terror and rescue scout tasks, firefighting, and inspection and maintenance of storage tanks in nuclear power plants, airplanes, petrochemical enterprises etc.
The applications of some of the wall climbing robots are given in the table below.

Table No. 1
SR. NO | AUTHOR | YEAR | APPLICATION
1 | Young Kouk Song, Chang Min Lee | 2008 | Inspection purpose
2 | Love P. Kalra, Weimin Shen, Jason Gu | 2006 | Non destructive inspection
3 | Shanqiang Wu, Mantian Li, Shu | 2006 | Reconnaissance purpose
4 | Markus Eich and Thomas Vogele | 2011 | Vessel inspection
5 | Shuyan Liu, Xueshan Gao, Kejie Li, Jun Li | 2007 | Anti-terror scout
6 | Juan Carlos Grieco, Manuel Prieto | 1998 | Industrial application
7 | K. Berns, C. Hillenbrand | - | Inspection of concrete walls
8 | F. Cepolina, R.C. Michelini | 2003 | Wall cleaning
9 | Bing L. Luk, Louis K. P. Liu | 2007 | Improve safety in building maintenance
10 | Houxiang Zhang, Daniel Westhoff | - | Glass curtain walls cleaning
3 PRINCIPLE OF LOCOMOTION
Wall climbing robots are based on the following three types of locomotion:
a) Wheeled
b) Legged and
c) Suction cups


The following robots come under these categories.
Wheeled wall climbing robots
This section describes the hardware platform of a wall climbing robot called LARVA, shown in Fig. 1, and its control method. LARVA is a robot containing all the components except the power, which is supplied via a tether cable. The total weight of the system is 3.3 kg. Its dimensions are 40.0 cm width, 34.5 cm length and 11.0 cm height. The impellent force generator can evacuate the chamber to 5 kPa (approximately 300 M). Finally, it can move on the wall at a maximum speed of 10 cm/s [1].

The mechanical design of the proposed WCR is shown in Fig. 2. The robot consists of an aluminum frame, motors and drive train, and tracked wheels with permanent magnet plates in evenly spaced steel channels [3].

A differential drive mechanism has been selected for this robot, in which the wheels or tracks on each side of the robot are driven by two independent motors, allowing great maneuverability and the ability to rotate the robot on its own axis. The tracks provide a greater surface area for permanent magnets near the contact surface than normal wheels, creating enough attraction force to keep the robot on the wall and enough flexibility to cross over small obstacles like welding seams, resulting in a more stable locomotion.
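The maneuverability claim can be made concrete with the standard differential-drive kinematics (textbook relations, not taken from the cited design): equal and opposite track speeds yield zero forward velocity and pure rotation about the robot's own axis.

# Standard differential-drive kinematics:
#   v     = (v_right + v_left) / 2            forward speed
#   omega = (v_right - v_left) / track_width  yaw rate
def diff_drive(v_left, v_right, track_width):
    v = (v_right + v_left) / 2.0
    omega = (v_right - v_left) / track_width
    return v, omega

print(diff_drive(-0.1, 0.1, 0.3))  # (0.0, 0.667): rotation in place
print(diff_drive(0.1, 0.1, 0.3))   # (0.1, 0.0): straight-line motion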
The mechanical design of the City-Climber is divided into three main areas: the adhesion mechanism, the drive system and the transition system. The adhesion mechanism is the most critical of these as it allows the robot to adhere to the surface on which it climbs. The drive system is designed to transmit power to the four wheels of the robot and to provide maximum traction as it climbs and moves from a vertical wall to the ceiling [22].

Fig. No. 1 WCR LARVA; Fig. No. 2 Magnetic Wheel WCR; Fig. No. 3 The City-Climber WCR

The legged wall climbing robots are described below.

Distributed Inward Gripping (DIG) advances the concept of directional attachment by directing legs on opposite sides of the body to pull tangentially inward toward the body. The shear forces oppose each other rather than the pull of gravity, allowing the robot to climb on a surface of any orientation with respect to gravity, including ceilings [12].
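A rough static balance illustrates why this works (a simplified sketch, not the analysis of [12]; the quasi-static and equal-load-sharing assumptions are mine): the inward shear forces of opposing legs cancel pairwise, leaving only the weight to be shared tangentially by the engaged legs, regardless of wall orientation.

# Simplified static sketch of Distributed Inward Gripping (DIG).
# Assumptions (not from [12]): quasi-static motion, engaged legs share
# the gravity load equally, inward shear forces cancel in opposing pairs.
def dig_leg_loads(mass_kg, n_engaged_legs, inward_shear_n):
    g = 9.81
    gravity_share = mass_kg * g / n_engaged_legs  # tangential load per leg
    return {"inward_shear_per_leg_N": inward_shear_n,
            "gravity_share_per_leg_N": round(gravity_share, 2)}

# Example: a 2 kg hexapod with 4 legs engaged, each pulling 5 N inward.
print(dig_leg_loads(2.0, 4, 5.0))  # gravity share ~4.9 N per leg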

The REST design was focused on the following main specification features [13]:
- Capacity to carry high payloads (up to 100 kg) on vertical walls and ceilings.
- Some degree of adaptation to traverse obstacles and irregularities.
- High safeness for industrial environment operation.
- Semiautonomous behavior.



Fig. No. 4 DIGbot WCR; Fig. No. 5 Legged WCR
The basic functions of the bio-inspired climbing caterpillar include the following aspects. The climbing caterpillar has to be safely attached to slopes of different materials and has to overcome gravity, so a mechanical structure for safe and reliable attachment to the vertical surface is needed. The research is now focusing on the realization of new passive suckers which will save considerable power. Because of the unique vibrating adsorbing principle, the passive suckers can attach not only to glass but also to tiled walls [20].


The following table shows robots categorized on the basis of method of climbing.

Table No. 2
SR. NO | AUTHOR | YEAR | METHOD OF CLIMBING
1 | Young Kouk Song, Chang Min Lee | 2008 | Impeller with suction seal
2 | Love P. Kalra, Weimin Shen, Jason Gu | 2006 | Magnets
3 | Shanqiang Wu, Mantian Li, Shu | 2006 | Distributed wall climbing
4 | Hao Yang and Rong Liu | 2008 | New vibration suction robotic foot
5 | Akio Yamamoto, Takumi Nakashima | 2007 | Electrostatic attraction
6 | Yu Yoshida and Shugen Ma | 2010 | Passive suction cups
7 | L. R. Palmer III, E. D. Diller | 2009 | Distributed inward gripping
8 | XiaoQi Chen | 2007 | Bernoulli effect
9 | Sangbae Kim | 2008 | Geckos
10 | Philip von Guggenberg | 2012 | Electro adhesion

4 TECHNOLOGIES FOR ADHERING TO SURFACES
Holding a robot on the wall is the basic concept behind the development of an adhesion principle. There are many factors which affect holding, especially on vertical walls and ceilings; forces, robot movement and mechanical design are such factors.
The suction force based wall climbing robots are described below.


Fig. No. 6 Vacuum Cup WCR; Fig. No. 7 Scout Task WCR

This paper proposes the critical suction method (CSM). In this method the suction force includes two types of forces: one is the negative suction force generated inside the suction disc, and the other is the thrust force generated by the propeller. While the robot is adsorbing onto the wall surface, the two forces push it onto the wall safely and improve its obstacle-overleaping abilities. The robot suction system is mainly composed of a suction cup, a flexible sealed ring and a propeller. Once the propeller rotates at full speed, the air vents and a thrust force is produced that pushes the suction cup to the wall. What is more, air enters the suction cup through the flexible sealed ring, which makes the cup achieve a negative pressure state, so there is a pressure force for the robot to stick to the wall. By adjusting the gap between the sealed ring and the wall surface, critical suction can be obtained in the robot suction system. This also meets the demand that the robot can stay on the wall and move smoothly [9].

Fig. 6 shows a materials handling application where a vacuum cup, called a suction cup, is used to establish the force capability to lift a flat sheet. The cup is typically made of a flexible material such as rubber so that a seal can be made where its lip contacts the surface of the flat sheet. A vacuum pump is turned on to remove air from the cavity between the inside of the cup and the top surface of the flat sheet. As the pressure in the cavity falls below atmospheric pressure, the atmospheric pressure acting on the bottom of the flat sheet pushes the flat sheet up against the lip of the cup. This action results in a vacuum pressure in the cavity between the cup and the flat sheet that causes an upward force to be exerted on the flat sheet [23].
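The available lifting force follows directly from the pressure difference acting over the cup's sealed area; a minimal sketch with illustrative numbers:

import math

P_ATM = 101_325.0  # standard atmospheric pressure, Pa

def cup_lift_force(cavity_pressure_pa, cup_diameter_m):
    # F = (P_atm - P_cavity) * A for a circular cup of the given diameter.
    area = math.pi * (cup_diameter_m / 2.0) ** 2
    return (P_ATM - cavity_pressure_pa) * area

# Example: a 100 mm cup evacuated to 40 kPa absolute.
print(f"{cup_lift_force(40_000.0, 0.10):.0f} N")  # ~482 N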

The requirement for the robot to be self-contained, i.e. able to operate throughout its task depending only on the on-board batteries, demands an adhesion mechanism that does not require any external power. Permanent magnets make a great candidate for such a requirement. By carefully selecting the size of the magnets and by introducing an appropriate air gap between the magnet and the wall surface, a very efficient adhesion mechanism is obtained, unlike alternatives such as vacuum suction cups which need a continuous supply of negative pressure to stick [3].
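How much attraction such magnets must provide can be estimated from simple statics (the friction coefficient and safety factor below are assumed values, not figures from [3]): on a vertical ferromagnetic wall, the friction generated by the magnetic normal force has to carry the robot's weight.

# Static sizing of magnetic adhesion on a vertical wall:
# friction must support the weight, i.e. mu * F_magnet >= m * g.
def required_magnet_force(mass_kg, mu, safety_factor=2.0):
    g = 9.81
    return safety_factor * mass_kg * g / mu

# Assumed values: 5 kg robot, mu = 0.4 between the tracks and steel.
print(f"{required_magnet_force(5.0, 0.4):.0f} N")  # ~245 N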
The previous adhesion techniques make the robot suitable for moving on flat walls and ceilings. However, it is difficult for them to move on irregular surfaces and surfaces like wire meshes. In order to surmount this difficulty, some robots climb through manmade structures or through natural environments by gripping themselves to the surface over which they are moving. These robots typically employ grippers [17].




Fig. No. 8 Grasping WCR
A prototype wall climbing robot was designed and fabricated using flexible electrode panels. The robot was designed to utilize the inchworm walking mechanism. Two square frames made of aluminum were connected by a linear guide; their relative position was controlled by two RC servo motors. Electrode panels were redesigned to fit the frame design. On each square frame, two electrode panels measure 130 mm in width and 75 mm in height. Each panel weighs 12 g and the total weight of the robot is 327 g [7].
Geckos are renowned for their exceptional ability to stick to and run on any vertical or inverted surface. However, gecko toes are not sticky in the usual way like duct tape or post-it notes. Instead, they can detach from the surface quickly and remain quite clean around everyday contaminants even without grooming. The two front feet of a tokay gecko can withstand 20.1 N of force parallel to the surface with 227 mm² of pad area, a force as much as 40 times the gecko's weight. Scientists have been investigating the secret of this extraordinary adhesion ever since the 19th century, and at least seven possible mechanisms for gecko adhesion have been discussed over the past 175 years. There have been hypotheses of glue, friction, suction, electrostatics, micro-interlocking and intermolecular forces. Sticky secretions were ruled out first, early in the study of gecko adhesion, since geckos lack glandular tissue on their toes [19].
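The quoted figures are easy to verify with simple arithmetic on the numbers above:

# Checking the gecko figures quoted above.
force_n = 20.1        # shear force sustained by the two front feet
pad_area_mm2 = 227.0  # adhesive pad area
stress = force_n / pad_area_mm2  # shear stress over the pads, N/mm^2
body_weight_n = force_n / 40.0   # the force is ~40x the body weight
body_mass_g = body_weight_n / 9.81 * 1000.0
print(f"shear stress ~ {stress * 1000:.0f} kPa")   # ~89 kPa
print(f"implied body mass ~ {body_mass_g:.0f} g")  # ~51 g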


Fig. No. 9 Stickbot WCR


5 NEW ADHESION PRINCIPLES
Climbing robots based on new principles of adhesion: an overview.
Existing wall climbing robots are often limited to selected surfaces. Magnetic adhesion only works on ferromagnetic metals, and suction pads may encounter problems on surfaces with high permeability; a crack in a wall would cause unreliable functioning of the attachment mechanism and cause the robot to fall off. A robot that can adapt to a variety of wall materials and surface conditions is therefore desirable. To this end, the University of Canterbury has embarked on a research program to develop a novel wall climbing robot which offers reliable adhesion, maneuverability, a high payload-to-weight ratio, and adaptability to a variety of wall materials and surface conditions. The research has led to the development of a novel wall climbing robot based on the Bernoulli effect [14].



Fig. No. 10 Bernoulli Pad based WCR


The proposed robot moves by a crawler driven mechanism and attaches by suction cups. The robot has one motor, which drives the rear pulleys. Several suction cups are installed on the outside surface of the belt at equal intervals, and the cups rotate together with the belt.
The moving process of the robot can be described as follows. First, the robot is attached to a wall: the pushing of the crawler belts makes the suction cups contact and attach to the wall at the front pulleys. Then the guide shafts slide into a guide rail; when a suction cup reaches the rear pulley, it is detached from the wall by the rotation of the belts. A sequence of this process makes the robot move on the wall while keeping adhesion [8].



Fig. No. 11 Passive Suction Cup WCR

To develop a robot capable of climbing a wide variety of materials, design principles adapted from geckos have been taken. The result is Stickybot (Fig. 9), a robot that climbs glass and other smooth surfaces using directional adhesive pads on its toes.
Geckos are arguably nature's most agile smooth surface climbers. They can run at over 1 m/s, in any direction, over wet and dry surfaces of varying roughness and of almost any material, with only a few exceptions such as graphite and Teflon. The gecko's prowess is due to a combination of "design features" that work together to permit rapid, smooth locomotion. Foremost among these features is hierarchical compliance, which helps the gecko conform to rough and undulating surfaces over multiple length scales. The result of this conformability is that the gecko achieves such intimate contact with surfaces that van der Waals forces produce sufficient adhesion for climbing. The gecko adhesion is also directional. This characteristic allows the gecko to adhere with negligible preload in the normal direction and to detach with very little pull-off force, an effect that is enhanced by peeling the toes in digital hyperextension [18].
Electro adhesion exploits the electrostatic force between the material that serves as a substrate and the electro-adhesive pad. The pad is generally made up of polymer coated electrodes or simply of conductive materials. When charges are induced on the electrodes, the field between the electrodes polarizes the dielectric substrate, causing electrostatic adhesion. It is essential to maintain the electro-adhesive pad and the surface in close contact. Since the electrostatic forces decrease dramatically with the square of the distance, the basic idea is to create a structure with two electrodes whose shape, size and separation ensure a high electrostatic field and generate high adhesion forces on different types of material such as wood, glass, paper, ceramics, concrete etc. [7]
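The strong distance dependence mentioned above follows from the parallel-plate estimate of electrostatic pressure (a textbook approximation, not the design equation of [7]; the voltage, gap and permittivity below are assumed values): with E = V/d, the attractive pressure scales as V²/d².

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electroadhesion_pressure(volts, gap_m, eps_r=3.0):
    # Parallel-plate Maxwell-stress estimate: P = 0.5 * eps0 * eps_r * (V/d)^2
    e_field = volts / gap_m
    return 0.5 * EPS0 * eps_r * e_field ** 2  # Pa

# Assumed values: 3 kV across a 50 um effective gap, eps_r = 3.
print(f"{electroadhesion_pressure(3000.0, 50e-6):.0f} Pa")  # ~48 kPa
# Halving the gap (or doubling the voltage) quadruples the pressure.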


Fig. No. 12 WCR for conductive walls


SRI International is introducing wall climbing robot prototypes for surveillance, inspection and sensor placement applications. Ideal for remote surveillance or inspection of concrete pillars or other structures, this robot uses SRI's patented electro adhesion technology to enable wall climbing. It can also be used to carry payloads such as cameras, wireless network nodes, and other sensors [27].

Fig. No. 13 SRI’s WCR















6 Limitations of WCR
Some of the limitations of different wall climbing robots are given in the following tabular form.

Table No. 3
SR. NO | AUTHOR | YEAR | OBJECTIVE OF STUDY | OUTCOME | LIMITATION
1 | Love P. Kalra, Weimin Shen, Jason Gu | 2006 | A wireless wall climber | Used magnets for adhesion | Limited to ferrous walls & less battery life
2 | Shanqiang Wu, Mantian Li, Shu | 2006 | Wireless operation | Distributed wall climbing | Mother & child two robots
3 | Markus Eich and Thomas Vogele | 2011 | Light weight robot | Used LED based sensor | Crawler failed if another bright light spot was found nearby
4 | Akio Yamamoto, Takumi Nakashima | 2007 | To realize electrostatic adhesion | Improvement of speed | Very low speed
5 | Yu Yoshida and Shugen Ma | 2010 | Passive suction cup based | Prototype fails due to larger power requirements | Mechanism was to be improved
6 | Stephen Paul Linder, Edward Wei | 2005 | Balancing of hands & legs | Computer vision reliably locates itself | Less flexibility
7 | L. R. Palmer III, E. D. Diller | 2009 | Design hexapod for advanced maneuvering | Leg motion & body balancing | Gripping limited to tangential force
8 | Juan Carlos Grieco | 1998 | High payload carrying | Complexity of design | High self weight
9 | Wikipedia | - | Study of bio-inspired robots | Climbs smooth walls | Cost/less research on material
10 | Jizhong Xiao and Ali Sadegh | - | Modular climbing caterpillar | A highly integrated robotic system | Manufacturing complexity
7 CONCLUSIONS
During the last two decades, interest in climbing robotic systems has grown steadily. Their main intended applications range from cleaning to inspection of difficult-to-reach constructions. This paper presented a survey of different technologies proposed and adopted for climbing robot adhesion to surfaces, focusing on the new technologies that are presently being developed to fulfill these objectives. A lot of improvement is expected in the future design of wall climbing robots depending upon their utility. This paper gives a short review of the existing wall climbing robots.

REFERENCES:
[1] Young Kouk Song, Chang Min Lee, Ig Mo Koo, Duc Trong Tran, Hyungpil Moon and Hyouk Ryeol Choi, "Development of Wall Climbing Robotic System for Inspection Purpose", IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008, pp. 1990-1995.
[2] Shanqiang Wu, Mantian Li, Shu Xiao and Yang Li, "A Wireless Distributed Wall Climbing Robotic System for Reconnaissance Purpose", IEEE International Conference on Mechatronics and Automation, 2006, pp. 1308-1312.
[3] Love P. Kalra, Weimin Shen, Jason Gu, "A Wall Climbing Robotic System for Non Destructive Inspection of Above Ground Tanks", IEEE CCECE/CCGEI, Ottawa, May 2006, pp. 402-405.

[4] Markus Eich and Thomas Vogele, "Design and Control of a Lightweight Magnetic Climbing Robot for Vessel Inspection", IEEE 19th Mediterranean Conference on Control and Automation, Aquis Corfu Holiday Palace, Corfu, Greece, June 20-23, 2011, pp. 1200-1205.
[5] Shigeo Hirose and Keisuke Arikawa, "Coupled and Decoupled Actuation of Robotic Mechanisms", IEEE International Conference on Robotics & Automation, San Francisco, CA, April 2000, pp. 33-39.
[6] Hao Yang, Rong Liu, Qingfeng Hong and Na Shun Bu He, "A Miniature Multi-Joint Wall-Climbing Robot Based on New Vibration Suction Robotic Foot", IEEE International Conference on Automation and Logistics, Qingdao, China, September 2008, pp. 1160-1165.
[7] Akio Yamamoto, Takumi Nakashima and Toshiro Higuchi, "Wall Climbing Mechanisms Using Electrostatic Attraction Generated by Flexible Electrodes", IEEE, 2007, pp. 389-394.
[8] Yu Yoshida and Shugen Ma, "Design of a Wall-Climbing Robot with Passive Suction Cups", IEEE International Conference on Robotics and Biomimetics, December 14-18, 2010, Tianjin, China, pp. 1513-1870.
[9] Shuyan Liu, Xueshan Gao, Kejie Li, Jun Li and Xingguang Duan, "A Small-Sized Wall-Climbing Robot for Anti-Terror Scout", IEEE International Conference on Robotics and Biomimetics, December 15-18, 2007, Sanya, China, pp. 1866-1870.
[10] Stephen Paul Linder, Edward Wei, Alexander Clay, "Robotic Rock Climbing Using Computer Vision and Force Feedback", IEEE International Conference on Robotics and Automation, Barcelona, Spain, April 2005, pp. 4685-4690.
[11] Weimin Shen, Jason Gu and Yanjun Shen, "Proposed Wall Climbing Robot with Permanent Magnetic Tracks for Inspecting Oil Tanks", IEEE International Conference on Mechatronics & Automation, Niagara Falls, Canada, July 2005, pp. 2072-2077.
[12] L. R. Palmer III, E. D. Diller and R. D. Quinn, "Design of a Wall-Climbing Hexapod for Advanced Maneuvers", IEEE/RSJ International Conference on Intelligent Robots and Systems, October 11-15, 2009, St. Louis, USA, pp. 625-630.
[13] Juan Carlos Grieco, Manuel Prieto, Manuel Armada, Pablo Gonzalez de Santos, "A Six-Legged Climbing Robot for High Payloads", IEEE International Conference on Control Applications, Trieste, Italy, 1-4 September 1998, pp. 446-450.
[14] XiaoQi Chen, Matthias Wager, Mostafa Nayyerloo, Wenhui Wang and J. Geoffrey Chase, "A Novel Wall Climbing Robot Based on Bernoulli Effect".
[15] Manuel F. Silva, J. A. Tenreiro Machado, "New Technologies for Climbing Robots Adhesion to Surfaces".
[16] K. Berns, C. Hillenbrand, Robotics Research Lab, Department of Computer Science, Technical University of Kaiserslautern, "A Climbing Robot Based on Under Pressure Adhesion for the Inspection of Concrete Walls".
[17] K. Berns, C. Hillenbrand, T. Luksch, University of Kaiserslautern, 67653 Kaiserslautern, Germany, "Climbing Robots for Commercial Applications – A Survey".
[18] Sangbae Kim, Matthew Spenko, Salomon Trujillo, Barrett Heyneman, Daniel Santos and Mark R. Cutkosky, "Smooth Vertical Surface Climbing with Directional Adhesion", IEEE Transactions, Vol. 24, No. 1, February 2008, pp. 1-10.
[19] "Synthetic setae", from Wikipedia, the free encyclopedia.
[20] H. X. Zhang, J. González-Gómez, S. Y. Chen, W. Wang, R. Liu, D. Li, J. W. Zhang, "A Novel Modular Climbing Caterpillar Using Low-Frequency Vibrating Passive Suckers".
[21] Jizhong Xiao and Ali Sadegh, The City College, City University of New York, USA, "City-Climber: A New Generation of Wall-Climbing Robots", in Climbing & Walking Robots: Towards New Applications, book edited by Houxiang Zhang.
[22] William Morris, Class of 2008, Mechanical Engineering; Mentor: Jizhong Xiao, Department of Electrical Engineering, "City-Climber: Development of a Novel Wall-Climbing Robot".
[23] Surachai Panich, "Development of a Wall Climbing Robot", Srinakharinwirot University, 114 Sukhumvit 23, Bangkok 10110, Thailand.
[24] F. Cepolina, R. C. Michelini, R. P. Razzoli, M. Zoppi, PMAR Lab – Dept. of Mechanics and Machine Design, University of Genova, Via all'Opera Pia 15/A, 16145 Genova, "Gecko, a Climbing Robot for Walls Cleaning", 1st Int. Workshop on Advances in Service Robotics ASER03, March 13-15, Bardolino, Italy, 2003.
[25] Bing L. Luk, Louis K. P. Liu and Arthur A. Collie, "Climbing Service Robots for Improving Safety in Building Maintenance Industry", Bioinspiration and Robotics: Walking and Climbing Robots, 2007, pp. 127-146.
[26] Houxiang Zhang, Daniel Westhoff, Jianwei Zhang, Guanghua Zong, "Service Robotic Systems for Glass Curtain Walls Cleaning on the High-Rise Buildings", Seminar on Robotics in New Markets and Applications.
[27] Philip von Guggenberg, Director Business Development, SRI International, Silicon Valley.





Comparative Analysis of Improved Domino Logic Based Techniques for VLSI Circuits
Shilpa Kamde1, Dr. Sanjay Badjate2, Pratik Hajare1

1Research Scholar
Email- sshilpa_11@ymail.com

ABSTRACT - In modern VLSI design, domino logic based design techniques are widely used, and in them power consumption rises with the speed of the circuit. Dynamic (domino) logic circuits are often favored in high performance designs because of their high speed and low area advantages. But in integrated circuits the power consumed by clocking gradually takes a dominant part, and therefore our research work in this paper is mainly focused on studying the comparative performance of various domino logic based techniques proposed in the last decade, viz. the basic domino logic technique, domino with keeper, high speed leakage tolerant domino, low swing domino logic, domino logic with variable threshold voltage keeper, and sleep switch dual threshold voltage domino.
This work evaluates the performance of the different domino techniques in terms of delay, power and their product on the BSIM4 model using the Agilent Advanced Design System tool. The domino techniques compared in this work were found to have optimized area, power and delay, and hence a better power delay product (PDP), as compared with standard domino.
The main focus of this research work is to find the best possible trade-off that would optimize multiple goals, viz. area, power, speed and noise immunity, at the same time to meet the multi-objective goal of our future research work.

Keywords - Domino logic circuit, domino logic with keeper, high speed and leakage tolerant domino, low swing domino, domino logic with variable threshold voltage keeper, sleep switch dual threshold voltage domino.
INTRODUCTION
Domino logic circuit techniques are extensively applied in high-performance microprocessors due to the superior speed and area characteristics of dynamic CMOS circuits as compared to static CMOS circuits. The high-speed operation of domino logic circuits comes primarily at the cost of lower noise margins as compared to static gates [1,2]. Domino logic offers speed and area advantages over conventional static CMOS and is especially useful for implementing complex logic gates with large fan-outs. A limitation of the domino technique is that only non-inverting gates are possible. This limits the logic flexibility and implies that logic inversion has to be performed at the inputs or outputs of blocks of domino logic [2]. In this paper, we explore various domino logic based techniques for combinational circuit design for high fan-in and high speed applications in deep submicron VLSI technology.
DOMINO LOGIC TECHNIQUES
A. Basic Domino Logic

Domino CMOS was proposed in 1982 by Krambeck. It has the same structure as dynamic logic gates, but adds a static buffering CMOS inverter to the output. The introduction of the static inverter has the additional advantage of a low-impedance output, which increases noise immunity and drives the fan-out of the gate. The buffer furthermore reduces the capacitance of the dynamic output node by separating internal and load capacitance, and it can be optimized to drive the fan-out in an optimal way for high speed. This logic is the most common form of dynamic gates, achieving a 20% to 50% performance increase over static logic [3].
The basic domino logic family evolved from PMOS and NMOS transistors and therefore retains two phases of operation. A single clock is used for both the precharge and evaluation phases. This circuitry incorporates a static CMOS buffer into each logic gate as shown in Figure 1. During the precharge phase the clock input is low (CLK=0), the PMOS transistor is ON and the NMOS transistor is OFF; node Vo is charged up to Vdd and the output from the inverter is close to the 0 voltage level. In this phase there is no path from the pull-down network to Vo [9].

Next, during the evaluation phase, the NMOS transistor is ON, creating a path from node Vo through the pull-down network to ground. Node Vo is discharged and the inverter drives the output to one. It should be noted that in domino logic the transition of node Y is always from low to high, and it ripples through the logic from the primary inputs to the primary outputs.
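The two-phase timing described above can be captured in a few lines of behavioral Python (a logic-level sketch, not a transistor-level model):

# Behavioral sketch of a domino AND gate with precharge/evaluate phases.
def domino_and(clk, a, b, state):
    # state['vo'] models the dynamic node; the return value is the
    # static inverter output.
    if clk == 0:          # precharge: clock-driven PMOS charges Vo to Vdd
        state["vo"] = 1
    elif a and b:         # evaluate: pull-down network conducts
        state["vo"] = 0   # Vo can only fall -- the transition is one-way
    return 1 - state["vo"]

s = {"vo": 1}
print(domino_and(0, 1, 1, s))  # precharge -> output 0
print(domino_and(1, 1, 1, s))  # evaluate  -> output rises 0 -> 1 ("domino")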


Fig. 1 Basic Domino logic circuit
B. Domino Logic Circuit with Keeper
The keeper technique improves the noise immunity and avoids the charge sharing problem of the domino logic circuit. The keeper is a weak pMOS transistor that holds the output at the correct level when it would otherwise float. When the dynamic node is high, the output is low and the keeper is ON to prevent the dynamic node from floating (Figure 2). When the dynamic node (Y) falls, the keeper initially opposes the transition, so it must be much weaker than the pull-down network. Eventually Z rises, turning the keeper OFF and avoiding static power dissipation.
The keeper must be strong enough to compensate for any leakage current drawn when the output is floating and the pull-down stack is OFF. Increasing the width of the keeper transistor increases delay, so keeper transistors are on the order of 1/10 the strength of the pull-down stack [5].

Fig. 2 Domino logic circuit with keeper
C. High Speed Leakage Tolerant Domino
The HSLDT circuit scheme is shown in Figure 3. Transistor M3 is used as a stacking transistor. Due to the voltage drop across M3, the gate-to-source voltage of the NMOS transistors in the PDN (pull-down network) decreases. M7 causes the stacking effect and makes the gate-to-source voltage of M6 smaller (M6 less conducting). Hence the circuit becomes more noise robust and consumes less leakage power, but performance degrades because of the stacking effect in the mirror current path. This can be compensated by widening M2 (high W/L) to make it more conducting [6].
If there is noise at the inputs at the onset of evaluation, the dynamic node can be discharged, resulting in wrong evaluation.


Fig. 3 High Speed Leakage Tolerant Domino Circuit
D. Low Swing Domino Logic
The low swing domino technique is applied to reduce dynamic switching power. Two techniques fall under the low swing domino circuit. The first is low swing domino with fully driven keeper (LSDFDK), in which the output voltage swings between ground and VDD−Vtn. The second is the low swing domino circuit with weakly driven keeper (LSDWDK).

Fig. 4.a: LSDFDK; Fig. 4.b: LSDWDK
Fig. 4 Low Swing Domino Logic
These techniques reduce the voltage swing at the output node by using an NMOS transistor as the pull-up transistor. The first technique improves the delay and power while maintaining robustness against noise. The second technique reduces the contention current by reducing the gate voltage swing of the keeper transistor. LSDWDK generates two different voltage swings: the output voltage swings between ground and VDD−Vtn, and the gate voltage swings between |Vtp| and VDD [2].


E. Domino Logic with Variable Threshold Voltage Keeper (DVTVK)
The DVTVK circuit operates in the following manner. When the clock is low, the pull-up transistor is on and the dynamic node is charged to VDD1. The substrate of the keeper is charged to VDD2 (VDD2 > VDD1) by the body bias generator, increasing the keeper threshold voltage. The value of the high threshold voltage (high-Vt) of the keeper is determined by the reverse body bias voltage (VDD2 − VDD1) applied to the source-to-substrate p-n junction of the keeper. The current sourced by the high-Vt keeper is reduced, lowering the contention current when the evaluation phase begins. A reduction in the current drive of the keeper does not degrade the noise immunity during precharge, as the dynamic node voltage is maintained during this phase by the pull-up transistor rather than by the keeper.

When the clock goes high (the evaluation phase), the pull-up transistor is cut off and only the high-Vt keeper current contends with the current from the evaluation path transistor(s). Provided that an input combination that discharges the dynamic node is applied in the evaluation phase, the contention current due to the high-Vt keeper is significantly reduced as compared to standard domino logic. After a delay determined by the worst case evaluation delay of the domino gate, the body bias voltage of the keeper is reduced to VDD1, zero-biasing the source-to-substrate p-n junction of the keeper. The threshold voltage of the keeper is lowered to the zero body bias level, thereby increasing the keeper current. The DVTVK keeper then has the same threshold voltage as a standard domino (SD) keeper, offering the same noise immunity during the remaining portion of the evaluation phase.

Fig. 5 Domino Logic with Variable Threshold Voltage Keeper
The threshold voltage of the keeper transistor is thus dynamically modified during circuit operation to reduce contention current without sacrificing noise immunity.
F. Sleep Switch Dual Threshold Voltage Domino Logic
The operation of this transistor is controlled by a separate sleep signal. During the active mode of operation, the sleep signal is set low, the sleep switch is cut off, and the proposed dual-Vt circuit operates as a standard dual-Vt domino circuit. During the standby mode of operation, the clock signal is maintained high, turning off the high-Vt pull-up transistor of each domino gate. The sleep signal transitions high, turning on the sleep switch. The dynamic node of the domino gate is discharged through the sleep switch, thereby turning off the high-Vt NMOS transistor within the output inverter. The output transitions high, cutting off the high-Vt keeper.

Fig. 6 Sleep Switch Dual Threshold Voltage Domino Logic
After a sleep switch dual-Vt domino gate is forced to evaluate, the following gates (fed by the non-inverting signals) also evaluate in a domino fashion. After the node voltages settle to a steady state, all of the high-Vt transistors in the circuit are strongly cut off, significantly reducing the subthreshold leakage current.
The sleep switch circuit technique thus exploits the dual-Vt transistors to reduce the subthreshold leakage current by strongly cutting off all of the high-Vt transistors.
POWER DISSIPATION
The power consumed by a CMOS circuit is classified into two types:
- Static power dissipation
- Dynamic power dissipation
i. Static Power Dissipation: This is the power dissipation due to leakage currents which flow through a transistor when no transitions occur and the transistor is in a steady state. Static power dissipation in a CMOS inverter is negligible [6].
ii. Dynamic Power Dissipation: During input transitions the PMOS and NMOS transistors are momentarily on simultaneously; while the input changes from low to high or from high to low, the pMOS and nMOS turn on together for a short time. During this time a current flows from Vdd to GND (a short-circuit path) and dynamic power is produced. The dynamic power dissipation is proportional to the square of the supply voltage [7-8].
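The quadratic supply-voltage dependence is the standard switching-power relation; a sketch with assumed, illustrative parameter values:

# Dynamic switching power: P = alpha * C_load * Vdd^2 * f_clk
def dynamic_power(alpha, c_load_f, vdd_v, f_hz):
    return alpha * c_load_f * vdd_v ** 2 * f_hz

# Assumed values: activity factor 0.1, 50 fF load, 1 GHz clock.
p_1v0 = dynamic_power(0.1, 50e-15, 1.0, 1e9)
p_0v8 = dynamic_power(0.1, 50e-15, 0.8, 1e9)
print(f"{p_1v0 * 1e6:.1f} uW -> {p_0v8 * 1e6:.1f} uW")  # 5.0 uW -> 3.2 uW
# A 20% supply reduction cuts dynamic power by 36% (0.8^2 = 0.64).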

SIMULATION AND RESULT
In this work, OR and AND logic gates were used for the implementation of the six techniques. The power consumption (Pavg), propagation delay (Tpd) and power delay product (PDP) are used to compare these techniques. The circuits implemented are OR gates with 4 and 6 inputs and AND gates with 4 and 6 inputs. These design styles are compared by performing detailed transistor-level simulations of the circuits using the Advanced Design System (ADS). The results of the circuits for all techniques are given below. Table 1 shows the comparison of all the techniques for the four input OR gate. Table 2 shows the comparison of all six techniques with the standard domino circuit for the six input OR gate. Table 3 shows the comparison of all six techniques for the four input AND gate. Table 4 shows the comparison of all six techniques with the standard domino circuit for the six input AND gate.

From the results, it can be observed that the domino logic techniques, viz. domino logic circuit with keeper, high speed leakage tolerant domino, low swing domino, domino logic with variable threshold voltage keeper, and sleep switch dual threshold voltage domino, provide lower values of power dissipation, propagation delay and PDP when compared to the standard domino logic structure. The propagation delay (Tpd, s), power consumption (Pavg, W) and power delay product (PDP, W·s) were calculated and plotted in the form of graphs.
Table.1: Comparison for four input OR gate
Technique | Tpd (s) | Pavg (W) | PDP (W·s)
Domino | 3.77E-08 | 3.77E-06 | 1.42E-13
Keeper | 3.76E-08 | 4.22E-06 | 1.59E-13
HSLDT | 3.78E-08 | 2.19E-06 | 8.21E-14
LSDFDK | 3.77E-08 | 5.85E-06 | 2.20E-13
DVTVK | 3.77E-08 | 2.72E-05 | 1.02E-12
SLS | 3.77E-08 | 4.69E-05 | 1.77E-12
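Since PDP is simply the product Tpd × Pavg, the tabulated values can be re-derived from the delay and power columns; a quick consistency check over the Table 1 data:

# Recomputing PDP = Tpd * Pavg for the four input OR gate (Table 1).
table1 = {
    "Domino": (3.77e-08, 3.77e-06),
    "Keeper": (3.76e-08, 4.22e-06),
    "HSLDT":  (3.78e-08, 2.19e-06),
    "LSDFDK": (3.77e-08, 5.85e-06),
    "DVTVK":  (3.77e-08, 2.72e-05),
    "SLS":    (3.77e-08, 4.69e-05),
}
for name, (tpd, pavg) in table1.items():
    print(f"{name:7s} PDP = {tpd * pavg:.2e} W*s")
# HSLDT yields the smallest product (~8.3e-14 W*s), matching the
# table to rounding.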

Table.2: Comparison for six input OR gate
Technique | Tpd (s) | Pavg (W) | PDP (W·s)
Domino | 1.05E-07 | 4.45E-06 | 4.67E-13
Keeper | 1.03E-07 | 6.46E-05 | 6.68E-12
HSLDT | 1.06E-07 | 6.67E-06 | 7.07E-13
LSDFDK | 1.05E-07 | 7.55E-06 | 7.91E-13
DVTVK | 3.61E-08 | 2.31E-04 | 8.37E-12
SLS | 1.06E-07 | 6.70E-05 | 7.07E-12

Table.3: Comparison for four input AND gate
Technique | Tpd (s) | Pavg (W) | PDP (W·s)
Domino | 5.09E-09 | 1.057E-06 | 5.38E-15
Keeper | 1.01E-08 | 8.11E-07 | 8.19E-15
HSLDT | 5.00E-09 | 5.73E-07 | 2.87E-15
LSDFDK | 1.01E-08 | 6.10E-07 | 6.16E-15
DVTVK | 5.00E-09 | 4.68E-05 | 2.34E-13
SLS | 1.02E-08 | 3.95E-05 | 4.01E-13

Table.4: Comparison for six input AND gate
Technique | Tpd (s) | Pavg (W) | PDP (W·s)
Domino | 5.09E-09 | 1.48E-06 | 7.52E-15
Keeper | 1.01E-08 | 8.09E-07 | 8.17E-15
HSLDT | 1.34E-09 | 2.99E-06 | 4.00E-15
LSDFDK | 5.12E-09 | 3.01E-07 | 1.54E-15
DVTVK | 1.00E-08 | 5.19E-05 | 5.19E-13
SLS | 1.02E-08 | 3.64E-05 | 3.71E-13


Chart.1: Comparison for four input OR gate
Chart.2: Comparison for six input OR gate
Chart.3: Comparison for four input AND gate
Chart.4: Comparison for six input AND gate
(Each chart plots the Pavg and Tpd values of the six techniques.)

CONCLUSION
In this work, an attempt has been made to simulate OR and AND gates for four and six inputs using six domino based techniques, including basic (standard) domino.
The comparative analysis from Table 1 for the 4 input OR gate showed that HSLDT had less power, less Tpd and a lower PDP compared to the other domino techniques.
The comparative analysis of Table 2 showed that for the maximum number (six) of inputs for the OR gate, the basic domino logic technique is better because it had low power consumption and PDP, but DVTVK had less Tpd.
Similarly, the comparison for the four input AND gate in Table 3 showed that HSLDT had less power, less Tpd and a lower PDP compared to the other techniques. The table also showed that the propagation delay of DVTVK was equal to that of HSLDT.
The comparative analysis of Table 4 showed that for the maximum number (six) of inputs for the AND gate, the LSDFDK technique is better because it had low power consumption and less PDP, but HSLDT had less Tpd.

REFERENCES:

[1] V. Kursun and E. G. Friedman, "Variable Threshold Voltage Keeper for Contention Reduction in Dynamic Circuits," Proceedings of the IEEE International
[2] Volkan Kursun and Eby G. Friedman, "Speed and Noise Immunity Enhanced Low Power Dynamic Circuits", Department of Electrical and Computer Engineering, University of Rochester, Rochester, New York, 2005.
[3] Jaume Segura, Charles F. Hawkins, "CMOS Electronics: How It Works, How It Fails", IEEE Press, John Wiley & Sons, Inc. Publications.
[4] Farshad Moradi, Dag T. Wisland, Hamid Mahmoodi and Tuan Cao, "High Speed and Leakage Tolerant Domino Circuits for High Fan-in Applications in 70 nm CMOS Technology", IEEE Proceedings of the 7th International Caribbean Conference on Devices, Circuits and Systems, Mexico, Apr. 28-30, 2008.
[5] Neil H.E. Weste, David Harris, Ayan Banerjee, "CMOS VLSI Design", Third edition, Pearson Education, 2006.
[6] H. Mahmoodi-Meimand, Kaushik Roy, "A Leakage-Tolerant High Fan-in Dynamic Circuit Design Style," IEEE Trans., 2004.
[7] Salendra Govindarajulu, Dr. T. Jayachandra Prasad, P. Rangappa, "Low Power, Reduced Dynamic Voltage Swing Domino Logic Circuits", Indian Journal of Computer Science and Engineering, Vol. 1, No. 2, pp. 74-81, 2011.
[8] Sung-Mo Kang, Yusuf Leblebici, "CMOS Digital Integrated Circuits", Tata McGraw-Hill Publishing Company Limited, 2004.
[9] Vojin G. Oklobdzija and Robert K. Montoye, "Design Performance Trade-Offs in CMOS Domino Logic", IEEE Journal of Solid-State Circuits, Vol. SC-12, No. 2, 1987.
[10] Preetisudha Meher, K.K. Mahapatra, "A New Ultra Low-Power and Noise Tolerant Circuit Technique for CMOS Domino Logic", ACEEE Int. J. on Information Technology, Vol. 01, No. 03, Dec 2011.
[11] Volkan Kursun and Eby G. Friedman, "Low Swing Dual Threshold Voltage Domino Logic", Dept. of Electrical and Computer Engineering, University of Rochester, New York, 14627-0231.
[12] Srinivasa V S Sarma D and Kamala Kanta Mahapatra, "Improved Technique for High Performance Noise Tolerant Domino CMOS Logic Circuit".
[13] Salendra Govindarajulu, Dr. T. Jayachandra Prasad, P. Rangappa, "Energy Efficient, Noise-Tolerant CMOS Domino VLSI Circuits in VDSM Technology", Indian Journal of Advanced Computer Science and Application, Vol. 2, No. 4, 2011.
[14] Volkan Kursun and Eby G. Friedman, "Sleep Switch Dual Threshold Voltage Domino Logic", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 12, No. 5, May 2004.





Review Paper on Leak Detection
S.B. Kakuste1, U.B. Bhujbal1, S.V. Devkar1

1Department of Mechanical Engineering, Sinhgad Institute of Technology, Lonavala, Maharashtra, India
Email- sandy.kakuste@gmail.com

ABSTRACT - The words "leak" and "leakage" appear in the field of hermetically closed vessels and are encountered not only in vacuum technologies but also in engineering with high pressures. Practically, it is impossible to build a completely leak proof vacuum system. There are many applications in industry where it is necessary to test a hollow fabricated body for fluid leakage. A number of leak testing methods have been proposed for testing hollow components. This paper gives a review of various methods of leak detection for vacuum systems.
Keywords: pressure decay, water bubble test, vacuum, helium leak detectors, helium mass spectrometer, radioisotope method, dye penetrant method, fluid transient model.
INTRODUCTION
All sealed systems leak. Every pressure system has leaks because "imperfections" exist at every joint, fitting, seam or weld. These "imperfections" may be too small to detect even with the best of leak detection instruments, but given time, vibration, temperature and environmental stress, these "imperfections" become larger, detectable leaks.
A LEAK IS NOT... some arbitrary reading on a meter. Gas escapes at different times and at different rates. In fact, some leaks cannot be detected at the time of the test. Leaks may plug, and then re-open under uncommon conditions.
A LEAK IS... a physical path or hole, usually of irregular dimensions. The leak may be the tail end of a weld fracture, a speck of dirt on a gasket or a microgroove between fittings.
Production leak testing is implemented to verify the integrity of a manufactured part. It can involve 100% testing or sample inspection. The goal of production leak testing is to prevent "leaky" parts from getting to the customer. Because manufacturing processes and materials are not "perfect", leak testing is often implemented as a final inspection step. In some cases, leak testing is mandated by a regulation or industry specification. For example, in order to reduce hydrocarbon emissions from automobiles, auto makers are now designing and leak testing fuel components to tighter specifications required by the EPA. Also, the nuclear industry enforces regulations and leak test specifications on components such as valves used in nuclear facilities. Whether mandated by regulation or implemented to ensure product function and customer satisfaction, leak testing is commonly performed on manufactured parts in many industries including automotive, medical, packaging, appliance, electrical, aerospace, and other general industries.
One of the greatest challenges in production leak testing is correlating an unacceptable leaking part in use by the customer (in the field) with a leak test on a production line. For example, the design specification of a water pump may require that no water leaks externally from the pump under specified pressure conditions. However, in production it may be desirable to leak test the part with air. It is intuitive to assume that air will leak more readily through a defect than water. One cannot simply state "no leakage" or even "no leakage using an air pressure decay test"; this would result in an unreasonably tight test specification, an expensive test process and potential scrap of parts that may perform acceptably in the field. Therefore, one must set a limit using an air leak test method that correlates to a water leak. Establishing the proper leak rate reject limit is critical to ensure part performance and to minimize unnecessary scrap in the manufacturing process. Determining the leak rate specification for a test part can be a significant challenge. Having a clear and detailed understanding of the part and its application is necessary in order to establish the leak rate specification. Even then, many specifications are estimates and often require the use of safety factors.

The automotive industry has implemented a leak rate specification for fuel handling components that specifies a maximum allowable theoretical leak diameter. The advantage of this way of expressing the leak rate limit is that it gives the part manufacturer significant leeway in designing the appropriate leak test. The challenge, however, is correlating the theoretical leak diameter to an actual leak rate. Users of these specifications must understand the theoretical relationships between leak hole geometry and gas flow, and all users must implement these relationships consistently. A second option is to set the leak rate limit of the specific test using a leak orifice or channel that has been manufactured and dimensionally calibrated to the geometry (diameter and path length) required by the specification.
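In practice, a pressure-decay limit is converted to a volumetric leak rate with the relation Q = V·ΔP/Δt; a minimal sketch with illustrative numbers:

# Pressure-decay leak rate: Q = V * dP / dt, in Pa*m^3/s.
def leak_rate(volume_m3, pressure_drop_pa, test_time_s):
    return volume_m3 * pressure_drop_pa / test_time_s

# Example: a 0.5 L part losing 20 Pa over a 30 s test.
q = leak_rate(0.5e-3, 20.0, 30.0)
print(f"Q ~ {q:.1e} Pa*m^3/s")  # ~3.3e-04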

[1] N. Hilleret
In this paper, various methods of leak detection are explained, along with information about the instruments used for leak detection. In the case of vacuum vessels, it is necessary to check the tightness of the vessel before installation in order to guarantee leak-proofness. Depending upon the size of the leak, the method of leak detection is selected from various methods. All methods are based on the variation of a physical property measured along the vacuum vessel. A large leak's gas flow can generate mechanical effects, but for small leaks finer methods are required.
The various methods of leak detection, such as tracer gas, helium leak detectors, the direct flow method, the counter flow method, and the detector probe method (sniffer), as well as the characteristics of the detector and its use, are described in this paper.

[2] K. Zapfe
This paper gives an introduction to leak detection of vacuum systems. The helium leak detector and its different applications, along with various leak detection methods, are described. The helium leak detector is the most widely used method in industry. It is important to specify an acceptable leak rate for each vacuum system.
Leak detection plays an important role in manufacturing. After manufacturing of a vacuum vessel it must be proven that the tightness specifications are fulfilled. Further checks are necessary during as well as after assembly and installation to locate possible leaks. For that, various methods like mechanical effects, pressure increase, tracer gas, helium leak detector, direct flow and counter flow are introduced in this paper. The leakage rate, types of leaks, practical experience and examples of leak detection, and different applications of the helium leak detector are explained in this paper.

[3] Andrej Pregelj et al
In industry there is a need to manufacture defect-free hermetically closed elements. This paper discusses leak detection methods and the definition of leak sizes, and describes the maximum acceptable leak rate, according to which a product should be accepted or rejected. Various methods of leak detection, i.e. the pressure change method, overpressure method, halogen leak detector, dye penetrant method, acoustical leak detection, radioisotope method, the mass spectrometer as leak detector, and the helium mass spectrometer, are described in this paper.

[4] Donald T. Soncrant
This paper describes a method to improve the speed of testing hollow parts for fluid leakage, consisting of a closed charge valve, an open charge valve, a compressor and the hollow workpiece. A time delay valve is used to regulate the pressurized air supply. When the time delay valve cuts off, the test valve is actuated and measures the flow rate through the hollow component; if the workpiece is acceptable it turns ON the 'accept' light. If the flow rate exceeds a predetermined value, it turns ON the 'reject' light.

This leakage testing method is used in industry for testing hollow bodies for fluid leakage. Electronically actuated valves and relays are used to conduct the test in a sequence. No special voltage reduction, filtering or voltage regulating devices are required, and operation is independent of voltage variation. This method is more reliable and less complex, hence it is used in industry for testing hollow components.

[5] Joachim W. Pauly
In this paper, a vessel such as a submarine is selected for testing of air leakage by establishing a pressure level and a test flow to the vessel. For determining the leakage of air from the vessel, the pressure difference in the vessel is monitored, and whether the leakage rate from the vessel exceeds a predetermined rate is determined by relating the test flow rate to its effect on the pressure level in the vessel.
In the 1st operation, a variable test flow is delivered to the vessel and adjusted as needed to maintain the pressure in the vessel at the test level; the rate of this flow is measured when stabilized and the measured values are converted into standard units. In the 2nd operation, a constant flow rate equivalent to the leakage of the vessel is delivered to the vessel, and the resulting pressure difference in the vessel indicates the relation between the leakage rate and the test flow rate.
[6] Sami Elaoud et al
This paper presents a technique for the detection and location of leakages in a single pipe by means of transient analysis of hydrogen-natural gas mixture flows. In this technique, transient pressure waves are used which are initiated by the sudden closure of a downstream shut-off valve. The purpose of the paper is to present a numerical procedure utilizing transient-state pressure and discharge analysis to detect leakage in a piping system carrying a hydrogen and natural gas mixture. The presence of a leak in the pipe partially reflects the transient pressure waves, which allows for location of the leak. To determine the leak location, the mathematical formulation has been solved by the method of characteristics with specified time intervals.
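The location step reduces to timing the reflected wave (a sketch of the underlying relation, not the authors' full characteristics scheme): if the reflection from the leak arrives a time Δt after the initial valve-closure wave, the leak lies at x = a·Δt/2 from the measuring point, where a is the wave propagation speed.

# Leak location from transient-wave reflection timing: x = a * dt / 2.
def leak_location_m(wave_speed_m_s, reflection_delay_s):
    return wave_speed_m_s * reflection_delay_s / 2.0

# Assumed values: a = 400 m/s in the gas mixture, reflection seen
# 0.5 s after the valve-closure wave.
print(f"leak at ~{leak_location_m(400.0, 0.5):.0f} m")  # ~100 m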
[7] S. Hiroki et al
In this paper, krypton (Kr) is used as a water soluble tracer for detecting water leaks in a fusion reactor. This method was targeted for application to the International Thermonuclear Experimental Reactor, and water leak valves of the order of 10⁻³ Pa·m³/s were fabricated and connected to the water loop circuit. The Kr dissolved in the water is detected by a quadrupole mass spectrometer (QMS). A leak detection method for the water channels is proposed where the leak detection can be done with fully circulating cooling water. The water soluble tracer gas effuses into the vacuum vessel through a water leak passage.

[8] T. Kiuchi
This paper describes a method for detecting a leak and its location by applying a fluid transient model. A real pipeline is tested in real time and the resulting conclusions are obtained using the fluid transient model. The method considers both flow rate measurement and pressure measurement; because of this, it gives more accurate detection of the leak and its position than conventional methods, although it assumes that the flow inside the pipeline is quasi-steady-state flow. The influence on the method's accuracy is examined, and the result shows the advantage of the method compared to conventional methods.

[9] John Mashford
This paper presents a method for analysing the data collected from pressure sensors monitoring a pipe network, which yields not only the location but also the size of a leak. A support vector machine acts as a pattern recognizer, giving leak location and size with a high degree of accuracy; it is trained and tested on data obtained from the EPANET hydraulic simulation system.
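A minimal sketch of that pattern-recognition idea is given below, assuming scikit-learn is available; the synthetic sensor layout, class labels and training split are illustrative stand-ins for data that a hydraulic simulator such as EPANET would provide:

```python
# Minimal sketch: train an SVM to classify the leak location from a
# vector of network pressure readings. All data here is synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_sensors, n_scenarios = 8, 200

# One row per simulated leak scenario: pressures at the monitoring sensors.
X = rng.normal(50.0, 2.0, size=(n_scenarios, n_sensors))
y = rng.integers(0, 4, size=n_scenarios)    # assumed leak-location classes
X[np.arange(n_scenarios), y * 2] -= 5.0     # a leak depresses nearby pressure

clf = SVC(kernel="rbf", C=10.0).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```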


[10] Guizeng Wang et al
In this paper, a new leak detection method based on autoregressive modelling is proposed. A pipeline model is tested and the conclusions are drawn using Kullback information, which is very useful in time-sequence analysis. A leak above 0.5% can easily be detected by the Kullback information. The process does not require flow-rate measurement; four pressure measurements, two at each end of the pipe, are required.
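A minimal sketch of that idea follows: fit an autoregressive (AR) model to pressure data from normal operation, then compute a Kullback statistic between the reference and current innovation (residual) distributions. The data, AR order and Gaussian-innovation assumption are illustrative, not the paper's exact formulation:

```python
# Minimal sketch of AR-based leak detection with a Kullback statistic.
import numpy as np

def fit_ar(x, order=4):
    """Least-squares AR(order) fit; returns (coefficients, residual variance)."""
    X = np.column_stack([x[order - 1 - k : len(x) - 1 - k] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a, float(np.var(x[order:] - X @ a))

def kullback_gauss(v_new, v_ref):
    """KL divergence between zero-mean Gaussians with variances v_new, v_ref."""
    return 0.5 * (v_new / v_ref - 1.0 - np.log(v_new / v_ref))

rng = np.random.default_rng(1)
normal = 50 + rng.normal(0, 0.05, 2000)   # assumed pressures, no leak
leaky = 50 + rng.normal(0, 0.08, 2000)    # extra fluctuation from a leak
_, v_ref = fit_ar(normal)
_, v_new = fit_ar(leaky)
print("Kullback statistic:", kullback_gauss(v_new, v_ref))  # >> 0 when leaking
```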

CONCLUSION
Proper selection and implementation of a production leak test method starts with an understanding of why the test is being performed, followed by establishing the leak rate limit, and finally a determination of how the leak test will be performed. A careful and thoughtful evaluation at each of these steps, combined with the selection of high-quality leak test hardware, will result in a cost-effective, high-performance and reliable production leak test.
This project has described methods for finding leaks and their locations in hollow castings and other components. The pressure difference obtained in the pressure decay test confirms the presence of leaks, and the water immersion test gives their location. These two methods are less time-consuming and give quick results with high accuracy. The end result is stricter quality control for leak testing.
REFERENCES:
[1] N. Hilleret, Leak detection, CERN, Geneva, Switzerland.
[2] K. Zapfe, Leak detection, Hamburg, Germany.
[3] Andrej Pregelj et al, Leak detection methods and defining sizes of leaks, April 1997.
[4] Donald T. Soncrant, Fluidic type leak testing machine.
[5] Joachim W. Pauly, Method and apparatus for testing leakage rate, May 7, 1974.
[6] Sami Elaoud et al, Leak detection of hydrogen-natural gas mixtures in pipes using the characteristics method of specified time intervals, 21 June 2010.
[7] S. Hiroki, Development of water leak detection method in fusion reactor using water-soluble gas, 18 June 2007.
[8] T. Kiuchi, A leak localization method of pipeline by means of fluid transient model.
[9] John Mashford et al, An approach to leak detection in pipe networks using analysis of monitored pressure values by support vector machine.
[10] Guizeng Wang et al, Leak detection for transport pipelines based on autoregressive modelling.
[11] William A. McAdams et al, Leakage testing method, Aug 12, 1958.
[12] Percy Gray, Jefferson, Lined tank and method of construction and leakage testing the same.



Design and Verification of Nine Port Network Router
G. Sri Lakshmi¹, A. Ganga Mani²
¹Assistant Professor, Department of Electronics and Communication Engineering, Pragathi Engineering College, Andhra Pradesh, India
²Research Scholar (M.Tech), Embedded Systems, Department of Electronics and Communication Engineering, Pragathi Engineering College, Andhra Pradesh, India
Email: srilakshmi1853@gmail.com

ABSTRACT - The focus of this paper is the design of a network router and the verification of its functionality for network-on-chip applications; the Verilog description qualifies the design for synthesis and implementation. The design consists of registers, an FSM and FIFOs. The router has eight output ports and one input port and uses a packet-based protocol: it drives an incoming packet from the input port to an output port based on the address contained in the packet. The router has an active-low synchronous input, resetn, which resets it. The idea is borrowed from large-scale multiprocessors and the wide-area-network domain and envisions an on-chip router-based network. This helps in understanding how the router controls signals from source to destination based on the header address.

KEYWORDS: FIFO, FSM, Network-on-Chip, register blocks, router simulation, verification plan


INTRODUCTION
A system on chip (SoC) is a complex interconnection of various functional elements. Its bus-based architecture creates a communication bottleneck in gigabit communication. There was thus a need for a system offering explicit modularity and parallelism; network on chip (NoC) possesses many such attractive properties and solves the communication bottleneck. It is based on the idea of interconnecting cores using an on-chip network. Communication on a network on chip is carried out by routers, so to implement a better NoC the router must be efficiently designed. This router supports four parallel connections at the same time. It uses store-and-forward flow control and FSM-controlled deterministic routing, which improves router performance. The switching mechanism used here is packet switching, which is generally used in networks on chip: data transfers in the form of packets between cooperating routers, and routing decisions are taken independently. The store-and-forward mechanism is preferred because it does not reserve channels and thus does not leave physical channels idle. The arbiter uses a rotating priority scheme so that every channel in turn gets a chance to transfer its data. In this router, both input and output buffering are used so that congestion can be avoided at both sides. A router is a device that forwards data packets across computer networks, performing the data "traffic direction" function on the Internet. A router is a microprocessor-controlled device connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the address information in the packet to determine its ultimate destination; then, using information in its routing table, it directs the packet to the next network on its journey.

WHY WOULD I NEED A ROUTER?
Most home users may want to set up a LAN (local area network) or WLAN (wireless LAN) and connect all their computers to the Internet without having to pay a full broadband subscription to their ISP for each computer on the network. In many instances, an ISP will allow you to use a router to connect multiple computers to a single Internet connection for a nominal fee per additional computer sharing the connection. Home users will then want to look at smaller routers, often called broadband routers, that enable two or more computers to share an Internet connection. Within a business or organization, you may need to connect multiple computers to the Internet but also to connect multiple private networks. Not all routers are created equal, since their jobs differ slightly from network to network; additionally, you may look at a piece of hardware and not even realize it is a router.
What defines a router is not its shape, colour, size or manufacturer, but its job function of routing data packets between computers. A cable modem, which routes data between your PC and your ISP, can be considered a router. In its most basic form, a router could simply be one of two computers running the Windows 98 (or higher) operating system connected together using ICS (Internet Connection Sharing); in this scenario, the computer connected to the Internet acts as the router for the second computer to obtain its Internet connection. A step up from ICS is a category of hardware routers that perform the same basic task, albeit with more features and functions. Often called broadband or Internet-connection-sharing routers, these routers

allow you to share one Internet connection with multiple computers. Broadband or ICS routers look a bit different depending on the manufacturer or brand, but wired routers are generally small box-shaped hardware devices with ports on the front or back into which you plug each computer, along with a port for your broadband modem. These connection ports allow the router to do its job of routing the data packets between the computers and the data going to and from the Internet. These routers also support NAT (network address translation), which allows all of your computers to share a single IP address on the Internet.

ROUTER DESIGN PRINCIPLES

Given the strict contest deadline and the short implementation window, we adopted a set of design principles to spend the available time as efficiently as possible. This document provides specifications for the router, which uses a packet-based protocol: the router drives an incoming packet from the input port to an output port based on the address contained in the packet. The "Network Router" has one input port from which the packet enters and eight output ports from which the packet is driven out. A packet contains three parts: header, data and frame check sequence. The packet width is 16 bits, and the packet length can be between 1 byte and 8192 bytes. The packet header contains the destination address (DA) and length fields. The destination address of the packet is 16 bits, and the router drives the packet to the respective port based on this address: each output port has a unique 16-bit port address, and if the destination address of the packet matches the port address, the router drives the packet to that output port. The length field is 16 bits and ranges from 0 to 8191, measured in bytes; the data is byte-oriented and can take any value. The frame check sequence carries the security check of the packet and is calculated over the header and data.
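A minimal behavioural sketch of this address-match rule follows (in Python rather than the paper's Verilog); the port addresses and the dictionary packet layout are illustrative assumptions, not the design's actual register map:

```python
# Minimal sketch: compare the 16-bit destination address in the packet
# header with each output port's address and push the packet into the
# FIFO of the matching port (store-and-forward). Addresses are assumed.
from collections import deque

PORT_ADDRESSES = [0x0100 * (i + 1) for i in range(8)]   # assumed 16-bit addresses
fifos = [deque() for _ in PORT_ADDRESSES]               # one output FIFO per port

def route(packet: dict) -> bool:
    """Buffer the whole packet at the matching output port."""
    for port, addr in enumerate(PORT_ADDRESSES):
        if packet["da"] == addr:
            fifos[port].append(packet)
            return True
    return False   # no match: this is where the router's error signal would rise

pkt = {"da": 0x0300, "length": 64, "data": bytes(64), "fcs": 0xBEEF}
assert route(pkt) and fifos[2][0] is pkt
```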
Features
• Full-duplex synchronous serial data transfer
• Variable transfer word length, up to 8192 bytes
• HEADER is the first data transfer
• Rx and Tx on either rising or falling clock edges
• Fully static synchronous design with one clock domain
• Technology-independent Verilog
• Fully synthesizable

The ROUTER is a synchronous protocol. The clock signal, provided by the master, ensures synchronization: it controls when data can change and when it is valid for reading. Since the ROUTER is synchronous, it has a clock pulse along with the data. RS-232 and other asynchronous protocols do not use a clock pulse, but their data must be timed very accurately.


OPERATION

The nine-port router design is composed of three blocks: a 16-bit register, a router controller and an output block. The router controller is designed as an FSM, and the output block consists of four FIFOs combined together. The FIFOs store data packets, and when data is to be sent it is read from the FIFOs. The design has eight 16-bit outputs and one 16-bit data input port used to drive data into the router. A global clock, a reset signal, an error signal and a suspended-data signal are used; the error and SUSPENDED_DATA_IN signals are generated by the FSM controller, whose functions are discussed in the FSM description below. The ROUTER can operate with a single master device and one or more slave devices. If a single slave device is used, the RE (read enable) pin may be fixed to logic low if the slave permits it. Some slaves require the falling edge (HIGH→LOW transition) of the slave select to initiate an action, such as devices that start a conversion on that transition. With multiple slave devices, an independent RE signal is required from the master for each slave device.









FIGURES


Figure 1: Block Diagram of Nine Port Router



Figure 2: Internal Structure of Nine Port Router






Figure 3: Simulation of FSM Controller



Figure 4: Simulation of Router

APPLICATIONS

When multiple routers are used in interconnected networks, the routers exchange information about destination addresses using a dynamic routing protocol. Each router builds up a table listing the preferred routes between any two systems on the interconnected networks. A router has interfaces for different physical types of network connection (such as copper cable, fibre optic or wireless transmission). It also contains firmware for different networking protocol standards; each network interface uses this specialized software to enable data packets to be forwarded from one protocol transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different sub-network address. The subnet addresses recorded in the router do not necessarily map directly to the physical interface connections.

EDA Tools and Methodologies
HVL: SystemVerilog
HDL: Verilog
Device: Spartan-3E
EDA tools: ModelSim, Xilinx ISE

CONCLUSION

A network ROUTER with one input port and eight 16-bit output ports has been designed, and its functionality has been verified in Verilog. The functionality of the router was verified by applying different test cases to the different FIFOs based on the header address of the packet.


REFERENCES:

[1] D. Chiou, "MEMOCODE 2011 Hardware/Software CoDesign Contest", https://ramp.ece.utexas.edu/redmine/Attachments/DesignContest.pdf
[2] Bluespec Inc, http://www.bluespec.com
[3] Xilinx, "ML605 Hardware User Guide", http://www.xilinx.com/support/documentation/boards_and_kits/ug534.pdf
[4] Xilinx, "LogiCORE IP Processor Local Bus (PLB) v4.6", http://www.xilinx.com/support/documentation/ip_documentation/plb_v46.pdf
[5] "Application Note: Using the Router Interface to Communicate", Motorola, ANN91/D Rev. 1, 01/2001.
[6] Cisco Router OSPF: Design & Implementation Guide, McGraw-Hill.
[7] "Nortel Secure Router 4134", Nortel Networks Pvt. Ltd.
[8] "LRM", IEEE Standard Hardware Description Language Based on the Verilog Hardware Description Language, IEEE Std 1364-1995.
Books:
[9] Chris Spear, SystemVerilog for Verification, Springer.
[10] J. Bhasker, A Verilog HDL Primer.








Performance Evaluation of Guarded Static CMOS Logic based Arithmetic and Logic Unit Design
Felcy Jeba Malar. M¹, Ravi T²
¹Research Scholar (M.Tech), VLSI Design, Sathyabama University, Chennai, Tamilnadu
²Assistant Professor, Sathyabama University, Chennai, Tamilnadu
Email: felcyjebamalar@gmail.com

ABSTRACT – Real-world applications tend to utilize improved low-power processes to reduce power dissipation and to improve device efficiency. In this respect, optimization techniques help in reducing parameters of major concern such as power and area. The arithmetic and logic unit found in every processor is likely to consume considerable power for its internal operations; this power can be reduced using low-power optimization techniques. With reference to this issue, this paper presents an efficient arithmetic and logic unit designed with a modified static CMOS logic. The modified logic is found to be more efficient than the existing logic in terms of parameters such as average power and power-delay product. The modified arithmetic and logic unit architecture thus performs processing at high speed in different CMOS technologies.
Keywords: Low power, modified static CMOS logic, power delay product, arithmetic and logic unit
I. INTRODUCTION
Very Large Scale Integration (VLSI) circuit technology is a rapidly growing technology behind a wide range of innovative devices and systems that have changed the world today. The tremendous growth in laptop and portable systems and in cellular networks has intensified research efforts in low-power electronics [1]. High-power systems may often lead to circuit damage, whereas low power leads to smaller power supplies and less expensive batteries. Low-power design is needed not only for portable applications but also to reduce the power of high-performance systems. With large integration density and improved speed of operation, systems with high operating frequencies are emerging.

The arithmetic logic unit is one of the main components inside a microprocessor. It is responsible for performing arithmetic and logic operations such as addition, subtraction, increment and decrement, and logical AND, OR, XOR and XNOR [2]. ALUs use fast dynamic logic circuits and have carefully optimized structures [3]. Their power consumption accounts for a significant portion of the total power consumption of the data path. The arithmetic and logic unit (ALU) is also one of the highest power-density locations on the processor, as it is clocked at the highest speed and kept busy most of the time, resulting in thermal hotspots and sharp temperature gradients within the execution core. This strongly motivates energy-efficient ALU designs that satisfy high-performance requirements while reducing peak and average power dissipation. The ALU is a combinational circuit that performs arithmetic and logical micro-operations on a pair of n-bit operands [4]. The power consumption of digital circuits, which mostly use complementary metal-oxide-semiconductor (CMOS) devices, is proportional to the square of the power supply voltage; therefore, voltage scaling is one of the important methods used to reduce power consumption. To achieve a high transistor drive current and thereby improve circuit performance, the transistor threshold voltage must be scaled down in proportion to the supply voltage [5].
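For reference, the quadratic dependence mentioned above is the standard dynamic switching-power relation; a minimal statement, where the symbols α (activity factor), C_L (switched load capacitance) and f_clk (clock frequency) are not defined in the paper itself:

```latex
P_{dyn} = \alpha \, C_L \, V_{DD}^{2} \, f_{clk}
```

Halving V_DD alone would thus cut switching power to a quarter, which is why voltage scaling (together with the threshold-voltage scaling noted above) is so effective.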

II. EXISTING ALU DESIGN
The existing method is a simple arithmetic and logic unit design supporting different arithmetic and logic operations [10]. The basic design consists of conventional arithmetic and logic circuits that perform the various operations required, as shown in Fig 2.1. These conventional circuits are designed in CMOS logic. When the architecture is simulated, it is found to consume more power, which is the main disadvantage of the existing system.




Fig 2.1 Basic Concept of ALU design
The existing feedthrough logic, given in Fig 2.2, works in two phases, a reset phase and an evaluation phase [11], [12]. When the clock is HIGH, the output node is pulled to zero because transistor Tr is ON and transistor Tp is OFF. When the clock goes LOW, the reset transistor Tr is turned OFF and Tp becomes ON, resulting in charging or discharging of the output node according to the input. The reset transistor Tr always provides a 0→1 transition at the start of the evaluation phase, so this logic outperforms dynamic CMOS in cascaded structures: when dynamic CMOS is cascaded, the result produced may be false due to 1→0 transitions in the evaluation phase.

Fig 2.2 Existing Feed through Logic
III. PROPOSED ALU DESIGN
The proposed system uses the guarded static logic principle explained below. Fig 3.1 shows the simple low-power technique, designed with two control inputs. It works similarly to the existing static CMOS logic: during the high phase of the clock the output node does not give the exact output, and when the clock becomes low the output node conditionally evaluates to either logic high or low, depending on the inputs to the pull-up and pull-down networks present in the circuit.


Fig 3.1 Proposed Technique (Guarded static CMOS logic)

The proposed system consists of a modified arithmetic and logic unit based on the proposed Guarded Static CMOS Logic (GSCL), through which a further reduction in the power consumption of the circuit is obtained. The modified arithmetic and logic unit block with the control unit is shown in Fig 3.2 below. According to the block diagram, each block is fed with two control signals: one chooses whether the operation to be executed lies in the arithmetic block or in the logic block, and the second chooses which particular block is to be executed. The choice is thus made by the user by providing the arithmetic and logic unit with the necessary control inputs.
Fig 3.2 depicts the main architecture of the paper, in which different blocks are interlinked to form a complete architecture. For the control signals to activate only a particular block, each block is modified with low-power techniques so as to avoid the power that would otherwise be wasted by executing all the other processing blocks. Using these techniques, only a single arithmetic or logic block is activated, and its output is obtained in accordance with the control signal provided. The low-power

technique used for this requirement is discussed below. In this way the design is kept simple, processing is done in a continuous manner, and the scheme can be used in larger circuits to compensate for high power dissipation.


Fig 3.2 Modified Architecture of Arithmetic and Logic unit
3.1 DESCRIPTION OF ARCHITECTURE
3.1.1 Input Block:
The input block consists of two general-purpose registers. These registers provide the necessary inputs to the arithmetic and logic unit blocks, which perform the various arithmetic and logic operations.
3.1.2 ALU Block:
The arithmetic block consists of a four-bit adder, subtractor, right and left shifter, comparator, encryption and decryption circuit, and multiplier. The logic block consists of four-bit AND, OR, NOT, NAND, NOR, XOR and XNOR gates. The architecture is modified in such a way that instructions fed from the control unit above choose a particular operation to be performed, and only the result of that operation reaches the output port of the processor. This saves time and, in turn, reduces the power consumption to a much greater extent than the existing system.
3.1.3 Output Block:
The output block consists of a simple OR gate whose inputs are the outputs of the eight units of the ALU block. Since only one block outputs a non-zero value at a time, an OR gate suffices here. This is how a simple output block is designed in this paper.
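A minimal behavioural sketch of this gating-plus-OR arrangement is shown below; the two-level opcode encoding and the particular 4-bit operations chosen are illustrative assumptions, not the paper's exact instruction set:

```python
# Minimal sketch: two control fields select exactly one arithmetic or
# logic block, unselected blocks stay idle (output 0), and a final OR
# merges the one-hot block outputs, mirroring the output block above.
ARITH = {0: lambda a, b: (a + b) & 0xF, 1: lambda a, b: (a - b) & 0xF}
LOGIC = {0: lambda a, b: a & b, 1: lambda a, b: a | b}

def alu(group: int, op: int, a: int, b: int) -> int:
    blocks = ARITH if group == 0 else LOGIC
    outputs = [fn(a, b) if code == op else 0      # only the selected block runs
               for code, fn in blocks.items()]
    result = 0
    for o in outputs:                             # output stage: a simple OR
        result |= o
    return result

assert alu(0, 0, 0x6, 0x3) == 0x9   # 4-bit add
assert alu(1, 1, 0x6, 0x3) == 0x7   # 4-bit OR
```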
IV. TRANSIENT ANALYSIS


Fig 4.1 Transient analysis of existing arithmetic and logic unit


Fig 4.1 shows the output waveform of the existing 4-bit arithmetic and logic unit, in which v(19)-v(26) represent the ALU inputs, v(104)-v(107) the arithmetic unit outputs, v(108)-v(111) the logic unit outputs, and v(112)-v(115) the ALU outputs.



Fig.4.2 Transient analysis of proposed Arithmetic and logic unit

Fig 4.2 shows the output waveform of the proposed arithmetic and logic unit, in which v(15), v(16), v(5) and v(6) represent the control inputs, v(19)-v(26) the ALU inputs, v(104)-v(107) the arithmetic unit outputs, v(108)-v(111) the logic unit outputs, and v(112)-v(115) the ALU outputs.
V. POWER ANALYSIS

Table 5.1 Power Analysis of Existing and Proposed Systems
Device: MOSFET; Technology: 130 nm; Operating frequency: 1 GHz

ALU design    Avg power (µW)   Delay (µs)   PDP (pJ)
Existing      126.6            2.002        253.45
Proposed      101.19           0.009        0.910

Table 5.2 Power Analysis of Existing and Proposed Systems
Device: MOSFET; Technology: 32 nm; Operating frequency: 1 GHz

ALU design    Avg power (µW)   Delay (µs)   PDP (pJ)
Existing      3967             2.026        8037.1
Proposed      11.94            0.009        0.107

Table 5.3 Power Analysis of Existing and Proposed Systems
Device: MOSFET; Technology: 16 nm; Operating frequency: 1 GHz

ALU design    Avg power (µW)   Delay (µs)   PDP (pJ)
Existing      513.8            2.026        1040.9
Proposed      20.86            0.009        0.18
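As a check on the tables, the power-delay product column is simply the measured average power multiplied by the delay; for the 130 nm existing design, for example:

```latex
\mathrm{PDP} = P_{avg} \times t_{d}
            = 126.6\,\mu\mathrm{W} \times 2.002\,\mu\mathrm{s}
            \approx 253.4\,\mathrm{pJ}
```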

Tables 5.1, 5.2 and 5.3 above show the performance analysis for the existing and the modified arithmetic and logic circuits with the guarded static CMOS logic technique in three different CMOS nanometre technologies.

Fig 5.1 Power consumption comparison of existing and proposed system
The chart in Fig 5.1 shows the comparative performance of the ALU at an operating voltage of 3.3 V, simulated using HSPICE in 130 nm CMOS technology. The analysis clearly shows that the arithmetic and logic unit of the existing system consumes 126.6 µW, while the proposed system consumes a considerably smaller 101.19 µW. When the guarded logic is used in the ALU circuit, the power consumption experiences a drastic reduction while the speed of the device increases.
CONCLUSION
The power consumption is greatly reduced in the modified design using guarded static CMOS logic, and the design is found to be more efficient. With the conventional type of arithmetic and logic unit, which executes all operations at the same time, power dissipation goes uncontrolled. Hence, as an alternative, static CMOS logic was taken as a base and the proposed guarded static CMOS logic was introduced. The performance analysis clearly shows that the arithmetic and logic unit designed using guarded static CMOS logic has appropriate values of the various parameters, helping to obtain a near-optimum arithmetic and logic circuit; the power consumption of the modified ALU design is thereby further reduced. The proposed arithmetic and logic unit design can be used in high-end real-time applications such as ARM processors and also in various other low-power applications.

REFERENCES:

[1] K. Nehru, A. Shanmugam, G. Darmila Thenmozhi, "Design of low power ALU using 8T FA and PTL based MUX circuits", IEEE International Conference on Advances in Engineering, Science and Management, pp. 724-730, 2012.
[2] B. Lokesh, K. Dushyanth, M. Malathi, "4 Bit Reconfigurable ALU with Minimum Power and Delay", International Journal of Computer Applications, pp. 10-13, 2011.
[3] Mazen Al Haddad, Zaghloul El Sayed, Magdy Bayoumi, "Green Arithmetic Logic Unit", IEEE, 2012.
[4] Meetu Mehrishi, S. K. Lenka, "VLSI Design of Low Power ALU Using Optimized Barrel Shifter", International Journal of VLSI and Embedded Systems, Vol. 04, Issue 03, pp. 318-323, 2013.
[5] Nazrul Anuar, Yasuhiro Takahashi, Toshikazu Sekine, "Two Phase Clocked Adiabatic Static CMOS Logic and its Logic Family", Journal of Semiconductor Technology and Science, Vol. 10, No. 1, March 2010, pp. 1-10.
[6] R. K. Krishnamurthy, S. Hsu, M. Anders, B. Bloechel, B. Chatterjee, M. Sachdev, S. Borkar, "Dual supply voltage clocking for 5GHz 130nm integer execution core", Proceedings of the IEEE VLSI Circuits Symposium, Honolulu, June 2002, pp. 128-129.
[7] S. Vangal, Y. Hoskote, D. Somasekhar, V. Erraguntla, J. Howard, G. Ruhl, V. Veeramachaneni, D. Finan, S. Mathew, N. Borkar, "A 5-GHz floating point multiply accumulator in 90-nm dual VT CMOS", Proc. IEEE Int. Solid-State Circuits Conf., San Francisco, CA, Feb. 2003, pp. 334-335.
[8] V. Navarro-Botello, J. A. Montiel-Nelson, S. Nooshabadi, "Analysis of high performance fast feedthrough logic families in CMOS", IEEE Trans. Circuits Syst. II, vol. 54, no. 6, June 2007, pp. 489-493.
[9] Rabaey, J. M., Chandrakasan, A., and Nikolic, B., Digital Integrated Circuits: A Design Perspective, 2nd ed., Upper Saddle River, NJ: Prentice-Hall, 2002.
[10] Bishwajeet Pandey and Manisha Pattanaik, "Clock Gating Aware Low Power ALU Design and Implementation on FPGA", International Journal of Future Computer and Communication, Vol. 2, No. 5, October 2013, pp. 461-465.
[11] Nooshabadi, S., and Montiel-Nelson, J. A., "Fast feedthrough logic: A high-performance logic family for GaAs", IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 51, no. 11, 2004, pp. 2189-2203.
[12] Navarro-Botello, V., Montiel-Nelson, J. A., and Nooshabadi, S., "Analysis of high performance fast feedthrough logic families in CMOS", IEEE Trans. Circuits Syst. II, vol. 54, no. 6, 2007, pp. 489-493.














Fabrication and Analysis of Tube-In-Tube Helical Coil Heat Exchanger
Mrunal P. Kshirsagar¹, Trupti J. Kansara¹, Swapnil M. Aher¹
¹Research Scholar, Sinhgad Institute of Technology
sswapnilaher@gmail.com, 9881925601

ABSTRACT – Conventional heat exchangers are large in size and their heat transfer rate is low; dead zones form in them, which reduce the heat transfer rate, some external means is required to create turbulence, and the fluids are not in continuous contact with each other. A tube-in-tube helical coil heat exchanger provides a compact shape whose geometry offers more fluid contact, eliminates the dead zones and increases turbulence, and hence the heat transfer rate. An experimental setup was fabricated for estimating the heat transfer characteristics. A wire is wound on the core tube to increase turbulence, which in turn increases the heat transfer rate. The paper deals with the variation of the pitch of the wound wire and its effect on the heat transfer rate. The Reynolds number and Dean number in the annulus were compared with numerical data, and the experimental results were compared with analytical results, confirming the validation. This heat exchanger finds application mostly in the food industries and in waste heat recovery.
Keywords: Tube-in-tube helical coil, Nusselt number, wire wound, Reynolds number, Dean number, dead zone, efficiency.
1. INTRODUCTION
Several studies have indicated that helically coiled tubes are superior to straight tubes when employed in heat transfer applications. The centrifugal force due to the curvature of the tube results in the development of secondary flows (flows perpendicular to the axial direction) which assist in mixing the fluid and enhance the heat transfer. In straight-tube heat exchangers there is little mixing in the laminar flow regime, so the application of curved tubes in laminar-flow heat exchange processes can be highly beneficial. Such situations arise in the food processing industry in the heating and cooling of highly viscous liquid foods, such as pastes or purees, or of products that are sensitive to high shear stresses. Another advantage of helical coils over straight tubes is that the residence-time spread is reduced, allowing helical coils to be used to reduce axial dispersion in tubular reactors.
The first attempt to describe mathematically the flow in a coiled tube was made by Dean, who considered a first approximation of the steady motion of an incompressible fluid flowing through a coiled pipe of circular cross-section. It was observed that the reduction in the flow rate due to curvature depends on a single variable, K = 2 Re²(r/R), for low velocities and small r/R ratio. This work was continued with studies of the laminar flow of fluids of different viscosities through curved pipes with different curvature ratios (δ). The results showed that the onset of turbulence did not depend on the value of Re or De, and it was concluded that flow in curved pipes is more stable than flow in straight pipes. The resistance to flow as a function of De and Re was also studied; no difference in flow resistance compared with a straight pipe was found for values of De below 14.6.

Figure 1.1: Diagram of helical coil




Rough estimates can be made using either constant heat flux or constant wall temperature from the literature; the study of fluid-to-fluid heat transfer in this arrangement needs further investigation. The second difficulty is in estimating the coil surface area available for heat transfer. As can be seen in Figure 1.2, a solid baffle is placed at the core of the heat exchanger. In this configuration the baffle is needed so that the fluid does not flow straight through the shell with minimal interaction with the coil. This baffle changes the flow velocity around the coil, and possible dead zones are expected in the areas between the coils where the fluid is not flowing; heat then has to conduct through the fluid in these zones, reducing the heat transfer effectiveness on the outside of the coil.

Figure 1.2 close-up of double pipe heat exchanger
Additionally, the recommended calculation of the outside heat transfer coefficient is based on flow over a bank of non-staggered circular tubes, which is another approximation made to account for the complex geometry. Thus, the major drawbacks of this type of heat exchanger are the difficulty in predicting the heat transfer coefficients and the surface area available for heat transfer. These problems arise from the lack of information on fluid-to-fluid helical heat exchangers and the poor predictability of the flow around the outside of the coil.

Nomenclature:
A     surface area of tube (m²)
C     constant in Eq. (4)
d     diameter of inner tube (m)
D     diameter of annulus (m)
De*   modified Dean number (dimensionless)
h     heat transfer coefficient (W/m²K)
k     thermal conductivity (W/m K)
L     length of heat exchanger (m)
LMTD  log-mean temperature difference (K or °C)
q     heat transfer rate (J/s)
U     overall heat transfer coefficient (W/m²K)
v     velocity (m/s)
ρ     density (kg/m³)
µ     dynamic viscosity (kg/m s)
∆T1   temperature difference at inlet (K)
∆T2   temperature difference at outlet (K)



Subscripts
i       inside/inner
o       outside/outer
c       cold
h       hot
hotin   hot fluid in
coldin  cold fluid in
max     maximum
min     minimum
cur     curved tube

2. DIMENSIONAL AND OPERATING PARAMETERS
Table 1: Characteristic dimensions of heat exchanger

Dimensional parameter      Heat exchanger
di (mm)                    10
do (mm)                    12
Di (mm)                    23
Do (mm)                    25
Curvature radius (mm)      135
Stretch length (mm)        3992
Wire diameter (mm)         1.5

Table 2: Range of parameters

Parameter                        Range
Inner tube flow rate             200-500 LPH
Outer tube flow rate             50-200 LPH
Inner tube inlet temperature     28-30 °C
Outer tube inlet temperature     58-62 °C
Inner tube outlet temperature    30-40 °C
Outer tube outlet temperature    35-46 °C

2.1 METHODOLOGY:
The heat exchangers were constructed from mild steel and stainless steel. The inner tube, of outer diameter 12 mm and inner diameter 10 mm, was constructed from mild steel, and the outer tube, of outer diameter 25 mm and inner diameter 23 mm, was constructed from stainless steel. Mild steel wire was wound on the inner tube with a pitch of 6 mm on one heat exchanger and 10 mm on the other. The curvature radius of the coil is 135 mm and the stretched length of the coil is 3992 mm. During the bending of the tubes, very fine sand was filled into the tube to maintain smoothness of the inner surface and was afterwards washed out with compressed air; care was taken to preserve the circular cross-section of the coil during the bending process. The end connections were soldered at the tube ends, and the two ends were drawn from the coiled tube at one position.


3. EXPERIMENTAL SETUP AND WORKING:

Figure 3.1: Experimental setup
Cold tap water was used as the fluid flowing in the annulus and was circulated. The flow was controlled by a valve, allowing flows to be set and measured between 200 and 500 LPH. Hot water for the inner tube was heated in a tank with a thermostatic heater set at 60 °C and was circulated via a pump; its flow rate was controlled by a flow-metering valve, as described for the annulus flow. Flexible PVC tubing was used for all connections, and J-type thermocouples were inserted into the tubing to measure the inlet and outlet temperatures of both fluids. Temperature data was recorded using a temperature indicator.

Figure 3.2: Actual setup



3.1 Experimental Study
A test run was completed on the apparatus. Once all of the components were in place, the system was checked thoroughly for leaks; after fixing the leaks, the apparatus was prepared for testing. The test run commenced with the apparatus being tested under laboratory conditions. Data was recorded every five minutes until the apparatus reached steady state. The hot temperatures fell as expected; the cold temperatures were more unpredictable, in one instance rising six degrees in five minutes and then falling three degrees by the next reading. The apparatus took 120 minutes to reach steady state, which can vary with operating conditions. Readings were taken until the three-hour mark; however, the data became inconsistent, so a steady-state set was determined based on the proximity of the readings.
Flow rates in the annulus and in the inner tube were varied over five levels: 100, 200, 300, 400 and 500 LPH. All possible combinations of these flow rates in the annulus and the inner tube were tested, for all the coils, in counter-flow configuration. Furthermore, three replicates were carried out for every combination of flow rate, coil size and configuration, resulting in a total of 50 trials. Temperature data was recorded every ten seconds, and the data used in the calculations was taken only after the system had stabilized. Temperature measurements from 120 s of stable operation were used, with temperature fluctuations within ±1.1 °C. All thermocouples were constructed from the same roll of thermocouple wire, so the repeatability of the temperature readings was high.

4. DATA COLLECTION AND ANALYSIS:
In the present investigation, the heat transfer coefficients and heat transfer rates were determined from the measured temperature data. Heat flows from the hot water on the inner-tube side to the cold water on the outer-tube side. The operating parameter ranges are given in Table 2.

Mass flow rate of hot water (kg/s):
$\dot{m}_H = \dfrac{Q_{hot}\,(\mathrm{LPH}) \times \rho\,(\mathrm{kg/m^3})}{3.6 \times 10^{6}}$

Mass flow rate of cold water (kg/s):
$\dot{m}_C = \dfrac{Q_{cold}\,(\mathrm{LPH}) \times \rho\,(\mathrm{kg/m^3})}{3.6 \times 10^{6}}$

Velocity of hot fluid (m/s), with the water density taken as 1000 kg/m³ and $A_c$ the flow cross-sectional area:
$V_H = \dfrac{\dot{m}_H}{1000 \times A_c}$

Heat transfer rate of hot water (J/s), with $C_P$ in kJ/kg·K:
$q_H = \dot{m}_H \, C_P \, \Delta t_{hot} \times 1000$

Heat transfer rate of cold water (J/s):
$q_C = \dot{m}_C \, C_P \, \Delta t_{cold} \times 1000$

Average heat transfer rate:
$Q_{avg} = \dfrac{q_H + q_C}{2}$

The overall heat transfer coefficient was calculated from
$U_o = \dfrac{Q_{avg}}{A_o \times \mathrm{LMTD}}$

The overall heat transfer surface area, determined from the tube diameter and the developed heat transfer area, is A = 0.22272 m²; the total convective area of the tube is kept constant for the two geometries of coiled heat exchanger.

LMTD is the log-mean temperature difference, based on the inlet temperature difference ∆T1 and the outlet temperature difference ∆T2:
$\mathrm{LMTD} = \dfrac{\Delta T_1 - \Delta T_2}{\ln(\Delta T_1 / \Delta T_2)}$

The overall heat transfer coefficient is related to the inner and outer heat transfer coefficients by
$\dfrac{1}{U_o} = \dfrac{1}{h_o} + \dfrac{d_o \ln(d_o/d_i)}{2k} + \dfrac{d_o}{d_i \, h_i}$
where $d_i$ and $d_o$ are the inner and outer diameters of the tube, $k$ is the thermal conductivity of the wall material, and $L$ is the (stretched) length of the heat exchanger. After calculating the overall heat transfer coefficient, the only unknown variables are the inner and outer convective heat transfer coefficients $h_i$ and $h_o$. Keeping the annulus-side mass flow rate constant and varying the tube-side mass flow rate,
$h_i = C \, V_i^{\,n}$
where $V_i$ is the tube-side fluid velocity in m/s; the values of the constant $C$ and the exponent $n$ were determined through curve fitting. The inner heat transfer coefficient can be calculated for both coil geometries by the Wilson plot method. This procedure is repeated for the tube side and the annulus side for each mass flow rate on both helical coils.

The efficiency of the heat exchanger was calculated from the counter-flow effectiveness relation
$\varepsilon = \dfrac{1 - \exp[-\mathrm{NTU}\,(1 - C_r)]}{1 - C_r \exp[-\mathrm{NTU}\,(1 - C_r)]} = 93.33\%$
where $C_r = C_{min}/C_{max}$.

The Reynolds number:
$Re = \dfrac{\rho \, V \, D}{\mu}$

The Dean number, with $R_c$ the curvature radius:
$De = Re \left(\dfrac{d}{2R_c}\right)^{1/2}$

The friction factor:
$f = \dfrac{\Delta P \, d}{2 \, \rho \, V^2 \, L}$
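To make the data-reduction chain concrete, here is a minimal worked example in Python with made-up but plausible readings (flow rates in LPH, temperatures in °C); the water properties and the particular temperature values are illustrative assumptions, not measurements from this study:

```python
# Minimal sketch of the calculation sequence above; all readings are
# illustrative assumptions, not experimental data from this paper.
import math

rho, cp, A = 1000.0, 4.186, 0.22272            # kg/m^3, kJ/(kg K), m^2

q_hot_lph, t_hot_in, t_hot_out = 300.0, 60.0, 50.0      # inner tube (hot)
q_cold_lph, t_cold_in, t_cold_out = 200.0, 29.0, 44.0   # annulus (cold)

m_h = q_hot_lph * rho / 3.6e6                  # LPH -> kg/s
m_c = q_cold_lph * rho / 3.6e6

q_h = m_h * cp * (t_hot_in - t_hot_out) * 1000.0    # W (cp given in kJ/kg K)
q_c = m_c * cp * (t_cold_out - t_cold_in) * 1000.0
q_avg = 0.5 * (q_h + q_c)

dt1 = t_hot_in - t_cold_out                    # counter-flow terminal differences
dt2 = t_hot_out - t_cold_in
lmtd = (dt1 - dt2) / math.log(dt1 / dt2)

U = q_avg / (A * lmtd)                         # overall coefficient, W/(m^2 K)
print(f"q_avg = {q_avg:.0f} W, LMTD = {lmtd:.1f} K, U = {U:.0f} W/m^2K")
```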





5. RESULTS AND DISCUSSION:
The experiment was conducted for a single-phase water-to-water heat transfer application. The tube-in-tube helical coil heat exchanger was analysed in terms of temperature variation and friction factor while changing the pitch of the wire wound on the outer side of the inner tube. The results obtained from the experimental investigation of the heat exchanger at various operating conditions were studied in detail and are presented below.

Figure 5.1: Inner Reynolds number vs inner Nusselt number
Nusselt number vs Reynolds number (annulus area):
As the Reynolds number increases, the Nusselt number increases; a larger Nusselt number corresponds to more active convection. The 10 mm pitch wire-wound tube placed in the tube-in-tube helical coil shows a rapid increase beyond Re = 5000 because of the decreasing friction factor.


Figure 5.2: Variation of inner tube flow rate with inner Nusselt Number at constant annulus flow rate for plain tube in tube helical coil
heat exchanger

Figure 5.3: Variation of inner tube flow rate with inner Nusselt Number at constant annulus flow rate for 10 mm pitch of wire wound
of tube in tube helical coil heat exchanger

Figure 5.4: Variation of inner tube flow rate with inner Nusselt Number at constant annulus flow rate for 6 mm pitch of wire wound of
tube in tube helical coil heat exchanger
The Nusselt number of the inner tube at a constant annulus-side flow rate increased linearly with increasing flow rate of water through the inner tube. Similarly, the inner Nusselt number changed proportionally with variation of the annulus-side flow rate at the same inner-side flow rate.



Figure 5.5: Annulus Reynolds Number vs Annulus Friction Factor

5.5 Friction Factor vs Annulus Reynolds Number
It is observed from Figure 5.5 that the pressure drop in the annulus section is higher. This may be due to friction generated by the outer wall of the inner coiled tube as well as by the inner wall of the outer coiled tube. As expected, the friction factor obtained for the tube with the coil-wire winding is significantly higher than that without the coil-wire insert.


ACKNOWLEDGMENT
This work was supported by Prof. Milind S. Rohokale, H.O.D., Sinhgad Institute of Technology, and Prof. Annasaheb Narode; the authors thank them for their valuable input and encouragement.
CONCLUSION
An experimental study of a wire-wound tube-in-tube helically coiled heat exchanger was performed with hot water in the inner tube at various flow rates and cooling water in the outer tube. The mass flow rates in the inner tube and in the annulus were both varied, and counter-current flow configurations were tested.
The experimentally obtained overall heat transfer coefficients (Uo) for different flow rates in the inner coiled tube and in the annulus region were reported. It was observed that the overall heat transfer coefficient increases with increasing inner-coiled-tube flow rate at a constant annulus flow rate; similar trends were observed for different annulus flow rates at a constant inner-tube flow rate. It was also observed that, compared with a smooth tube, the overall heat transfer coefficient increases as the pitch of the wire coils decreases.
The efficiency of the tube-in-tube helical coil heat exchanger is 15-20% higher than that of a conventional heat exchanger, and the experimentally calculated efficiency is 93.33%.



REFERENCES:
1. Robert Bakker, Edwin Keijsers and Hans van der Beak, "Alternative Concepts and Technologies for Beneficial Utilization of Rice Straw", Wageningen UR Food & Biobased Research, number 1176, ISBN 978-90-8585-755-6, December 31st, 2009.
2. T. J. Rennie, V. G. S. Raghavan, "Experimental studies of a double-pipe helical heat exchanger", Experimental Thermal and Fluid Science 29 (2005) 919-924.
3. Uprety Dependra and Bhusal Jagrit, "Development of Heat Exchangers for Water Pasteurization in Improved Cooking".
4. V. Kumar, "Numerical studies of a tube-in-tube helically coiled heat exchanger", Chemical Engineering and Processing 47 (2008) 2287-2295.
5. P. Naphon, "Effect of coil-wire insert on heat transfer enhancement and pressure drop of the horizontal concentric tubes", International Communications in Heat and Mass Transfer 33 (2006) 753-763.
6. A. Garcia, "Experimental study of heat transfer enhancement with wire coil inserts in laminar-transition-turbulent regimes at different Prandtl numbers", International Journal of Heat and Mass Transfer 48 (2005) 4640-4651.
7. Jung-Yang San, Chih-Hsiang Hsu, Shih-Hao Chen, "Heat transfer characteristics of a helical heat exchanger", Applied Thermal Engineering 39 (2012) 114-120, Jan 2012.
8. W. Witchayanuwat and S. Kheawhom, "Heat transfer coefficients for particulate air flow in shell and coiled tube heat exchangers", International Journal of Chemical and Biological Engineering 3:1, 2010.
9. Mohamed A. Abd Raboh, Hesham M. Mostafa, "Experimental study of condensation heat transfer inside helical coil", www.intechopen.com.
10. John H. Lienhard IV, A Heat Transfer Textbook, 3rd edition.
11. Paisarn Naphon, "Effect of coil-wire insert on heat transfer enhancement and pressure drop of the horizontal concentric tubes".
12. Handbook of Heat Transfer, McGraw-Hill, third edition.
13. F. M. White, Heat and Mass Transfer, McGraw-Hill, second edition.
14. Ahmad Fakheri, "Heat Exchanger Efficiency", ASME, Vol. 129, September 2007.














Assessment of Labour Risk in High-Rise Building
R. Kathiravan¹, G. Ravichandran¹, Dr. S. Senthamil Kumar²
¹Research Scholar (M.Tech), Construction Engineering and Management, Periyar Maniammai University, Thanjavur
kathiravancivilengg@gmail.com, 09585544664

ABSTRACT - In the recent past, infrastructural development in India has been proceeding at a rapid rate, and it plays a major role in the economic development of the country. There are several risks allied with the construction industry, and managing risks in construction projects has been recognized as a very important management process for achieving the project objectives in terms of time, cost, quality, safety and environmental sustainability. Project risk management has been intensively discussed in recent years. This paper aims to identify and analyse the risks associated with the development of construction projects from project-stakeholder and life-cycle perspectives in terms of human safety and its effect on time and cost. This can be done by calculating the productivity rate of the labourers and by analysing the organization's needs from the workforce. This research found that these risks are mainly related to contractors and labourers who directly take part in the construction process; among them, a tight project schedule is recognized as having the greatest influence on all project objectives. In this study, a survey is to be conducted within various construction companies in Tamil Nadu, opinions at various levels of management are to be collected through standard questionnaires, the results are to be analysed, and recommendations are to be provided to mitigate those risks.

Keywords—risk, risk management, construction projects, labour risk, human safety, productivity, life cycle perspectives.
1. INTRODUCTION

1.1 An Overview of the Construction Industry
The construction industry is the second largest industry of the country after agriculture. It makes a significant contribution to the national economy and provides employment to a large number of people. The use of various new technologies and the deployment of project management strategies have made it possible to undertake projects of mega scale. In its path of advancement, the industry has had to overcome a number of challenges; however, it still faces major challenges, including housing, disaster-resistant construction, water management and mass transportation. Recent experience with several new mega-projects clearly indicates that the industry is poised for a bright future. It is the second homecoming of the civil engineering profession to the forefront of all professions in the country.
The construction industry, with its backward and forward linkages with various other industries such as cement, steel and bricks, catalyses employment generation in the country. According to the Planning Commission, government infrastructure spending is around 1500 USD million, or Rs. 67,50,000/-, for the 11th and 12th plans. Statistics over the period have shown that, compared with other sectors, this sector of economic activity generally creates a 4.7-times increase in incomes and a 7.76-times increase in employment generation potential. Sustained efforts by the Indian construction industry and the Planning Commission have led to the assignment of industry status to construction, with over 3.1 crore persons employed in it; formal planning and above-board financial planning will thus be the obvious destination of the construction sector in the country. The key drivers of this growth are government investment in infrastructure creation and real estate demand in the residential and industrial sectors.
There are mainly three segments in the construction industry: real estate construction, which includes residential and commercial construction; infrastructure building, which includes roads, railways, power etc.; and industrial construction, which consists of oil and gas refineries, pipelines, textiles etc. Construction activity differs from segment to segment. Construction of houses and roads involves about 75% and 60% civil construction respectively; building of airports and ports has construction activity in the range of 40-50%; and for industrial projects the construction component ranges between 15-20%. Within a particular sector, too, the construction component varies from project to project.


2. CONCEPT OF RISK MANAGEMENT
2.1 Risk
A risk is an uncertain event or condition that results from the work and has an impact that contradicts expectations; such an event is at least partially related to other parties in a business.
Risk management is recognized as an integral part of good management practice. To be most effective, risk management should become part of an organization's culture: it should be integrated into the organization's philosophy, practices and business plans rather than be viewed or practiced as a separate program. When this is achieved, risk management becomes the business of everyone in the organization. Risk management enables continual improvement in decision-making; it is as much about identifying opportunities as about avoiding or mitigating losses.
2.2 Major Human Risks in Construction Projects
• Inability to work.
• Unwillingness to work.
• Inadequate supervision while executing work activities.
• Insufficient labour.
• Effect of severe weather conditions.
• Labour and contractor issues.
• Overtime work.
These are some of the major factors that cause damage and risk situations on a construction site. Several other factors are also involved; these factors are to be identified from the technicians' point of view and also from the labourers' point of view, so that the actual situations or factors causing the risk are identified.
2.3 Lean Approach
In the recent past, 'Lean Construction', a philosophy based on the 'Lean Manufacturing' approaches undertaken in the automotive industry, has been applied to reduce waste and increase efficiency in construction practices. The objective of Lean Construction is to design a production system that will deliver a custom product instantly on order while maintaining no intermediate inventories. Applied to construction, 'Lean' changes the way work is done throughout the delivery process; current construction techniques attempt to optimize the project activity by activity and pay little attention to how value is created and flows to the customer.

2.4 Work Sampling

Labour productivity has a major impact on whether a construction project is completed on time and within budget; it is therefore important for construction managers to improve the conditions that affect labour productivity on their jobsites. Work sampling is a method that evaluates the amount of productive, supportive and non-productive time spent by the trade workers engaged in performing their assigned work activities. It also helps identify any trends affecting labour productivity.
Construction companies are constantly searching for ways to improve labour productivity. Since labour is one of the greatest risks in a construction contract, it must be controlled and continuously improved; the construction company with the most efficient operations has a greater chance to make more money and to deliver the project faster to the project owner. Several factors affect labour productivity on a jobsite, such as weather conditions, workers' skill level, overcrowding of work crews, the construction methods used, and material delivery/storage/handling procedures.



Table 2.1 work sampling model























*VA-
valuable activities NVA- non valuable activities NVAN-non valuable activities by labour.


Table 2.2 Percentage Of Activities
S.no Activities Percentage
1 Value activities 65%
2 Non value activities 34%
3 Non value activities by labours 1%




2.5 Last Planner System


Better planning improves productivity by reducing delays, getting the work done in the best constructability sequence,
matching manpower to available work, and coordinating multiple interdependent activities. The relationship is obvious and very
powerful: one of the most effective things that can be done to improve productivity is to improve planning.


Table 2.3 Last planner log sheet



Fig 2.1 Floor level vs percentage work completed
From the graph it is inferred that the completion of work within the cycle time is in increasing order, but not within the
specified time of completion. On average, 70% of the planned work is carried out in every cycle time. This shows that the
remaining 30% of the work is completed in extra or extended time. It is inferred that there should be a constraint for the
labourers to finish the work completely within the stipulated time.
It is a recent trend of the economy that real estate companies and builders want to finish projects as fast as possible, so
that the consumer is satisfied and the margin of profit rises. Companies look to finish projects ahead of schedule through
continuous, fast-paced working. This greatly affects the labourers in several respects, such as mental stress and health problems.
If the workers are made to act as per the companies' needs, a situation of risk occurs. This can create damage or even
cause loss of life.
2.6 Productivity
Productivity here means the amount of work done by a worker in a man-day or in an hour; different companies have
different productivity rates. Several methods exist for measuring and analyzing worker productivity. In this study, video footage
of the progress of work was monitored using a CCTV camera recording system. This helps greatly in watching the progress of
work without any obstruction, since the camera is located at the highest elevation point, such as the tower crane.


Productivity is simply the ratio of the overall quantity of work done to the number of labourers who take part in the
completion of the work in one day or one cycle time:
Productivity = total work done / number of labourers involved
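A minimal sketch of this ratio as code (the quantities below are illustrative, not taken from the tables that follow):

```python
# Productivity as defined above: total work done divided by the man-days consumed.
def productivity(total_work_done: float, man_days: float) -> float:
    """Output per man-day, e.g. sq.m/man-day for formwork or kg/man-day for rebar."""
    return total_work_done / man_days

# Illustrative figures only: 1900 sq.m of formwork fixed by 304 carpenter man-days.
print(f"{productivity(1900, 304):.1f} sq.m/man-day")   # ~6.2 sq.m/man-day
```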
In this study both the productivity rate of the concrete work and that of the steel work are calculated, as the structure was a typical shear
wall structure. The labourers involved in this category are carpenters and bar benders. They are accompanied by helpers so as to
help the workforce complete the work within the stipulated time. The shear wall structure of the typical block 9 is to be filled
with 2950 sq.m of formwork and 20.59 tons of rebar. Each block of the typical floor 9 is to be filled with the same amount of
resource materials as mentioned. The concrete is prepared on the site itself, where an RMC plant is located. A necessary pep talk and
safety precautions are provided to the labourers every day before work by the technicians and the concerned officers. These are
necessary to increase the productivity rate.
A meeting was held with the general manager of the contractor in question to describe the procedures of the work sampling
study. The data collection method was described as well as the type of information that could be extrapolated during the analysis
phase. After the general manager was familiar with the process and the information that could be obtained from a work sampling
study, an objective was determined. The contractor wanted to have a baseline of the labor productivity for the company's profit
centers.
Table 2.4 Quantities of Work to Be Done
S.no Resource Amount
1 Form work 2950 sq.m
2 Steel 20.59 tons

Table 2.5 Labour strength of block 9, concrete pour 1
Sl.no Floor Carpenters Bar benders
1 12 304 502
2 13 620 835
3 14 408 464
4 15 328 400
5 16 256 325
6 17 320 361

Table 2.6 Labour strength of block 9, concrete pour 2
Sl.no Floor Carpenters Bar benders
1 11 365 682
2 12 411 607
3 13 404 426
4 14 408 460
5 15 269 390
6 16 243 325
7 17 272 358





The above tables show the total number of workers who worked on each floor to accomplish the cycle-time project. Their
productivity was consumed entirely by the completion of block 9 of the project. There is a great fluctuation in the number of
workers employed in the construction process on every floor. Thus the productivity rate demanded of the workers increases due to
the shortage of labour, and this increase falls on the concerned labourers employed to complete the project within the stipulated time.

Table 2.7 Formwork (sq.m/man-day) and reinforcement (kg/man-day) productivity
Sl.no   Formwork Pour 1   Formwork Pour 2   Reinforcement Pour 1   Reinforcement Pour 2
1 1.7 2.0 26 27
2 1.9 2.4 30 27
3 1.6 3.3 27 35
4 3.0 3.4 46 35
5 2.4 4.7 34 44
6 2.7 4.0 34 24
7 4.9 6.2 31 35
8 3.8 5.3 36 35
9 4.3 5.2 43 46
10 5.3 6.3 49 28
11 4.9 5.4 45 27
12 6.4 3.3 27 32
13 1.1 3.7 35 32
14 4.2 3.8 30 30
15 4.6 6.4 34 35
16 7.6 8.1 42 42
17 5.3 7.8 38 38

The table shows the productivity rate of the carpenters and the bar benders on each floor, from which the average rate of productivity for
each labourer concerned with the work is calculated. It is noted that the company's target productivity rate was not achieved, as the
average productivity rate of each labourer in the work concerned is low. There may be several problems that slow or stop the
workers. A worker available for the work on one day may not be available on the next day or in the forthcoming cycle of work, as
there is an increase in the productivity rate demanded by the work concerned.



Fig 2.2 Floor level vs formwork productivity

Fig 2.3 Floor level vs rebar productivity
These show the average productivity rates of the formwork fixing and reinforcement work. The average productivity rates
show that the company's target rate was not achieved, so the project could not be completed within the stipulated time or reach
the calculated margin of profit. From the economic point of view, it is recognised that people want a facility as fast as possible
so that they are satisfied. This is why the Honda Amaze car sells more rapidly than other cars: delivery of the car is
made soon after the order, and it is also attractive from the economic point of view. In the same way, the building and real
estate industries need to satisfy market demand in order to obtain their marginal profit. It is not possible to achieve the
target with the available resources, whose availability is also low. So the only way of achieving the target is to
increase the productivity of the available labour.
It is a matter of concern that this increased labour productivity creates several risk factors that affect the labourers
concerned directly or indirectly. It may also cause accidents due to the increased productivity rate, which can affect the
entire course of the project and even cause loss of life and injuries to the workforce concerned.
[Figs 2.2 and 2.3 plot productivity against floor number (1-17) for Pour 1 and Pour 2: formwork productivity in sq.m/man-day and rebar productivity in kg/man-day, with the plotted values as listed in Table 2.7.]

CONCLUSION
The increase in the economic development of the country greatly influences demand and requirements. Construction
companies, with a view to satisfying the needs of customers and achieving large margins of profit, readily agree to short
completion times. This can greatly affect the labour force by increasing their required productivity rate. As there is a shortage in the
availability of construction labour, companies assign the work to the available labourers and press them to work faster to complete the
project in the stipulated time. On the other hand, this may create mental disturbance for the labourers working on the site due to the
increased productivity and increased hours of work. Working long hours, the labourers may consume drugs like pan masala and
cigarettes, and sometimes even liquor, during the course of work. This leads to poor quality of work and makes the labourers
sluggish by diverting their concentration from the construction process. The study shows a great decrease in the availability of labour
for work as the floor level increases. This may create situations of risk and cause severe consequences in the form of collapse of the
structure, damage, wasted time, injuries, loss of life and wasted money.













Optimization of Transmission Power in Ad Hoc Network
M.D. Boomija
Assistant Professor, Department of IT, Prathyusha Institute of Technology and Management, Chennai, Tamil Nadu
Email: boomija.md@gmail.com

ABSTRACT - A mobile ad-hoc network is an infrastructure-less, self-configuring network of mobile devices. Infrastructure-less
networks have no fixed router; all nodes are capable of movement and can be connected dynamically in an arbitrary manner. Nodes of
these networks function as routers which discover and maintain routes to other nodes in the network. Each device in a mobile ad hoc
network is free to move independently in any direction, and will therefore change its links to other devices frequently. The primary
challenge in building an ad hoc network is equipping each device to continuously maintain the information required to properly route traffic.
The Optimization of Mobile Ad Hoc Network System Design engine works by taking a specification of network requirements and
objectives and allocating resources which satisfy the input constraints and maximize the communication performance objective. The
tool is used to explore networking design options and challenges, including power control, flow control, mobility, uncertainty in
channel models and cross-layer design. The project covers a case study of power control analysis.
Keywords— Ad hoc network, optimization, power control, time slot, MIMO, AMPL, multi-objective optimization
I INTRODUCTION
A mobile ad-hoc network (MANET) is a self-configuring network of mobile routers. The routers are free to move randomly
and organize themselves arbitrarily, so the network's wireless topology may change rapidly and unpredictably. Such a network may
operate in a standalone fashion, or may be connected to the larger Internet. Minimal configuration and quick deployment make ad hoc
networks suitable for emergency situations like natural or human-induced disasters, military conflicts and emergency medical
situations.
In the Optimization of Mobile Ad Hoc Network system shown in Fig 1, network design is approached as a process of optimizing variables.
The optimization of network parameters is a feedback process of optimization and performance estimation through simulation. There
are two approaches: (i) a generic solver and (ii) a specialized method. The set of control variables and objective parameters is the input to the
project. If no specialized method is available for the given problem, then the solution is formulated using the AMPL modeling language,
a comprehensive and powerful algebraic modeling language for linear and nonlinear optimization problems.

[Fig 1: the design problem (specification and model parameters) enters the optimization process, which is either generic or specialized; simulation, performance analysis and performance estimation feed back into the optimization, whose output is the resource allocation and the optimized parameters.]
Fig 1 Mobile ad hoc network framework

II OPTIMIZATION PROBLEM

Optimization refers to the process of making a device or a collection of devices run more efficiently in terms of time and resources
(e.g., energy, memory). Optimization is a necessity for MANET management decisions due to the inherent individual and collective


resource limitations within the network. Mathematically, optimization entails minimizing or maximizing an objective function by
choosing values for the input variables from within an allowed set. An objective function is a mathematical expression made up of one
or more variables that are useful in evaluating solutions to a problem. [4]







[Fig 2: the ad hoc framework hands the optimization problem (input scenario) to AMPL and its problem solvers, which return the resource allocation as output.]
Fig 2 Optimization problem in AMPL

Mathematical explanation of optimization
Set: A = the set of feasible solutions to the objective function f
Variable: x = an element (a vector of input variables) of the feasible set A
Objective function: f = a given function
If the optimization problem calls for minimizing the result of the function, then we find an element x0 of the set A such that
f(x0) ≤ f(x) for all x ∈ A.
If the problem calls for maximizing the result, then we find an element x0 of the set A such that f(x0) ≥ f(x) for all x ∈ A.
The elements of the allowed set A are combinations of variable assignments that result in a feasible solution (a solution that satisfies
all of the constraints in the optimization problem). A feasible solution that minimizes or maximizes the value of the objective function
is called an optimal solution. [6]
The first step is to find the optimization method most appropriate to the set of control variables and objectives provided as input. If no
specialized algorithm is available in the framework for the specified problem, then the problem is formulated as a mathematical
program in the AMPL modeling language, as shown in Fig. 2.
An appropriate generic solver is then used to solve the program, depending on whether the objectives and constraints are linear or
nonlinear, and whether the variables are discrete or continuous. [12] The power control problem of minimizing power under a
signal-to-interference-plus-noise constraint is an example of a linear program which is optimized using this generic solver approach. If a
specialized method is available for the problem, the framework automatically uses it to find a solution. An example of a specialized
method is a heuristic packing procedure: it schedules a set of concurrent transmissions and ensures a chance for every node to transmit
at least once. [3]
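As a rough, self-contained illustration of that linear program (this is not the framework's AMPL model; the channel gains, SINR target and noise power below are assumed values), the same minimization can be posed with SciPy's generic LP solver:

```python
# Minimum total transmit power subject to per-link SINR constraints, as an LP.
# Per receiver i:  G[i,i]*p[i] >= gamma * (noise + sum_{j != i} G[i,j]*p[j]),
# which is linear in the power vector p and is rewritten below as A_ub @ p <= b_ub.
import numpy as np
from scipy.optimize import linprog

G = np.array([[1.0, 0.1, 0.2],    # illustrative gains: G[i, j] is the gain from
              [0.2, 0.8, 0.1],    # transmitter j to receiver i (assumed values)
              [0.1, 0.2, 0.9]])
gamma, noise = 2.0, 0.05          # assumed SINR target and receiver noise power

n = G.shape[0]
A_ub = gamma * G                  # off-diagonal terms: gamma * interference gains
np.fill_diagonal(A_ub, -np.diag(G))
b_ub = np.full(n, -gamma * noise)

res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
print("minimum-power allocation:", np.round(res.x, 4))
```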
III RELATED WORK
The generally accepted network design cycle consists of three steps: 1) developing a network model; 2) estimating the performance of
the proposed network through simulation; 3) manually adjusting the model parameters until an acceptable performance is achieved for
the target set of scenarios. Because of the complexity of networks and the large number of design parameters, changes in the design of a model
may have unintended effects. This project allows the designer to control high-level objectives instead of low-level decision
variables. It applies optimization theory to generalized networking models, combining existing optimization techniques with the
simulation capabilities of existing tools for the task of performance estimation.
IV SOFTWARE DESIGN
The ad hoc framework has two distinct forms: 1) an application with a graphical user interface and 2) a library with an application
programming interface. The former is an interface for human users while the latter is an interface for other programs that link against
it. One of the goals of the proposed framework as a network design tool is to provide a mechanism for comparing network
technologies. Each such model or algorithm is implemented in this framework in a modular way such that it can be swapped out for
any number of alternatives. The GUI provides a streamlined way of configuring multiple alternatives, and of comparing and testing them

through concurrent simulation and optimization. The API supports such an extension without any modification; only a set of
control parameters is added with the new extension. These parameters are then automatically added to the GUI through active
generative programming.
V NETWORK DESIGN
The resources which are to be efficiently allocated on an ad hoc wireless network are naturally distributed, residing either on the nodes
or the edges of the graphs that represent the network state. The algorithms in this framework are separated into two categories: 1)
centralized and 2) distributed. The former operates on single snapshots or on a time-averaged model of the global network state. The
latter operates as a control mechanism on the node.
VI POWER CONTROL ANALYSIS
A. Introduction
Allocation of physical resources (e.g., transmission power) based on knowledge of the network state is often complicated by the
presence of uncertainty in the available information. Therefore, when the characteristics of the wireless propagation channel are highly
dynamic or only noisy measurements are available, the framework represents the uncertainty as a collection of S samples of each
channel state Hij, which represent the range of values that each channel between a transmitter and a receiver can take on. The problem
of optimally allocating resources under such a statistical representation of the channels can be solved in the proposed model by
assuming the distribution mean for each channel state or by using an optimization method which seeks to quantify the dependability of
the resource allocation solution. [1]
A fundamental problem in this optimization method is the tradeoff between feasibility and optimality. It may be interpreted as a
multi-objective optimization problem with two objectives: maintain feasibility and seek optimality. With this view in mind, a Pareto
front can be constructed to demonstrate the tradeoff between the two objectives. The network designer then needs only to provide the
framework with 1) a requirement of sufficiently high feasibility or 2) a ceiling for the transmission power on the network.
B. Multi Objective Optimal
Multi-objective optimization (also known as multi-objective programming, vector optimization, multi-criteria optimization, multi-attribute
optimization or Pareto optimization) is an area of multiple-criteria decision making that is concerned with mathematical optimization
problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in
many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of
trade-offs between two or more conflicting objectives. [7]
In practical problems, there can be more than three objectives. For a nontrivial multi-objective optimization problem, there is no
single solution that simultaneously optimizes every objective. In that case, the objective functions are said to be conflicting, and there
exists a (possibly infinite) set of Pareto optimal solutions. A solution is called non-dominated, Pareto optimal, Pareto efficient or
non-inferior if none of the objective functions can be improved in value without impairment in some of the other objective values.
Without additional preference information, all Pareto optimal solutions can be considered mathematically equally good (as vectors
cannot be ordered completely). Researchers study multi-objective optimization problems from different viewpoints and, thus, there
exist different solution philosophies and goals when setting and solving them. The goal may be finding a representative set of Pareto
optimal solutions, and/or quantifying the trade-offs in satisfying the different objectives, and/or finding a single solution that satisfies
the preferences of a human decision maker. [2]
A multi-objective optimization problem is an optimization problem that involves multiple objective functions. In mathematical terms,
a multi-objective optimization problem can be formulated as [5]

min ( f1(x), f2(x), ..., fk(x) )  subject to  x ∈ X,

where the integer k is the number of objectives and the set X is the feasible set of decision vectors defined by constraint
functions. In addition, the vector-valued objective function is often defined as [8]

f(x) = ( f1(x), f2(x), ..., fk(x) ).

If some objective function is to be maximized, it is equivalent to minimize its negative. The image of X under f is the set of attainable
objective vectors. An element x ∈ X is called a feasible solution or a feasible decision. A vector z = f(x) for a feasible
solution x is called an objective vector or an outcome. In multi-objective optimization, there does not typically exist a feasible
solution that minimizes all objective functions simultaneously. [9] Therefore, attention is paid to Pareto optimal solutions, i.e.,
solutions that cannot be improved in any of the objectives without impairment in at least one of the other objectives. In mathematical
terms, a feasible solution x1 is said to (Pareto) dominate another solution x2 if
1. fi(x1) ≤ fi(x2) for all indices i ∈ {1, ..., k}, and
2. fj(x1) < fj(x2) for at least one index j. [7]
A solution x* (and the corresponding outcome f(x*)) is called Pareto optimal if there does not exist another solution that
dominates it. The set of Pareto optimal outcomes is often called the Pareto front. [11]
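A small sketch of how a Pareto front can be extracted from a finite set of outcomes (all objectives minimized; the outcome values are invented for the example):

```python
# Return the non-dominated (Pareto optimal) points among minimization outcomes.
def pareto_front(points):
    front = []
    for p in points:
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for q in points if q is not p
        )
        if not dominated:
            front.append(p)
    return front

# Illustrative (total transmit power, infeasibility) outcomes for candidate allocations.
outcomes = [(1.0, 0.30), (1.2, 0.10), (1.5, 0.05), (1.3, 0.12), (2.0, 0.06)]
print(pareto_front(outcomes))   # [(1.0, 0.3), (1.2, 0.1), (1.5, 0.05)]
```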
VII IMPLEMENTATION
























[Fig 3: the inputs (design problem, scenario specification, model parameters, optimized parameters, controllable resources) feed an ad hoc network module with neighbour node detection; resource allocation covers time slots and packet dynamics through a centralized time-slot algorithm, and rate and power control through a robust power control algorithm; the output is the resource allocation and a visualization of the sampling results.]
Fig 3 Framework Architecture
The framework in Fig. 3 takes model parameters, optimized parameters, and controllable resources such as power and time as input. The
nodes are created with initialized parameters such as distance and bandwidth, and each node automatically detects its neighbouring nodes
within its range. Multiple nodes are created, and any two nodes are selected as source and destination. N nodes are deployed randomly over a surface

uniformly. In a conventional multi-hop transmission, each source communicates with its intended destination through multiple
intermediate nodes (hops). The schedule guarantees each node at least one chance to transmit and ensures that concurrent
transmissions succeed. While the majority of works involve transmitting data through fixed clusters, we propose adapting the cluster
size dynamically throughout the network. In the first stage, the source node of a MIMO link performs a local transmission at a given
rate to a subset of its neighbors. This is followed by the simultaneous transmission of encoded versions of the same message by the
cooperating neighbors, including the source, to the destination of the MIMO link. The framework presents a joint power and rate
control adaptive algorithm to optimize the trade-off between power consumption and throughput in ad hoc networks. Each node
chooses its own transmission power and rate based on limited environment information in order to achieve optimal transmission
efficiency. Figs 4-7 show the node creation and optimal path finding process used to send packets from the source node to the
destination node.



Fig 4 Node 1 Creation Fig 5 Multiple Node Creation

Fig 6 Path Creation Fig 7 Send Content

A Simulation and Results

A trade-off is a situation that involves losing one quality or aspect of something in return for gaining another quality or
aspect. Pareto efficiency, or Pareto optimality, is a state of allocation of resources in which it is impossible to make any one individual
better off without making at least one individual worse off. The Pareto front formed by solving the power control problem allows the
network designer to choose an operating point based on the prioritization of the two objectives, transmit power and channel feasibility.
We first look at a single mobile network where the uncertainty in the channel state (represented by the set of samples of each channel)
comes from the changing topology due to the movement of the nodes. We then look at the effect of considering only a fraction of
nearest interferers at each active receiver.



Fig 8 Optimization of Mobile Ad Hoc Network simulation for a three-node topology
Fig 8 shows the throughput in packets per time slot under slotted Aloha for three transmitters as a function of the system contention
rate (defined as Np for N = 3 users and p the per-user contention probability). The figure shows the theoretical throughput under the
channel collision model (error free reception iff a transmitter is the sole transmitter in the slot), the measured throughput using the
WARP testbed when the nodes employ 64-QAM, and the simulated throughput under the framework when the nodes operate in the
collision channel model.
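For reference, the theoretical collision-model curve is straightforward to compute: a slot is successful iff exactly one of the N users transmits in it, giving a throughput of Np(1-p)^(N-1) packets per slot. A minimal sketch:

```python
# Theoretical slotted-Aloha throughput under the collision channel model.
def aloha_throughput(N: int, p: float) -> float:
    """Expected successful packets per slot for N users contending with probability p."""
    return N * p * (1 - p) ** (N - 1)

N = 3
for rate in (0.3, 0.6, 1.0, 1.5, 2.1):      # system contention rate Np, as in Fig 8
    p = rate / N
    print(f"Np = {rate:.1f}  throughput = {aloha_throughput(N, p):.3f}")
```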

B Pareto Optimal Tradeoff

The simulation setup is a mobile network of 100 nodes in a 1 km² square arena. The nodes are placed uniformly at random and then
move under a random waypoint model [10] at a speed of 2 m/s for a duration of 1 second, during which the channel
sampling process is performed. The transmission packing procedure is performed once and results in 10 unicast transmitter-receiver
pairs. The interference set I1 in (1) is used for computing the SINR; in other words, in this case, every active transmitter is defined in
the optimization problem as a potential source of interference. A single simulation run for 100 nodes with a full interference set takes
approximately 1 second to execute on a 2.4 GHz processor.

The bottom curve in Fig. 8 shows a Pareto front of solutions produced by the framework. Given the network topology, this solution set
provides the network designer a range of optimal transmission power allocations. The designer can then choose one of these solutions
based on the relative value of the power objective versus the feasibility objective.

C Shrinking the Interference Set Effect

The set of channel state samples is collected and optimized over for that single network, producing a Pareto front of solutions. In this
section, we consider the effect of k in the interference set Ikj, the set of the k closest interferers to active receiver j.





Fig 9 Tradeoff between the feasibility objective and the optimality objective (minimizing the total transmit power)


Fig. 9 shows the effect of decreasing k from the maximum of 9 down to 1 for the single network described in the previous section. This plot
provides an intuition that increasing k has diminishing returns; this pattern is examined more closely here. For the
objective function in the optimization problem, a more limited interference set is used. The importance of keeping k small and
independent of network size is twofold. First, a small constant k significantly reduces the complexity of the optimization problem, as
the size of the SINR computation in (1) no longer depends on the number of active transmitters and thus is independent of network
size. Second, a constant k removes the need for every transmitter-receiver pair to have channel state information from every other
interfering transmitter on the network to this pair's receiver. Therefore, the significant overhead of sharing this information between
nodes is removed, allowing distributed power control approaches to make resource allocation decisions without first gathering
channel state information from the whole network.
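A minimal sketch of an SINR computation restricted to the k nearest interferers (the function and the values are hypothetical, for illustration only):

```python
# SINR at a receiver counting only its k nearest interferers (the set Ikj above).
import math

def sinr_k_nearest(rx_pos, own_power, own_gain, interferers, k, noise):
    """interferers: list of (transmitter position, power, gain to this receiver)."""
    nearest = sorted(interferers, key=lambda t: math.dist(rx_pos, t[0]))[:k]
    interference = sum(power * gain for _pos, power, gain in nearest)
    return own_power * own_gain / (noise + interference)

# Illustrative values: three interferers, of which only the k = 2 closest count.
interferers = [((0.0, 50.0), 1.0, 0.02),
               ((0.0, 200.0), 1.0, 0.001),
               ((0.0, 80.0), 1.0, 0.01)]
print(sinr_k_nearest((0.0, 0.0), 1.0, 0.5, interferers, k=2, noise=0.01))  # 12.5
```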
VIII CONCLUSION
Power control in ad-hoc networks is a more difficult problem due to the non-availability of an access point in the network. The power
control problem is shaped by two factors. First, in ad-hoc networks, a node can be both a data source and a router that forwards data for other
nodes, and is involved in high-level routing and control protocols; additionally, the roles of a particular node may change over
time. Second, there is no centralized entity such as an access point to control and maintain the power control mode of each node in the
network. The power control analysis of the mobile ad hoc network system shows the tradeoffs and optimization approaches implemented
in the framework. The method for finding an optimized power allocation solves the power control problem. The empirical result
indicates that only a small number of nearest interfering transmitters have a significant effect on the feasibility of a channel.
REFERENCES:
[1] Fridman, A., Weber, S., Graff, C., Breen, D. E., Dandekar, R., and Kam, M., "OMAN: A Mobile Ad Hoc Network Design
System," vol. 11, no. 7, pp. 1179-1191, July 2012.
[2] Amin, R., Ashrafch, S., Akhtar, M. B., and Khan, A. A., "Analyzing performance of ad hoc network mobility models in a
peer-to-peer network application over mobile ad hoc network," Proc. 2010 Int'l Conf. Electronics and Information
Engineering (ICEIE), vol. 2, 2010.
[3] Varaprasad, G., "Power Aware and Signal Strength Based Routing Algorithm for Mobile Ad Hoc Networks," Proc. 2011
Int'l Conf. Communication Systems and Network Technologies (CSNT), pp. 131-134, 2011.
[4] Das, M., Panda, B. K., and Sahu, B., "Performance analysis of effect of transmission power in mobile ad hoc network,"
Proc. Ninth Int'l Conf. Wireless and Optical Communications Networks (WOCN), pp. 1-5, 2012.
[5] Lin, "Multiple-Objective Problems: Pareto-Optimal Solutions by Method of Proper Equality Constraints," IEEE Trans.
Automatic Control, vol. AC-21, no. 5, pp. 641-650, Oct. 1976; Agarwal, S., Katz, R., Krishnamurthy, S., and Dao, S.,
"Distributed Power Control in Ad-Hoc Wireless Networks," Proc. 12th IEEE Int'l Symp. Personal, Indoor and Mobile Radio
Comm., vol. 2, 2001.
[6] Chen, Y., Yu, G., Qiu, P., and Zhang, Z., "Power aware cooperative relay selection strategies in wireless ad-hoc networks,"
Proc. IEEE Int'l Symp. Personal, Indoor and Mobile Radio Communications, pp. 1-5, 2006.
[7] Hassan, Y. K., Abd El-Aziz, M. H., and Abd El-Radi, A. S., "Performance Evaluation of Mobility Speed over MANET
Routing Protocols," International Journal of Network Security, vol. 11, no. 3, pp. 128-138, Nov. 2010.
[8] Gupta, P., and Kumar, P., "Critical Power for Asymptotic Connectivity in Wireless Networks," Stochastic Analysis, Control,
Optimization and Applications: A Volume in Honor of W. H. Fleming, pp. 547-566, Springer, 1998.
[9] Camp, T., Boleng, J., and Davies, V., "A Survey of Mobility Models for Ad Hoc Network Research," Wireless Comm. and
Mobile Computing, vol. 2, no. 5, pp. 483-502, 2002.
[10] Jia, X., Kim, D., Makki, S., Wan, P., and Yi, C., "Power Assignment for k-Connectivity in Wireless Ad Hoc Networks,"
J. Combinatorial Optimization, vol. 9, no. 2, pp. 213-222, 2005.
[11] Dai, F., and Wu, J., "On Constructing k-Connected k-Dominating Set in Wireless Networks," Proc. 19th IEEE Int'l Parallel
and Distributed Processing Symp., 2005.
[12] Pradhan, N., and Saadawi, T., "Adaptive distributed power management algorithm for interference-aware topology control
in mobile ad hoc networks," Proc. IEEE Global Telecommunications Conference (GLOBECOM), 2010.

Analysis of Design of Cotton Picking Machine in view of Cotton Fibre Strength
Nikhil Gedam
Research Scholar (M.Tech), Raisoni College of Engineering, affiliated to RTM Nagpur University
Email: nikhilgedam8388@gmail.com

ABSTRACT - The mechanical cotton picker is a machine that automates cotton harvesting in a way that reduces harvest time and
maximizes efficiency. The mechanical cotton picker was developed with the intent of replacing manual labor. The first pickers were only
capable of harvesting one row of cotton at a time, but were still able to replace up to forty hand laborers.
The current cotton picker is a self-propelled machine that removes cotton lint and seed (seed-cotton) using rows of barbed
spindles that rotate at high speed and remove the seed-cotton from the plant. The seed-cotton is then removed from the spindles by a
counter-rotating doffer and is blown up into the basket; such machines harvest up to six rows at a time. The picker or spindle-type machine
was designed to pick the open cotton from the bolls using spindles, fingers, or prongs, without injuring the plant's foliage and
unopened bolls.
Cotton picking by a spindle-type machine results in higher short fibre content, and the resulting changes in micronaire and fibre length
indirectly lower the fibre strength quality compared with hand picking. To overcome this problem, a cotton picking machine working by
suction is proposed, applying a pressure equal to that of hand picking (about 100 g).
Keywords: cotton fibre, cotton harvesting, cotton fibre properties, cotton fibre testing, pneumatic cotton picking machine
Introduction
Cotton is primarily grown for its fiber, and its reputation and attraction are the natural feel and light weight of cotton fabrics. Heavy
competition from synthetic fibers dictates that continued improvement is needed in cotton fiber properties. There is then an
opportunity to exploit cotton fiber's advantages and enhance its reputation by improving and preserving its fiber qualities through the
growing and processing value chain to match those properties sought by cotton spinners, who require improved fiber properties:
longer, stronger, finer, more uniform and cleaner, to reduce waste, allow more rapid spinning to reduce production costs, and allow
better fabric and garment manufacture. Cotton fibers are naturally variable and it is a challenge to manage this variability. Our
experience with variability in fiber quality shows a substantial range across seasons and irrigated sites for micronaire (35%), with
lesser ranges for length and strength (<7%); note that lint yield had a 58% range across the same data set. If rain-grown systems are
included in such an analysis, yield and fiber length have a larger range due to moisture stress. Fiber strength is mostly affected by
cultivar unless the fiber is very immature.
To ensure the best realization of fiber quality from a cotton crop, the best combination of cultivar, management, climate and
processing is required. For example, if you start with a cultivar with poor fiber quality, there is nothing that can be done with
management and processing to make the quality better. However, if you start with a cultivar with good fiber quality traits, there is
some insurance against unfavorable conditions, but careful management and processing are still required to preserve quality.
Historically there have generally been greater production problems for low-micronaire (assumed immature) cotton, especially when
grown in relatively short-season production areas having a cooler and wetter finish to the season. The response by cotton breeders can
be to select for a higher micronaire in parallel with high yield during cultivar development. Given a negative association between yield
and fiber fineness (Price 1990), such a breeding strategy could produce cultivars with coarse and immature fibers, exactly the
opposite combination required by spinners. Thus, although more difficult, it is clear the breeding strategy should be to ensure selection
for intermediate micronaire with fine and mature fibers. Therefore separate measurements of fineness and maturity are important.
These require specialized instruments.
There are many measurements of cotton fiber quality and a corresponding range of measuring instruments. The more common
instruments in commercial use and in marketing are of the high volume type and this paper will concentrate on values measured on

Uster High Volume Instrumentation (HVI) or equivalent. The range of measurements include fiber length (and its components
uniformity, short fiber content); fiber strength (and elongation or extension); fiber micronaire (and fineness and maturity); grade
(including color, trash, neps, seed coat fragments). This paper will concentrate on fiber length, fiber strength and micronaire. All other
measurements are acknowledged as being important in many circumstances, but we will use length, strength and micronaire to
represent the effects that various factors such as cultivar,
management, climate or processing may have on fiber quality. We aim to review opportunities for breeding, management and
processing to optimize fiber quality under commercial practice.
Material and methods

Cotton harvesting by machine
The spindle-type cotton picking machine removes the cotton from open bolls.
The spindles, which rotate on their axes at a high speed, are attached to a drum that also turns, causing the spindles to enter the plant.
The cotton fibre is wrapped around the moistened spindles and then taken off by a special device called the doffer, from which the
cotton is delivered to a large basket carried above the machine. During wrapping of the cotton fibre around the spindle bars, the fibre
is stretched, resulting in a loss of fibre quality in terms of short fibre content; the likely increase in short fibre content and trash
degrades the cotton fibre characteristics.

BASIC FIBRE CHARACTERISTICS:
A textile fibre is a peculiar object. It has no truly fixed length, width, thickness, shape or cross-section. The growth of natural fibres,
or the production factors of man-made fibres, is responsible for this situation. An individual fibre, if examined carefully, will be seen to vary
in cross-sectional area along its length. This may be the result of variations in growth rate caused by dietary, metabolic, nutrient-supply,
seasonal, weather, or other factors influencing the rate of cell development in natural fibres. Surface characteristics also play
some part in increasing the variability of fibre shape: the scales of wool, the twisted arrangement of cotton, the nodes appearing at
intervals along the cellulosic natural fibres, and so on.
Following are the basic characteristics of cotton fibre:
- fibre length
- fineness
- strength
- maturity
- rigidity
- fibre friction
- structural features
STANDARD ATMOSPHERE FOR TESTING:
The atmosphere in which physical tests on textile materials are performed. It has a relative humidity of 65 ± 2 per cent and a
temperature of 20 ± 2 °C. In tropical and sub-tropical countries, an alternative standard atmosphere for testing, with a relative humidity
of 65 ± 2 per cent and a temperature of 27 ± 2 °C, may be used.
FIBRE LENGTH:
The "length" of cotton fibres is a property of commercial value as the price is generally based on this character. To some extent it is

true, as other factors being equal, longer cottons give better spinning performance than shorter ones. But the length of a cotton is an
indefinite quantity, as the fibres, even in a small random bunch of a cotton, vary enormously in length. Following are the various
measures of length in use in different countries:
- mean length
- upper quartile length
- effective length
- modal length
- 2.5% span length
- 50% span length
Mean length:
It is the estimated quantity which theoretically signifies the arithmetic mean of the length of all the fibres present in a small but
representative sample of the cotton. This quantity can be an average according to either number or weight.
Upper quartile length:
It is that value of length for which 75% of all the observed values are lower, and 25% higher.
Effective length:
It is difficult to give a clear scientific definition. It may be defined as the upper quartile of the numerical length distribution that remains
after short fibres have been eliminated by an arbitrary construction; the fibres eliminated are shorter than half the effective length.
Modal length:
It is the most frequently occurring length of the fibres in the sample, and it is related to the mean and median for skew distributions,
such as that exhibited by fibre length, in the following way:

(Mode-Mean) = 3(Median-Mean)
where,
Median is the particular value of length above and below which exactly 50% of the fibres lie.
2.5% Span length:
It is defined as the distance spanned by 2.5% of fibres in the specimen being tested when the fibres are parallelized and randomly
distributed and where the initial starting point of the scanning in the test is considered 100%. This length is measured using
"DIGITAL FIBROGRAPH".
50% Span length:
It is defined as the distance spanned by 50% of fibres in the specimen being tested when the fibres are parallelized and randomly
distributed and where the initial starting point of the scanning in the test is considered 100%. This length is measured using
"DIGITAL FIBROGRAPH".
The South India Textile Research Association (SITRA) gives the following empirical relationships to estimate the Effective Length
and Mean Length from the Span Lengths.
Effective length = 1.013 x 2.5% Span length + 4.39
Mean length = 1.242 x 50% Span length + 9.78
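These two relations translate directly into code (a small sketch; the span-length inputs are assumed values for a medium-staple cotton):

```python
# SITRA empirical relations quoted above: estimate effective length and mean
# length (mm) from the 2.5% and 50% span lengths measured on a Fibrograph.
def effective_length(span_2_5: float) -> float:
    return 1.013 * span_2_5 + 4.39

def mean_length(span_50: float) -> float:
    return 1.242 * span_50 + 9.78

s25, s50 = 28.0, 13.0            # assumed span lengths in mm
print(f"effective length = {effective_length(s25):.1f} mm")   # ~32.8 mm
print(f"mean length      = {mean_length(s50):.1f} mm")        # ~25.9 mm
```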
FIBRE LENGTH VARIATION:
Even though, the long and short fibres both contribute towards the length irregularity of cotton, the short fibres are particularly
responsible for increasing the waste losses, and cause unevenness and reduction in strength in the yarn spun. The relative proportions
of short fibres are usually different in cottons having different mean lengths; they may even differ in two cottons having nearly the

same mean fibre length, rendering one cotton more irregular than the other. It is therefore important that, in addition to the fibre length
of a cotton, the degree of irregularity of its length should also be known. Variability is denoted by any one of the following attributes:
1. co-efficient of variation of length (by weight or number)
2. irregularity percentage
3. dispersion percentage and percentage of short fibres
4. uniformity ratio
Uniformity ratio is defined as the ratio of 50% span length to 2.5% span length expressed as a percentage. Several instruments and
methods are available for determination of length. Following are some:
- shirley comb sorter
- Baer sorter
- A.N. Stapling apparatus
- Fibrograph
uniformity ratio = (50% span length / 2.5% span length) x 100
uniformity index = (mean length / upper half mean length) x 100
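As a quick sketch of these two percentages (the span lengths reuse the illustrative values above; the upper half mean length is an additional assumed value):

```python
# Uniformity measures defined above, expressed as percentages.
def uniformity_ratio(span_50: float, span_2_5: float) -> float:
    return 100.0 * span_50 / span_2_5

def uniformity_index(mean_len: float, upper_half_mean: float) -> float:
    return 100.0 * mean_len / upper_half_mean

print(f"uniformity ratio = {uniformity_ratio(13.0, 28.0):.0f}%")    # ~46%
print(f"uniformity index = {uniformity_index(25.9, 31.0):.0f}%")    # ~84%
```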
SHORT FIBRES:
The negative effects of the presence of a high proportion of short fibres are well known. A high percentage of short fibres is usually
associated with:
- Increased yarn irregularity and ends down which reduce quality and increase processing costs
- Increased number of neps and slubs which is detrimental to the yarn appearance
- Higher fly liberation and machine contamination in spinning, weaving and knitting operations.
- Higher wastage in combing and other operations.
While the detrimental effects of short fibres have been well established, there is still considerable debate on what constitutes a 'short
fibre'. In the simplest way, short fibres are defined as those fibres which are less than 12 mm long. Initially, an estimate of the short
fibres was made from the staple diagram obtained in the Baer Sorter method


Short fibre content = (UB/OB) x 100
While such a simple definition of short fibres is perhaps adequate for characterising raw cotton samples, it is too simple a definition to
use with regard to the spinning process. The setting of all spinning machines is based on either the staple length of fibres or its
equivalent which does not take into account the effect of short fibres. In this regard, the concept of 'Floating Fibre Index' defined by

Hertel (1962) can be considered to be a better parameter to consider the effect of short fibres on spinning performance. Floating fibres
are defined as those fibres which are not clamped by either pair of rollers in a drafting zone.
Floating Fibre Index (FFI) was defined as
FFI = ((2.5% span length/mean length)-1)x(100)
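The index is a one-line computation (a sketch; the lengths reuse the illustrative values above):

```python
# Floating Fibre Index (Hertel 1962) as defined above.
def floating_fibre_index(span_2_5: float, mean_len: float) -> float:
    return ((span_2_5 / mean_len) - 1.0) * 100.0

print(f"FFI = {floating_fibre_index(28.0, 25.9):.1f}")   # ~8.1
```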
The proportion of short fibres has an extremely great impact on yarn quality and production, and it has increased substantially in
recent years due to mechanical picking and hard ginning. In most cases the absolute short fibre proportion is specified today as the
percentage of fibres shorter than 12 mm. The Fibrograph is the most widely used instrument in the textile industry; some information
regarding the Fibrograph is given below.
FIBROGRAPH:
Fibrograph measurements provide a relatively fast method for determining the length uniformity of the fibres in a sample of cotton in
a reproducible manner.
Results of Fibrograph length tests do not necessarily agree with those obtained by other methods for measuring the lengths of cotton
fibres because of the effect of fibre crimp and other factors.
Fibrograph tests are more objective than commercial staple length classifications and also provide additional information on the fibre
length uniformity of cotton fibres. The cotton quality information provided by these results is used in research studies and quality
surveys, in checking commercial staple length classifications, in assembling bales of cotton into uniform lots, and for other purposes.
Fibrograph measurements are based on the assumptions that a fibre is caught on the comb in proportion to its length as compared to
the total length of all fibres in the sample, and that the point of catch for a fibre is at random along its length.


FIBRE FINENESS:
Fibre fineness is another important quality characteristic which plays a prominent part in determining the spinning value of cottons. If
the same count of yarn is spun from two varieties of cotton, the yarn spun from the variety having finer fibres will have a
larger number of fibres in its cross-section and hence will be more even and strong than that
spun from the sample with coarser fibres.

Fineness denotes the size of the cross-sectional dimensions of the fibre. As the cross-sectional features of cotton fibres are irregular,
direct determination of the area of cross-section is difficult and laborious. The index of fineness which is more commonly used is the
linear density, or weight per unit length, of the fibre. The unit in which this quantity is expressed varies in different parts of the world.
The common unit used by many countries for cotton is micrograms per inch, and the various air-flow instruments developed for
measuring fibre fineness are calibrated in this unit.
Following are some methods of determining fibre fineness.
- gravimetric or dimensional measurements
- air-flow method
- vibrating string method
Some of the above methods are applicable to single fibres, while the majority of them deal with a mass of fibres. As there is
considerable variation in the linear density from fibre to fibre, even amongst fibres of the same seed, single-fibre methods are
time-consuming and laborious, as a large number of fibres have to be tested to get a fairly reliable average value.
It should be pointed out here that most fineness determinations are likely to be affected by fibre maturity, which is another
important characteristic of cotton fibres.
AIR-FLOW METHOD (MICRONAIRE INSTRUMENT):

The resistance offered to the flow of air through a plug of fibres is dependent upon the specific surface area of the fibres. Fineness
testers have been developed on this principle for determining the fineness of cotton. The specific surface area which determines the flow of
air through a cotton plug is dependent not only upon the linear density of the fibres in the sample but also upon their maturity. Hence
the micronaire readings have to be treated with caution, particularly when testing samples varying widely in maturity.
In the micronaire instrument, a weighed quantity of 3.24 g of well-opened cotton sample is compressed into a cylindrical container
of fixed dimensions. Compressed air is forced through the sample at a definite pressure and the volume rate of flow of air is measured
by a rotameter-type flowmeter. The sample for the micronaire test should be well opened, cleaned and thoroughly mixed (by the hand
fluffing and opening method). Of the various air-flow instruments, the micronaire is robust in construction, easy to operate and presents
little difficulty as regards its maintenance.
FIBRE MATURITY:
Fibre maturity is another important characteristic of cotton and is an index of the extent of development of the fibres. As is the case
with other fibre properties, the maturity of cotton fibres varies not only between fibres of different samples but also between fibres of
the same seed. The differences observed in maturity are due to variations in the degree of secondary thickening or deposition of
cellulose in the fibre.
A cotton fibre consists of a cuticle, a primary layer and secondary layers of cellulose surrounding the lumen or central canal. In the
case of mature fibres, the secondary thickening is very high, and in some cases the lumen is not visible. In the case of immature
fibres, due to some physiological causes, the secondary deposition of cellulose has not taken place sufficiently, and in extreme cases
the secondary thickening is practically absent, leaving a wide lumen throughout the fibre. Hence, to a cotton breeder, the presence of
excessive immature fibres in a sample would indicate some defect in plant growth. To a technologist, the presence of an excessive
percentage of immature fibres in a sample is undesirable, as this causes excessive waste losses in processing, lowering of the yarn
appearance grade due to formation of neps, uneven dyeing, etc.
An immature fibre will show a lower weight per unit length than a mature fibre of the same cotton, as the former will have less
deposition of cellulose inside the fibre. This analogy can be extended in some cases to fibres belonging to different samples of cotton.
Hence it is essential to measure the maturity of a cotton sample in addition to determining its fineness, to check whether the
observed fineness is an inherent characteristic or a result of the maturity.

DIFFERENT METHODS OF TESTING MATURITY:
MATURITY RATIO:
The fibres, after being swollen with 18% caustic soda, are examined under the microscope at a suitable magnification. The fibres are
classified into different maturity groups depending upon the relative dimensions of wall thickness and lumen. However, the procedures
followed in different countries for sampling and classification differ in certain respects. The swollen fibres are classed into three
groups as follows:
1. Normal: rod-like fibres with no convolutions and no continuous lumen are classed as "normal"
2. Dead: convoluted fibres with wall thickness one-fifth or less of the maximum ribbon width are classed as "dead"
3. Thin-walled: the intermediate fibres are classed as "thin-walled"
A combined index known as the maturity ratio is used to express the results:
Maturity ratio = ((N - D)/200) + 0.70
where
N = percentage of normal fibres
D = percentage of dead fibres
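A small sketch of the maturity ratio computation (the percentages are invented for the example):

```python
# Maturity ratio from the swollen-fibre classification described above.
def maturity_ratio(normal_pct: float, dead_pct: float) -> float:
    """N and D are the percentages of 'normal' and 'dead' fibres in the sample."""
    return (normal_pct - dead_pct) / 200.0 + 0.70

# Illustrative counts: 70% normal, 10% dead (the remainder thin-walled).
print(f"maturity ratio = {maturity_ratio(70, 10):.2f}")   # 1.00
```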
MATURITY CO-EFFICIENT:
Around 100 fibres from Baer sorter combs are spread across a glass slide (maturity slide) and the overlapping fibres are separated
with the help of a teasing needle. The free ends of the fibres are then held in the clamp on the second strip of the maturity slide, which
is adjustable so as to keep the fibres stretched to the desired extent. The fibres are then irrigated with 18% caustic soda solution
and covered with a suitable slip. The slide is then placed on the microscope and examined. Fibres are classed into the following three
categories:
1. Mature: (lumen width L)/(wall thickness W) is less than 1
2. Half mature: (lumen width L)/(wall thickness W) is less than 2 and more than 1
3. Immature: (lumen width L)/(wall thickness W) is more than 2
About four to eight slides are prepared from each sample and examined. The results are presented as the percentages of mature,
half-mature and immature fibres in the sample. The results are also expressed in terms of the maturity coefficient:
Maturity coefficient = (M + 0.6H + 0.4I)/100
where
M is the percentage of mature fibres
H is the percentage of half-mature fibres
I is the percentage of immature fibres
If maturity coefficient is
- less than 0.7, it is called as immature cotton
- between 0.7 to 0.9, it is called as medium mature cotton
- above 0.9, it is called as mature cotton
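As a quick illustration, both indices can be computed directly from the fibre counts. The following minimal Python sketch (function and variable names are my own, not from the text) encodes the formulas and class boundaries given above.

    def maturity_ratio(normal_pct, dead_pct):
        """Maturity ratio = ((N - D) / 200) + 0.70."""
        return (normal_pct - dead_pct) / 200.0 + 0.70

    def maturity_coefficient(mature_pct, half_mature_pct, immature_pct):
        """Maturity coefficient = (M + 0.6*H + 0.4*I) / 100."""
        return (mature_pct + 0.6 * half_mature_pct + 0.4 * immature_pct) / 100.0

    def classify(coeff):
        """Class boundaries as given in the text."""
        if coeff < 0.7:
            return "immature"
        if coeff <= 0.9:
            return "medium mature"
        return "mature"

    # Example: 70% mature, 20% half mature, 10% immature fibres
    c = maturity_coefficient(70, 20, 10)
    print(round(c, 3), classify(c))   # 0.86 medium mature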
AIR FLOW METHOD FOR MEASURING MATURITY:
There are other techniques for measuring maturity, using the Micronaire instrument. As the fineness value determined by the
Micronaire depends both on the intrinsic fineness (perimeter of the fibre) and on the maturity, it may be assumed that, if the intrinsic
fineness is constant, the Micronaire value is a measure of the maturity.
DYEING METHODS:
Mature and immature fibres differ in their behaviour towards various dyes. Certain dyes are preferentially taken up by the mature
fibres, while some dyes are preferentially absorbed by the immature fibres. Based on this observation, a differential dyeing technique
was developed in the United States of America for estimating the maturity of cotton. In this technique, the sample is dyed in a bath
containing a mixture of two dyes, namely Diphenyl Fast Red 5 BL and Chlorantine Fast Green BLL. The mature fibres take up the
red dye preferentially, while the thin-walled immature fibres take up the green dye. An estimate of the average maturity of the
sample can be made visually from the proportion of red and green fibres.
FIBRE STRENGTH:
The different measures available for reporting fibre strength are
1. breaking strength
2. tensile strength and
3. tenacity or intrinsic strength
Coarse cottons generally give higher values for fibre strength than finer ones. In order to compare the strength of two cottons
differing in fineness, it is necessary to eliminate the effect of the difference in cross-sectional area by dividing the observed fibre
strength by the fibre weight per unit length. The value so obtained is known as the "INTRINSIC STRENGTH" or "TENACITY".
Tenacity is found to be better related to spinning than the breaking strength.
The strength characteristics can be determined either on individual fibres or on bundles of fibres.
SINGLE FIBRE STRENGTH:
The tenacity of a fibre depends upon the following factors:
- chain length of the molecules in the fibre
- orientation of the molecules
- size of the crystallites
- distribution of the crystallites
- gauge length used
- rate of loading
- type of instrument used
- atmospheric conditions
The mean single fibre strength determined is expressed in units of grams/tex. As the unit for tenacity has the dimension of length
only, this property is also expressed as the "BREAKING LENGTH", which can be considered as the length of the specimen
equivalent in weight to the breaking load. Since tex is the mass in grams of one kilometre of fibre, a tenacity of x g/tex corresponds
to a breaking length of x kilometres.
Uniformity
Length uniformity is the ratio between the mean length and the upper half mean length of the cotton fibres within a sample. It is
measured on the same beards of cotton that are used for measuring fibre length and is reported as a percentage. The higher the
percentage, the greater the uniformity. If all the fibres in the sample were of the same length, the mean length and the upper half mean
length would be the same, and the uniformity index would be 100. The following tabulation can be used as a guide in interpreting
length uniformity results. Measurements are performed by HVI. Cotton with a low uniformity index is likely to have a high percentage
of short fibres and may be difficult to process.
Length uniformity index

Descriptive Designation    Length Uniformity (%)
Very Low                   Below 77
Low                        77 - 79
Average                    80 - 82
High                       83 - 85
Very High                  Above 85
Result
Length, uniformity ratio, elongation, strength and Micronaire value of a cotton fibre can be determined by analysing its basic
characteristics on the HVI-900 testing machine.

Cotton harvested by hand was tested on the HVI-900 with Gossypium hirsutum samples, as follows:


Property                S1      S2      S3      S4      S5
Length, mm              30.1    29.9    29.87   30.5    30.2
Uniformity, %           54.20   53.21   53.9    50.9    52.9
Strength, g/tex         29.20   28.9    29.1    29.5    27.65
Elongation, %           5.6     5.8     5.0     5.4     5.3
Short fibre index, %    9.2     9.7     9.2     9.4     9.0
Micronaire              4.5     4.5     4.4     4.5     4.4

Cotton harvested by machine was tested on the HVI-900 with Gossypium hirsutum samples, as follows:

Property                S1      S2      S3      S4      S5
Length, mm              26.9    26.4    27.2    27      26.93
Uniformity, %           51      50.3    51.2    51.30   50.15
Strength, g/tex         26.2    26.8    26.9    27.10   26.44
Elongation, %           5.4     5.6     4.7     5.1     5.0
Short fibre index, %    13.0    12.9    12.60   12.90   12.55
Micronaire              4.1     4.1     4.0     4.2     4.0
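To make the comparison between the two tables concrete, each property can be averaged over the five samples. The short Python sketch below (data transcribed from the tables above; variable names are my own) prints the mean of each property for the two harvesting methods and their difference.

    import numpy as np

    # HVI-900 readings transcribed from the two tables above
    properties = ["Length, mm", "Uniformity, %", "Strength, g/tex",
                  "Elongation, %", "SFI, %", "Micronaire"]
    hand = np.array([
        [30.1, 29.9, 29.87, 30.5, 30.2],
        [54.20, 53.21, 53.9, 50.9, 52.9],
        [29.20, 28.9, 29.1, 29.5, 27.65],
        [5.6, 5.8, 5.0, 5.4, 5.3],
        [9.2, 9.7, 9.2, 9.4, 9.0],
        [4.5, 4.5, 4.4, 4.5, 4.4],
    ])
    machine = np.array([
        [26.9, 26.4, 27.2, 27.0, 26.93],
        [51.0, 50.3, 51.2, 51.30, 50.15],
        [26.2, 26.8, 26.9, 27.10, 26.44],
        [5.4, 5.6, 4.7, 5.1, 5.0],
        [13.0, 12.9, 12.60, 12.90, 12.55],
        [4.1, 4.1, 4.0, 4.2, 4.0],
    ])

    # Mean over the five samples and the hand-minus-machine difference
    for name, h, m in zip(properties, hand.mean(axis=1), machine.mean(axis=1)):
        print(f"{name:18s} hand={h:6.2f} machine={m:6.2f} diff={h - m:+.2f}")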

CONCLUSION

The purpose of this study was to evaluate the impact of the harvesting method on cotton fibre quality. Although it is practical to use
a spindle harvester designed to pick cotton from wider row spacings, machine harvesting damages the quality of the fibre compared
to hand picking. To overcome this problem, cotton should be harvested with nearly the same picking pressure as is imparted in hand
picking; for that purpose a pneumatic cotton picking machine with a suction mechanism was developed, to pick cotton of good
quality.

A Review on: Comparison and Analysis of Edge Detection Techniques
Parminder Kaur¹, Ravi Kant²
¹Department of ECE, PTU, RBIEBT, Kharar
²Assistant Professor, Head of ECE Department, RBIEBT, Kharar
E-mail: pinki_sidhu81@yahoo.com

ABSTRACT - This paper compares different edge detection techniques on real images in the presence of noise and then calculates
the signal-to-noise ratio. Edge detection is a tool used in shape, colour and contrast detection, image segmentation, scene analysis,
etc.; it provides information about the intensity changes at each point of an image. In this paper, various edge detection techniques
are compared and their visual performance under noisy conditions is analysed, using methods such as Canny, LoG (Laplacian of
Gaussian), Roberts, Prewitt, Sobel, Laplacian and wavelet. These methods exhibit different performance under such conditions.

Keywords—Edge detection, image processing.

INTRODUCTION
Edges are significant local changes of intensity in an image; they are formed by connecting groups of pixels that lie on the boundary
between two different regions of the image. A local maximum of the first derivative marks the position of an edge. The gradient is
used to measure the intensity change at a particular point; it is expressed in terms of two quantities: the gradient magnitude and the
gradient orientation.
The objective here is to compare various edge detection techniques and to analyse their performance under different conditions.
There are many methods for performing edge detection, and the majority of them may be grouped into two categories [1]. In this
paper only 1-D and 2-D edge detection techniques are used.

EDGE DETECTION TECHNIQUES
Sobel Operator
It was introduced by Irwin Sobel in 1970. The operator consists of a pair of 3×3 convolution kernels, as shown in Fig. 1; one kernel
is simply the other rotated by 90°. A convolution kernel provides a way to multiply two arrays of numbers of different sizes but of
the same dimensionality. This can be used to implement operators in digital image processing where output pixel values are simple
linear combinations of certain input pixel values. The kernel is thus a small matrix of numbers used in image convolutions;
different-sized kernels containing different patterns of numbers give rise to different results under convolution. Convolution is done
by moving the kernel across the frame one pixel at a time. At each position, the pixel and its neighbours are weighted by the
corresponding values in the kernel and summed to produce a new value. The Gx and Gy components of the gradient are calculated
by subtracting the upper row from the lower row and the left column from the right column. The gradient magnitude is given by:

|G| = √(Gx² + Gy²)

      Gx                 Gy
  -1   0  +1        +1  +2  +1
  -2   0  +2         0   0   0
  -1   0  +1        -1  -2  -1

Fig. 1: Sobel convolution kernels

Typically, an approximate magnitude is computed using:

|G| = |Gx| + |Gy|

This is much faster to compute. Because of the smoothing it introduces, the operator tends to produce thicker edge responses; the
kernels are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, so it mainly detects
horizontal and vertical gradients and is less suited to diagonal edges. The Sobel operator performs a 2-D spatial gradient
measurement on an image and so emphasizes regions of high spatial frequency that correspond to edges. Typically it is used to find
the approximate absolute gradient magnitude at each point in an input grayscale image [1]. The angle of orientation of the edge
(relative to the pixel grid) giving rise to the spatial gradient is given by:
θ = arctan(Gy/Gx)
By convention, an angle of 0 means that the direction of maximum contrast from black to white runs from left to right on the image.
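The convolution procedure just described can be sketched in a few lines of Python. This is an illustrative implementation of mine (using numpy and scipy), not code from the paper.

    import numpy as np
    from scipy.ndimage import convolve

    # Sobel kernels as in Fig. 1
    GX = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    GY = np.array([[ 1,  2,  1],
                   [ 0,  0,  0],
                   [-1, -2, -1]], dtype=float)

    def sobel_edges(image):
        """Return gradient magnitude |G| and orientation theta of a grayscale image."""
        gx = convolve(image.astype(float), GX)   # response to vertical edges
        gy = convolve(image.astype(float), GY)   # response to horizontal edges
        magnitude = np.hypot(gx, gy)             # |G| = sqrt(Gx^2 + Gy^2)
        theta = np.arctan2(gy, gx)               # edge orientation
        return magnitude, theta

The faster approximation from the text would simply be np.abs(gx) + np.abs(gy).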
Robert's cross operator: It was introduced by Lawrence Gilman Roberts in 1965. This type of detector is very sensitive to noise and
operates directly on pixels. The Roberts cross operator performs a simple, quick-to-compute, 2-D spatial gradient measurement on
an image. It highlights regions of high spatial frequency, which often correspond to edges.




      Gx            Gy
  +1   0          0  +1
   0  -1         -1   0

Fig. 2: Roberts cross convolution kernels
In its most common usage, the input to the operator is a grayscale image. Pixel values at each point in the output represent the
estimated absolute magnitude of the spatial gradient of the input image at that point. The operator consists of a pair of 2×2
convolution kernels, involving only addition and subtraction; one kernel is simply the other rotated by 90°. This is
very similar to the Sobel operator. In this detector the parameters are fixed and cannot be changed. Convolution is done by moving
the kernel across the frame one pixel at a time. At each position, the pixel and its neighbours are weighted by the corresponding
values in the kernel and summed to produce a new value. The Gx and Gy components of the gradient are calculated by subtracting
the upper row from the lower row and the left column from the right column. The gradient magnitude is given by:

|G| = √(Gx² + Gy²)

The angle of orientation of the edge giving rise to the spatial gradient (relative to the pixel grid orientation) is given by [2]:

θ = arctan(Gy/Gx) - 3π/4
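For completeness, here is a minimal Roberts cross sketch under the same conventions as the Sobel example above (again illustrative, not from the paper):

    import numpy as np
    from scipy.ndimage import convolve

    # 2x2 Roberts cross kernels as in Fig. 2
    RX = np.array([[1, 0], [0, -1]], dtype=float)
    RY = np.array([[0, 1], [-1, 0]], dtype=float)

    def roberts_edges(image):
        """Gradient magnitude via the Roberts cross operator."""
        gx = convolve(image.astype(float), RX)
        gy = convolve(image.astype(float), RY)
        return np.hypot(gx, gy)  # |G| = sqrt(Gx^2 + Gy^2)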

PREWITT'S OPERATOR:
It was introduced in 1970 by Judith M. S. Prewitt. It is similar to the Sobel edge detector but uses different masks. Prewitt is less
sensitive to noise than the Roberts edge detector. It is used in image processing for edge detection [3].

      Gx                 Gy
  -1   0  +1        +1  +1  +1
  -1   0  +1         0   0   0
  -1   0  +1        -1  -1  -1

Fig. 3: Masks for the Prewitt edge detector

The Prewitt operator is based on convolving the image with a small, separable, integer-valued filter in the horizontal and vertical
directions, and is inexpensive in terms of computation. Convolution is done by moving the kernel across the frame one pixel at a
time; at each position the pixel and its neighbours are weighted by the corresponding values in the kernel and summed to produce a
new value. The Gx and Gy components of the gradient are calculated by subtracting the upper row from the lower row and the left
column from the right column. The operator calculates the gradient of the image intensity at each point, giving the direction of the
largest possible increase from light to dark and the rate of change in that direction. The result shows how abruptly or smoothly the
image changes at that point.

LAPLACIAN OF GAUSSIAN:
It was introduced by David Marr and Ellen C. Hildreth, and as a second-order derivative operator it is also known as the
Marr-Hildreth edge detector. It combines the Laplacian with Gaussian smoothing, so the image is smoothed to a greater extent. The
Laplacian of an image highlights regions of rapid intensity change and is therefore often used for edge detection (zero-crossing
edge detectors). To reduce its sensitivity to noise, the Laplacian is normally applied to an image that has first been smoothed with
something approximating a Gaussian smoothing filter, and the two variants are described together here. The operator normally
takes a single grey-level image as input and produces another grey-level image as output [4]. This pre-processing step reduces the
high-frequency noise components prior to the differentiation step. Here x is the distance from the origin along the horizontal axis, y
is the distance from the origin along the vertical axis, and σ is the spread of
the Gaussian and controls the degree of smoothing: the greater the value of σ, the broader the Gaussian filter and the stronger the
smoothing. The LoG kernel is given by:

LoG(x, y) = -(1/(πσ⁴)) · [1 - (x² + y²)/(2σ²)] · exp(-(x² + y²)/(2σ²))

Two commonly used small kernels approximating this filter are shown in Fig. 4.

   1   1   1        -1   2  -1
   1  -8   1         2  -4   2
   1   1   1        -1   2  -1

Fig. 4: Commonly used discrete approximations to the Laplacian filter [5-9]
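A minimal sketch of LoG edge detection in Python (illustrative; scipy's gaussian_laplace is used for the smoothing-plus-Laplacian step, and the zero-crossing test is a simple sign-change check of my own):

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def log_edges(image, sigma=2.0):
        """Zero crossings of the Laplacian of Gaussian mark the edge positions."""
        log = gaussian_laplace(image.astype(float), sigma=sigma)
        edges = np.zeros_like(log, dtype=bool)
        # a pixel is an edge candidate where LoG changes sign horizontally or vertically
        edges[:, :-1] |= np.sign(log[:, :-1]) != np.sign(log[:, 1:])
        edges[:-1, :] |= np.sign(log[:-1, :]) != np.sign(log[1:, :])
        return edges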

CANNY EDGE DETECTION
The Canny operator was designed to be an optimal edge detector. It was introduced by John F. Canny in 1986. It takes a grey-scale
image as input and produces as output an image showing the positions of tracked intensity discontinuities.

WORKING:
It uses a maximum and a minimum threshold: if the gradient magnitude at a pixel lies between the two thresholds, it is set to zero
unless there is a path from this pixel to a pixel with a gradient above the upper threshold T2. The edge strength is found by taking
the gradient of the image; the mask used for the gradient step can be a Sobel or Roberts mask [10-12]. The magnitude, or edge
strength, of the gradient is approximated using the formula:
|G| = |Gx| + |Gy|
The edge direction is found using:
θ = arctan(Gy/Gx)
Convolution is done by moving the kernel across the frame one pixel at a time; at each position the pixel and its neighbours are
weighted by the corresponding values in the kernel and summed to produce a new value.
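Hysteresis-thresholded Canny detection is also available off the shelf; here is a short sketch using OpenCV (the threshold values 100 and 200 are arbitrary examples of mine):

    import cv2

    image = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # grey-scale input
    # 100 and 200 are the minimum and maximum hysteresis thresholds (T1, T2)
    edges = cv2.Canny(image, 100, 200)
    cv2.imwrite("edges.png", edges)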

LAPLACIAN EDGE DETECTION METHOD:

The Laplacian method searches for zero crossings, i.e. points whose second-derivative value changes sign relative to those
surrounding them, and uses them to find edges. The Laplace operator is named after the French mathematician Pierre-Simon de
Laplace (1749-1827). The image is generally smoothed first to remove noise [13]. The Laplacian L(x, y) of an image with pixel
intensity values I(x, y) is given by:

L(x, y) = ∂²I/∂x² + ∂²I/∂y²

   0   1   0
   1  -4   1
   0   1   0

Fig. 5: Mask of the Laplacian


HAAR WAVELET:

It was introduced by Alfréd Haar in 1910. A wavelet is a combination of small waves. When applied to an image, it provides
approximation and detail coefficient information about the image. The detail coefficients, which contain the high-frequency
information, are the ones used to detect the edges, while the approximation coefficients contain the low-frequency information. The
transform decomposes a discrete signal into two sub-signals of half its length: one sub-signal is a running average and the other a
running difference. There are other types of wavelets, but the Haar wavelet is the origin of the other wavelets, and it too can be used
as an edge detection method. The Haar wavelet consists of rapid fluctuations between just two non-zero values, with an average
value of 0; the 1-level Haar wavelet represents the 1-level fluctuations [12].
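The running-average / running-difference decomposition is easy to express directly in numpy. The following 1-level Haar transform is an illustrative sketch of mine, applied to a single row of pixel values; the detail channel is what flags the edges.

    import numpy as np

    def haar_1level(signal):
        """1-level Haar decomposition of an even-length 1-D signal."""
        s = np.asarray(signal, dtype=float)
        average = (s[0::2] + s[1::2]) / np.sqrt(2)     # running average (approximation)
        difference = (s[0::2] - s[1::2]) / np.sqrt(2)  # running difference (detail)
        return average, difference

    row = np.array([10, 10, 10, 40, 40, 40, 10, 10])
    approx, detail = haar_1level(row)
    print(detail)   # large magnitudes appear where the intensity jumps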

ADVANTAGES AND DISADVANTAGES OF EDGE DETECTORS:
Edge detection is an important tool that provides information related to shape, colour, size, etc. The true edges must be found in
order to get good results from a matching process, which is why it is necessary to choose the edge detector that best fits the
application [14-21].






TABLE 1: ADVANTAGES AND DISADVANTAGES OF EDGE DETECTORS

Classical (Sobel, Prewitt, Roberts, ...)
Advantages: simple and easy to implement; detect edges and their directions.
Disadvantages: sensitive to noise; inaccurate.

Zero crossing (Laplacian, second directional derivative)
Advantages: detect edges and their directionality; fixed characteristics in all directions.
Disadvantages: respond to only some of the existing edges.

Laplacian of Gaussian (LoG) (Marr-Hildreth)
Advantages: finds the correct places of edges; tests a broad area around the pixel; emphasizes the pixels where an intensity change
takes place.
Disadvantages: malfunctions at corners, curves and where the grey-level intensity function varies; does not find the orientation of
an edge, because of the use of the Laplacian filter.

Gaussian (Canny)
Advantages: uses probability for finding the error rate; good localization and response; improves the signal-to-noise ratio; provides
better detection, especially in noisy conditions.
Disadvantages: complex to compute; false zero crossings; time-consuming.

Comparison of various edge detection techniques
Edge detection of all seven types was performed, as shown in Fig. 6. Prewitt provided better results than the other methods; on
noisy images, however, it cannot provide better results.

Fig. 6: Comparison of edge detection techniques on the college image

Fig. 8: Comparison of edge detection techniques on the noisy clock image
CONCLUSION
The 1-D edge detection methods comprise the Sobel, Prewitt and Roberts edge detectors, and the 2-D edge detection methods
comprise the Laplacian, Laplacian of Gaussian, optimal (Canny) edge detector and wavelets; these are used to find the optimum
edge detection technique. On the college image, the horizontal, vertical and diagonal edges are properly detected by the Prewitt
edge detector. The LoG and Canny detectors also provide better results than the other methods, even on low-quality images, and on
the noisy clock images the best results are obtained by the Canny edge detector. Different detectors are useful for different image
qualities. In future, hybrid techniques can be used for better results.
REFERENCES:

[1] J. Matthews, "An introduction to edge detection: The Sobel edge detector," available at
http://www.generation5.org/content/2002.im01.im01.asp, 2002.

[2] L. G. Roberts, "Machine perception of 3-D solids," ser. Optical and Electro-Optical Information Processing, MIT Press, 1965.

[3] E. R. Davies, "Constraints on the design of template masks for edge detection," Pattern Recognition Lett., vol. 4, pp. 111-120,
Apr. 1986.

[4] Mamta Juneja and Parvinder Singh Sandhu, "Performance evaluation of edge detection techniques for images in spatial
domain," International Journal of Computer Theory and Engineering, vol. 1, no. 5, pp. 614-621, December 2009.

[5] V. Torre and T. A. Poggio, "On edge detection," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-8, no. 2, pp. 147-163,
Mar. 1986.

[6] W. Frei and C.-C. Chen, "Fast boundary detection: a generalization and a new algorithm," IEEE Trans. Comput., vol. C-26,
no. 10, pp. 988-998, 1977.

[7] W. E. Grimson and E. C. Hildreth, "Comments on 'Digital step edges from zero crossings of second directional derivatives',"
IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-7, no. 1, pp. 121-129, 1985.

[8] R. M. Haralick, "Digital step edges from zero crossing of the second directional derivatives," IEEE Trans. Pattern Anal.
Machine Intell., vol. PAMI-6, no. 1, pp. 58-68, Jan. 1984.

[9] E. Argyle, "Techniques for edge detection," Proc. IEEE, vol. 59, pp. 285-286, 1971.

[10] J. F. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-8, no. 6,
pp. 679-697, 1986.

[11] J. Canny, "Finding edges and lines in images," Master's thesis, MIT, 1983.

[12] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Prentice Hall, 2002.

[13] Kumar Parasuraman and Subin P. S., "SVM based license plate recognition system," 2010 IEEE International Conference on
Computational Intelligence and Computing Research, 2010.

[14] Olivier Laligant and Frederic Truchetet, "A nonlinear derivative scheme applied to edge detection," IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 242-257, February 2010.

[15] Mohammadreza Heydarian, Michael D. Noseworthy, Markad V. Kamath, Colm Boylan, and W. F. S. Poehlman, "Detecting
object edges in MR and CT images," IEEE Transactions on Nuclear Science, vol. 56, no. 1, pp. 156-166, February 2009.

[16] Olga Barinova, Victor Lempitsky, and Pushmeet Kohli, "On detection of multiple object instances using Hough transforms,"
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 9, pp. 1773-1784, September 2012.

[17] G. Robert Redinbo, "Wavelet codes for algorithm-based fault tolerance applications," IEEE Transactions on Dependable and
Secure Computing, vol. 7, no. 3, pp. 315-328, July-September 2010.

[18] Sang-Jun Park, Gwanggil Jeon, and Jechang Jeong, "Deinterlacing algorithm using edge direction from analysis of the DCT
coefficient distribution," IEEE Transactions on Consumer Electronics, vol. 55, no. 3, pp. 1674-1681, August 2009.

[19] Sonya A. Coleman, Bryan W. Scotney, and Shanmugalingam Suganthan, "Edge detecting for range data using Laplacian
operators," IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2814-2824, November 2010.

[20] Pablo Arbeláez, Michael Maire, Charless Fowlkes, and Jitendra Malik, "Contour detection and hierarchical image
segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 5, pp. 898-916, May 2011.

[21] Abbas M. Al-Ghaili, Syamsiah Mashohor, Rahman Ramli, and Alyani Ismail, "Vertical-edge-based car-license-plate
detection method," IEEE Transactions on Vehicular Technology, vol. 62, no. 1, pp. 26-38, January 2013.

[22] Vinh Dinh Nguyen, Thuy Tuong Nguyen, Dung Duc Nguyen, Sang Jun Lee, and Jae Wook Jeon, "A fast evolutionary
algorithm for real-time vehicle detection," IEEE Transactions on Vehicular Technology, vol. 62, no. 6, pp. 2453-2468, July 2013.

Steganography & Biometric Security Based Online Voting System
Shweta A. Tambe¹, Nikita P. Joshi, P. S. Topannavar¹
¹Scholar, ICOER, Pune, India
E-mail: suchetakedari@gmail.com

ABSTRACT - This paper presents an online voting system that helps to manage elections easily and securely. With the help of
steganography, one can provide biometric as well as password security for the voter's account. The system decides whether the
voter is the correct person or not. It uses the voter's fingerprint image as the cover image and embeds the voter's secret data into the
image using steganography. This method produces a stego image that is practically equal to the original fingerprint image: there are
changes between the original fingerprint image and the stego image, but they are not visible to the human eye.
Keywords – Biometric, Cover, Fingerprint, Online, Password, Steganography, Security
INTRODUCTION
An election is an official process by which people choose an individual to hold public office. The elected person should satisfy the
necessary needs of the common people so that the system of the whole country works properly. The main requirements of an
election system are authentication, speed, accuracy and safety. The voting system should be fast, so that the valuable time of the
voters as well as of those conducting the vote is saved. Accuracy means the whole system should be accurate with respect to the
result. Safety involves a secure environment around the election area so that voters are not placed under any force. In an online
voting system, the main aim is to focus on the security of the voter's account. For any type of voting system, the following threats
must be taken into consideration: confusing or misleading voters about how to vote, violation of the secret ballot, ballot stuffing,
tampering with voting machines, voter registration fraud, failure to validate voter residency, fraudulent tabulation of results, and
use of physical force or verbal intimidation at polling places. If an online voting system works well, it will be a good advance over
the current system. In the next sections, the proposed methodology, database creation and embedding of the secret data, the online
voting system, recognition of the embedded message, and analysis of the system are explained.
PROPOSED METHODOLOGY

The methodology combines steganography with biometric security. Fundamentally there are various types of steganography: text,
audio, image and video. Images are the most popular cover media used for steganography. In many applications, the most
important requirement for steganography is security, which means that the stego image should be strictly similar, visually and
statistically, to its corresponding cover image. Present-day steganographic systems use images as cover objects because people
often send digital images by email, so using an image for steganography is a good choice. After digitalization, images contain
quantization noise, which provides space to hide data.
When images are used as cover images, they are generally manipulated by changing one or more bits of the image. The system
hides the message with the help of least significant bit (LSB) insertion: as the LSBs of an image carry little information, one can
easily hide personal data by replacing those bits with message bits. To work with the system, each person is provided with a PAN
(Personal Authentication Number), which is like a serial number allocated to every person. The system also needs the thumb
impression of every voter as a cover image. Finally, at the time of account creation, a secret key is given to each voter, which the
voter should hide from every other person.
Assuming all the above data is collected from every voter, the system works as follows. First, the voter signs in to the account with
the voter's account identification number. The voter is then asked to give the thumb impression, and next to enter the secret key,
which is used to decrypt the PAN number from the embedded fingerprint image in the database. Finally, the voter enters the PAN
number. If a PAN number match is found, the voter is an authenticated person and can cast a vote. The account is then closed for
that person, and once closed it will not be opened a second time, so fraudulent cases such as duplicate voting are avoided in the
online voting system. After a vote is cast, the count is incremented for the chosen candidate.

A. DATABASE CREATION & EMBEDDING PROCESS

For database creation, a voting committee should be appointed. The committee members' job is to collect the data from each
person. Every voter should have an account identification number to maintain the account, a PAN number for voter authentication,
and a secret key as a password for cross-verification against the database. As shown in Fig. 1, the fingerprint image block takes the
voter's fingerprint image as input and the PAN number block accepts the personal authentication number as input. The
steganography block performs steganography on the personal authentication number, and the resulting stego image is saved as the
database image. Different aspects of data hiding systems are of great concern, such as capacity and security. Capacity means the
amount of data that can be hidden in the cover object; security means an eavesdropper's inability to detect hidden information. We
have concentrated our focus on security.

Figure 1: Block diagram for database creation (Fingerprint Image + PAN Number → Steganography → Database)
The fingerprint image should be plain; it acts as the cover image after data hiding, so the cover image for each voter is the voter's
own fingerprint image. Prior to the least significant bit insertion, the system applies a discrete wavelet transform: with the help of
the Haar transform, the fingerprint image is transformed from the spatial domain to the frequency domain. For 2-D images, the
Haar transform processes the image with 2-D filters in each dimension; the filters divide the input image into four non-overlapping
sub-bands. The discrete wavelet transform is realized with low-pass and high-pass filters.
It is one of the simplest and most basic transformations from the time domain to a frequency domain. First, the Haar transform
converts the fingerprint input image into the four non-overlapping sub-bands LL, LH, HL and HH, as shown in Fig. 2(a), where L
stands for the low-frequency band (LL is shown at the upper left corner) and H stands for the high-frequency band (HH is shown at
the lower right corner). With the help of the LSB insertion technique, the PAN number is embedded into the LL sub-band. The
fingerprint image after PAN number embedding is shown in Fig. 2(b) as the embedded image. Compared to the Fourier transform,
whose basis functions differ only in frequency, the Haar function varies in both scale and position.


Figure 2: (a) Four sub-bands of the DWT; (b) Embedded image
Applying a discrete wavelet transform to an image, much of the signal energy lies at low frequencies and appears in the upper left
corner of the transform. This property of energy compaction is exploited in the embedding procedure. Embedding is achieved by
inserting the secret data into a set of discrete wavelet transform coefficients, thus ensuring the invisibility of the personal
authentication number (PAN). The combination of the fingerprint image and the PAN number, i.e. the stego image, is produced with the
help of the LSB insertion technique. It is assumed that embedding the message in this way does not destroy the information of the
original image to any great extent. A secret key is separately provided to each voter along with the PAN number; the voter should
remember it in order to use it at the time of online voting. After completion of all these steps, the database creation for the voter is
complete. This task is performed for each person.
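The embedding step can be sketched as follows. This is an illustrative Python fragment of mine using the pywt package for the Haar DWT, not the authors' implementation; in particular, rounding the LL coefficients to integers before bit replacement is an assumption.

    import numpy as np
    import pywt

    def embed_pan(cover_img, pan_bits):
        """Hide a bit string in the LSBs of the LL sub-band of a 1-level Haar DWT."""
        LL, (LH, HL, HH) = pywt.dwt2(cover_img.astype(float), "haar")
        coeffs = np.rint(LL).astype(np.int32).ravel()    # quantise LL (assumption)
        assert len(pan_bits) <= coeffs.size
        for i, bit in enumerate(pan_bits):
            coeffs[i] = (coeffs[i] & ~1) | bit           # replace the least significant bit
        LL_stego = coeffs.reshape(LL.shape).astype(float)
        return pywt.idwt2((LL_stego, (LH, HL, HH)), "haar")

    def extract_pan(stego_img, n_bits):
        """Recover the hidden bits from the LL sub-band of the stego image."""
        LL, _ = pywt.dwt2(stego_img.astype(float), "haar")
        coeffs = np.rint(LL).astype(np.int32).ravel()
        return [int(c & 1) for c in coeffs[:n_bits]]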

B. ONLINE VOTING SYSTEM

At the time of online voting, as shown in Fig. 3, the voter is first asked for the account identification number, so that the voter's
election account is opened. The voter is then asked to give the fingerprint image, followed by the secret key. If the secret key is
correct, then PAN number decryption and recognition is carried out with the help of the discrete wavelet transform: the transform is
applied to the embedded fingerprint image in order to get the embedded PAN number. The voter is then asked to enter the PAN
number. After comparing both PAN numbers, if a match is found, the voter is an authenticated person and can cast a vote.

Figure 3: Online voting system (Voter Account ID → Fingerprint Image & Secret Key → PAN Decryption & Recognition → Enter
PAN Number → Voting Panel)
order to get the embedded PAN number. Then the voter is asked to enter the PAN number. After comparing both the PAN numbers, if
the match is found then the voter is an authenticate person & can cast a vote.

C. RECOGNITION OF EMBEDDED MESSAGE

The result of the embedding process is a stego image. The recognition process includes extraction of the PAN number from the
stego image: the discrete wavelet transform is applied to extract the hidden message from the database image, as shown in Fig. 4.
Principal component analysis (PCA) is used for fingerprint recognition. PCA is a way of identifying the patterns of a fingerprint
image so as to highlight their similarities and differences; it is a useful method with applications in face recognition, image
compression and pattern finding, and this system uses it to find fingerprint patterns. The PCA representation is expressed through
eigenvalues and eigenvectors. The system computes the variance, the covariance matrix and the eigenvalues; to find these
parameters one should understand standard deviation, covariance, eigenvectors and eigenvalues. Variance measures how much the
data spreads in a data set; it is the square of the standard deviation. Covariance is always measured between two dimensions. Given
a set of data points, we decompose its covariance matrix into eigenvectors and eigenvalues; every eigenvector has a corresponding
eigenvalue.











Figure 4: Block diagram for the extraction process (Stego Image → Read Stego Image → Apply DWT to divide it into 4 sub-bands
→ Extract secret message → Secret message)
The eigenvector with the highest eigenvalue is therefore the principal component. Finally, comparison is done to find a match using
the Euclidean distance; if a match is found between the database image and the test image, then the voter is an authorized person.
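A compact sketch of the PCA-plus-Euclidean-distance matching described above (illustrative only: a plain numpy eigendecomposition, with the number of components and the acceptance threshold as assumed parameters, practical for small images):

    import numpy as np

    def fit_pca(X, n_components=20):
        """X: rows are flattened fingerprint images. Returns mean and top eigenvectors."""
        mean = X.mean(axis=0)
        cov = np.cov(X - mean, rowvar=False)           # covariance matrix
        vals, vecs = np.linalg.eigh(cov)               # eigenvalues in ascending order
        basis = vecs[:, ::-1][:, :n_components]        # keep highest-eigenvalue vectors
        return mean, basis

    def match(test, database, mean, basis, threshold=1e4):
        """Project test and database images; accept the nearest match within threshold."""
        proj = lambda x: (x - mean) @ basis
        t = proj(test)
        dists = [np.linalg.norm(t - proj(d)) for d in database]  # Euclidean distance
        best = int(np.argmin(dists))
        return best if dists[best] < threshold else None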
RESULT & ANALYSIS

This system uses an account identification number to maintain the voter's account, a fingerprint image as biometric security, a
PAN number for authentication, and a secret key for cross-verification against the database. Thus the system provides multilevel
security, which is its advantage over the earlier election system; hence there are no fraudulent cases such as duplicate voting.
Steganographic Performance
Basically, the least significant bit insertion technique is a method of data hiding by direct replacement, i.e. a spatial-domain
technique, but it suffers from disadvantages such as low robustness to modifications made to the stego image and low
imperceptibility. Hiding data in a transform domain is a great benefit, as it overcomes the robustness and imperceptibility problems
found in the LSB substitution techniques. The proposed system was applied to fingerprint images and achieved satisfactory results
each time. The performance of the proposed technique can be evaluated by comparing the quality of the stego image with that of
the original image; the comparison was done on the basis of imperceptibility.
Imperceptibility measures how much distortion was caused by hiding data in the cover image, i.e. the quality of the image: the
higher the quality of the stego image, the more invisible the hidden message. We can evaluate the stego-image quality using the
Peak Signal-to-Noise Ratio (PSNR), which serves as a quality measure between the cover image and the stego image. The higher
the PSNR, the better the quality of the stego image; typical values are between 30 and 50 dB for a bit depth of 8 bits. The PSNR for
an M x N image I and its noisy approximation K is calculated as:
PSNR = 10 · log10(255² / MSE)

and

MSE = (1/MN) Σ_{i=0}^{M-1} Σ_{j=0}^{N-1} [I(i, j) - K(i, j)]²

where MSE is the Mean Square Error.
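These two formulas translate directly into code; the following minimal numpy sketch (mine, not the authors') computes the PSNR between a cover image and a stego image:

    import numpy as np

    def psnr(cover, stego):
        """PSNR in dB between two 8-bit grayscale images of the same size."""
        I = cover.astype(float)
        K = stego.astype(float)
        mse = np.mean((I - K) ** 2)        # MSE = (1/MN) * sum of squared differences
        if mse == 0:
            return float("inf")            # identical images
        return 10 * np.log10(255.0 ** 2 / mse)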

The PSNR was calculated for each stego image and ranged from 30 to 50 dB, which gives reasonable visual quality of the stego
image. Applying the above formula, one can compare the PSNR of the fingerprint images: the actual PSNR observed for fingerprint
image 1 is 40.92 dB, and for fingerprint image 2 it is 41.45 dB; likewise the PSNR of every fingerprint image can be calculated. In
general, any steganography technique works in either the spatial or the frequency domain. Spatial-domain techniques are easy to
create and design, and they give ideal reconstruction in the absence of noise. Several techniques have been put forward in the
spatial domain, such as embedding utilizing the luminance components, manipulating the least significant bits, and image
differencing; but using the spatial domain is not very safe, as it hides the secret data directly. In the frequency domain, on the other
hand, the cover image is subjected to a transformation, and detailed manipulation of the coefficients without perceptible
degradation of the cover image is possible. Thus the system uses two stages to hide data: first the transformation of the fingerprint
image from the time domain to the frequency domain, and then manipulation of the least significant bits with the LSB insertion
technique. The frequency-domain technique is thus the better approach for hiding data.
CONCLUSION
Considering the difficulty of elections, the system provides adequate proof of authenticity in terms of biometric protection as well
as multilevel security. The security level of the system is greatly enhanced by the use of each individual's fingerprint image as the
cover image for that user. The fingerprint image and PAN number are used together to obtain a high degree of authenticity. This
methodology gives no clue for searching for predictable modifications in the cover image. Countries with large populations have to
invest large amounts of money, time and manpower in voting set-up, but with an online voting system all the mentioned problems
are reduced to a great extent.
Morphological & Dynamic Feature Based Heartbeat Classification
N. P. Joshi, Shweta A. Tambe¹, P. S. Topannavar¹
¹Scholar, ICOER, Pune, India

ABSTRACT - In this paper, a new approach to heartbeat classification is proposed. The system uses a combination of
morphological and dynamic features of the ECG signal. Morphological features are extracted using the wavelet transform and
independent component analysis (ICA); each heartbeat undergoes both techniques separately. The dynamic features extracted are
RR-interval features. A support vector machine is used as the classifier: after concatenating the results of both feature extraction
techniques, it classifies the heartbeat signals into 16 classes.
The whole process is applied to both lead signals, and the classifier results are then fused to make the final decision about the
classification. The overall accuracy in classifying the signals from the MIT-BIH arrhythmia database is expected to be 99% in the
"class-oriented" evaluation and more than 86% in the "subject-oriented" evaluation.
Keywords—heartbeat classification, support vector machine, independent component analysis, wavelet transform, RR features,
ECG signal, evaluation schemes.
I. INTRODUCTION
Electrocardiogram (ECG) analysis is basically used to monitor cardiac disorders, i.e. conditions involving abnormal behaviour of
the heart. If medical attention is not provided promptly, such conditions can cause sudden death in the subject.
There is another class of arrhythmias which is not life-critical but should still be given attention and treated. Classification of
heartbeats into heartbeat classes is an important step towards treatment; the classes are based on consecutive heartbeat signals [1].
To satisfy the requirements of real-time diagnosis, online monitoring of cardiac activity is preferred over human monitoring and
interpretation, and automatic ECG analysis is preferred for the online monitoring and detection of abnormal activity observed in
the heart. Hence, automatic heartbeat classification using parameters, or characteristic features, of ECG signals is discussed in this
paper.

II. DATASET

A. Classes of ECG signal

The MIT-BIH arrhythmia database [2] is the standard material used for training and testing algorithms developed for the detection
and classification of arrhythmia ECG signals. By using this database we can compare the proposed method with the approaches in
published results. The MIT-BIH arrhythmia database is exploited for testing the system.
There are 48 records in total. All signals are two-lead signals, denoted as the lead A and lead B signals. These signals are band-pass
filtered at 0.1-100 Hz and sampled at 360 Hz.

TABLE I
CLASSES OF ECG SIGNAL

Heartbeat type                          Annotation
Normal Beat                             N
Left Bundle Branch Block                L
Right Bundle Branch Block               R
Atrial Premature Contraction            A
Premature Ventricular Contraction       V
Paced Beat                              P
Aberrated Atrial Premature Beat         a
Ventricular Flutter Wave                !
Fusion of Ventricular and Normal Beat   F
Blocked Atrial Premature Beat           x
Nodal (Junctional) Escape Beat          j
Fusion of Paced and Normal Beat         f
Ventricular Escape Beat                 E
Nodal (Junctional) Premature Beat       J
Atrial Escape Beat                      e
Unclassifiable Beat                     Q
TOTAL: 16
All 48 records belong to one of the classes of ECG signal shown in TABLE I. In clinical terms, leads V1 to V6 represent different
areas of the heart. In 45 records, the lead A signal is a modified limb lead II while lead B is a modified lead V1; in the remaining
3 records, lead A is from position V5 and the lead B signal is from V2.

B. Evaluation schemes

Previous literature [3]-[9] is divided into two categories according to the evaluation scheme followed. Following evaluation schemes
are used:

1) Class-oriented evaluation.
2) Subject-oriented evaluation.

Not all 48 records contained in the MIT-BIH arrhythmia database are used: 4 ECG signals are excluded as they are paced beats.
Each ECG signal has its own annotation file in the database; the QRS-complex annotations are used for segmentation of the ECG
signals, from which the heartbeat segments are obtained. The 44 ECG signals are divided into 2 datasets, one used as the training
dataset and the other as the testing dataset; this division is done for experimental purposes. The datasets are prepared by selecting a
random fraction of beats from each of the 16 classes. The training dataset is constituted as follows: the normal class contributes
13% of its beats, each of the five bigger classes ('L', 'A', 'R', 'V' and 'P') contributes 40%, and each of the ten small classes
contributes 50%. These 16 classes are mapped onto 5 classes as shown in TABLE II.

TABLE II
MAPPING OF MIT-BIH CLASSES TO AAMI CLASSES

AAMI Classes   MIT-BIH Classes
N              NOR, LBBB, RBBB, AE, NE
S              APC, AP, BAP, NP
V              PVC, VE, VF
F              VFN
Q              FPN, UN

III. PROPOSED METHODOLOGY

Section I gave a brief introduction to the proposed automatic heartbeat classification system; in this section we discuss the
theoretical details and the techniques used in the process. Fig. 1 shows the flow of the proposed system. The process has the
following blocks: pre-processing, heartbeat segmentation, feature extraction, classification, two-lead fusion and decision. The lead
A and lead B signals are raw ECG signals; the artefacts contained in these raw signals are removed by the first block of the process,
i.e. pre-processing. After pre-processing, the ECG signals are divided to obtain heartbeat segments, for which the provided R-peak
locations are used.

We apply the wavelet transform (WT) and independent component analysis (ICA) separately to each heartbeat and concatenate the
corresponding coefficients. We then use principal component analysis (PCA) to represent these coefficients in a lower-dimensional
space: the principal components that represent most of the variance are selected, and a morphological descriptor of the heartbeat is
obtained from these components. RR-interval features are also derived, which give descriptive information about the dynamic
features of the heartbeat.

After feature extraction, the main classification algorithm is applied: heartbeats are classified into the 16 classes above using a
classifier based on the support vector machine (SVM). Since, according to [2], all the ECG signals are two-lead signals, the whole
process is applied separately to the signals from leads A and B. Two independent decisions are obtained for each heartbeat, which
are then fused to build the final composite decision of the heartbeat classification. By integrating both lead signals, the confidence
in the final classification decision can be improved.

A. Pre-processing

It is necessary to pre-process the raw ECG signals, as they can contain various types of noise. These noises must be reduced so that
the signal-to-noise ratio (SNR) is improved; an improved SNR helps in the detection of the subsequent fiducial points. Noise types
such as power-line interference, baseline wander, artefacts due to muscle contraction, and electrode movement affect the quality of
ECG signals. In this study, the pre-processing of the ECG signals consists of baseline wander correction: the baseline wander is
removed by subtracting the mean of the signal from the signal itself. The pre-processed signals are used in the subsequent
processing.
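The baseline-wander correction described here is a one-line operation; an illustrative numpy sketch of mine:

    import numpy as np

    def remove_baseline(ecg):
        """Baseline wander correction by subtracting the signal mean."""
        ecg = np.asarray(ecg, dtype=float)
        return ecg - ecg.mean()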




Fig. 1 Flow of proposed system

B. Heartbeat segmentation

One heartbeat of an ECG signal has three waveform components, known as the P wave, the QRS complex and the T wave. For full
segmentation of the ECG signal, the boundaries and peak locations, i.e. the fiducial points, should be properly detected. To obtain
the heartbeat segments, the annotations provided for the R-peak locations are utilized. In real applications, an automatic R-peak
detector may be used so that the classification method becomes fully automatic; a number of heartbeat detection schemes exist [7],
[12], [13] which can detect the heartbeat signals present in the MIT-BIH arrhythmia database with an error rate of less than 0.5%.
There are, however, two disadvantages of an automatic R-peak detector: (1) if some leading heartbeats are missed, errors are added
to those heartbeat signals and they cannot be classified correctly; (2) the quality of the RR-interval features is degraded to some
extent by the errors added by the automatic R-peak detector.

The sampling rate is 360 Hz. Each heartbeat segment contains 100 samples before the R-peak location as the pre-R segment and
200 samples after the R peak as the pro-R segment, i.e. a total of 300 samples. The segment size is selected so that it includes most
of the information of one heart cycle, and it is kept fixed. The ratio of the lengths of the pre-R segment and the pro-R segment is
chosen to match the lengths of the PR interval and the QT interval. An advantage of keeping a fixed segment size is that it avoids
having to detect the P wave and the T wave.

C. Wavelet transform - Morphological feature extraction

ECG signals, like most real biomedical signals, exhibit a non-stationary nature, which means that their statistical characteristics
change over position or time. Because of this, they cannot be adequately analysed using the classical Fourier transform (FT), and it
becomes necessary to use the wavelet transform (WT). The wavelet transform is capable of performing analysis in both the time
and frequency domains, so the ECG signal can be analysed using the WT.

The WT serves various purposes in ECG signal processing, including de-noising, heartbeat detection and feature extraction; we use
the WT as a feature extraction method in this study. Daubechies wavelets of order 8 have the characteristics most similar to those
of the QRS complex and are therefore selected. Since the sampling frequency is 360 Hz, the maximum frequency is 180 Hz.
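Morphological wavelet features can be obtained with a standard wavelet decomposition; a minimal sketch of mine using pywt with the 'db8' wavelet named in the text (the decomposition level is an assumed parameter):

    import numpy as np
    import pywt

    def wavelet_features(beat, level=4):
        """Concatenate multi-level 'db8' wavelet coefficients of one heartbeat segment."""
        coeffs = pywt.wavedec(beat, "db8", level=level)   # [cA4, cD4, cD3, cD2, cD1]
        return np.concatenate(coeffs)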

D. Independent component analysis - Morphological feature extraction

In this study, ICA is used for feature extraction [15]. Five sample beats are randomly selected from every class of every recording
for the preparation of a training set used to compute the independent components (ICs); if the total number of beats of a class in a
recording is less than five, all of its beats are taken. This makes a training set of 626 beats in total, taken from all 16 classes, which
are used for calculating the ICs. The ICs obtained are used as source signals for ICA and are applied to both the training and
testing datasets. To determine the appropriate number of ICs, ten-fold cross-validation is evaluated: the number of independent
components is varied between 10 and 30, and the ICA coefficients obtained are treated as features and given as input to the SVM
classifier. This process is performed in 5 iterations and the average is taken. When the average performance is observed, the
accuracy increases for numbers of ICs between 10 and 14 and decreases afterwards, so the number of ICs is selected to be 14.

E. Principal component analysis - Morphological feature extraction

The two sets of features obtained, i.e. the ICA features and the wavelet features, are combined, and PCA is applied to reduce the
feature dimension. Ten-fold cross-validation is then performed and the final morphological features are obtained.

F. RR Interval Features

RR-interval features are extracted to obtain dynamic information about the input heartbeat signal; these are known as "dynamic"
features. There are four RR-interval features, namely the previous RR, post RR, local RR and average RR interval features.
The previous RR feature is the interval between the present R peak and the previous R peak. The post RR feature is the interval
between the current R peak and the next R peak. The local RR interval is calculated by averaging all the RR intervals within the
past 10-s period of the given heartbeat. Likewise, the average RR interval is calculated as the average of the RR intervals within the
past 5-min period of the heartbeat.

In previous literature, the local RR and average RR features show poor performance when applied in real-time applications: there
the local RR feature is calculated as the average of 10 consecutive heartbeats centred at the given beat, and the average RR feature
as the average over all beats of the same recording. In the proposed method, these features are calculated so as to ensure that they
work in real time.
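The four features translate directly into code; here is a sketch of mine (r_peaks given as sample indices, fs the sampling rate; only past intervals are used, in keeping with the real-time constraint above):

    import numpy as np

    def rr_features(r_peaks, i, fs=360):
        """Previous, post, local (10 s) and average (5 min) RR features for beat i."""
        r = np.asarray(r_peaks) / fs              # R-peak times in seconds
        rr = np.diff(r)                           # rr[k] is the interval ending at r[k+1]
        prev_rr = rr[i - 1] if i >= 1 else np.nan
        post_rr = rr[i] if i < len(rr) else np.nan
        past = rr[:i]                             # completed intervals up to beat i
        t = r[i]
        local_rr = past[r[1:i + 1] > t - 10].mean() if i >= 1 else np.nan
        avg_rr = past[r[1:i + 1] > t - 300].mean() if i >= 1 else np.nan
        return prev_rr, post_rr, local_rr, avg_rr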

G. Support vector machine

Support vector machines are nothing but binary classifiers. This classifier is given by Vapnik. It builds a prime hyperplane which
separates two classes from each other due to increase in margin between them. As this approach has an excellent ability to build the
classification model on general basis it is enough powerful to be used in many applications.A number of multiclass classification
strategies have been developed to extend SVM to address multiclass classification problem [14], such as heartbeat classification
problem.In this paper, the technique used for classification of the heartbeats is an SVM classifier which classifies the heartbeat under
consideration into one of the 16 classes.

For a two-class problem, the training set contains N examples, given as {(x_i, y_i), i = 1, ..., N}, where x_i \in \mathbb{R}^d is the d-dimensional feature vector of the ith example and y_i \in \{\pm 1\} is its class label. A decision function is constructed from the training set and used to predict the class labels of test examples from their input feature vectors. The resulting decision function is given as

f(x) = \mathrm{sign}\Big( \sum_{i \in \mathrm{SVs}} \alpha_i \, y_i \, K(x_i, x) + b \Big)

where K(\cdot, \cdot) is the kernel function and \alpha_i is the Lagrange multiplier of each training sample. Only a few Lagrange multipliers are nonzero; the training examples with nonzero multipliers are known as support vectors, and these support vectors alone determine f(x). Two separate classifiers are applied to the signals from lead A and lead B.
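
A sketch of the per-lead classification, assuming scikit-learn's SVC, which implements the kernel decision function above and extends it to the 16-class case internally; the feature matrices and labels are hypothetical stand-ins:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_lead_a = rng.standard_normal((626, 24))   # combined features from lead A
X_lead_b = rng.standard_normal((626, 24))   # the same features from lead B
y = rng.integers(0, 16, size=626)           # stand-in labels for the 16 classes

# One SVM per lead, as in the text; the RBF kernel is an illustrative choice.
clf_a = SVC(kernel="rbf").fit(X_lead_a, y)
clf_b = SVC(kernel="rbf").fit(X_lead_b, y)
pred_a, pred_b = clf_a.predict(X_lead_a), clf_b.predict(X_lead_b)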

G. Two-lead fusion

As two different classifiers are applied, each gives its own answer. The two answers are then fused to obtain the final class of the heartbeat. The two answers can be fused using a rejection approach, under which, typically, a label is accepted only when the two classifiers agree.

IV. RESULTS AND ANALYSIS

As seen in Fig. 2 and Fig. 5, the original lead 1 signal is shifted from its axis and an offset is added to it. This happens because of the patient's movement or in-line interference. Pre-processing of the signal reduces these noises. The shift of the axis is called baseline wander, and it is removed after pre-processing. Pre-processing also helps in R-peak detection: the amplitude of the signal is compared with a threshold and the R peaks are located in the pre-processed signal. These R peaks give
the post-R and pre-R features, while the average of these R peaks gives the average-R feature. Segmentation separates the QRS complex from the whole recording. The sampling frequency is kept at 180 Hz. We take 100 samples before and 200 samples after each R peak to get a proper segment that contains the whole QRS complex, the P wave, and the T wave. This helps in finding the exact class of the ECG signal after the feature extraction techniques and the classifier are applied; hence the segmentation size is kept fixed.



Fig. 2 Results showing pre-processing for lead 1 signal of 109 ECG recording.




Fig. 3 Results showing segmentation of lead 1 signal of 109 ECG recording










Fig.4 Results showing R-peak detection of lead 1 signal of 109 ECG recording

Fig.5 Results showing pre-processing of lead 2 signal of 109 ECG recording


Fig. 6 Results showing segmentation of lead 2 signal of 109 ECG recording


Fig.7 Results showing R-peak detection of lead 2 signal of 109 ECG recording


Fig.8 Results showing 14 Independent components of 109 ECG recording
REFERENCES:
[1] Can Ye, B. V. K. Vijaya Kumar, and Miguel Tavares Coimbra, "Heartbeat classification using morphological and dynamic features of ECG signals," IEEE Trans. Biomed. Eng., vol. 59, no. 10, pp. 2930-2941, Oct. 2012.
[2] MIT-BIH Arrhythmia Database. Available online at: http://www.physionet.org/physiobank/database/mitdb/.
[3] M. Lagerholm, C. Peterson, G. Braccini, L. Edenbrandt, and L. Sornmo, "Clustering ECG complexes using Hermite functions and self-organizing maps," IEEE Trans. Biomed. Eng., vol. 47, no. 7, pp. 838-848, Jul. 2000.
[4] P. de Chazal, M. O'Dwyer, and R. B. Reilly, "Automatic classification of heartbeats using ECG morphology and heartbeat interval features," IEEE Trans. Biomed. Eng., vol. 51, no. 7, pp. 1196-1206, Jul. 2004.
[5] S. Osowski, L. T. Hoai, and T. Markiewicz, "Support vector machine-based expert system for reliable heartbeat recognition," IEEE Trans. Biomed. Eng., vol. 51, no. 4, pp. 582-589, Apr. 2004.
[6] J. Rodriguez, A. Goni, and A. Illarramendi, "Real-time classification of ECGs on a PDA," IEEE Trans. Inf. Technol. Biomed., vol. 9, no. 1, pp. 23-34, Mar. 2005.
[7] P. Laguna, R. Jane, and P. Caminal, "Automatic detection of wave boundaries in multilead ECG signals: Validation with the CSE database," Comput. Biomed. Res., vol. 27, no. 1, 1994.
[8] W. Jiang and S. G. Kong, "Block-based neural networks for personalized ECG signal classification," IEEE Trans. Neural Netw., vol. 18, no. 6, pp. 1750-1761, Nov. 2007.
[9] T. Ince, S. Kiranyaz, and M. Gabbouj, "A generic and robust system for automated patient-specific classification of ECG signals," IEEE Trans. Biomed. Eng., vol. 56, no. 5, pp. 1415-1426, May 2009.
[10] M. Llamedo and J. P. Martinez, "Heartbeat classification using feature selection driven by database generalization criteria," IEEE Trans. Biomed. Eng., vol. 58, no. 3, pp. 616-625, Mar. 2011.
[11] G. de Lannoy, D. Francois, J. Delbeke, and M. Verleysen, "Weighted conditional random fields for supervised interpatient heartbeat classification," IEEE Trans. Biomed. Eng., vol. 59, no. 1, pp. 241-247, Jan. 2012.
[12] V. X. Afonso, W. J. Tompkins, T. Q. Nguyen, and L. Shen, "ECG beat detection using filter banks," IEEE Trans. Biomed. Eng., vol. 46, no. 2, pp. 192-202, Feb. 1999.
[13] S. Kadambe, R. Murray, and G. F. Boudreaux-Bartels, "Wavelet transform-based QRS complex detector," IEEE Trans. Biomed. Eng., vol. 46, no. 7, pp. 838-848, Jul. 1999.
[14] C. Cortes and V. N. Vapnik, "Support-vector networks," Mach. Learn., vol. 20, pp. 273-297, 1995.
[15] J. M. A. Tanskanen and J. J. Viik, "Independent component analysis in ECG signal processing," Tampere University of Technology and Institute of Biosciences and Medical Technology, Finland.


Fuzzy Logic Power Control for Zigbee Cognitive Radio
P. Vijayakumar¹, Sai Keerthi Varikuti¹
¹Department of Electronics and Communication, SRM University
E-mail: saikeerthi.v@gmail.com

ABSTRACT - Spectrum sharing without interfering with the primary users is one of the challenging issues in cognitive networks; at the same time, power control is one feasible solution for sharing spectrum without disturbing the primary user while achieving the required performance at the cognitive radio.
In this paper one Zigbee is configured as the primary user and another Zigbee as the secondary user, which together form a cognitive network. The designed setup is implemented by analyzing signal strength, transmit power level assignment, and a routing algorithm on a specific IEEE 802.15.4/Zigbee transceiver model on an Arduino board, which leads to better spectrum access performance for the cognitive radio.
Keywords—CR (Cognitive Radio), PU (Primary User), SU (Secondary User), TPC (Transmit Power Control), RSSI (Received Signal Strength Indicator).
INTRODUCTION
Today, wireless systems occupy more and more of the available frequencies; most are licensed to high-speed wireless internet, telephone, and mobile operators [1]. Because of the high demand for frequencies and the apparent lack of free ones, new systems must be developed. One such technology is Cognitive Radio, which allows the re-use of spectrum [2].
The idea of cognitive radio grew out of the need to utilize the radio spectrum more efficiently: it is possible to develop a radio that can examine the spectrum, detect which frequencies are clear, and then implement the best form of communication for the required conditions. Thus, Cognitive Radio (CR) is a form of wireless communication in which a transceiver can intelligently detect which communication channels are in use and which are not, and instantly move into vacant channels while avoiding occupied ones. This optimizes the use of the available radio-frequency (RF) spectrum while minimizing interference to other users [2].
In an underlay CR system the secondary users (SUs) protect the primary user (PU) by regulating their transmit power to maintain the
PU receiver interference below a well defined threshold level. The limits on this received interference level at the PU receiver can be
imposed by an average/peak constraint [3].
Power control in CR systems presents its own unique challenges. In spectrum sharing applications, SU power must be allocated in a manner that achieves the goals of the CR system while not adversely affecting the operation of the PU. In [4], a distributed approach was used for power allocation to maximise SU sum capacity under a peak interference constraint. Fuzzy logic decision making has been used to choose the most suitable access opportunity at various transmit power levels, using Zigbee to dynamically adjust the power levels and analyze the interference scenario effectively [5].
In this paper, a hardware system is proposed with Zigbee modules in the ISM band and an ATMEL ATMEGA328P-based Arduino board to control the transmit power levels and perform spectrum sensing, in order to provide reliable communication by limiting the interference to the primary user unit of the cognitive radio with respect to the transmit power levels.
The outline of the paper is as follows. Section II describes the system model of the primary user system and the cognitive secondary user system, with respect to the receiver and transmitter and their working model. Section III shows the experimental setup. Experimental measurements and results are described in Section IV. Section V outlines future work.

SYSTEM MODEL
In this paper we consider a scenario in which a primary system holds a licensed service and a cognitive secondary system is present in the same area, accessing the radio spectrum opportunistically without increasing the level of interference observed by the primary system. The cognitive system consists of two units: a Surveilling Unit and a Regulating/Supervising Unit.







Fig 1: Surveilling Unit
The Surveilling Unit consists of two Zigbee modules, one each for the primary user and the secondary user, which are connected to a PC and monitored through the X-CTU software.


















Fig 2 : Regulating/Supervising Unit

[Block diagram residue from Figs. 1 and 2: Fig. 1 connects a PC to the PU Zigbee module and the SU Zigbee module; Fig. 2 shows the SU Zigbee module as transmitter, the PU Zigbee module as receiver reporting RSSI, and a fuzzy control system performing transmit power level assignment, transmit power control and routing, followed by data transmission.]

The Regulating/Supervising Unit consists of a microcontroller to which two XBee modems are connected, one as the primary user unit (receiver/end device) and the other as the secondary user unit (transmitter/coordinator). The microcontroller is an ATMEL ATMEGA328P-based Arduino board called the Duemilanove, programmed in the Wiring language and operating at 16 MHz. The RSSI of the last received packet, i.e. the detected signal, is evaluated: the received signal strength indicator (RSSI) is the signal strength of the last received packet at a wireless device, measured in -dBm [6]. Next, the transmit power levels are assigned based on analysis of the RSSI values of the received signal packets, with the help of the Friis transmission equation [7]. Using the Friis transmission equation, the ratio of the received power Pr to the transmitted power Pt can be expressed as

\frac{P_r}{P_t} = G_t \, G_r \left( \frac{\lambda}{4\pi d} \right)^2

where Gt and Gr are the gains of the transmitter and receiver respectively, λ is the wavelength, and d is the distance between sender and receiver; in free space the signal strength degrades with the square of the distance. In the Regulating/Supervising Unit, each router sends its link quality data along with its battery charge to the coordinator, which performs the transmit power level assignment and the routing algorithm. Fuzzy logic is then applied to dynamically adjust the transmit power control ratio of the specific secondary user in the cognitive network according to changes in the transmit power level assignment, transmit power control, and routing algorithm.
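
A small numeric sketch of this relation, assuming unity antenna gains and the 2.4 GHz ISM band used by the XBee modules; it only illustrates the inverse-square falloff described in the text:

import math

def friis_received_dbm(pt_dbm, d_m, gt=1.0, gr=1.0, freq_hz=2.4e9):
    # Pr/Pt = Gt * Gr * (lambda / (4 * pi * d))^2, converted to dB.
    lam = 3e8 / freq_hz
    ratio = gt * gr * (lam / (4 * math.pi * d_m)) ** 2
    return pt_dbm + 10 * math.log10(ratio)

for d in (10, 50, 100, 200):
    print(d, "m:", round(friis_received_dbm(2.0, d), 1), "dBm")   # Pt = 2 dBm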

EXPERIMENTAL SETUP
Fig. 1 and Fig. 2 show the experimental setup established in this paper. In the Surveilling Unit, two XBee Series 2, 2 mW modules from Digi International (model XB24-ZB) are used, one as the primary user module and one as the secondary user module. Each module is equipped with a wire antenna. The XBee offers a transmission range of 40 m indoors and 140 m outdoors.
X-CTU, free software provided by Digi International, is used for programming each unit, i.e. the primary user unit and the secondary user unit. A user can update the parameters, upgrade the firmware, and perform communication tests readily using this software. Communication with the XBee modules is done via an XBee interface board connected to a personal computer (PC) with a USB cable.
In the Regulating/Supervising Unit, one XBee modem as an end device and another as a coordinator are connected to the microcontroller, which is programmed in the Arduino IDE, version 0022.

EXPERIMENTAL MEASUREMENTS AND RESULTS
First, transmission and reception of signals are analyzed using the XBee Series 2 transceiver modules and the X-CTU software, as shown in Fig. 3 and Fig. 4. Next, the same transmission and reception is established between an XBee Series 2 transceiver and the ATMEL ATMEGA328P-based Arduino board programmed in the Arduino IDE, version 0022.



Fig 3 : Transmission of signals using X-CTU

Fig 4 : Reception of signals using X-CTU

Next, the RSSI values of different signals are obtained and analyzed with the help of X-CTU and the AT command ATDB, with respect to the distance between the two XBees and without interference between them, as shown in Fig. 5. From the graph one can see that the RSSI decreases as the distance increases.
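
A sketch of reading the last-packet RSSI in AT command mode over the modem's serial link, assuming pyserial and the standard XBee command sequence (the +++ guard followed by ATDB); the port name is hypothetical:

import time
import serial   # pyserial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as xbee:
    time.sleep(1.2)
    xbee.write(b"+++")              # enter AT command mode (guard time around it)
    time.sleep(1.2)
    xbee.read_until(b"\r")          # expect "OK"
    xbee.write(b"ATDB\r")           # RSSI of the last received packet, in hex
    raw = xbee.read_until(b"\r").strip()
    print("RSSI = -%d dBm" % int(raw, 16))
    xbee.write(b"ATCN\r")           # exit command mode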


Fig 5: Signal strength of various signals without interference, with respect to distance.

Further RSSI values are obtained using the XBee transmit power levels, set with the AT command ATPL, which selects one of the module's transmission power levels. The RSSI values are recorded with respect to distance at three transmit power values Pt: (i) -8 dBm, (ii) -4 dBm, and (iii) 2 dBm, as shown in Fig. 6.


Fig 6. Measured RSSI values versus distance at different values of transmit power.

At each transmit power level, the RSSI degrades with the square of the distance from the sender. The fluctuations in the graph between 200 and 300 m at Pt = -8 dBm can be attributed to reflections and multipath caused by obstacles such as walls and by Wi-Fi routers located between the Zigbees. Thus a reasonable increase in transmit power leads to better performance.

FUTURE WORK
Future implementation will aim to reduce harmful interference to the primary user unit by combining transmit power control with a suitable routing algorithm on the specific IEEE 802.15.4/Zigbee transceiver model on the Arduino board, which could be a feasible solution for spectrum sharing with interference minimization. On top of that, fuzzy logic will be implemented to dynamically adjust the transmit power control ratio of the specific secondary user in the cognitive network, in both homogeneous and heterogeneous environments, according to changes in the desired Zigbee parameters. This could achieve the required performance at the cognitive radio secondary users and minimize the battery consumption of mobile terminals for next-generation wireless networks and services.

REFERENCES:

[1] FCC, "ET Docket No. 08-260, Second report and order and memorandum opinion and order," Tech. Rep., 2008.
[2] S. Haykin, "Cognitive radio: Brain-empowered wireless communications," IEEE J. Sel. Areas Commun., vol. 23, no. 2, pp. 201-220, Feb. 2005.
[3] A. Ghasemi and E. S. Sousa, "Fundamental limits of spectrum-sharing in fading environments," IEEE Trans. Wireless Commun., vol. 6, pp. 649-658, Feb. 2007.
[4] Q. Jin, D. Yuan, and Z. Guan, "Distributed geometric-programming-based power control in cellular cognitive radio networks," in Proc. VTC 2009, Apr. 2009, pp. 1-5.
[5] N. Baldo and M. Zorzi, "Cognitive network access using fuzzy decision making," in Proc. IEEE ICC 2007, pp. 6504-6510.
[6] Digi International, "XBee User Manual," Digi International, 2012, pp. 1-155.
[7] W. Dargie and C. Poellabauer, Fundamentals of Wireless Sensor Networks: Theory and Practice, July 2010.


Energy Efficient Spectrum Sensing and Accessing Scheme for Zigbee
Cognitive Networks
P. Vijayakumar¹, Slitta Maria Joseph¹
¹Department of Electronics and Communication, SRM University
E-mail: vijayakumar.p@ktr.srmuniv.ac.in

ABSTRACT - We consider a cognitive radio network that accesses spectrum licensed to the primary user. In this network, the secondary user is allowed to use an idle frequency channel of the primary user, which depends primarily on proper spectrum sensing. If the channel appears idle the secondary user can occupy it, but whenever the primary user returns to its frequency channel the secondary user must either switch to another idle channel or wait on the same channel until it is free. In this paper we consider a cognitive network with one primary user and one secondary user, where the secondary user (SU) accesses multiple channels via periodic sensing and spectrum handoff. In our design, implementation uses the energy detection algorithm on a specific 802.15.4/Zigbee transceiver model based on an Arduino board, analyzing the RSSI values of the Zigbee devices as a function of distance. The work also includes analyzing the sensing duration and finding an appropriate threshold value for sensing based on the Zigbee modems. An energy-efficient design is implemented by utilizing the sleep mode of the Zigbee devices.

Keywords— RSSI, energy efficiency, cognitive radio
INTRODUCTION
The electromagnetic radio spectrum, a natural resource, is currently licensed by regulatory bodies for various applications. Presently there is a severe shortage of spectrum for new applications and systems. Recent studies by the Federal Communications Commission show that 70% of the channels in the US are occupied, while the licensed frequency bands remain unused 90 percent of the time [1]. To address this spectrum shortage, the concept of cognitive radio is implemented. Cognitive radio enables the temporary use of unused spectrum, known as spectrum holes [2]. A secondary user, which does not hold a license, can use the spectrum while it is idle; whenever the licensed primary user returns, the secondary user must return the frequency spectrum immediately and either wait until the primary user becomes free again or search for other idle channels. If the return is delayed, a collision will occur [3].
The most important and critical task here is channel sensing. In some cognitive systems, channel sharing is facilitated through periodic sensing [4]. For energy-critical systems, frequent handoff is not suitable, and sometimes the secondary user chooses to wait on the same channel and stop transmission at the cost of increased delay and reduced average throughput [5]. In this paper we propose a hardware system with an Arduino-based microcontroller to control spectrum sensing and the subsequent channel switching, using Zigbee modules in the ISM band, and thus to design a system with very low energy consumption.
The rest of the paper is organized as follows: Section II describes the system model, covering the transmitter and receiver sections and their working mechanism. Section III describes the implementation of the hardware and software. Results and discussion are given in Section IV.






System Model
A. Channel model

In this section we describe the channel model. The primary users are the licensed users entitled to access the channel, while the secondary users hold no licensed spectrum and seek opportunities to access channels not being used by the primary. We assume there is only one pair of secondary user transmitter and receiver, and that the secondary user can sense only one channel at a time and access one channel per transmission [8]. The design in this paper consists of two parts:
1) Monitoring section
2) Controlling section
The monitoring section consists of two transceivers connected to a PC, through which they can be monitored. The controlling section is fully controlled by the microcontroller.







Figure.1 Monitoring part

Two transceivers are connected to the microcontroller: one is configured as the primary user receiver and the other as the secondary user transmitter. The receiver provides the RSSI value used to detect an idle channel, and data is then transmitted on the sensed idle channel.
B. Sensing model
We consider the secondary user as a single-channel spectrum sensor. At each interval the secondary user checks for the presence of the PU. We formulate spectrum sensing as a hypothesis test using the energy detection algorithm, in which the microcontroller collects the required data from the PU and makes its own decision. The microcontroller makes the final decision according to a rule obtained by solving a hypothesis testing problem, i.e., it determines whether a primary user system is transmitting (hypothesis H_1) or not (hypothesis H_0) [10]:
x[n] = \begin{cases} w[n], & \text{under } H_0 \\ s[n] + w[n], & \text{under } H_1 \end{cases} \qquad (1)
Here, n = 0, 1, 2, ..., N-1 is the sample index, w[n] is the noise, and s[n] is the primary signal to be detected. H_0 is the hypothesis that the received signal consists of noise only. If H_0 is true, the decision value will be less than the threshold set by the microcontroller, so the controller concludes that vacant spectrum is available. On the other hand, if H_1 is true, the received signal contains both signal and noise, the decision value will be larger than the threshold, and the microcontroller concludes that vacant spectrum is not available [6].
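
A sketch of the energy detector this section describes, deciding between H_0 and H_1 by comparing the average power of N samples against a threshold; the threshold here is illustrative, whereas in the implementation it is set from RSSI observations in a controlled environment:

import numpy as np

def energy_detect(x, threshold):
    # H_1 (PU present) if the average power of the N samples exceeds the threshold.
    decision_value = np.mean(np.abs(x) ** 2)
    return decision_value >= threshold

rng = np.random.default_rng(0)
noise = rng.normal(0, 1, 256)                         # H_0: noise only
pu_signal = 2 * np.sin(0.2 * np.arange(256)) + noise  # H_1: signal plus noise
print(energy_detect(noise, 1.5), energy_detect(pu_signal, 1.5))  # False, True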
EXPERIMENTAL SETUP AND IMPLEMENTATION
The experimental setup used in this paper is illustrated in Fig. 1 and Fig. 2. We make use of XBee transceivers, which are based on the Zigbee protocol. This low-power radio is designed for wireless personal area networks and provides a data rate of up to 250 kbps indoors/urban at a range of up to 100 m [7]. The XBee is programmed for 802.15.4 transmission in the 2.4 GHz ISM frequency band. For the monitoring part we use two XBees, as shown in Fig. 1: one is configured as the primary user coordinator and the other as the secondary user router/end device, and these radios are monitored using the X-CTU software provided by Digi International Inc.; the software window is shown in Fig. 3.
The controlling part mainly consists of two XBee modems and a microcontroller, an ATMEL ATMEGA328P-based Arduino board called the Duemilanove, programmed in the Wiring language and operating at 16 MHz.
The controller has been coded for 1) sensing, 2) decision making, and 3) data transmission. In the controlling part the two XBee modems are configured one as the primary user router/end device and the other as the secondary user coordinator.















Figure.2 Controlling part

[Block diagram residue from Fig. 2: controller with PU Zigbee modem and controller with SU Zigbee modem; energy detection, threshold, periodic detection, optimal channel switching, and data transmission blocks.]

A. Monitoring part

This part mainly consists of two XBee modules connected to a personal computer and monitored through the X-CTU software. Here we can communicate with the XBees using the transparent/command mode. We use AT commands to check the current channel used by the XBee modules during transmission, and Table I lists all the channels an XBee may use while communicating. A total of 16 channels can be occupied by the XBee in the ISM band, covering the frequency range 2.405 GHz to 2.480 GHz.

B. Spectrum sensing part

There has been a lot of research on spectrum sensing. Since our design is meant for low power, we consider a simple sensing technique based on energy detection.
The spectrum sensing part in the microcontroller solves a binary testing problem, with the threshold value chosen from observations in a controlled environment [9]. The threshold is set from the received signal strength indication (RSSI), which can be obtained either from the RSSI pin of the XBee module or via the AT command. The value sensed from the XBee is compared with the previously set threshold. The system is designed to sense the RSSI periodically, in an interrupt routine with an interval of 90 seconds. This is the most critical part of a cognitive radio network.

C. Detection and decision making part
The RSSI values obtained are evaluated and a decision is made on whether the primary user is present. If the sensed value is less than the threshold, the primary user is absent; otherwise the channel is not available. The design is also coded to detect the current channel of the primary user when the channel is available.
D. Switching and data transmission
After an available channel is detected, the secondary user is allowed to access it. The secondary user takes over the channel left by the primary user, changing channel with the help of AT commands.
After switching to the idle channel, the secondary user continues to sense whether the primary user has returned; if so, the secondary user must switch to another available channel in the 2.4 GHz ISM band. The total process flow is shown in Fig. 4, and a sketch of the loop follows below.
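
A sketch of this sense-and-switch loop, building on the hypothetical energy_detect above; set_channel uses the standard XBee ATCH command (sent while in command mode), the channel list follows Table I, and the sense callback is a stand-in for collecting samples on a channel:

ISM_CHANNELS = [0x0B + i for i in range(16)]   # 0x0B (2.405 GHz) .. 0x1A (2.480 GHz)

def set_channel(xbee, ch):
    xbee.write(b"ATCH%X\r" % ch)               # switch the modem to channel ch

def find_idle_channel(xbee, sense, threshold):
    # Hop through the ISM channels until one is sensed idle (H_0), then use it.
    for ch in ISM_CHANNELS:
        set_channel(xbee, ch)
        if not energy_detect(sense(ch), threshold):
            return ch                          # SU may transmit on this channel
    return None                                # no idle channel found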








TABLE I. CHANNEL DETAILS

Channel (hex)   Frequency (GHz)   SC mask
0x0B            2.405             0x0001
0x0C            2.410             0x0002
0x0D            2.415             0x0004
0x0E            2.420             0x0008
0x0F            2.425             0x0010
0x10            2.430             0x0020
0x11            2.435             0x0040
0x12            2.440             0x0080
0x13            2.445             0x0100
0x14            2.450             0x0200
0x15            2.455             0x0400
0x16            2.460             0x0800
0x17            2.465             0x1000
0x18            2.470             0x2000
0x19            2.475             0x4000
0x1A            2.480             0x8000


Programming is done in the Arduino IDE, version 0022, an open project written, debugged, and supported by Massimo Banzi, David Cuartielles, Tom Igoe, Gianluca Martino, and David Mellis, based on Processing by Casey Reas and Ben Fry.
The microcontroller board has a serial port connected to the secondary user coordinator. The primary user is connected to a software serial port assigned to the 2nd and 3rd pins of the controller board.




Figure.3 X-CTU software window


IV. PERFORMANCE ANALYSIS AND RESULTS
In this section we evaluate the values obtained from the RSSI pin of the XBee module and the RSSI values obtained from the AT command ATDB. It was observed that the value obtained from the RSSI pin stays above 600 even as the distance varies (Fig. 5), while the value from ATDB varies with distance, so a relation between the two can be established.



































Figure.4 Flow chart of the total system

[Flow chart: start → set up the XBee and microcontroller and establish communication → when the timer expires, collect the received signal strength → compare the average power with the threshold power → if less than the threshold, the channel is idle: get the channel from the PU and switch the SU to it; if greater than or equal, the channel is busy: switch to a free channel if idle channels are available.]





Figure.5 Bar graph for RSSI pin value

The observations made using the AT command show that the RSSI value decreases as the distance increases.



Figure.6 Simulation result of the relation between distance and RSSI


The XBee responds with a hexadecimal value representing -dBm [6]. Indoors we obtain a 40 m range for the XBee without interference; with a clear path between the two XBee modules, communication can extend a few more metres. In channel detection, the XBee almost always prefers channel D at the start, and when switching the modules need some delay to move to the other channel; the modules also needed a network reset to follow the channel selected by the coordinator. The RSSI of the last received packet is checked within a specified time interval and the value is always updated, then compared with the predefined threshold. It was observed that in some cases the XBee does not switch to the channel we specified, but rather to whichever of the sixteen channels in Table I it finds more suitable.
FUTURE WORK
Future work will aim to reduce the detection delay and extend the design to more applications and more channels. More energy can be saved by enabling the sleep and wake-up system in the end devices: when the primary user's presence is sensed for a long time, the XBee modems can be put to sleep for a certain amount of time.

REFERENCES:

[1] FCC Spectrum Policy Task Force, "Report of the spectrum efficiency working group," Nov. 2002.
[2] D. Cabric, S. M. Mishra, and R. W. Brodersen, "Implementation issues in spectrum sensing for cognitive radios," in Proc. 38th Asilomar Conference on Signals, Systems and Computers, pp. 772-776, Nov. 2004.
[3] C.-W. Wang, L.-C. Wang, and F. Adachi, "Modeling and analysis for reactive decision spectrum handoff in cognitive radio networks," in Proc. IEEE Globecom, Dec. 2010, pp. 1-6.
[4] Y.-C. Liang, Y. Zeng, E. C. Peh, and A. T. Hoang, "Sensing-throughput tradeoff for cognitive radio networks," IEEE Trans. Wireless Commun., vol. 7, no. 4, pp. 1326-1337, Apr. 2008.
[5] S. Maleki, A. Pandharipande, and G. Leus, "Energy-efficient distributed spectrum sensing for cognitive sensor networks," IEEE Sens. J., vol. 11, no. 3, pp. 565-573, Mar. 2011.
[6] E. Hossain, D. Niyato, and Z. Han, Dynamic Spectrum Access and Management in Cognitive Radio Networks, Cambridge University Press.
[7] XBee product manual, "XBee ZNet 2.5/XBee-PRO ZNet 2.5 OEM RF Modules."
[8] He Li, Xinxin Feng, Xiaoying Gan, and Zhongren Cao, "Joint spectrum sensing and transmission strategy for energy-efficient cognitive radio networks," in Proc. 8th International Conference on Cognitive Radio Oriented Wireless Networks, 2013.
[9] S. Maleki, A. Pandharipande, and G. Leus, "Energy-efficient distributed spectrum sensing for cognitive sensor networks," IEEE Sensors Journal, vol. 11, no. 3, Mar. 2011.
[10] S. P. Chepuri, R. de Francisco, and G. Leus, "Performance evaluation of an IEEE 802.15.4 cognitive radio link in the 2360-2400 MHz band," in Proc. IEEE WCNC 2011.


Survey on Analysis of Various Techniques for Multimedia Data Mining
Priyanka A. Wankhade¹, Prof. Avinash P. Wadhe²
¹Research Scholar (M.E), CSE, G. H. Raisoni College of Engineering and Management, Amravati
²Faculty, CSE, G. H. Raisoni College of Engineering and Management, Amravati
Email: Wankhade_priyanka.ghrcemamecse@raisoni.net

ABSTRACT – Data mining is an applied discipline that has grown out of statistical pattern recognition, machine learning, and artificial intelligence, combined with business decision making to optimize and enhance it. Initially, data mining techniques were applied to already-structured data from databases. The widespread use of computers has made data mining affordable even for small companies, but the invention of cheap mass storage and digital recording devices has also enabled the collection of private material such as corporate, governmental, and private documents, e-mail messages from customers, and recordings of telephone conversations between customers and operators. Multimedia data mining exists to handle such data: its aim is to process media data, alone or in combination with other data, to find patterns useful for business.
Keywords: data mining, multimedia, text mining, image mining, audio mining, video mining.

INTRODUCTION – Multimedia data mining is the exploration of audio, video, image, and text data together, by automatic or semi-automatic means, in order to discover meaningful patterns and rules. Once all the needed data have been collected, computer programs analyse the data and look for meaningful connections. This information is used by the government sector, the marketing sector, and others, and there are many uses of multimedia data mining in today's society: for example, traffic camera footage can show the traffic flow, information that can be used when a new street is planned at that location. There are basically four types of multimedia data mining: text, image, audio, and video, each using its own techniques for further processing; the following sections describe the process and techniques for each. Multimedia data mining extracts useful data from huge collections and sorts unstructured or semi-structured data. Pravin M. Kamde and Dr. Siddu P. Algur [5] describe the World Wide Web as an important and popular medium for all types of information related to sports, news, education, booking, business, science, engineering, and so on. In today's competitive world, the ability to extract hidden knowledge from such information is very important, and applying computational methods to such large collections to extract useful knowledge is exactly what multimedia data mining does. Xingquan Zhu, Xindong Wu, Ahmed K. Elmagarmid, Zhe Feng, and Lide Wu [12] explain that organizations dealing with large digital assets need tools for retrieving and extracting information from such collections, and this is where multimedia data mining is applied. Fig. 1 shows the basic process of multimedia data mining.


Fig.1 Multimedia Data mining Process



LITERATURE SURVEY – Bhavani Thuraisingham [10] explains that the multimedia data mining process uses several important techniques. Figure 1 shows the basic process of multimedia data mining with its techniques, combining text, audio, video, and image; the common process for mining all types of multimedia is shown. The starting point is the selected multimedia type, i.e. audio, video, image, or text, which can also be called raw data. The goal of the text, audio, video, and image feature stages is to discover important features from the raw data; at this stage data pre-processing, which includes feature extraction and transformation, is done, and informative features are identified at the feature extraction stage. The detailed procedure depends heavily on the raw data. Finally, the results of all these stages feed the final stage, knowledge interpretation, reporting, and using the knowledge, which is the post-processing and result evaluation stage. S. Kotsiantis, D. Kanellopoulos, and P. Pintelas [11] describe that, compared to data mining, multimedia data mining involves higher complexity resulting from: i) the huge volume of data, ii) the variability and heterogeneity of the multimedia data, and iii) the multimedia content.
A. Hema and E. Annasaro [1] survey the views and ideas of authors in the field of multimedia data mining, focusing mainly on the need for image mining, which has great importance in the geological, biological, and pharmaceutical fields; pattern matching plays a vital role in image mining, and useful information hidden inside an image can also be retrieved by pattern matching. Xin Chen, Mihaela Vorvoreanu, and Krishna Madhavan [2] give insight into students who spend much of their time on social media sites such as Twitter, Facebook, and YouTube; by mining such data, students' learning experiences can be studied, with a focus on engineering students. Ning Zhong, Yuefeng Li, and Sheng-Tang Wu [3] describe effective pattern discovery for text mining: digital data on the internet grows day by day, and turning such data into a useful form requires text mining; patterns can be discovered with the Pattern Taxonomy Model, the pattern deploying method, and inner pattern evolution. K. A. Senthildevi and Dr. E. Chandra [4] deal with the techniques used in audio mining. Data mining is needed in the areas of speech, audio processing, and dialogue; speech processing covers speech data mining, voice data mining, audio data mining, video data mining, and conversation data mining. Speech data mining is useful for improving system operation and extracting business intelligence; voice data mining (VDM) finds and retrieves groups of spoken documents such as TV or FM recordings and recorded audio of birds and pets; video data mining is used for surveillance video; and conversation data mining is used in call centres, where the caller's issues can be understood. Pravin M. Kamde and Dr. Siddu P. Algur [5] give diagrammatic representations of the web mining taxonomy, mining multimedia databases, text mining, image mining, video mining, and the multimedia mining process; classification models, clustering models, and association rules are some techniques used for multimedia mining. Cory McKay and David Bainbridge [6] describe a musical web mining and audio feature extraction extension to the Greenstone digital library software; jMIR is the software toolkit used, and it includes the components jAudio, jSymbolic, jWebMiner 2.0, jLyrics, ACE 2.0, jMusicMetaManager, lyric features, jMIR utilities, and ACE XML.

A. Text mining with information extraction

Ning Zhong, Yuefeng Li, and Sheng-Tang Wu [3] note that a great deal of information exists in textual form, whether library data, electronic books, or web data. One problem with text data is that it is not as well structured as relational data; in many cases it is unstructured, or at best semi-structured. "Text mining" therefore describes the application of data mining techniques to the automated discovery of useful and interesting knowledge from unstructured or semi-structured text. Raymond J. Mooney and Un Yong Nahm [9] describe several techniques proposed for text mining: conceptual structures, association rule mining, episode rule mining, decision trees, and rule induction methods. In addition, information retrieval techniques are widely used for tasks such as document matching, ranking, and clustering. Text mining extracts patterns and associations from large text databases; for a text document, the keywords that summarize its content must be identified. Words that occur frequently, such as "the", "is", "in", and "of", are of no help at all, since they appear in every document, so during the pre-processing stage these common English words are removed using a "stop-list" (a sketch of this step follows below). Bhavani Thuraisingham [10] describes how associations can be formed from the keywords: if the keywords of one article are "Belgium, nuclear weapons" and those of another are "Spain, nuclear weapons", data mining could form the association that authors from Belgium and Spain write articles on nuclear weapons. Xin Chen, Mihaela Vorvoreanu, and Krishna Madhavan [2] apply such mining to social media data to understand the learning experiences of students, particularly engineering students, who spend much of their time on sites such as Twitter, Facebook, and YouTube. Fig. 2 shows the process of text mining.
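
A minimal sketch of the stop-list step described above, with a hypothetical document string and a deliberately tiny stop-list; real systems use far larger lists:

from collections import Counter

STOP_LIST = {"the", "is", "in", "of", "a", "and", "to", "on"}

def keywords(text, top_n=5):
    # Drop stop-list words and return the most frequent remaining words.
    words = [w for w in text.lower().split() if w.isalpha() and w not in STOP_LIST]
    return Counter(words).most_common(top_n)

doc = "Belgium and Spain write articles on nuclear weapons in the nuclear era"
print(keywords(doc))   # "nuclear" surfaces as a keyword; the stop words are dropped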


Fig.2 Converting unstructured data to structured data for mining. Bhavani Thuraisingham [10]

B. Image mining with information extraction – The techniques used for the various types of multimedia data mining are nearly identical, but the structures of the media types differ, so the mining process differs accordingly. A question sometimes arises: if image processing is available, what exactly is the use of image mining? Image processing applications exist in various domains, such as medical imaging for cancer detection and satellite image processing for space and intelligence applications, and images can capture geographical areas and biological structures. Tao Jiang and Ah-Hwee Tan [7] explain that an important property of image mining is that it not only detects unusual patterns in images but also identifies recurring themes, both at the level of raw images and at the level of higher-level concepts. To find the existence of a pattern within a given description, a matching technique is used. A. Hema and E. Annasaro [1] say that image matching is the vital application in the field of image mining; many techniques have been developed to date, and research on optimized matching techniques is ongoing. Nearest neighbourhood, the least squares method, the coefficient of correlation, and relational graph isomorphism are all matching techniques. The nearest neighbourhood technique is important in applications where the objects to be matched are represented as n-dimensional vectors; a sketch follows Fig. 3. Fig. 3 shows the process of image mining.

Fig.3 Image mining process. Pravin M. Kamde, Dr. Siddu P. Algur [5]
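
A sketch of nearest-neighbourhood matching on n-dimensional feature vectors, as described above; the stored image features and the query are hypothetical stand-ins:

import numpy as np

rng = np.random.default_rng(3)
gallery = rng.standard_normal((100, 16))   # 100 stored images as 16-d feature vectors
query = rng.standard_normal(16)            # feature vector of the image to match

# Nearest neighbour: the stored image at minimum Euclidean distance is the match.
distances = np.linalg.norm(gallery - query, axis=1)
best = int(np.argmin(distances))
print("best match:", best, "distance:", round(float(distances[best]), 3))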

C. Video mining with feature extraction – Video mining is the third type of multimedia data mining. Video is a sequence of images, so the first step toward successful video mining is a good handle on image mining. Ajay Divakaran, Kadir A. Peker, Shih-Fu Chang, Regunathan Radhakrishnan, and Lexing Xie [11] say that, in terms of feature extraction, video features are extracted for each shot
based on detected shot boundaries. Five video features are extracted for each shot, namely pixel_change, histo_change, background_mean, background_var, and dominant_color_ratio; when raw video is taken for information extraction, these five features support the mining. Mei-Ling Shyu, Zongxing Xie, Min Chen, and Shu-Ching Chen [8] describe the basic techniques for video data mining: pre-processing of the raw data, classification, and association. Pre-processing of the raw data covers video shot detection and classification, video text detection and recognition, camera motion characterization, and salient audio event detection, while association mining covers video data transformation, definitions and terminology, and video association mining itself. Video mining techniques are improving day by day in various ways. Fig. 4 shows the direct video mining process.

Fig.4 Direct video mining. Bhavani Thuraisingham [10]

D. Audio mining with feature extraction – In multimedia applications, audio data plays a vital role. Cory McKay and David Bainbridge [6] describe music information as having two basic categories: a) symbolic and b) audio information. Audio has become a continuous media type like video, and the techniques used in audio mining are similar to those used in video mining. Audio data can be available in many forms, such as speech, music, radio, and spoken language. The primary step in mining audio data is the conversion of audio into text, which can be done using speech transcription; other techniques, such as keyword extraction followed by text mining, are also available. Audio mining is the technique used to search audio files. K. A. Senthildevi and Dr. E. Chandra [4] explain that there are two main approaches to audio mining: 1) text-based indexing and 2) phoneme-based indexing. Text-based indexing converts speech to text, whereas phoneme-based indexing does not convert speech to text but instead works only with sounds. Fig. 5 shows the process of audio mining.


Fig.5 Mining text extracted from audio [10]

APPLICATIONS OF MULTIMEDIA DATA MINING
Multimedia data mining has major applications across all sectors and fields; in today's society multimedia is an essential part of all kinds of work. Some applications of multimedia data mining are as follows.
A. Satellite data is used to study geographical conditions, agriculture, forestry, and crop measurement, to monitor urban growth, to map pollution and ice for shipping, and to identify sky objects [5].
B. Audio and video mining are used in movie mining systems.
C. Mining of traffic video sequences is used for vehicle identification, traffic flow, and the spatio-temporal relations of vehicles at intersections [8].
D. Video mining is used for detecting events in sports video or in large shops.

CONCLUSION AND FUTURE SCOPE
This paper has described the techniques needed for multimedia data mining. In text mining, two approaches are used for information extraction: in the first, general knowledge is extracted directly from text; in the second, structured data is extracted from text documents. In image mining, matching techniques are used to find existing patterns in an image. To handle video mining, one should first understand image mining.

REFERENCES:
[1] A. Hema and E. Annasaro, "A survey in need of image mining techniques," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), ISSN (Print): 2319-5940, ISSN (Online): 2278-1021, vol. 2, issue 2, February 2013.
[2] Xin Chen, Mihaela Vorvoreanu, and Krishna Madhavan, "Mining social media data for understanding students' learning experiences," IEEE Computer Society, 1939-1382 (c) 2013 IEEE.
[3] Ning Zhong, Yuefeng Li, and Sheng-Tang Wu, "Effective pattern discovery for text mining," IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 1, January 2012.
[4] K. A. Senthildevi and Dr. E. Chandra, "Data mining techniques and applications in speech processing - a review," International Journal of Applied Research & Studies (IJARS), ISSN 2278-9480, vol. I, issue II, Sept-Nov 2012.
[5] Pravin M. Kamde and Dr. Siddu P. Algur, "A survey on web multimedia mining," The International Journal of Multimedia & Its Applications (IJMA), vol. 3, no. 3, August 2011.
[6] Cory McKay and David Bainbridge, "A musical web mining and audio feature extraction extension to the Greenstone digital library software," 12th International Society for Music Information Retrieval Conference (ISMIR 2011).
[7] Tao Jiang and Ah-Hwee Tan, "Learning image-text associations," IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 2, February 2009.
[8] Mei-Ling Shyu, Zongxing Xie, Min Chen, and Shu-Ching Chen, "Video semantic event/concept detection using a subspace-based multimedia data mining framework," IEEE Transactions on Multimedia, vol. 10, no. 2, February 2008.
[9] Raymond J. Mooney and Un Yong Nahm, "Text mining with information extraction," in Multilingualism and Electronic Language Management: Proceedings of the 4th International MIDP Colloquium, September 2003, Bloemfontein, South Africa, Daelemans, W., du Plessis, T., Snyman, C. and Teck, L. (Eds.), pp. 141-160, Van Schaik Pub., South Africa, 2005.
[10] Bhavani Thuraisingham, "Managing and mining multimedia databases," International Journal on Artificial Intelligence Tools, vol. 13, no. 3, pp. 739-759, 2004.
[11] Raymond J. Mooney and Razvan Bunescu, "Mining knowledge from text using information extraction," SIGKDD Explorations, vol. 7, issue 1.
[12] Ajay Divakaran, Kadir A. Peker, Shih-Fu Chang, Regunathan Radhakrishnan, and Lexing Xie, "Video mining: pattern discovery versus pattern recognition," IEEE International Conference on Image Processing (ICIP), TR2004-127, December 2004.


Design of Universal Shift Register Using Pulse Triggered Flip Flop
Indhumathi. R¹, Arunya. R²
¹Research Scholar (M.Tech), VLSI Design, Department of ECE, Sathyabama University, Chennai
²Assistant Professor, VLSI Design, Department of ECE, Sathyabama University, Chennai
Email: r.indhumathi12@gmail.com
ABSTRACT – Universal shift registers, like all other types of registers, are used in computers as memory elements. Flip-flops are an inherent building block in universal shift register design. In order to achieve universal shift registers that are both high-performance and power-efficient, careful attention must be paid to the design of the flip-flops. Several fast, low-power flip-flops, called pulse triggered flip-flops (PTFFs), are analyzed and used to design the universal shift registers. The paper presents a modified design for an explicit pulse triggered flip-flop with reduced transistor count for low-power, high-performance applications. HSPICE simulation results for the shift register at a frequency of 1 GHz indicate an improvement in power-delay product with respect to the existing pulse triggered flip-flop configurations in CMOS technology.

Keywords: MOSFET, Pulse triggered flip flop, universal shift registers, low power, delay, power delay product
INTRODUCTION
Flip-flops are the basic storage elements used in all types of digital circuit design. Conventional master-slave flip-flops are made up of two stages and are characterized by a hard-edge property, whereas pulse triggered flip-flops reduce the two stages to one and are characterized by a soft-edge property [10]. Pulse triggered flip-flops are now considered an alternative to the conventional master-slave design [7]. A pulse triggered flip-flop consists of a pulse generator for the strobe signal and a latch for data storage. Since the pulses are generated on the transition edges of the clock signal and have a very narrow pulse width, the latch acts like an edge-triggered flip-flop [3]. A PTFF uses a conventional latch design clocked by a short pulse train, and thus acts as a flip-flop. The advantages of the pulse triggered flip-flop are its lower circuit complexity, its higher toggle rate for high-speed operation, and its ability to allow time borrowing across cycle boundaries. To achieve low power in high-speed regions, the available low-power techniques include conditional capture, conditional precharge, conditional discharge, conditional data mapping, and clock gating [3].
EXISTING PULSE TRIGGERED FLIP FLOP
An explicit-type pulse triggered structure and a modified true single-phase clock latch based on a signal feed-through scheme are shown in Fig. 1.

Fig 1 Existing pulse triggered flip flop

The key idea is to provide a signal feed-through from the input source to the internal node of the latch, which facilitates extra driving to shorten the transition time and enhance both power and speed performance. The design is achieved by employing a simple pass transistor; with the signal feed-through scheme, a boost is obtained from the input source via the pass transistor and the delay is greatly shortened [3].




PROPOSED PULSE TRIGGERED FLIP FLOP
The proposed system is designed with the signal feed-through scheme but without feedback circuits, so it is only capable of implementing sequential circuits that do not require feedback operation, as shown in Fig. 2. In addition to the pass transistor of the existing system, a pMOS transistor controlled by the clock signal is used to reduce power.

Fig 2 Proposed Pulse Triggered Flip Flop
UNIVERSAL SHIFT REGISTER
A universal shift register is an integrated logic circuit that can transfer data in three different modes, designed here using the pulse triggered flip-flop as shown in Fig. 3. Like a parallel register, it can load and transmit data in parallel; like a shift register, it can load and transmit data serially through left or right shifts. In addition, the universal shift register can combine the capabilities of both parallel and shift registers to accomplish tasks that neither basic type of register can perform on its own.

Fig 3: Universal Shift Register

For instance, on a particular job a universal register can load data in series and then transmit/output data in parallel. Universal shift registers, like all other types of registers, are used in computers as memory elements [11]. Although other types of memory devices are used for the efficient storage of very large volumes of data, from a digital system perspective computer memory means registers; in fact, all the operations in a digital system, such as multiplication, division, and data transfer, are performed on registers. Due to the increasing demand for battery-operated portable handheld electronic devices such as laptops, palmtops, and wireless communication systems (personal digital assistants and personal communicators), the focus of the VLSI industry has shifted towards low-power, high-performance circuits. Flip-flops and latches are the basic sequential elements used for realizing digital systems such as the universal shift register.


PERFORMANCE ANALYSIS
In CMOS design, the average power, delay, and power delay product of the universal shift register based on the existing pulse triggered flip flop in 130nm technology are analyzed; the results are shown in Table 1.
Table 1 Universal Shift Register Using Existing Pulse Triggered Flip Flop in 130nm Technology

DESIGN: UNIVERSAL SHIFT REGISTER (PULSE TRIGGERED FLIP FLOP)

POWER (µW)   DELAY (ps)   POWER DELAY PRODUCT (fJ)
684.4        113.70       77.816
             119.38       81.703
             119.09       81.505
             111.75       76.481
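The power delay product column is simply the product of the other two columns; a minimal Python check using the Table 1 values (1 µW × 1 ps = 10⁻¹⁸ J = 10⁻³ fJ):

# Power delay product check for Table 1:
# PDP (fJ) = power (µW) * delay (ps) / 1000. Values are copied from Table 1.
power_uW = 684.4
delays_ps = [113.70, 119.38, 119.09, 111.75]
for d in delays_ps:
    print(f"delay = {d:6.2f} ps -> PDP = {power_uW * d / 1000.0:.3f} fJ")
# delay = 113.70 ps -> PDP = 77.816 fJ, matching the table's first row

The same arithmetic reproduces the power delay product column of Table 2 from its power and delay columns.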

In CMOS design, the average power, delay, and power delay product of the universal shift register based on the existing pulse triggered flip flop in 22nm technology are shown in Table 2.
Table 2 Universal Shift Register Using Existing Pulse Triggered Flip Flop in 22nm Technology

DESIGN: UNIVERSAL SHIFT REGISTER (PULSE TRIGGERED FLIP FLOP)

POWER (µW)   DELAY (ps)   POWER DELAY PRODUCT (fJ)
13.46        14.399       0.1938
             14.825       0.1995
             15.089       0.2030
             13.839       0.1862

In CMOS design, the average power, delay, and power delay product of the universal shift register based on the existing pulse triggered flip flop in 16nm technology are shown in Table 3.
Table 3 Universal Shift Register Using Existing Pulse Triggered Flip Flop in 16nm Technology

DESIGN: UNIVERSAL SHIFT REGISTER (PULSE TRIGGERED FLIP FLOP)

POWER (µW)   DELAY (ps)   POWER DELAY PRODUCT (fJ)
6.473        10.699       0.0069
             12.012       0.0077
             13.416       0.0086
             12.239       0.0079
CONCLUSION
The pulse triggered flip flop based on the signal feed-through scheme is used to design universal shift registers. The universal shift registers are designed with both the existing and the proposed pulse triggered flip flops in CMOS nanometer technologies (130nm, 22nm and 16nm) to achieve low power, low delay, and a low power delay product.


REFERENCES:

[1] Guang-Ping Xiang, Ji-Zhang Shen, Xue-Xiang Wu and Liang Geng (2013), "Design of a low power pulse triggered flip-flop with conditional clock techniques", IEEE, pp. 122-123.

[2] Jin-Fa Lin (2012), "Low power pulse-triggered flip flop design based on a signal feed-through scheme", IEEE Trans. Very Large Scale Integr. (VLSI) Syst., pp. 1-3, 2012.

[3] James Tschanz, Siva Narendra, Zhan Chen, Shekhar Borkar, Manoj Sachdev, Vivek De, "Comparative delay and energy of single edge triggered & dual edge triggered pulsed flip flops for high performance microprocessors", 2001.

[4] Jinn-Shyan Wang, Po-Hui Yang (1998), "A pulse triggered TSPC flip flop for high speed low power VLSI design applications", IEEE, pp. II-93 to II-95.

[5] Jin-Fa Lin, Ming-Hwa Sheu and Peng-Siang Lang (2010), "A low power dual-mode pulse triggered flip-flop using pass transistor logic", IEEE, pp. 203-204.

[6] Kalarikal Absel, Lijo Manuel and R.K. Kavitha (2001), "Low power dual dynamic mode pulsed hybrid flip-flop featuring efficient embedded logic".

[7] Logapriya S.P., Hemalatha P. (2013), "Design and analysis of low power pulse triggered flip flop", International Journal of Scientific and Research Publications, Volume 3, Issue 4, April 2013, pp. 1-3.

[8] Mathan N., T. Ravi, E. Logashanmugam (2013), "Design and Analysis of Low Power Single Edge Triggered D Flip Flop Based Shift Registers", Volume 3, Issue 2.

[9] Susrutha Babu Sukhavasi, Suparshya Babu Sukhavasi, K. Sindhur, Dr. Habibulla Khan (2013), "Design of low power & energy proficient pulse triggered flip flops", International Journal of Engineering Research and Applications (IJERA), Vol. 3, Issue 4, Jul-Aug 2013, pp. 2085-2088.

[10] Saranya M., V. Vijayakumar, T. Ravi, V. Kannan, "Design of Low Power Universal Shift Register", International Journal of Engineering Research & Technology (IJERT), Vol. 2, Issue 2, February 2013.

[11] T. Ravi, Mathan N., V. Kannan, "Design and Analysis of Low Power Single Edge Triggered D Flip Flop", International Journal of Advanced Research in Computer Science and Electronics Engineering, Volume 2, Issue 2, February 2013, ISSN: 2277-9043, pp. 172-175.

[12] Venkateswarlu Adidapu, Paritala Aditya Ratna Chowdary, Kalli Siva Nagi Reddy (2013), "Pulse triggered flip-flops power optimization techniques for future deep sub-micron applications", International Journal of Engineering Trends and Technology (IJETT), Volume 4, Issue 9, September 2013, pp. 4261-4264.






First Record on Serological Study of Anaplasma marginale Infection in Ovis
aries by ELISA, in District Peshawar, Khyber Pakhtunkhwa, Pakistan
Muhammad Kashif 1, Munawar Saleem Ahmad 1, Iftikhar Fareed 2
1 Department of Zoology, Hazara University, Mansehra-23100, Pakistan
2 Department of Natural Resource Engineering and Management, University of Kurdistan, Kurdistan, Iraq
E-mail- saleemsbs@gmail.com, Contact- +92-3224000024

ABSTRACT – The geographical sero-prevalence of Anaplasma marginale (T) in sheep, Ovis aries (L), was determined from January to May 2012 in district Peshawar, a crowded area of Pakistan in which infection of sheep with A. marginale had not been reported before. For this purpose, 376 serum samples were obtained by convenience sampling from 4 different breeds of sheep in different geographical areas of Peshawar. An indirect ELISA using recombinant MSP-5 of A. marginale as antigen was performed. In total, 92/376 (24.47%) of the sheep sera were positive. Of the 6 areas of Peshawar, Peshtakhara and Mashokhel were the most heavily infected (32.00% each), while the Ghazi Baba area was comparatively less infected. Age-wise, adults were the most heavily infected, especially Turkai ones. This is the first record of A. marginale showing a high rate of infection in sheep in Peshawar, Pakistan. This research should be useful for epidemiological applications.

Keywords: Sheep; epidemiology; A. marginale; MSP-5; indirect ELISA; Peshawar.

Introduction:
Peshawar, the capital city of Khyber Pakhtunkhwa, is the administrative center and central economic hub for the Federally Administered Tribal Areas (FATA) of Pakistan. It is situated in a large valley near the eastern end of the Khyber Pass, between the eastern edge of the Iranian Plateau and the Indus valley; strategically, it has an important location on the crossroads of Central Asia and South Asia. Under Koppen's climate classification, Peshawar features a semi-arid climate with very hot summers and mild winters. It is located at 34°01′N and 71°35′E, with an area of 1,257 km² and a population of 3,625,000 [9] (Figure 1). Sheep, Ovis aries (L), is one of the first animals domesticated for agricultural purposes; it is raised for meat (hogget or mutton, lamb), milk and fleece production. These quadrupedal ruminant mammals are members of the order Artiodactyla, the even-toed ungulates, typically kept as livestock. Sheep have great economic potential because of their early maturity and high fertility as well as their adaptability to moist environments [7]. However, the benefits derived are much lower than expected, chiefly because of low productivity. Numerous factors are involved in this low productivity, of which the major one is disease [2].
Diseases caused by heamoparasites are most apparent. These heamoparasites are parasites found in the blood of mammals, among which A. marginale is included. Ticks are biological vectors of Anaplasma sp.; tick, mammalian, or bird hosts with persistent Anaplasma sp. infection can serve as natural reservoirs of infection. Anaplasma sp. are intracellular, gram-negative bacteria and representatives of the order Rickettsiales, classified into the Rickettsiaceae and Anaplasmataceae families [5]. The tick vector distribution is the main factor influencing the transmission of tick-borne diseases [3]. However, for A. marginale, mechanical transmission through contaminated hypodermic needles and biting flies also plays an important role [9].
Erythrocytes are phagocytosed by reticulo-endothelial cells during infection. Animals older than 2 years may die due to the infection [7]. Nevertheless, little information is available concerning ovine anaplasmosis, despite the large number of sheep and goats and the expansion of small-ruminant herds in this country. Diagnosis of anaplasmosis in small ruminants is mainly based on identification of the rickettsia in stained blood smears. However, rickettsemias below 0.1% in chronic carriers are not detected by this method [9].
Serological assays based on Major Surface Protein 5 (MSP-5) of A. marginale have been successfully used for the detection of antibodies against Anaplasma sp. [11]. In this study, we observed for the first time the sero-prevalence of Anaplasma sp. in different breeds of sheep, using an indirect ELISA based on recombinant MSP-5 of A. marginale, in Peshawar, Pakistan. This research should be particularly useful for epidemiological applications such as prevalence studies, awareness, education, research, and control programs in this region.
Materials and Methods:
Samples collection: By convenience sampling, 376 blood samples were collected from the sheep population of different areas of Peshawar from January to May 2012. About 5 ml of blood was collected from the jugular vein of each sheep with a sterile hypodermic syringe into an evacuated tube containing gel and clot activator. Information such as breed, age and sex was noted. Each blood sample was then centrifuged for 5 minutes at 12000 rpm to separate the serum, which was stored at −35 °C until further use [6]. The SVANOVIR® A. marginale-Ab ELISA kit (Svanova Biotech AB, Uppsala, Sweden) was used for the diagnosis of specific antibodies against A. marginale in the serum samples. The kit procedure was based on the indirect Enzyme Linked Immunosorbent Assay (indirect ELISA). The whole procedure was done according to the protocol given with the kit.
Protocol for Indirect Enzyme Linked Immunosorbent Assay (iELISA):
All reagents were equilibrated to room temperature (18 to 25 °C) before use. Controls and samples were pre-diluted 1/40 in PBS-Tween buffer (e.g., 10 µl of sample in 390 µl of PBS-Tween buffer). One hundred microliters of pre-diluted serum sample was added to the selected wells. The plate was then sealed and incubated at 37 °C for 30 minutes, and rinsed 4 times with PBS-Tween buffer. One hundred microliters of conjugate dilution was added to each well; the plate was again sealed, incubated at 37 °C for 30 minutes, and rinsed 4 times with PBS-Tween buffer. One hundred microliters of substrate solution was added to each well and incubated for 30 minutes at room temperature (18 to 25 °C). Finally, one hundred microliters of stop solution was added to each well and mixed thoroughly. The optical density (OD) of the controls and samples was measured at 405 nm in a micro-plate photometer (BIOTEK Instruments Inc., Winooski, Vermont, U.S.A.). Mean OD values were calculated for the controls and samples.

Data analysis:
The following formula was used for the percent positivity (PP):
PP= [(Sample OD ×100)/Mean positive control OD]
Interpretation of the results:
If the calculated percent positivity (PP) was less than 25%, the sample was considered negative; if the PP was equal to or greater than 25%, the sample was considered positive. A short sketch of this computation follows.
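A minimal Python sketch of this decision rule, with invented OD values purely for illustration:

# Percent positivity: PP = (sample OD * 100) / mean positive-control OD,
# classified against the 25% cutoff described above. OD values are invented.
def percent_positivity(sample_od, mean_pos_control_od):
    return sample_od * 100.0 / mean_pos_control_od

mean_pos_control_od = 1.20            # hypothetical mean positive-control OD
for od in (0.18, 0.45):               # hypothetical sample ODs at 405 nm
    pp = percent_positivity(od, mean_pos_control_od)
    label = "positive" if pp >= 25.0 else "negative"
    print(f"OD = {od:.2f}  PP = {pp:5.1f}%  -> {label}")
# OD = 0.18  PP =  15.0% -> negative
# OD = 0.45  PP =  37.5% -> positive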

Results:
There were in total 92 (24.47%) samples positive for A. marginale in O. aries. In Ghazi Baba, 19 (19.00%) positive cases were detected, of which 6 (13.33%) were Balkhai, 4 (16.00%) Watanai, 1 (16.67%) Punjabai and 8 (30.77%) Turkai. In Warsak Road, 17 (22.66%) positive cases were detected, of which 3 (12.00%) were Balkhai, 7 (31.82%) Watanai, 3 (18.75%) Punjabai and 4 (33.33%) Turkai. In Badabher, 19 (25.33%) positive cases were detected, of which 5 (20.83%) were Balkhai, 6 (50.00%) Watanai, 4 (16.67%) Punjabai and 4 (26.67%) Turkai. In Peshtakhara, 16 (32.00%) positive cases were detected, of which 4 (40.00%) were Balkhai, 1 (8.33%) Watanai, 3 (23.10%) Punjabai and 8 (53.33%) Turkai. In Mashokhel, 16 (32.00%) positive cases were detected, of which 4 (33.33%) were Balkhai, 4 (25.00%) Watanai, 3 (33.33%) Punjabai and 5 (38.46%) Turkai. In Barha, 8 (32.00%) positive cases were detected, of which 2 (28.57%) were Balkhai, 3 (27.27%) Watanai, 3 (37.5%) Punjabai and 0 (0.00%) Turkai (Table 1).
The infection was high in Peshtakhara, Mashokhel and Barha, and lower in Ghazi Baba compared to the other areas. Of 17 (18.28%) positive Balkhai males in total, 9 (16.07%) were adult and 8 (21.62%) were young; of 21 (29.57%) positive Watanai males, 16 (40.00%) were adult and 5 (16.13%) were young; of 12 (20.68%) positive Punjabai males in total, 7 (22.58%) were adult and 5 (18.52%) were young; and of 22 (33.85%) positive Turkai males in total, 6 (14.28%) were adult and 14 (60.87%) were young (Table 2).
Of 6 (19.35%) positive Balkhai females, 5 (41.67%) were adults and 1 (5.26%) was young; of 5 (19.23%) positive Watanai females, 2 (14.28%) were adults and 3 (25.00%) were young; of 4 (22.22%) positive Punjabai females, 1 (12.50%) was adult and 3 (30.00%) were young; and of 7 (50.00%) positive Turkai females, 4 (50.00%) were adults and 3 (50.00%) were young (Table 3).
Discussion:
Research on sheep anaplasmosis (A. marginale) is rare and little literature is available. The frequency of sero-positivity of sheep anaplasmosis in this research was 24.47%, which is very low compared to the prevalence of sero-positive sheep found by Hornok et al. [6] (99.4%) in Hungary, and high compared to the prevalence found by Cabral et al. [4] (8.92%). Sero-prevalences of 16.17% and 75.0% were found by Ramos et al. [10] in Ibimirim county, a semi-arid region of Pernambuco State, Brazil, using the monoclonal antibody ANAF16C1, and by De La Fuente et al. [5] in Sicily, Italy, using a competitive ELISA based on recombinant MSP-5 of A. marginale, respectively. The low sero-prevalence rate in this research work may be due to the low tick vector population in the Peshawar area; however, some ticks were observed on the sheep during blood sample collection. This result represents the first description of antibodies to Anaplasma sp. in sheep from Peshawar, Pakistan. Further studies are required to understand the epidemiology of Anaplasma sp. infection in sheep in Pakistan, particularly to define which species is involved and its possible impacts and vectors in animal production and public health.

















Figure 1. Map of District Peshawar, Pakistan (Google, 2012)


Table 1. Area wise collected and positive blood samples for A. marginale by indirect Enzyme Linked Immunosorbent Assay (iELISA) in sheep during January-May 2012 in Peshawar, Pakistan.

n1, n2, n3 and n4: total number of collected samples of the Balkhai, Watanai, Punjabai and Turkai breeds, respectively.
P: positive samples for A. marginale.

S No.  Area                   Total   Positive (%)   Balkhai          Watanai          Punjabai         Turkai
                              sample                 n1   P (%)       n2   P (%)       n3   P (%)       n4   P (%)
1      Ghazi Baba, Ring road  100     19 (19.00)     45   6 (13.33)   25   4 (16.00)   6    1 (16.67)   26   8 (30.77)
2      Warsak Road            75      17 (22.66)     25   3 (12.00)   22   7 (31.82)   16   3 (18.75)   12   4 (33.33)
3      Badabher               75      19 (25.33)     24   5 (20.83)   12   6 (50.00)   24   4 (16.67)   15   4 (26.67)
4      Peshtakhara            50      16 (32.00)     10   4 (40.00)   12   1 (8.33)    13   3 (23.10)   15   8 (53.33)
5      Mashokhel              50      16 (32.00)     12   4 (33.33)   16   4 (25.00)   9    3 (33.33)   13   5 (38.46)
6      Barha                  26      8 (32.00)      7    2 (28.57)   11   3 (27.27)   8    3 (37.5)    0    0 (0.00)

Table 2. Male age wise collected and positive blood samples for A. marginale by indirect Enzyme Linked Immunosorbent Assay (iELISA) in sheep during January-May 2012 in Peshawar, Pakistan.
*More than one year
**Less than one year

S No.  Breeds    Total samples  Male samples  Male +v (%)  Total *adult  Adult +v (%)  Total **young  Young +v (%)
1      Balkhai   124            93            17 (18.28)   56            9 (16.07)     37             8 (21.62)
2      Watanai   97             71            21 (29.57)   40            16 (40.00)    31             5 (16.13)
3      Punjabai  76             58            12 (20.68)   31            7 (22.58)     27             5 (18.52)
4      Turkai    81             65            22 (33.85)   42            6 (14.28)     23             14 (60.87)

Table 3. Female age wise collected and positive blood samples for A. marginale by indirect Enzyme Linked Immunosorbent Assay (iELISA) in sheep during January-May 2012 in Peshawar, Pakistan.
*More than one year
**Less than one year

S No.  Breeds    Total samples  Female samples  Female +v (%)  Total *adult  Adult +v (%)  Total **young  Young +v (%)
1      Balkhai   124            31              6 (19.35)      12            5 (41.67)     19             1 (5.26)
2      Watanai   97             26              5 (19.23)      14            2 (14.28)     12             3 (25.00)
3      Punjabai  76             18              4 (22.22)      8             1 (12.50)     10             3 (30.00)
4      Turkai    81             14              7 (50.00)      8             4 (50.00)     6              3 (50.00)

Acknowledgments:
We are grateful to Dr. Ghufran Ullah, Dr. Ikhwan Khan and Dr. Ijaz Khan, Senior Researchers, Veterinary Research Institute (VRI), Peshawar, for their full support and cooperation at every step of the current research work. The experiments comply with the current laws of the country in which they were performed.

REFERENCES:
1. Akerejola, O.O., Schillhorn van V.T.W., Njoku, C.O. 1979. Ovine and caprine diseases in Nigeria: a review of economic losses. Bulletin of Animal Health and Production in Africa, 27, 65-70.
2. Bazarusanga, T., Geysen, D., Vercruysse, Madder, M. 2007a. An update on the ecological distribution of Ixodid ticks infesting cattle in Rwanda: country-wide cross-sectional survey in the wet and the dry season. Experimental and Applied Acarology, 43, 279-291.
3. Cabral, D.A., Araújo, Flábio Ribeiro de, Ramos, Carlos Alberto do Nascimento, Alves, L.C., Porto, W.J.N., Faustino, M.A. da Gloria. 2009. Serological survey of Anaplasma sp. in sheep from the State of Alagoas, Brazil. Revista Brasileira de Saúde e Produção Animal, 10(3), 708-713.
4. Dumler, J.S., Barbet, A.F., Bekker, C.P.J., Dasch, G.A., Palmer, G.H., Ray, S.C., Rikihisa, Y. and Rurangirwa, F.R. 2001. Reorganization of genera in the families Rickettsiaceae and Anaplasmataceae in the order Rickettsiales: unification of some species of Ehrlichia with Anaplasma, Cowdria with Ehrlichia and Ehrlichia with Neorickettsia, descriptions of six new species combinations and designation of Ehrlichia equi and 'HGE agent' as synonyms of Ehrlichia phagocytophila. International Journal of Systematic and Evolutionary Microbiology, 51, 2145-2165.
5. Hornok, S., Elek, V., De La Fuente, J., Naranjo, V., Farkas, R., Majoros, G., Foldvári, G. 2007. First serological and molecular evidence on the endemicity of A. ovis and A. marginale in Hungary. Veterinary Microbiology, 122(4), 316-322.
6. Kashif, M. and Ahmad, M.S. 2014. Geographical seroprevalence of A. marginale infection by ELISA in O. aries, in district Peshawar, Pakistan. Journal of Zoology Studies, 1(2), 15-18.
7. Kocan, K.M., de la Fuente, J., Guglielmone, A.A., Meléndez, R.D. 2003. Antigens and alternatives for control of A. marginale infection in cattle. Clinical Microbiology Reviews, 16, 698-712.
8. Palmer, G.H. 1992. Development of diagnostic reagents for anaplasmosis and babesioses. In: Dolan, T.T. (Ed.), Recent developments in the control of anaplasmosis, babesioses and cowdriosis. English Press, International Laboratory for Animal Diseases, Nairobi, pp. 56-66.
9. Perveen, F. and Kashif, M. 2012. Comparison of infestation of gastrointestinal helminth parasites in locally available equines in Peshawar, Pakistan. Res. Opin. Anim. Vet. Sci., 2(6), 412-417.
10. Potgieter, F.T., Stoltsz, W.H. 2004. Bovine anaplasmosis. In: Coetzer, J.A.W., Tustin, R.C. (Eds.), Infectious Diseases of Livestock, vol. I. Oxford University Press Southern Africa, Cape Town, pp. 594-616.
11. Ramos, R.A.N., Ramos, C.A.N., Araújo, F.R., Melo, E.S.P., Tembue, A.A.S., Faustino, M.A.G., Alves, L.C., Rosinha, G.M.S., Elisei, C. and Soares, C.O. 2008. Detecção de anticorpos para Anaplasma sp. em pequenos ruminantes no semi-árido do Estado de Pernambuco, Brasil. Revista Brasileira de Parasitologia Veterinária, 17(2), 115-117.
12. Strik, N.I., Alleman, A.R., Barbet, A.F., Sorenson, H.L., Wamsley, H.L., Gaschen, F.P., Luckschander, N., Wong, S., Foley, J.E., Bjoersdorff, A. and Stuen, S. 2007. Characterization of A. phagocytophilum major surface protein 5 and the extent of its cross-reactivity with A. marginale. Clinical and Vaccine Immunology, 14(3), 262-268.





Mixing Wind Power Generation System with Energy Storage Equipment
Mohammad Ali Adelian 1
1 Research Scholar, Email- Ma_adelian@yahoo.com

ABSTRACT – With the advance in wind turbine technologies, the cost of wind energy has become competitive with other fuel-based generation resources. Due to the price hike of fossil fuels and concern about global warming, the development of wind power has progressed rapidly over the last decade. The annual growth rate of wind generation installation has exceeded 26% since the 1990s, and many countries have set goals for high penetration levels of wind generation. Recently, several large-scale wind generation projects have been implemented all over the world, and it is economically beneficial to integrate very large amounts of wind capacity into power systems. Unlike traditional generation facilities, however, wind turbines present technical challenges to the electric power system. The distinct feature of wind energy is its intermittent nature: since it is difficult to predict and control the output of wind generation, its potential impacts on the electric grid differ from those of traditional energy sources. At a high penetration level, extra fast-response reserve capacity is needed to cover the shortfall of generation when a sudden deficit of wind takes place; however, this requires capital investment and infrastructure improvement. To enable proper management of this uncertainty, this paper presents an approach to make wind power a more reliable source of both energy and capacity by using energy storage devices. Mixing the wind power generation system with energy storage will reduce the fluctuation of wind power. Since the storage system requires capital investment, it is important to estimate reasonable storage capacities for the desired applications. In addition, energy storage applications for reducing the output variation and improving dynamic stability during gust winds and severe faults are also studied.

Keywords— Wind Power Generation, Conversion System, Energy Storage, Batteries, Pumped Water, Compressed Air, Steady State Power Flow, Model of the Wind Turbine and Energy Storage.
INTRODUCTION
The development of wind power has grown rapidly over the last decade, largely due to improvements in the technology, the provision of government energy policy, public concern about global warming, and concern over the limited resources of conventional fuel-based generation [1]. As fossil fuels cause the serious problem of environmental pollution, wind energy is one of the most attractive clean alternative energy sources, and it is one of the most mature and cost-effective resources among the different renewable energy technologies. Wind energy has gained extensive interest and has become one of the most promising renewable alternatives to conventional fuel-based power resources. Despite the various benefits of wind energy, the integration of wind power into the grid system is difficult to manage. The feature that distinguishes wind energy from other energy resources is that the produced energy is "intermittent". Because wind power is an unstable source, its impact on the electric grid is different from that of traditional energy sources.

Challenge
Due to its intermittent and partly unpredictable nature, wind power production introduces more uncertainty into operating a power grid. The major challenge in using wind as a source of power is that wind power may not be available when electricity is needed. Excess wind power has driven the wholesale electricity price into negative territory in the morning, while reduction of the wind generation has caused price spikes in the afternoon; thus, wind power uncertainty may create other issues for power system operation. For that reason, this paper studies the use of energy storage equipment to reduce the uncertainty and negative impact of wind generation. The integration of an energy storage system with wind generation will enhance grid reliability and security. An energy storage system can shift the generation pattern and smooth the variation of wind power over a desired time horizon. It can also be used to mitigate possible price hikes or sags. However, this requires significant capital investment and possibly infrastructure improvement, so it is important to perform a cost-benefit analysis to determine the proper size of energy storage facilities for the desired operations.

Wind Power Generation
The mechanical power of a wind turbine is formulated as

Pm = (1/2) ρ π R² v³ CP

where ρ is the air density, R is the turbine radius (so the swept area is A = πR²), v the wind speed, and CP the turbine power coefficient, which represents the power conversion efficiency of the wind turbine. Therefore, if the air density, swept area, and wind speed are constant, the power of the wind turbine will be a function of the power coefficient of the turbine.
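As a small numeric illustration of this relation (the inputs are assumed typical values, not data from this paper: sea-level air density, a 40 m rotor radius, a 12 m/s wind, and CP = 0.45):

import math

def wind_turbine_power(rho, radius, v, cp):
    """Mechanical power in W from air density (kg/m3), rotor radius (m),
    wind speed (m/s) and power coefficient."""
    swept_area = math.pi * radius ** 2          # A = pi * R^2
    return 0.5 * rho * swept_area * v ** 3 * cp

p = wind_turbine_power(rho=1.225, radius=40.0, v=12.0, cp=0.45)
print(f"{p / 1e6:.2f} MW")                      # about 2.39 MW

Note the cubic dependence on wind speed: a 10% drop in v cuts the power by roughly 27%, which is why smoothing storage is attractive.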


Wind Generator Modeling
There are many different generator technologies for wind-power applications in use today. The main distinction can be made between
fixed-speed and variable-speed wind-generator concepts.
Fixed Speed Wind Generator:
A fixed-speed wind generator is usually equipped with a squirrel cage induction generator whose speed variations are only very limited (see Figure 2.3). Power can only be controlled through pitch-angle variations. Because the efficiency of wind turbines (expressed by the power coefficient CP) depends on the tip-speed ratio λ, the power of a fixed-speed wind generator varies directly with the wind speed. Since induction machines have no reactive power control capabilities, fixed or variable power factor correction systems are usually required to compensate the reactive power demand of the generator.

Figure 2.3 Fixed speed induction generator

Variable Speed Wind Generator: Doubly-Fed Induction and Converter-Driven Generator (DFIG)

In contrast to fixed-speed concepts, variable-speed concepts allow operating the wind turbine at the optimum tip-speed ratio λ and hence at the optimum power coefficient CP over a wide wind-speed range. Varying the generator speed requires frequency converters, which increase investment costs. The two most widely used variable-speed wind-generator concepts are the doubly-fed induction generator (Figure 2.4) and the converter-driven synchronous generator (Figures 2.5 and 2.6). The active power of a variable-speed generator is controlled electronically by fast power electronic converters, which reduces the impact of wind fluctuations on the grid. Additionally, the frequency converters (self-commutated PWM converters) allow for reactive power control, so no additional reactive power compensation device is required.

Figure 2.4 Doubly-fed induction generator Figure 2.5 Converter-driven synchronous generator


Figure 2.6 Converter-driven synchronous generator (Direct drive)

Figures 2.5 and 2.6 show two typical concepts using a frequency converter in series with the generator. Generally, the generator can be an induction or a synchronous generator; in most modern designs, a synchronous generator or a permanent magnet (PM) generator is used. In contrast to the DFIG, the total power flows through the converter, so its capacity must be larger and it costs more compared to a DFIG of the same rating. Figure 2.6 shows a direct-drive wind turbine that works without any gearbox. This concept requires a slowly rotating synchronous generator with a large number of pole pairs [9].
Energy Storage
Energy storage is the storing of some form of energy that can be drawn upon at a later time to perform some useful operation. "Energy storages" are defined in this study as devices that store energy, deliver energy outside (discharge), and accept energy from outside (charge). Energy storage lets energy producers send excess electricity over the transmission grid to temporary storage sites that become energy suppliers when electricity demand is greater. Grid energy storage is particularly important in matching supply and demand over a 24-hour period. An energy storage system can shift the generation pattern and smooth the variation of wind power over a desired time horizon. These energy storages mainly include chemical batteries, pumped water, compressed air, flywheels, thermal storage, superconducting magnetic energy, and hydrogen.
Batteries:
Battery storage was used in the very early days of direct-current electric power networks. With the advance in power electronic technologies, battery systems connected to large solid-state converters have been used to stabilize power distribution networks in modern power systems. For example, a system with a capacity of 20 megawatts for 15 minutes is used to stabilize the frequency of electric power produced on the island of Puerto Rico. Batteries are generally expensive, have maintenance problems, and have limited life spans. One possible technology for large-scale storage is large-scale flow batteries. For example, sodium-sulfur batteries could be implemented affordably on a large scale and have been used for grid storage in Japan and in the United States. Battery storage has relatively high efficiency, as high as 90% or better.
Pumped Water:
In many places, pumped storage hydroelectricity is used to even out the daily demand curve by pumping water to a high storage reservoir during off-peak hours and weekends, using the excess base-load capacity from coal or nuclear sources. During peak hours, this water can be used for hydroelectric generation, often as a high-value rapid-response reserve to cover transient peaks in demand. Pumped storage recovers about 75% of the energy consumed and is currently the most cost-effective form of mass power storage. The main constraint of pumped storage is that it usually requires two nearby reservoirs at considerably different heights, and it often requires considerable capital expenditure. Recently, a new concept has been proposed to use wind energy to pump water in pumped storage. Wind turbines that directly drive water pumps for an 'energy storing wind dam' can make this a more efficient process, but they are again limited in total capacity and available locations.
Compressed Air:
Another grid energy storage method is to use off-peak or renewably generated electricity to compress air, which is usually stored in an old mine or some other kind of geological feature. When electricity demand is high, the compressed air is heated with a small amount of natural gas and then passed through expanders to generate electricity.
Model of the Wind Turbine and Energy Storage:
A study system consisting of a wind turbine and energy storage connected to a power system is modeled using the Power System Simulation for Engineering (PSS/E) software by Power Technologies Incorporation. In PSS/E, the wind turbine model is equipped with an IPLAN program that guides the user in preparing the dynamic modules related to this model; the collection of wind turbines, wind speed information, wind turbine parameters, generator parameters, and the characteristics of the control systems are included [16]. This study uses the wind package of PSS/E to simulate the wind power generation system combined with energy storage equipment integrated into a power grid. The dynamic model is shown in Figure 3.3. A user-written model can be used to simulate a wind gust by varying the input wind speed to the turbine model. The GE 3.6 machine has a rated power output of 3.6 MW. The reactive power capability of each individual machine is ±0.9 pf, which corresponds to Qmax = 1.74 MVAR and Qmin = −1.74 MVAR, with an MVA rating of 4.0 MVA. The minimum steady-state power output for the WTG model is 0.5 MW. In this study, the GE wind turbine models are used for simulation following the manufacturer's recommendations [17].

Figure 3.3 Dynamic model of GE 3.6 MW wind turbine
For the energy storage model, the EPRI battery model CBEST of PSS/E is used for simulation in this study. It simulates the dynamic characteristics of a battery, including the power limitations into and out of the battery as well as the AC current limitations at the converter. The model assumes that the battery rating is large enough to cover the entire energy demand that occurs during the simulation [18].

Typical Variation of Wind Power:
Figures 4.1 to 4.4 show the storage capacity required to maintain the output of the wind farm constant for one hour up to one day under a typical variation of wind power. The storage capacities are 2.036 MWh, 5.508 MWh, 16.233 MWh and 103.451 MWh, respectively, and the maximum charging or discharging power ratings are 7.39 MW, 10.66 MW, 13.53 MW and 17.58 MW, respectively, for the different desired operation scenarios. A summary of these estimated values for the typical variation scenario is given in Table 4.1; a simplified sizing sketch follows.





Smaller Variation of Wind Power:

Figures 4.5 to 4.8 present simulation results for the combined system with storage capacity from one hour to one day when the wind speed is relatively stable. As one can see, the required storage capacities and charging/discharging power ratings are smaller than in the previous case. The storage capacities are 0.870 MWh, 1.690 MWh, 3.160 MWh and 10.435 MWh, and the charging/discharging power ratings are 4.63 MW, 4.69 MW, 5.74 MW and 6.26 MW, respectively. A summary of these estimated values for the smaller variation scenario is given in Table 4.2.





Larger Variation of Wind Power:
Figures 4.9 to 4.12 show the behavior of the system for one-hour to one-day storage capacity when there is a large variation of the wind speed. The required storage capacities are 5.164 MWh, 10.524 MWh, 22.819 MWh and 137.863 MWh, respectively, and the maximum charging/discharging power rating requirements are 16.20 MW, 23.31 MW, 27.94 MW and 26.69 MW, respectively. A summary of these estimated values for the larger variation scenario is given in Table 4.3.





Steady State Power Flow Result:
The purpose of this power flow study is to observe the potential system impact during normal and contingency conditions after the proposed 39.6 MW wind farm is interconnected with the grid system. The contingency analysis considers the impact of the new wind power on transmission line loading, transformer facility loading, and transmission bus voltage during outages of transmission lines and/or transformers. This study assumes that the energy storage system keeps the power output from wind collector bus 350 to the grid at 39.6 MW; therefore, the power flow result with energy storage equipment is the same as without it. To keep the power system operating safely and reliably, the power flow results need to comply with the Taipower Grid Planning Standards [20]. The single line diagram of the system near the wind farm is shown in Figure 4.14. Table 4.4 compares the steady state and single contingency (N-1) power flow results before and after the installation of the wind farm; all power flows listed are expressed in MVA. The N-1 analysis showed no negative impact of the wind farm on the power system, indicating that an installation of 39.6 MW of wind power has very little effect on the grid system.





Acknowledgment
I thank my family, especially my mother, for supporting me during my M.Tech studies, and all my friends who helped me during this work. I also thank Bharati Vidyapeeth Deemed University College of Engineering for supporting me during my M.Tech in Electrical Engineering.


CONCLUSION
Wind generation is the fastest growing renewable energy source in the world, with an average annual growth rate of more than 26% since 1990 [22]. Annual wind generation markets have been increasing by an average of 24.7% over the last 5 years. The Global Wind Energy Council (GWEC) predicts that the global wind market will reach 240 GW of total installed capacity by the year 2012 [23]. Based on information from studies and operational experience, the report of the European Wind Energy Association (EWEA) concludes that it is perfectly feasible to integrate the targeted wind power capacity of 300 GW in 2030, corresponding to an average penetration level of up to 20% [24, 25]. For high penetration levels of wind power, optimization of the integrated system should be explored, and strategies must be established to modify the system configuration and operation practices to accommodate high levels of wind penetration. Regarding the storage capacity options, our study reveals that more energy storage capacity and a higher power rating are required if a longer period of stable wind power output is desired. The simulation results during wind gusts show that combining the wind power generation system with proper energy storage equipment can remove most of the power system fluctuation.

REFERENCES
[1] Ming-Shun Lu, Chung-Liang Chang, and Wei-Jen Lee, "Impact of Wind Generation on a Transmission System," Power Symposium, 2007, 39th NAPS, 2007.
[2] Chai Chompoo-inwai, W.J. Lee, P. Fuangfoo, M. Williams, and J. Liao, "System Impact Study for the Interconnection of Wind Generation and Utility System," IEEE I&CPS Conference, Clearwater Beach, Florida, May 1-6, 2004; IEEE Transactions on Industry Applications, Jan.-Feb. 2005.
[3] Energy Efficiency and Renewable Energy website, "Wind energy topic", http://www1.eere.energy.gov/windandhydro/index.html
[4] ERCOT Report, "Analysis of Transmission Alternatives for Competitive Renewable Energy Zones in Texas", December 2006.
[5] M. Hashem Nehrir, "A Course on Alternative Energy Wind/PV/Fuel Cell Power Generation," IEEE PES General Meeting, June 2006.
[6] Debosmita Das, Reza Esmaili, Longya Xu, and Dave Nichols, "An Optimal Design of a Grid Connected Hybrid Photovoltaic/Fuel Cell System for Distributed Energy Production," IEEE IES, IECON 2005, 31st Annual Conference, Nov 2005.
[7] Z. Chen, Y. Hu, "A Hybrid Generation System Using Variable Speed Wind Turbines and Diesel Units," IEEE IES, IECON 2003, 29th Annual Conference, Nov 2003.
[8] W.J. Lee, "Wind Generation and its Impact on the System Operation," Renewable Energy Course Presentation at UTA, August 2007.
[9] Markus Pöller and Sebastian Achilles, "Aggregated Wind Park Models for Analyzing Power System Dynamics".
[10] Wikipedia, the free encyclopedia, "Energy storage", http://en.wikipedia.org/wiki/Energy_storage
[11] Wind resource map of Taiwan, Renewable Energy in Taiwan, http://re.org.tw/com/f1/f1w1.aspx
[12] Taiwan Power Company, "Tatan wind speed data between Feb 1, 2006 and May 31, 2007".
[13] K. Methaprayoon, C. Yingvivatanapong, W.J. Lee, and J. Liao, "An Integration of ANN Wind Power Estimation into Unit Commitment Considering the Forecasting Uncertainty," IEEE Industrial and Commercial Power System Technical Conference, May 2005.
[14] Michael R. Behnke and William L. Erdman, "Impact of Past, Present and Future Wind Turbine Technologies on Transmission System Operation and Performance," PIER Project Report, March 9, 2006.
[15] Taipower Company 2006-2007 study planning No. 006-2821-02, "The system study of the Taipower system with the rapidly increased wind power generation capacity", middle report, March 2007.
[16] "GE Wind 1.5 MW and 3.6 MW Wind Turbine Generators, PSS/E Dynamic Models Documentation", PTI, Issue 3.0, June 10, 2004.
[17] Nicholas W. Miller, William W. Price, Juan J. Sanchez-Gasca, "Dynamic Modeling of GE 1.5 MW and 3.6 MW Wind Turbine-Generators", October 27, 2003, Version 3.0.
[18] PSS/E Power Operation Manual and Program Application Guide, PTI, August 2004.
[19] Ming-Shun Lu, Chung-Liang Chang, Wei-Jen Lee, and Li Wang, "Combining the Wind Power Generation System with Energy Storage Equipments," IEEE IAS 43rd Annual Meeting, October 2008.
[20] Taiwan Power Company, "TPC's Grid Planning Standards", October 2005.
[21] Kyung Soo Kook, K.J. McKenzie, Yilu Liu and S. Atcitty, "A study on applications of energy storage for the wind power operation in power systems," IEEE PES General Meeting, June 2006.
[22] Global Wind Energy Council (GWEC), "Global Wind 2005 Report", August 2005.
[23] Global Wind Energy Council (GWEC), "Global Wind 2007 Report", Second Edition, May 2008.
[24] EWEA, "Large scale integration of wind energy in the European power supply: analysis", December 2005, http://www.ewea.org/
[25] First results of IEA collaboration, "Design and Operation of Power Systems with Large Amounts of Wind Power", Global Wind Power Conference, September 18-21, 2006, Adelaide, Australia.








Low Power Test Pattern Generation in BIST Schemes
Yasodharan S 1, Swamynathan S M 2
1 Research Scholar (PG), Department of ECE, Kathir College of Engineering, Coimbatore, India
2 Assistant Professor, Department of ECE, Kathir College of Engineering, Coimbatore, India
Email- yasodharanece@rediffmail.com

ABSTRACT – BIST is a viable approach to testing today's digital systems. During self-test, the switching activity of the Circuit Under Test (CUT) is significantly increased compared to normal operation, leading to an increased power consumption which often exceeds specified limits. The proposed method generates Multiple Single Input Change (MSIC) vectors in a pattern; each of the generated vectors applied to a scan chain is an SIC vector. A class of minimum transition sequences is generated using a reconfigurable Johnson counter and a scalable SIC counter. The proposed TPG method is flexible to both test-per-scan and test-per-clock schemes. A theory is also developed to represent and analyze the sequences and to extract a class of MSIC sequences. The proposed BIST TPG decreases the transitions that occur at scan inputs during scan shift operations and hence reduces switching activity in the CUT; as the switching activity is reduced, the power consumption of the circuit is also reduced.
Keywords— Built-in self-test (BIST), Circuit Under Test (CUT), Low Power, Single-Input Change (SIC), Test Pattern Generator
(TPG), Linear Feedback Shift Register (LFSR).
INTRODUCTION
A digital system is tested and diagnosed several times during its lifetime. Test and diagnosis techniques applied to the system must be fast and have very high fault coverage. One method to ensure this is to specify test as a system function, so that it becomes Built-In Self-Test (BIST). BIST reduces the complexity and difficulty of VLSI testing, and thereby decreases the cost and reduces the reliance upon external (pattern-programmed) test equipment. Test pattern generators (TPGs) comprising linear feedback shift registers (LFSRs) are used in conventional BIST architectures. The major drawback of these architectures is that the pseudorandom patterns generated by the LFSR result in high switching activity in the CUT, which can damage the circuit and reduce its lifetime and product yield. In addition, the target fault coverage is achieved only by generating very long pseudorandom sequences with the LFSR.
A. Prior Work
Several advanced BIST techniques have been studied and applied. The first class is the LFSR tuning. Girard et al. analyzed the
impact of an LFSR‘s polynomial and seed selection on the CUT‘s switching activity, and proposed a method to select the LFSR seed
for energy reduction.
The second class is low-power TPGs. One approach is to design low-transition TPGs. Wang and Gupta used two LFSRs of
different speeds to control those inputs that have elevated transition densities [5]. Corno et al. provided a low power TPG based on the
cellular automata to reduce the test power in combinational circuits [6]. Another approach focuses on modifying LFSRs. The scheme
in [7] reduces the power in the CUT in general and clock tree in particular. In [8], a low-power BIST for data path architecture is
proposed, which is circuit dependent. So the nondetecting subsequences must be determined for each circuit test sequence. Bonhomme
et al. [9] used a clock gating technique where two nonoverlapping clocks control the odd and even scan cells of the scan chain so that
the shift power dissipation is reduced by a factor of two. The ring generator [10] can generate a single-input change (SIC) sequence
which can effectively reduce test power. The third approach focuses on reducing the dynamic power dissipation during scan shift
through gating of the outputs of a portion of the scan cells. Bhunia et al. [11] inserted blocking logic into the stimulus path of the scan
flip-flops to prevent the propagation of the scan ripple effect to logic gates. The need for transistors insertion, however, makes it
difficult to use with standard cell libraries that do not have power-gated cells. In [12], the efficient selection of the most suitable subset
of scan cells for gating along with their gating values is studied.
The third class prevents the application of pseudorandom patterns that have no new fault detecting ability [13]–[15]. These architectures apply the minimum number of test vectors required to attain the target fault coverage and therefore reduce the power. However, these methods have high area overhead, need to be customized for the CUT, and must start with a specific seed. Gerstendorfer et al. also proposed to filter out non-detecting patterns using gate-based blocking logic [16], which, however, adds significant delay in the signal propagation path from the scan flip-flop to the logic.
Several low-power approaches have also been proposed for scan-based BIST. The architecture in [17] modifies scan-path structures and lets the CUT inputs remain unchanged during a shift operation. Using multiple scan chains with many scan-enable (SE) inputs to activate one scan chain at a time, the TPG proposed in [18] can reduce the average power consumption during scan-based tests and the peak power in the CUT. In [19], a pseudorandom BIST scheme was proposed to reduce switching activities in scan chains. Other approaches include the LT-LFSR [20], a low-transition random TPG [21], and the weighted LFSR [22]. The TPG in [20] reduces the transitions in the scan inputs by assigning the same value to most neighboring bits in the scan chain. In [21], power reduction is achieved by increasing the correlation between consecutive test patterns. The weighted LFSR in [22] decreases energy consumption and increases fault coverage by adding weights to tune the pseudorandom vectors for various probabilities.
B. Contribution and Paper Organization
This paper presents the theory and application of a class of minimum transition sequences. The proposed method generates SIC
sequences, and converts them to low transition sequences for each scan chain. This can decrease the switching activity in scan cells
during scan-in shifting. The advantages of the proposed sequence can be summarized as follows.
1) Minimum transitions
2) Uniqueness of patterns
3) Uniform distribution of patterns
4) Low hardware overhead consumed by extra TPGs
The rest of this paper is organized as follows. In Section II, the proposed MSIC-TPG scheme is presented. The principle of the new
MSIC sequences is described in Section III. In Section IV, the properties of the MSIC sequences are analyzed. In Section V,
experimental methods and results on test power, fault coverage, and area overhead are provided to demonstrate the performance of the
proposed MSIC-TPGs. Conclusions are given in Section VI.

PROPOSED MSIC-TPG SCHEME
This section develops a TPG scheme that can convert an SIC vector to unique low transition vectors for multiple scan chains. First, the SIC vector is decompressed into its multiple codewords. Meanwhile, the generated codewords are bit-XORed in turn with a same seed vector. Hence, a test pattern with similar test vectors is applied to all scan chains. The proposed MSIC-TPG consists of an SIC generator, a seed generator, an XOR gate network, and a clock and control block.
















Fig. 1. Symbolic simulation of an MSIC pattern for scan chains

A. Test Pattern Generation Method
Assume there are m primary inputs (PIs) and M scan chains in a full scan design, and each scan chain has l scan cells. Fig. 1 shows the symbolic simulation for one generated pattern. The vector generated by an m-bit LFSR with a primitive polynomial can be expressed as S(t) = S0(t)S1(t)S2(t) . . . Sm−1(t) (hereinafter referred to as the seed), and the vector generated by an l-bit Johnson counter can be expressed as J(t) = J0(t)J1(t)J2(t) . . . Jl−1(t).
In the first clock cycle, J = J0J1J2 . . . Jl−1 is bit-XORed with S = S0S1S2 . . . SM−1, and the results X1, Xl+1, X2l+1, . . . , X(M−1)l+1 are shifted into the M scan chains, respectively. In the second clock cycle, J = J0J1J2 . . . Jl−1 is circularly shifted to J = Jl−1J0J1 . . . Jl−2, which is again bit-XORed with the seed S = S0S1S2 . . . SM−1; the resulting X2, Xl+2, X2l+2, . . . , X(M−1)l+2 are shifted into the M scan chains, respectively. After l clocks, each scan chain is fully loaded with a unique Johnson codeword, and the seed S0S1S2 . . . Sm−1 is applied to the m PIs.
Since the circular Johnson counter can generate l unique Johnson codewords by circularly shifting a Johnson vector, the circular Johnson counter and the XOR gates in Fig. 1 actually constitute a linear sequential decompressor.
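The following Python sketch mimics this decompression behaviorally. The seed, the Johnson vector, and the tap positions are illustrative assumptions (the exact wiring is defined by Fig. 1), but the low transition character of the resulting scan-chain contents is visible even in this toy version.

# Behavioral sketch of MSIC decompression: an l-bit Johnson vector is
# circularly shifted once per clock; scan chain i taps one stage and
# XORs it with the constant seed bit S_i. Sizes and values are toy inputs.
def johnson_codewords(j, l):
    """Yield l successive circular shifts of the Johnson vector j."""
    for _ in range(l):
        yield j
        j = [j[-1]] + j[:-1]                 # circular right shift

def msic_pattern(seed, j):
    """Return the contents of M = len(seed) scan chains after l clocks."""
    l, m = len(j), len(seed)
    chains = [[] for _ in range(m)]
    for codeword in johnson_codewords(j, l):
        for i in range(m):
            chains[i].append(codeword[i % l] ^ seed[i])   # tap i XOR S_i
    return chains

seed = [1, 0, 1]                             # 3 scan chains (M = 3)
j = [0, 0, 1, 1]                             # 4-bit Johnson vector (l = 4)
for i, chain in enumerate(msic_pattern(seed, j)):
    print(f"scan chain {i}: {chain}")        # each chain shows <= 2 transitions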
B. Reconfigurable Johnson Counter
According to the different scan-length scenarios, this paper develops two kinds of SIC generators to generate Johnson vectors and Johnson codewords: the reconfigurable Johnson counter and the scalable SIC counter.
















Fig. 2. SIC generators. (a) Reconfigurable Johnson counter. (b) Scalable SIC counter.

For a short scan length, we develop a reconfigurable Johnson counter to generate an SIC sequence in the time domain. As shown in Fig. 2(a), it can operate in three modes (a behavioral sketch follows this list).
1) Initialization: when RJ_Mode is set to one and Init is set to logic zero, the reconfigurable Johnson counter is initialized to the all-zero state by clocking CLK2 more than l times.
2) Circular shift register mode: when RJ_Mode and Init are set to logic one, each stage of the Johnson counter outputs a Johnson codeword by clocking CLK2 l times.
3) Normal mode: when RJ_Mode is set to logic zero, the reconfigurable Johnson counter generates 2l unique SIC vectors by clocking CLK2 2l times.

C. Scalable SIC Counter
When the maximal scan chain length l is much larger than the scan chain number M, we develop an SIC counter named the "scalable SIC counter". As shown in Fig. 2(b), it contains a k-bit adder clocked by the rising SE signal, a k-bit subtractor clocked by test clock CLK2, an M-bit shift register clocked by test clock CLK2, and k multiplexers, where k is the integer part of log2(l − M). The waveforms of the scalable SIC counter are shown in Fig. 2(c). The k-bit adder generates a new count, which is the number of 1s (0s) to fill into the shift register. As shown in Fig. 2(b), it operates in three modes.
1) If SE = 0, the count from the adder is stored in the k-bit subtractor. During SE = 1, the contents of the k-bit subtractor are gradually decreased from the stored count to all zeros.
2) If SE = 1 and the contents of the k-bit subtractor are not all zeros, M-Johnson is kept at logic 1 (0).
3) Otherwise, it is kept at logic 0 (1). Thus, the needed 1s (0s) are shifted into the M-bit shift register by clocking CLK2 l times, and unique Johnson codewords are applied to the different scan chains.













Fig. 3. MSIC-TPGs for (a) test-per-clock and (b) test- per-scan schemes.

D. MSIC-TPGs for Test-per-Clock Schemes
The MSIC-TPG for test-per-clock schemes is illustrated in Fig. 3(a). The CUT's PIs X1 − Xmn are arranged as an n × m SRAM-like grid structure. Each grid cell has a two-input XOR gate whose inputs are tapped from a seed output and an output of the Johnson counter. The outputs of the XOR gates are applied to the CUT's PIs. The seed generator is an m-stage conventional LFSR operating at the low frequency CLK1. The test procedure is as follows.
1) The seed generator produces a new seed by clocking CLK1 one time.
2) The Johnson counter generates a new vector by clocking CLK2 one time.
3) Repeat step 2 until 2l Johnson vectors are generated.
4) Repeat steps 1–3 until the expected fault coverage or test length is achieved.
E. MSIC-TPGs for Test-per-Scan Schemes
The MSIC-TPG for test-per-scan schemes is illustrated in Fig. 3(b). The stage count of the SIC generator equals the maximum scan length, and the width of the seed generator is not smaller than the number of scan chains. The seed generator and the SIC counter produce the vectors that feed the XOR gates, whose outputs are applied to the M scan chains, respectively, while the outputs produced by the seed generator and the XOR gates are applied to the CUT's PIs. The test procedure is as follows.
1) The seed circuit generates a new vector by clocking CLK1 one time.
2) RJ_Mode is set to "0". The reconfigurable Johnson counter operates in Johnson counter mode and generates a Johnson vector by clocking CLK2 one time.
3) After a new Johnson vector is generated, RJ_Mode and Init are set to one. The reconfigurable Johnson counter operates as a circular shift register and generates l codewords by clocking CLK2 l times.
4) Repeat steps 2–3 until 2l Johnson vectors are generated.
5) Repeat steps 1–4 until the expected fault coverage or test length is achieved.

PRINCIPLE OF MSIC SEQUENCES
The main objective of the proposed algorithm is to reduce switching activity. In order to reduce the hardware overhead, linear relations are selected between consecutive vectors or within a pattern, so that the sequence can be generated with a sequential decompressor, facilitating hardware implementation. Finally, uniformly distributed patterns are desired to reduce the test length (the number of patterns required to achieve a target fault coverage) [21]. This section aims to extract a class of test sequences that meets these requirements.

PROPERTIES OF MSIC SEQUENCES
A. Switching Activity Reduction
For test-per-clock schemes, the M segments of the CUT's primary inputs are driven by M unique SIC vectors, and the mean input
transition density of the CUT is close to 1/l. For test-per-scan schemes, the CUT's PIs are kept unchanged during the 2l² shifting-in clock
cycles, and the number of transitions in a Johnson codeword is not greater than 2. Therefore, the mean input transition density of the CUT
during scan-in operations is less than 2/l.
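
The 2/l bound can be checked numerically. The short sketch below, reusing johnson_vectors from the earlier sketch, counts the bit changes inside each codeword, which are the transitions a scan cell sees while the word is shifted in.

    def transitions(codeword):
        """Number of adjacent bit changes inside one codeword."""
        return sum(a != b for a, b in zip(codeword, codeword[1:]))

    # A Johnson codeword is a single run of 1s against a background of 0s
    # (cyclically), so it never contains more than two bit changes; this is
    # what keeps the mean scan-in transition density below 2/l.
    assert all(transitions(v) <= 2 for v in johnson_vectors(8))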
B. Uniform Distribution of MSIC Patterns
If test patterns are not uniformly distributed, there might be some inputs that are assigned the same values in most test patterns.
Hence, faults that can only be detected by patterns that are not generated may escape, leading to low fault coverage. Therefore,
uniformly distributed test patterns are desired for most circuits in order to achieve higher fault coverage [5].
C. Relationship Between Test Length and Fault Coverage

The test length of conventional LFSR methods is related to the initial test vector. In other words, the number of patterns to hit the
target fault coverage depends on the initial vector in conventional LFSR TPGs [21].
PERFORMANCE ANALYSIS
To analyze the performance of the proposed MSIC-TPG, experiments on ISCAS'85 benchmarks and standard full-scan designs of
ISCAS'89 benchmarks are conducted. The performance simulations are carried out with Xilinx ISE 12.3 and the ISIM simulator, and
fault simulations with the ISIM simulator. Synthesis is carried out with Xilinx ISE 12.3 based on a 45-nm typical technology. The test
frequency is 100 MHz, and the power supply voltage is 1.1 V. The test application method is test-per-clock for the ISCAS'85 benchmarks.


(a)





(b)







(c)
Fig. 4. Waveforms of (a) LFSR, (b) reconfigurable Johnson counter, and (c) multiple single input change (MSIC).

Table 1: Total and Peak Power Reduction of CUTs

CUT     Total Power (µW)        Peak Power (µW)
        MSIC      LFSR          MSIC      LFSR
C2670   19.9      38.55         312.4     433.1
C3540   46.6      81.44         755.5     918.3
C5315   55.1      110           821.8     1157
C6288   274.8     366.2         1994      2363
C7552   69.6      137           1012      1502
CONCLUSION
This paper has proposed a low-power test pattern generation method that can be easily implemented in hardware. It also developed
a theory to express a sequence generated by linear sequential architectures, and extracted a class of SIC sequences named MSIC.
Analysis results showed that an MSIC sequence has the favorable features of uniform distribution, low input transition density, and
a low dependency of the test length on the TPG's initial states. Combined with the proposed reconfigurable Johnson counter or
scalable SIC counter, the MSIC-TPG can be easily implemented and is flexible for both test-per-clock and test-per-scan schemes.
For a test-per-clock scheme, the MSIC-TPG applies SIC sequences to the CUT with the SRAM-like grid. For a test-per-scan scheme,
the MSIC-TPG converts an SIC vector to low-transition vectors for all scan chains. Experimental and analysis results demonstrate
that the MSIC-TPG is scalable to scan length and has negligible impact on the test overhead.

REFERENCES

[1] Y. Zorian, "A distributed BIST control scheme for complex VLSI devices," in 11th Annu. IEEE VLSI Test Symp. Dig. Papers,
Apr. 1993, pp. 4–9.
[2] P. Girard, "Survey of low-power testing of VLSI circuits," IEEE Design Test Comput., vol. 19, no. 3, pp. 80–90, May–Jun.
2002.
[3] A. Abu-Issa and S. Quigley, "Bit-swapping LFSR and scan-chain ordering: A novel technique for peak- and average-power
reduction in scan-based BIST," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 28, no. 5, pp. 755–759, May
2009.
[4] P. Girard, L. Guiller, C. Landrault, S. Pravossoudovitch, J. Figueras, S. Manich, P. Teixeira, and M. Santos, "Low-energy BIST
design: Impact of the LFSR TPG parameters on the weighted switching activity," in Proc. IEEE Int. Symp. Circuits Syst., vol.
1, Jul. 1999, pp. 110–113.
[5] S. Wang and S. Gupta, "DS-LFSR: A BIST TPG for low switching activity," IEEE Trans. Comput.-Aided Design Integr.
Circuits Syst., vol. 21, no. 7, pp. 842–851, Jul. 2002.
[6] F. Corno, M. Rebaudengo, M. Reorda, G. Squillero, and M. Violante, "Low power BIST via non-linear hybrid cellular
automata," in Proc. 18th IEEE VLSI Test Symp., Apr.–May 2000, pp. 29–34.
[7] P. Girard, L. Guiller, C. Landrault, S. Pravossoudovitch, and H. Wunderlich, "A modified clock scheme for a low power BIST
test pattern generator," in Proc. 19th IEEE VTS VLSI Test Symp., Mar.–Apr. 2001, pp. 306–311.
[8] D. Gizopoulos, N. Krantitis, A. Paschalis, M. Psarakis, and Y. Zorian, "Low power/energy BIST scheme for datapaths," in
Proc. 18th IEEE VLSI Test Symp., Apr.–May 2000, pp. 23–28.
[9] Y. Bonhomme, P. Girard, L. Guiller, C. Landrault, and S. Pravossoudovitch, "A gated clock scheme for low power scan testing
of logic ICs or embedded cores," in Proc. 10th Asian Test Symp., Nov. 2001, pp. 253–258.
[10] C. Laoudias and D. Nikolos, "A new test pattern generator for high defect coverage in a BIST environment," in Proc. 14th
ACM Great Lakes Symp. VLSI, Apr. 2004, pp. 417–420.
[11] S. Bhunia, H. Mahmoodi, D. Ghosh, S. Mukhopadhyay, and K. Roy, "Low-power scan design using first-level supply gating,"
IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 13, no. 3, pp. 384–395, Mar. 2005.
[12] X. Kavousianos, D. Bakalis, and D. Nikolos, "Efficient partial scan cell gating for low-power scan-based testing," ACM Trans.
Design Autom. Electron. Syst., vol. 14, no. 2, pp. 28-1–28-15, Mar. 2009.
[13] P. Girard, L. Guiller, C. Landrault, and S. Pravossoudovitch, "A test vector inhibiting technique for low energy BIST design,"
in Proc. 17th IEEE VLSI Test Symp., Apr. 1999, pp. 407–412.
[14] S. Manich, A. Gabarro, M. Lopez, J. Figueras, P. Girard, L. Guiller, C. Landrault, S. Pravossoudovitch, P. Teixeira, and M.
Santos, "Low power BIST by filtering non-detecting vectors," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 28,
no. 5, pp. 755–759, May 2009.

Review of a Digital Circuit Using Power Gating Techniques to Reduce Leakage
Power
Priyanka Singhal¹, Nidhi Raghav², Pallavi Bahl³

¹Research Scholar (M.Tech), Department of ECE, BSAITM, Faridabad, Haryana, India
²Guide, Lecturer, Department of ECE, BSAITM, Faridabad, Haryana, India
³Co-guide, Lecturer, Department of ECE, BSAITM, Faridabad, Haryana, India

ABSTRACT - Power dissipation is kept in consideration while implementing a digital circuit; on the other hand, scaling is used to
improve the performance of that circuit. Scaling has its own limitations, since leakage current can flow through the circuit as a
consequence of scaling, and this leakage current increases the power dissipated by the circuit. Power gating techniques are used to
compensate for the leakage current flowing through the digital circuit. This paper considers the nanometer technology used to obtain
the different results. The designs discussed here are implemented and simulated with the Tanner suite, using S-Edit and T-SPICE at
130 nm.
Key Words: Power gating circuits, ground bounce noise, sleep methods, T-SPICE, H-SPICE.

1. Introduction:
The VLSI design system has increased the efficiency of our technological equipment by amending various parameters, such as
reducing the power supply voltage by applying scaling in the CMOS fabrication process. These measures have reduced power
dissipation but could not overcome the problems related to leakage current and circuit delay. To reduce circuit delay, a lower
threshold voltage can be applied, while at the same time leakage current can be reduced by CMOS logic. The multi-threshold CMOS
circuit, called a power gating structure, is widely used in portable devices. The power gating technique makes use of high-threshold,
low-leakage devices such as sleep transistors, which isolate the idle blocks from the power supply, from ground, or from both. The
technique uses higher-Vt sleep transistors that disconnect VDD from a circuit block when the block is not switching. Power gating is
more beneficial than clock gating, but it increases delay, since circuits modified with power gating must enter and exit the
power-gated modes safely. The architecture experiences a trade-off between the leakage power saved and the power dissipated in
entering and exiting the low-power modes. Blocks can be shut down by hardware or software; power reduction operations can be
optimized by driver software, or alternatively by a power management controller. An externally switched power supply is a very
basic form of power gating used to achieve long-term leakage power reduction, whereas on-chip power gating is more suitable for
shutting down blocks for short spans. CMOS switches that provide power to the circuitry are controlled by power gating controllers.









Fig. 1. Power gated circuit [1]


1.2 Ground Bounce
As devices shrink to 130 nm and below, signal integrity becomes a severe problem in VLSI circuits, and it worsens as circuit
dimensions are reduced further. The circuit noise is mainly inductive in origin. Moore's law results in faster clock speeds and larger
numbers of I/O devices, which in turn produce higher amounts of noise in the power and ground planes. This inductive noise is often
referred to as simultaneous switching noise because it is most pronounced when a large number of I/O drivers switch simultaneously.
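
A first-order feel for this inductive noise comes from the common estimate V = N · L · dI/dt for N simultaneously switching drivers sharing an effective inductance L. The sketch below applies it with illustrative numbers that are assumptions of ours, not values from this paper.

    def ground_bounce(n_drivers, inductance_h, di_dt_a_per_s):
        """First-order simultaneous-switching-noise estimate:
        V = N * L * dI/dt."""
        return n_drivers * inductance_h * di_dt_a_per_s

    # Assumed example: 16 drivers, 5 nH effective ground inductance,
    # each driver ramping 20 mA in 1 ns.
    print(ground_bounce(16, 5e-9, 20e-3 / 1e-9))  # ~1.6 V of bounce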


2. Ground bounce reduction
In typical circuits using sleep transistors, some logic has separate power and ground pads, while other logic may share the power
and ground pads. A certain length of PCB (printed circuit board) transmission line connects each pad to the real power or ground
rail. If the PCB is poorly laid out, the transmission lines will contribute large parasitic capacitances and inductances, which can
aggravate the ground bounce effect when the sleep transistors are switched on. The parasitic capacitances and inductances depend
largely on the types of pads used and on the PCB layout; nevertheless, much empirical data shows that these parasitic parameters can
be quite considerable. The equivalent circuit of the logic using sleep transistors is shown in Fig. 3. There are four parts to the
equivalent circuit. Part I is the intrinsic capacitance, inductance, and resistance of the power pad and the corresponding on-board
transmission lines; Part II is the equivalent circuit of the functional logic; the sleep transistor is modeled as two resistors in Part III,
where RST,ON << RST,OFF. When the sleep transistor is turned on, it acts as a small resistor that has a negligible effect on the
normal function of the circuit. When the sleep transistor is turned off, its resistance becomes huge and cuts off the leakage path of
the logic. Part IV is the intrinsic capacitance, inductance, and resistance of the ground pad and the corresponding on-board
transmission lines.



Fig. 2. Ground bounce reduction logic [4]




Fig. 3. Equivalent circuit [4]



4. Conclusion

We have reviewed the scaling of power dissipation using power gating techniques. These power gating techniques are used to
reduce leakage current, circuit delay, ground bounce, and related effects.

REFERENCES:
[1] Suhwan Kim, Stephen V. Kosonocky, Daniel R. Knebel, Kevin Stawiasz, and Marios C. Papaefthymiou, "A Multi-Mode
Power Gating Structure for Low-Voltage Deep-Submicron CMOS ICs," IEEE Transactions on Circuits and Systems, 2007.
[2] Velicheti Swetha and S. Rajeswari, "Design and Power Optimization of MTCMOS Circuits using Power Gating,"
International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, 2013.
[3] Payam Heydari and Massoud Pedram, "Ground Bounce in Digital VLSI Circuits," IEEE J. of Solid-State Circuits.
[4] Ku He, Rong Luo, and Yu Wang, "A Power Gating Scheme for Ground Bounce Reduction during Mode Transition," IEEE
Trans. on VLSI Systems, 2007.
[5] R. Divya and J. Muralidharan, "Leakage Power Reduction Through Hybrid Multi-Threshold CMOS Stack Technique In
Power Gating Switch," International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), 2013.

Socio-technical Interactions in OSS Development
Jasveen Kaur¹, Amandeep Kaur¹, Prabhjot Kaur¹

¹Scholar, Guru Nanak Dev University, Amritsar
E-mail: jasveenkaur.1990@gmail.com

ABSTRACT - This study provides directions to open source practitioners to better organize their projects to achieve
greater performance. In this research, we try to understand socio-technical interactions in a system development context by examining
the joint effect of developer team structure and open source software architecture on OSS development performance. We hypothesize
that developer team structure and software architecture significantly moderate each other‘s effect on OSS development performance.
Empirical evidence supports our hypotheses and suggests that larger teams tend to produce more favorable project performance when
the project being developed has a high level of structural interdependency while projects with a low level of structural
interdependency require smaller teams in order to achieve better project performance. Moreover, centralized teams tend to have a
positive impact on project performance when the OSS project has a high level of structural interdependency. However, when a project
has a low level of structural interdependency, centralized teams can impair project performance.

Keywords—Open source software, collaboration network, social network analysis, software architecture, software project
performance, network centralization, software structural interdependency

I) INTRODUCTION
In recent years, Open Source Software (OSS) development has caused great changes in the software world. Software developers
collaborate voluntarily to develop software that they or their organizations need [1]. Compared with traditional software development,
OSS development is unique in that it is self-organized by voluntary developers. Moreover, OSS projects automatically generate
detailed and public logs of developer activities and project outputs in the form of repositories, allowing a clear view of their inner
working details [1]. These unique aspects of OSS have inspired studies regarding the motivations of individual participants, governance of
OSS projects [3], organizational learning in OSS projects [2], and the architecture of OSS code [4]. These OSS studies have
increasingly pointed toward the inseparable role of the social and the technical aspects in shaping OSS development processes and
outcomes. Previous OSS studies suggest that OSS development is particularly suited to an examination of the combined effects of the
social and the technical in a system development context, since it promotes interactions between software developers and software
artifacts. This study focuses on OSS developer team structure as the social aspect and software architecture as the technical aspect of
OSS projects. Our general research question is: what is the joint effect of developer team structure and OSS project architecture on
OSS development performance? The answer to this question can serve as a step toward integrating the separate lines of work on OSS
development's social and technical dimensions into a coherent research literature, and can also help OSS practitioners understand the
strengths and weaknesses of the OSS development process.
II) INSEPARABLE ROLE OF THE SOCIAL AND THE TECHNICAL ASPECTS
Researchers have long recognized the relationship between social processes and technical properties in an organizational work
context. The organizational information processing theory (OIPT) provides a widely cited perspective on this composite relationship:
to achieve optimal performance, there should be a match between the information processing capabilities of the organizational structure
(social processes) and the information processing needs of a given task (technical properties) [6]. In the organizational literature,
information processing capabilities are typically assessed by looking at the collaboration structure of the workforce, while information
processing needs are evaluated by examining the level of interdependency among task units. Cataldo et al. [9] developed a socio-
technical congruence measure that captures the proportion of collaboration activities that actually occurred in development teams relative
to the total number of collaboration activities required by the interdependency among software development task assignments. The
significant impact of this socio-technical congruence on project productivity manifests the equally important roles of organizational
structure and task characteristics in determining software project performance [9]. In addition, Kim and Umanath [7] employed a
multiplicative interaction model in examining the relationship between development team structure, software task characteristics, and
project performance. This modeling approach revealed that team structure and task characteristics served mutually moderating roles in
affecting project performance outcomes.
In summary, both the organizational and software engineering literatures emphasize the inseparable role of organizational structure
and task characteristics in organizational work performance. Taken together, prior organizational and software engineering research
suggests that social processes and technical properties can play equally important and mutually moderating roles in software
development performance. An understanding of the mutually moderating roles of team structures and project architecture can help
OSS practitioners to realize the social and the technical aspects of OSS development altogether and harvest project performance gains
from their joint effect.
III) SOCIO-TECHNICAL INTERACTIONS IN OSS
1) Social—Development Team Structure:
Open source software development is a form of distributed software development with a large number of contributors; because it uses
the Internet and makes sharing free, it succeeds in letting developers communicate over distance. Owing to the variety of contributors
in OSS projects, knowledge sharing within a project can be powerful, and it may even improve the position of contributors: users can
move into the developer group, and developers can move into the core developer group. Core developers are a small number of expert
developers, integrated to control and manage the system. Co-developers are people who have a direct impact on software development
in the project; they also affect the code base and can identify licensing issues.
Following prior research, we view OSS development team structure as an important social aspect of OSS development, since it
manifests the information processing capabilities of the OSS workforce [5]. We conceptualize the development team structure according to
social network theory [11]. Social network theory models individual actors as nodes of a graph joined by their relationships, depicted as
links between the nodes [8]. When the relationships are defined as collaborations on a task, the social network is specified as a
collaboration network. We choose to generate collaboration networks on an intraproject level; that is, each collaboration network
includes the developers of a single OSS project as nodes and collaboration incidences on tasks (i.e., source code files) of the same project as
links. An intraproject collaboration network is a close-up view of relationship structures within a particular project. Compared with an
interproject (i.e., community-level) network, an intraproject (i.e., project-level) network is more relevant to our research question, since
it allows us to evaluate how the organizational structure of a particular OSS project team affects the performance of the corresponding
project.
In OSS projects, the basic unit of work is a file in the OSS distributed version control system (DVCS), such as Git. Hence, we view
collaboration tasks as the files in the DVCS. A collaboration incidence occurs when two developers make code commits to the
same source code file. A collaboration network is the graph made of open source developers as nodes and the collaboration
incidences on the same file as links.
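
Under these definitions, building such a network is straightforward. The following Python sketch uses the networkx library and assumes a flat list of (developer, file) commit records, an input format we chose for illustration.

    import itertools
    from collections import defaultdict

    import networkx as nx

    def collaboration_network(commits):
        """Build an intraproject collaboration network from
        (developer, file) records: developers become nodes, and two
        developers are linked whenever both committed to the same file."""
        developers_per_file = defaultdict(set)
        for developer, path in commits:
            developers_per_file[path].add(developer)
        graph = nx.Graph()
        for developers in developers_per_file.values():
            graph.add_nodes_from(developers)
            graph.add_edges_from(itertools.combinations(sorted(developers), 2))
        return graph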
We characterize OSS collaboration network structure by two commonly used measures: network size and network centralization.
Network size is the number of nodes in a graph. It indicates the overall scope of the collaboration network.
Network size captures the open nature of OSS development where a large number of developers collaborate to develop software in
contrast to traditional software development where the number of developers is comparatively lower. Network centralization indicates
the extent to which a network is centralized around one or a few nodes. Centralization of a network is measured in comparison to the
most central network—the star network. In the star network, one central node connects to all of the other nodes while all other nodes
are only connected to the central node. Any deviation from this structure indicates a reduction in network centralization. Although
there are other metrics of collaboration network structure, we chose to focus on these two measures because network size and
centralization reflect the major difference between the two alternative philosophies for organizing programming teams: the chief
programmer team and the egoless team. The chief programmer team is intentionally small and centralized around a few programming
experts. In contrast, the egoless team reflects decentralized communication and collaboration among programmers and is less
concerned about team size. An examination of the network size and centralization of OSS teams helps us to understand the link
between the unique characteristics of OSS collaboration network structure and OSS development performance.
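
For a graph like the one built in the earlier sketch, the degree-based variant of Freeman's centralization index can be computed directly; the normalizing constant (n − 1)(n − 2) is the value the numerator takes in a star network of the same size. This is a sketch of the measure as we understand it, not the UCINET implementation used later in the paper.

    def degree_centralization(graph):
        """Freeman degree centralization: sum of (max degree - degree) over
        all nodes, divided by the value that sum takes in a star network of
        the same size, (n - 1) * (n - 2)."""
        n = graph.number_of_nodes()
        if n < 3:
            return 0.0
        degrees = [d for _, d in graph.degree()]
        return sum(max(degrees) - d for d in degrees) / ((n - 1) * (n - 2))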
2) Technical—Software Architecture:
In recent years, the study of software architecture (SA) has emerged as an autonomous discipline requiring its own concepts,
formalisms, methods, and tools. SA represents a very promising approach since it handles the design and analysis of complex
distributed systems and tackles the problem of scaling up in software engineering. Through suitable abstractions, it provides the means
to make large applications manageable.
An important technical aspect of software development projects is the structural interdependency among processing elements of the
software being developed. Software structural interdependency is "the strength of association established by a connection from one
module to another" [4]. In other words, software architecture is a formal way to describe the structural interdependency of a software
system in terms of components and their interconnections. By decomposing the overall task into parts and then designing,
implementing, or maintaining each individual part, software architecture or software structural interdependency provides a feasible
way to develop and manage large systems, reducing the complexity of software development projects to a set of associations.
Prior research shows that software architecture is an important indicator of the information processing need of a software project. In
general, the load and diversity of information needed to be processed by a software project increases with the level of structural
interdependency among its processing elements.
3) Interaction Between Team Structure and Software Architecture:
In this study, interaction between the social and the technical aspects in an organizational work context refers to the mutually
moderating or mutually contingent relationship between team structure (network size and centralization) and software architecture
(structural interdependency) in OSS development. In other words, we conceptualize socio-technical interactions in OSS projects as
multiplicative interaction of team structure and software architecture.
IV) HYPOTHESES
We develop our hypotheses following the central argument of OIPT that organizational work performance is jointly determined by
information processing capabilities of the workforce and information processing needs of the task. As discussed before, structure of
the team collaboration network reflects the information processing capabilities of a development team. Information processing need of
the software development process is represented by the structural interdependency of the software being developed. The mutually
dependent effects of development team structure and software structural interdependency on project performance are specified below:
A. Network Size and Structural Interdependency
Network size has mixed implications for the information processing capabilities of a team. On one hand, larger networks incur higher
coordination and communication cost. On the other hand, larger networks carry more diverse expertise and are better at specialization
and division of labour among team members. The overall effect of network size on task performance depends on the structural
interdependency of OSS projects. Projects with a lower level of structural interdependency do not take full advantage of the diverse
expertise and perspectives in a large team, while these projects have to bear the increased communication cost of such a team. Network
size can therefore have a negative impact on the performance of these projects. In a project with a high level of structural
interdependency, the capabilities of a large team in processing a heavy load of diverse information can produce salient project
performance gains, compensating for the communication cost associated with a large team. The negative impact of network size on
project performance can therefore be reduced in this scenario.
Mirroring the effects of network size on project performance, the impact of software structural interdependency on project performance
can vary across development teams with different network sizes. In traditional software development, where team size tends to be
small, software structural interdependency is often found to increase software development effort, which in turn can impair project
performance. However, recent OSS research suggests that OSS development may resist the negative effect of software structural
interdependency on development effort due to its self-organized nature. With the ability to adjust development team structure
according to project characteristics, an OSS team can recruit new members when a high level of structural interdependency is
perceived. This will allow the project to take advantage of information processing capabilities afforded by a large collaboration
network. On the other hand, when a team is unwilling or unable to recruit additional members for a project with a high level of
structural interdependency, project performance may be impaired as a result of insufficient information processing capabilities in this
team. Therefore,
Hypothesis 1: Collaboration network size and software structural interdependency mutually and positively moderate each other's
impact on OSS project performance; that is, the impact of network size on project performance is more likely to be positive when
software structural interdependency is higher, and the impact of software structural interdependency on project performance is more
likely to be favorable when network size increases.
B. Network Centralization and Structural Interdependency
Similar to network size, network centralization has mixed effects on information processing capabilities of a team. A centralized team
is better at identifying and consolidating expertise in a team. It incurs lower coordination cost than a chain-like network (low network
centralization). However, a centralized team structure imposes significant information processing load on the central nodes. This can
hamper the effectiveness of the whole team. Projects with a low level of structural interdependency do not require much consolidation
among knowledge domains, so centralization of the project team matters little there, leading to suboptimal project performance.
However, as the structural interdependency of a project increases, the advantage of a centralized team structure in identifying and
consolidating expertise from a wider range of knowledge domains becomes important. This advantage enables a more centralized
team to achieve better performance.
On the other hand, the tendency for software structural interdependency to negatively affect project performance can be particularly
strong in a team with a chain-like structure (low centralization), since such a team is relatively ineffective at knowledge
consolidation. This tendency can be reduced by a centralized team, since such a team can identify diverse information and coordinate
information processing activities. Therefore,
Hypothesis 2: Collaboration network centralization and software structural interdependency mutually and positively moderate each
other's impact on OSS project performance; that is, the impact of network centralization on project performance is more likely to be
positive in projects with a higher level of structural interdependency, and the impact of software structural interdependency on project
performance is more likely to be favorable when network centralization increases.
V) IMPLEMENTATION
The data for the study were collected from Github.com. Git is a distributed version control and source code management (SCM)
system with an emphasis on speed. Every Git working directory is a full-fledged repository with complete history and full version
tracking capabilities, not dependent on network access or a central server. When you get a copy of the repository, you do not just get
the snapshot, but the whole repository itself.
A. Data Collection
From the hundreds of OSS projects hosted at GitHub, we selected a sample of 15 projects for analysis. We selected projects
that were registered between Jan 2005 and Nov 2005 and that have at least ten developers in the collaboration network. This ensures
that the sampled projects have had sufficient elapsed time since starting for a significant amount of development activity to have
taken place. Collaboration network measures are sensitive to the size of the network; in particular, when the network size is small,
some network measures, such as centralization, become meaningless. Therefore, we restricted our sample to projects with at least
ten developers in the collaboration network.
1) Collaboration Network Structure: As discussed earlier, we use network size and network centralization to measure collaboration
network structure. Network size is measured by the total count of nodes in the network. The network centralization measure follows
the approach proposed by Freeman [8]: it expresses the degree of inequality in a network as a percentage of that of a perfect star
network of the same size. The higher the value, the more centralized the network is. We employed the widely used social network
analysis software UCINET 6 [12] to compute the structure metrics for the collaboration networks in our sample.

2) Software Structural Interdependency: Although automatic tools (e.g., Lattix) are available for evaluating software architecture,
these tools are usually limited to a few programming languages such as C/C++ and Java. The overwhelming amount of source code
and the wide range of programming languages in our sample prevent us from measuring software structural interdependency either
manually or with such automatic tools. In a study of OSS code, MacCormack et al. pointed out that "in software designs,
programmers tend to group source files of a related nature into 'directories' that are organized in a nested fashion." This suggests that
the Git file tree structure should be considered when gauging the level of software structural interdependency. We therefore measure
software structural interdependency as the average number of source code files per folder in the Git tree. A large number of files per
folder indicates a high level of structural interdependency, since files grouped into the same folder are typically related.
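
This files-per-folder measure is easy to reproduce. The sketch below walks a checked-out repository tree, skipping Git's metadata directory; no attempt is made here to filter out non-source files, which the actual study may have handled differently.

    import os

    def structural_interdependency(repo_root):
        """SSI proxy: average number of files per folder in the Git tree."""
        counts = []
        for folder, _, files in os.walk(repo_root):
            if '.git' in folder.split(os.sep):
                continue
            if files:
                counts.append(len(files))
        return sum(counts) / len(counts) if counts else 0.0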
3) OSS Development Performance: In OSS projects, performance cannot be measured by parameters such as cost of development and
development within schedule because OSS projects generally do not involve a budget or a deadline. Prior research on OSS
development has employed various OSS project performance measures such as OSS developers' perceived project success, the
percentage of resolved bugs, increase in lines of code (LoC), promotion to higher ranks in an OSS project, the number of subscribers
associated with a project, and number of code commits. Among these measures, we choose the number of code commits per developer
per day as our measure of OSS development performance. A code commit refers to a change in the working directory through adding,
deleting, or changing software code. The number of code commits is an objective measure of project performance that has been
repeatedly used by prior studies regarding effects of social processes on technical success of OSS projects. These studies indicate that
the number of code commits is particularly suited for an examination of both social and technical factors in OSS development. Our
measure, the number of code commits per developer per day, allows us to compare performance across projects with different team
size and duration.
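
The resulting performance measure reduces to a one-line calculation; the example numbers below are purely illustrative.

    from datetime import date

    def commits_per_developer_per_day(n_commits, n_developers, first, last):
        """Code commits per developer per day over the observed lifetime."""
        days = max((last - first).days, 1)
        return n_commits / (n_developers * days)

    print(commits_per_developer_per_day(900, 12, date(2005, 3, 1), date(2008, 3, 1)))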
4) Control Variables: We controlled for the following variables based on the previous literature.
Product Size: In the software engineering literature, product size has been identified as an important factor in the manpower, time,
and productivity of a project. Therefore, product size is used as a control variable. Following the literature, we measure product size
as the total LoC of an OSS project.
Programming Language: Software programming language is another well-recognized factor that may affect software performance.
Many projects employ more than one language; Java, C++, and C are the most frequently used. Due to the limited sample size, we
created four dummy (binary) variables: "Java," "C++," and "C" to account for the top three most frequently used languages, and
"other" to represent all the other languages. A project receives a value of 1 for a language dummy variable if it uses the language in
question, and a value of 0 otherwise. In other words, a project has four language values, one for each of the four dummy variables.
The "other" language variable was left out of the regression model in order to prevent the dummy variable trap.
License Type: OSS projects use different licensing schemes, and the specific license type used may affect developer motivation and
project success due to the commercial or noncommercial nature of a license. License type is measured as a binary variable. All projects
with the most popular OSS license, the GPL (General Public License, usually indicating a noncommercial OSS product), are
given a value of 1. All other projects have a value of 0.
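
The dummy coding can be sketched with pandas as follows; the data frame and its column names are hypothetical, chosen only to illustrate the scheme.

    import pandas as pd

    # Hypothetical project records; values are illustrative only.
    projects = pd.DataFrame({
        'languages': [['C', 'C++'], ['Java'], ['Perl']],
        'license':   ['GPL', 'BSD', 'GPL'],
    })
    for lang in ('Java', 'C++', 'C'):
        projects[lang] = projects['languages'].apply(lambda ls: int(lang in ls))
    # The implicit "other" dummy is omitted to avoid the dummy variable trap.
    projects['gpl'] = (projects['license'] == 'GPL').astype(int)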
B. Data analysis
We apply a linear regression model to verify our hypotheses. Linear regression is a statistical technique for relating a dependent variable
to one or more independent variables (predictors). The model employs the ordinary least squares (OLS) technique in hypothesis
testing, and captures a linear relationship between the expected value of the dependent variable and each independent variable (when the
other independent variables are held fixed). If linearity fails to hold, even approximately, it is sometimes possible to transform either
the independent or the dependent variables in the regression model to improve linearity. Here, the number of code commits per
developer per day, network size, and product size are log-transformed to account for the nonlinear relationships between project
performance and network size and product size. The other variables remain in their original form, because it is possible for them to take
"0" as a value; a log transformation of these variables would arbitrarily truncate out meaningful data points. We used the IBM
SPSS statistical tool for the analysis and results.
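
Although the analysis was run in IBM SPSS, the same model can be sketched with statsmodels. All column names below are assumptions rather than the authors' dataset; the transforms mirror the description above.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def fit_interaction_model(df):
        """Log-transform the skewed variables, mean-center the moderators,
        build the two interaction terms, and fit OLS."""
        y = np.log(df['codecommits'])
        size = np.log(df['network_size'])
        size -= size.mean()
        cent = df['centralization'] - df['centralization'].mean()
        ssi = df['ssi'] - df['ssi'].mean()
        X = pd.DataFrame({
            'ssi_x_size': ssi * size, 'ssi_x_cent': ssi * cent,
            'ssi': ssi, 'size': size, 'cent': cent,
            'product_size': np.log(df['loc']),
            'gpl': df['gpl'], 'c': df['C'], 'cpp': df['C++'], 'java': df['Java'],
        })
        return sm.OLS(y, sm.add_constant(X)).fit()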
C. Results

The model incorporates several interaction effects. Interaction effects represent the combined effects of variables on the dependent
measure: when an interaction effect is present, the impact of one variable depends on the level of the other variable. We take
code commits per developer per day as the dependent variable, the product of software structural interdependency and network size as
the first interaction variable, and the product of software structural interdependency and network centralization as the second
interaction variable. In such models, multicollinearity is a common problem. We attempted to correct for multicollinearity by
centering the interacting variables on their mean; centering simply means subtracting a single value (here, the mean) from all of the
data points. The descriptive statistics for the resulting dataset are shown in Table I. The results of the regression analysis are reported
in Table II and Table III. Note that the mean values in Table I are uncentered values.




TABLE I
Descriptive Statistics

Variable             N (Valid)  N (Missing)  Mean      Std. Deviation  Minimum    Maximum
codecommits          15         0            .2989     .4789           .0098      1.9000
Network size         15         0            2.6557    .8992           -1.7807    1.8893
N/w centralization   15         0            .1694     .1441           -.1495     .4305
SSI                  15         0            3.9189    3.9189          -6.3811    6.0189
Product size         15         0            10.4221   1.8959          8.0380     13.2020
Licence              15         0            .4667     .5164           .0000      1.0000
C                    15         0            .2000     .4140           .0000      1.0000
Cplus                15         0            .2667     .4577           .0000      1.0000
Java                 15         0            .2000     .4140           .0000      1.0000




TABLE II
Model Summary

Model   R      R Square   Adjusted R Square   Std. Error of the Estimate
1       .890   .792       .272                .408635






TABLE III
Results of Linear Regression Model (a)

Variable                   B        Std. Error   Beta     Significance   Hypothesis
(Constant)                 -.9857   .1309                 .0433
SSI * Network size         .0465    .0495        .4021    .0401          Supported
SSI * N/w centralization   .1140    .5570        -.1419   .0848          Supported
SSI                        -.0062   .0597        -.5050   .0300
Network size               -.6325   .2870        1.1874   .0920
Network centralization     -.2036   .9232        -.6126   .0350
Product size               -.0579   .1115        -.2294   .0631
Licence                    -.3221   .3766        -.3473   .0441
C                          -.0311   .4419        -.0268   .0947
Cplus                      .1046    .5625        .1000    .0861
Java                       -.2838   .3973        -.2453   .0515

(B and Std. Error are unstandardized coefficients; Beta is the standardized coefficient.)
a. Dependent Variable: codecommits
H1 says that collaboration network size and software structural interdependency mutually and positively moderate each other's impact
on OSS project performance. This concerns the coefficient of the interaction term of network size and software structural
interdependency. As shown in Table III this coefficient is positive and significant (β = 0.0465) in our model testing results. Hence H1
is supported.
Following Aiken and West [10], we calculated simple slopes of the effect of network size on the number of code commits per
developer per day at three values of software structural interdependency (SSI): SSI-high = 3.918, SSI-mean = 0, SSI-low = −3.918.
These three values are one standard deviation above the mean, the mean, and one standard deviation below the mean of centered SSI
values, respectively.
To gain a complete view of the mutually moderating effect of network size and SSI, we also computed the simple slopes of the effect
of SSI on the number of code commits per developer per day at one standard deviation above the mean (NS-high = 0.8992), the mean
(NS-mean = 0), and one standard deviation below the mean (NS-low = −0.8992) values of network size.
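
Under this procedure a simple slope is just the partial derivative of the outcome with respect to the predictor, evaluated at a fixed (centered) moderator value. The sketch below recomputes the network-size slopes from the Table III coefficients.

    def simple_slope(b_predictor, b_interaction, moderator):
        """d(outcome)/d(predictor) = b_predictor + b_interaction * moderator."""
        return b_predictor + b_interaction * moderator

    # Slope of network size (B = -0.6325, interaction B = 0.0465) at SSI one
    # SD below the mean, at the mean, and one SD above the mean.
    for ssi in (-3.918, 0.0, 3.918):
        print(ssi, simple_slope(-0.6325, 0.0465, ssi))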

Fig. 1. Simple slopes of network size. Fig. 2. Simple slopes of SSI.
Fig. 1 shows that network size tends to have a negative effect on project performance in terms of the number of code commits per
developer per day. However, this negative effect can be mitigated by the interaction between network size and software structural
interdependency, since the negative simple slopes of network size become less steep as the structural interdependency values increase.

In order to find the SSI value at which the effect of network size on project performance turns from negative to positive, we set the
derivative of the number of code commits with respect to network size to zero (−0.6325 + 0.0465 ∗ SSI = 0). Since the result of this
equation is a centered SSI value, we add the mean of SSI to the result to obtain the actual inflection point. The result indicates
that when there are more than 18 files per folder in a project, the effect of network size on project performance turns positive.
With respect to the effect of software structural interdependency on project performance, Fig. 2 reveals that this effect is negative in
small development teams but positive in large teams. Therefore, the interaction of network size and structural interdependency plays a
key role in the relationship between project characteristics and project performance. By setting the derivative of the number of code
commits with respect to SSI to zero (−0.0062 + 0.0465 ∗ Network-Size = 0) and adding the mean value of network size to the result,
we found that when there are more than 16 members in a project team, the effect of SSI on project performance becomes positive.
H2 proposes that collaboration network centralization and software structural interdependency mutually and positively moderate each
other's impact on OSS project performance. As shown in Table III, the coefficient of the interaction term of network centralization
and software structural interdependency is positive and significant (β = 0.1140). Thus, H2 is supported. Following the same simple
slope finding approach as for H1, we calculated the simple slopes of the effect of network centralization (NC) on the number of code
commits per developer per day at the SSI-high, SSI-mean, and SSI-low values.

Fig. 3. Simple slopes of centralization. Fig. 4. Simple slopes of SSI.
Meanwhile, the simple slopes of the effect of SSI on the number of code commits per developer per day at one standard deviation
above the mean (NC-high = 0.1441), the mean (NC-mean = 0), and one standard deviation below the mean (NC-low = −0.1441)
values of network centralization are calculated.
Figs. 3 and 4 suggest that network centralization and software structural interdependency reciprocally remedy each other's negative
effects on project performance. Setting the derivative of the number of code commits with respect to network centralization to zero
(−0.2036 + 0.1140 ∗ SSI = 0), we found that at a moderate level of software structural interdependency (more than 6 files per
folder) the effect of network centralization turns from negative to positive. The derivative analysis of the number of code commits
with respect to SSI (−0.0062 + 0.1140 ∗ Network-Centralization = 0) reveals that a network centralization of 0.22 is the
inflection point where the effect of structural interdependency turns from negative to positive.
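
All four inflection points reported in this section can be recomputed from the Table I means and the Table III coefficients, as the sketch below shows; the exponential back-transform applies only to network size, which was log-transformed before centering.

    import math

    def inflection_point(b_predictor, b_interaction, uncentered_mean, log_scale=False):
        """Moderator value at which a simple slope crosses zero: solve
        b_predictor + b_interaction * m = 0 for the centered moderator,
        undo the centering with the uncentered mean (Table I), and undo
        the log transform where one was applied."""
        m = -b_predictor / b_interaction + uncentered_mean
        return math.exp(m) if log_scale else m

    print(inflection_point(-0.6325, 0.0465, 3.9189))        # ~17.5 -> 18 files/folder
    print(inflection_point(-0.0062, 0.0465, 2.6557, True))  # ~16 developers
    print(inflection_point(-0.2036, 0.1140, 3.9189))        # ~5.7 -> 6 files/folder
    print(inflection_point(-0.0062, 0.1140, 0.1694))        # ~0.22 centralization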
VI) CONCLUSION
A key motivation for undertaking this study is the lack of research directed at socio-technical interactions in a system
development context. To OSS practitioners, the main implication of our findings is that they can gain the best of both worlds by
adopting a hybrid software development process that incorporates strengths of both the traditional software development model and
the recent OSS model. Our empirical analysis demonstrates a feasible way for OSS practitioners to quantify their team structure and
software architecture in order to achieve better development performance. For example, Mozilla was redesigned toward a lower level
of structural interdependency so that a less focused team distributed across geographic and organizational boundaries could contribute
to it. Moreover, the inflection points found in our study can be used as quantitative benchmarks for OSS practitioners to evaluate the
socio-technical interactions in their projects.

REFERENCES
[1] Sajad Shirali-Shahreza and Mohammad Shirali-Shahreza, "Various Aspects of Open Source Software Development," IEEE, 2008.
[2] C. L. Huntley, "Organizational learning in open-source software projects: An analysis of debugging data," IEEE Trans. Eng.
Manage., vol. 50, no. 4, pp. 485–493, Nov. 2003.
[3] E. Capra, C. Francalanci, and F. Merlo, "An empirical study on the relationship among software design quality, development
effort, and governance in open source projects," IEEE Trans. Softw. Eng., vol. 34, no. 6, pp. 765–782, Nov./Dec. 2008.
[4] Caryn A. Conley and Lee Sproull, "Easier Said than Done: An Empirical Investigation of Software Design and Quality in Open
Source Software Development," in Proc. 42nd Hawaii International Conference on System Sciences, 2009.
[5] Kevin Crowston, "An exploratory study of open source software development team structure," in Proc. ECIS 2003, Naples, Italy.
[6] J. R. Galbraith, "Organization design: An information processing view," Interfaces, vol. 4, no. 3, pp. 28–36, 1974.
[7] K. K. Kim and N. S. Umanath, "Structure and perceived effectiveness of software development subunits: A task contingency
analysis," J. Manage. Inform. Syst., vol. 9, pp. 157–181, 1992.
[8] L. C. Freeman, "Centrality in social networks: Conceptual clarification," Social Netw., 1979.
[9] M. Cataldo, J. D. Herbsleb, and K. M. Carley, "Socio-technical congruence: A framework for assessing the impact of technical and
work dependencies on software development productivity," in Proc. Second ACM-IEEE Int. Symp. Empirical Softw. Eng. Meas.,
2008, pp. 2–11.
[10] S. G. West and L. S. Aiken, Multiple Regression: Testing and Interpreting Interactions. Newbury Park, CA: Sage, 1991.
[11] G. Madey, V. Freeh, and R. Tynan, "The open source software development phenomenon: An analysis based on social network
theory," in Proc. Amer. Conf. Inform. Syst., 2002, pp. 1806–1813.
[12] S. P. Borgatti, M. G. Everett, and L. C. Freeman, UCINET IV Version 1.00. Columbia, SC: Analytic Technologies, 1992.











Environmental Monitoring and Controlling Various Parameters in a Closed
Loop
R Vijayarani¹, S. Praveen Kumar¹

¹Scholars, SRM University, Kattankulathur, Chennai
E-mail: vijiece29@gmail.com

ABSTRACT – A smart temperature monitoring and controlling system has been implemented with standard technology to actively
monitor the environmental conditions. The system allows a user to input the desired conditions regarding the surrounding
atmosphere's temperature requirements. This paper covers the design and development of a system for monitoring and controlling
temperature; the objective of the project is to develop a system that demonstrates intelligent monitoring and control. The system uses
ZigBee technology for communication. The effect of temperature on devices and heavy machines is a major concern in many
industrial and domestic applications, where temperature is monitored and controlled through external means such as coolants and
heaters, and many industries and domestic users have implemented a variety of such solutions. The project consists of two modules:
parameter monitoring and parameter controlling. Monitoring and controlling physical parameters like temperature is of utmost
importance. A temperature sensor, the LM35, is used for measuring temperature. With this project we demonstrate a cost-effective,
user-friendly system. ZigBee offers many advantages: low cost, good range with avoidance of obstruction issues, multi-source
products, low power consumption, and support for a huge network of more than 64,000 connected devices. It also offers a secured
environment for communication. A main target for this system is to have it designed and implemented as cost-efficiently as possible.

Keywords— Microcontroller, sensor, LM35, ZigBee, control test, Peltier, PWM
1. INTRODUCTION
In recent years, rapid advancements in embedded system technologies have had a great impact on industry, and a more sophisticated
society is evolving. We apply embedded techniques in warehousing and industry in order to measure and control temperature.
Temperature control is the process of maintaining temperature at a certain level, and it is in common use all over the world. In this
era of globalization the process has become an important element, because many applications in daily life, such as warehousing and
industry, depend on temperature measurements.
During operation, such facilities need to be monitored frequently in order to ensure their functionality and efficiency, especially with
respect to temperature, and it is important to study the temperature level recommended for a particular area. Good temperature
control is important during the research, reaction, separation, processing, and storage of products and feeds, and is thus a key to
product quality; it also matters for environmental control and energy conservation. Temperature is an important quantity in daily life,
science, and industry. Almost all processes depend on temperature, because heat makes molecules move or vibrate faster, resulting in
faster chemical reactions. Accurate measurement of the temperature of products in retail frozen food cabinets requires particular care.
Small items warm up quickly when removed from the cabinet or handled: drilling a hole, even with a precooled drill, will cause
errors unless this can be done without removing the package from its position in the cabinet. If the product is loosely packed, it is
easier and quicker to insert the sensor into the centre of the package, with minimum handling and without moving the package from
its original position.

The temperature of stacked packets may be measured by inserting a thin probe between packets, without disturbance, and allowing
sufficient time for a constant temperature to be reached, provided a rapid-response sensor is used. Temperature measurement plays a
major role in industry, warehousing, and hospitals.

2. BACKGROUND:
A microcontroller is a small, low-cost computer built for dealing with specific tasks, such as displaying information on a microwave
oven's LED panel or receiving information from a television's remote control. Microcontrollers are mainly used in products that
require a degree of control to be exerted by the user. A microcontroller can be regarded as a single-chip special-purpose computer
dedicated to executing a specific application. As in a general-purpose computer, a microcontroller consists of memory (RAM, ROM,
Flash), I/O peripherals, and a processor core. However, in a microcontroller the processor core is not as fast as in a general-purpose
computer, and the memory size is also smaller. Microcontrollers are widely used in embedded systems such as home appliances,
vehicles, and toys. Several microcontroller products are available in the market, for example Intel's MCS-51 (the 8051 family), the
Microchip PIC, and Atmel's Advanced RISC Architecture (AVR).

2.1 ATMEGA 8:
The ATmega8535 is an 8-bit AVR microcontroller that combines low power consumption with high performance and follows the
advanced RISC architecture. The ATmega8 has 28 pins, 23 programmable I/O lines, 512 bytes of EEPROM, 1 Kbyte of internal
SRAM, two 8-bit timer/counters, one 16-bit timer/counter, an 8-channel ADC in the TQFP package, a 6-channel ADC in the PDIP
package, and a 5 V operating voltage. It contains 3 ports of 8 pins each. By executing powerful instructions in a single clock cycle,
the ATmega8 achieves throughputs approaching 1 MIPS per MHz, allowing the system designer to optimize power consumption
versus processing speed. Details of the ATmega8535 microcontroller are described in [1].

2.2 LM35
The LM35 series are precision integrated-circuit temperature sensors with an output voltage linearly proportional to the Centigrade
temperature. The LM35 thus has an advantage over linear temperature sensors calibrated in kelvin, as the user is not required to
subtract a large constant voltage from the output to obtain convenient Centigrade scaling. Low cost is assured by trimming and
calibration. The low output impedance, linear output, and precise inherent calibration of the LM35 make interfacing to readout or
control circuitry especially easy [2]. The device is used with single power supplies. As the LM35 draws only 60 μA from the supply,
it has very low self-heating of less than 0.1 °C in still air. The LM35 is rated to operate over a −55 °C to +150 °C temperature range.
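
Given the LM35's scale factor of 10 mV per °C (from its datasheet), converting a raw ADC count to a temperature is a one-line calculation. The sketch below assumes a 5 V reference and the ATmega8's 10-bit ADC.

    def lm35_celsius(adc_count, vref=5.0, adc_bits=10):
        """Convert a raw ADC reading of the LM35 output to degrees Celsius:
        Vout = count * Vref / 2^bits, and T = Vout / 0.010 V per degree."""
        vout = adc_count * vref / (2 ** adc_bits)
        return vout / 0.010

    print(lm35_celsius(62))  # ~30.3 degrees C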


2.3 ZIGBEE
ZigBee is a specification for a suite of high-level communication protocols based on the IEEE 802.15.4 standard. It uses little power.
Though low-powered, ZigBee devices can transmit data over long distances by passing data through intermediate devices to
reach more distant ones, creating a mesh network; i.e., a network with no centralized control or high-power transmitter/receiver able to
reach all of the networked devices. The decentralized nature of such wireless ad hoc networks makes them suitable for applications
where a central node can't be relied upon. It is used in applications that require only a low data rate, long battery life, and secure
networking. It has a defined rate of 250 Kbit/s, best suited for periodic or intermittent data or a single signal transmission from a
sensor or input device. Applications include wireless light switches, electrical meters with in-home-displays, traffic management
systems, and other consumer and industrial equipment that requires short-range wireless transfer of data at relatively low rates. The
ZigBee specification is intended to be simpler and less expensive than other WPANs such as Bluetooth or Wi-Fi.
2.4 PELTIER DEVICE
A Peltier device creates a temperature differential between its two sides [9]: one side gets hot and the other side gets cool. It can
therefore be used either to warm something up or to cool something down, depending on which side is used. The temperature
differential can also be exploited to generate electricity. A Peltier device works very well as long as the heat is removed from the hot
side; after the device is turned on, the hot side heats up quickly and the cold side cools quickly.
3. RELATED WORK
Zhu and Bai [3] proposed a system for monitoring the temperature of electric cable interface in power transmission, based on Atmel
AT89C51 microcontroller. The system consists of a central PC machine, host control machines, and temperature collectors. Several
temperature collectors are connected to a host control machine through RS-485 communication network, and the host control machine
communicates and exchanges data with the central PC machine using General Packet Radio Service (GPRS) connection. The
temperature collector itself consists of temperature sensors (Maxim's DS18B20 1-wire digital thermometer), decoders and other interfacing circuits. Each temperature collector saves the temperature in SRAM and sends the temperature information back to the host control machine when requested. Each host control machine also stores this temperature data in its memory (SRAM) and sends it back to the central PC machine when requested. In this system, communication over the RS-485 network is limited by cable length (1200 meters). In [4], Loup et al. developed a Bluetooth embedded system for monitoring server room temperature: when the room temperature rises above a threshold, the system sends a message to each server via Bluetooth to shut it down.
There are also several works on wireless temperature monitoring systems based on ZigBee technology [5, 6, 7]. Bing and Wenyao [5] designed a wireless temperature monitoring and control system for a communication room. They used Jennic's JN5121 ZigBee wireless microcontroller and Sensirion's SHT11 temperature sensor. The system proposed in [6] uses Chipcon's CC2430 ZigBee System-on-Chip (SoC) and Maxim's DS18B20 temperature sensor. In [7], Li et al. developed a ZigBee-based wireless monitoring system for both temperature and humidity.

In contrast to these systems, ours uses a personal computer. The values transmitted and received through ZigBee are passed to the personal computer, so that the temperature can be changed from a distance; this can be done accurately by extending the ZigBee range. The system controls both a heater and a Peltier cooler [10].




4. DESIGN AND IMPLEMENTATION:

Figure 1: Block diagram (PC – CP2102 – ZigBee transceiver – ATmega8 with temperature sensors 1 and 2 in Room 1; ZigBee transceiver – ATmega8 driving the cooling and heating devices in Room 2)

4.1 SPECIFICATION:
We define our system to have the following specification:
1. Display room temperature
2. Set the required temperature
This project focuses on monitoring and controlling the temperature. The required temperature is set by the PC application that has been developed, and an alarm is raised when the temperature rises above or falls below the set value. The system consists of two parts:
1. Hardware
2. Software

4.2 HARDWARE:
The hardware used here comprises a temperature sensor, a ZigBee module, a Peltier cooler, an ATmega8 and a CP2102 [8]. The specification of each product and the connections are given below. The LM35 is the temperature sensor used in this system [12], the ZigBee module used is an XBee Series 2, and the Peltier module is a TEC1-12706 [9]. The user sets the temperature actually required in the room, e.g., to preserve particular items or to prepare chemicals in industry. This required temperature reading is passed from the PC to the controller; the set temperature is then maintained and watched continuously. The current room temperature is transmitted back from the room to the PC via ZigBee, so both transmission and reception of temperatures are done by ZigBee. The transmitter circuit is prepared first, as follows.



Figure 2 : Transmitter circuit
In the monitoring part, the current temperature is sensed and transmitted via the ATmega8 and ZigBee.
In the controlling part, the required set temperature is controlled. The temperature we require is transmitted from the laptop or computer to the transmitter circuit board, and the controller sends the required temperature through the transmitter ZigBee to the receiver ZigBee. Meanwhile, the controller on the board starts to generate PWM. The transmitter circuit also contains an L293D IC, which is used to drive the Peltier cooler and heater. The ZigBee module and microcontroller operate at less than 5 V; to step the drive from this low voltage up to the 12 V needed by the loads, this IC is used. It takes a low-current control signal and provides a higher-current signal, which drives the cooling device and allows the Peltier to cool or heat to the required level. A photograph of the system is shown in Fig. 3. The system can be extended by introducing authentication: user name and password authentication is done using .NET, so that only an authenticated user can access the personal computer to set the particular temperature required to preserve the products or chemicals. Chemicals in industry can leak gas; such gas can be detected by a sensor and reported via ZigBee to the personal computer, with the messaging handled by .NET at the backend, so the system remains authenticated. This can be used effectively in case of danger. Similarly, with an upgraded version of the ATmega8, a GSM message can be sent in emergencies. The system ensures the safety of food products, chemicals and medicines. The PWM generation of the microcontroller increases the efficiency compared with other temperature control projects, and due to the use of PWM, accuracy is maintained exactly.
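Whether the Peltier module heats or cools depends on the direction of the current through it, which the L293D selects via its input pins. The sketch below is a hypothetical illustration (the paper does not give its pin mapping): PD6 and PD7 are assumed to drive the two L293D inputs, while the PWM output sets the drive level on the enable side.

/* Hypothetical pin mapping (assumed for illustration; not from the paper):
   two ATmega8 port pins drive the L293D inputs 1A/2A, selecting the current
   direction through the Peltier module. */
#include <mega8.h>               /* CodeVision register definitions */

#define PELTIER_IN_A (1 << 6)    /* PD6 -> L293D input 1A */
#define PELTIER_IN_B (1 << 7)    /* PD7 -> L293D input 2A */

void peltier_cool(void) { PORTD |= PELTIER_IN_A; PORTD &= ~PELTIER_IN_B; }
void peltier_heat(void) { PORTD |= PELTIER_IN_B; PORTD &= ~PELTIER_IN_A; }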


Figure 3: Hardware photograph
4.3 SOFTWARE:
The software used for programming and simulating the ATmega8 is CodeVision AVR. The software used for display is X-CTU, which continuously displays the temperature values [11].
CODEVISION AVR:
The software has two main parts: 1) reading the temperature from the ADC, and 2) controlling the temperature in various situations.
1. READ TEMPERATURE FROM ADC

#include <mega8.h>           // CodeVision register definitions for the ATmega8
#include <delay.h>
#include <stdio.h>           // for putchar()

#define ADC_VREF_TYPE 0x40   // assumed: AVCC as the voltage reference

// Read the AD conversion result
unsigned int read_adc(unsigned char adc_input)
{
    // Select the input channel, keeping the reference-selection bits
    ADMUX = adc_input | (ADC_VREF_TYPE & 0xff);
    // Delay needed for the stabilization of the ADC input voltage
    delay_us(10);
    // Start the AD conversion
    ADCSRA |= 0x40;
    // Wait for the AD conversion to complete
    while ((ADCSRA & 0x10) == 0);
    ADCSRA |= 0x10;          // clear the completion flag
    return ADCW;
}
...
while (1)
{
    raw_temp = read_adc(3);  // the LM35 is on ADC channel 3
    putchar(raw_temp);       // send the reading over the UART
    delay_ms(500);
}
2. CONTROL THE TEMPERATURE IN VARIOUS SITUATIONS:
When the set temperature is less than the current temperature:

if (set_temp < current_temp)
{
    OCR1A = 0;
    OCR1B = 255;   // drive the Peltier cooler at full duty
}

When the set temperature is greater than the current temperature:

else if (set_temp > current_temp)
{
    OCR1A = 0;
    OCR1B = 0;     // cooler off
}

When the set temperature is equal to the current temperature:

else if (set_temp == current_temp)
{
    OCR1A = 0;
    OCR1B = 127;   // hold at roughly half duty
}
The flow of the code is as follows:

START → initialise the ADC, ports, timer/counter and SPI → configure the USART (UCSRA=0x00; UCSRB=0x18; UCSRC=0x86; UBRRH=0x00; UBRRL=0x47) → read the ADC value, i.e. temp = read_adc(3) → calculate the average value of the current temperature → if set temp < current temp, Peltier ON; else if set temp == current temp, Peltier mildly ON; else Peltier OFF → STOP.


5. RESULT AND DISCUSSION:

The results of the system are as follows. The first step is to install the X-CTU software and the CP2102 USB-to-UART converter driver. As soon as the DB9 connector of the ZigBee receiver board is connected by cable to one of the PC's ports, that particular serial port is selected. The start-monitor button is then pressed, which displays the current temperature; each time the refresh button on the transmitter circuit is pressed, the value is updated. The set-temperature button is then used to set the particular temperature required in the industry. As soon as the required temperature is set, the current temperature of the room is driven to the set value (19 in this test). Figure 4 shows the output, checked with Peltier cooling.


Figure 4: output
6. CONCLUSION

In this paper, we have designed and implemented a microcontroller-based system for monitoring and controlling temperature in industry. We utilized the Atmel AVR ATmega8 microcontroller and the LM35 temperature sensor. Based on the testing results, the system works according to our predefined specification. The system can help an administrator monitor and control the temperature in industrial settings. It can also raise an alarm and send a text message to warn the administrator of fire or gas leakage, which is especially relevant in chemical industries and warehouses where many kinds of materials are stored. The project thus helps prevent damage to the stored materials.

REFERENCES:
[1] Atmel Corp. 2006 ATmega8 Datasheet. http://www.atmel.com/images/atmel-2486-8-bit-avr-microcontroller-
atmega8_l_datasheet.pdf
[2] National Semiconductor Corporation, LM35 datasheet, precision centigrade temperature sensors, Atmel data book, November
2000 update.


[3] HongLi Zhu and LiYuan Bai. 2009. Temperature monitoring system based on AT89C51 microcontroller. In IEEE International
Symposium on IT in Medicine Education. ITIME (August 2009), volume 1, 316-320.
[4] T.O. Loup, M. Torres, F.M. Milian, and P.E. Ambrosio. 2011. Bluetooth embedded system for room-safe temperature monitoring.
Latin America Transactions, IEEE (Revista IEEE America Latina) (October 2011), 9(6):911-915.
[5] Hu Bing and Fan Wenyao. 2010. Design of wireless temperature monitoring and control system based on ZigBee technology in
communication room. In 2010 International Conference on Internet Technology and Applications (August 2010), 1-3.
[6] Lin Ke, Huang Ting-lei, and Li Iifang. 2009. Design of temperature and humidity monitoring system based on zigbee technology.
In Control and Decision Conference. CCDC (June 2009).Chinese , 3628-3631.
[7] Li Pengfei, Li Jiakun, and Jing Junfeng. Wireless temperature monitoring system based on the ZigBee technology. 2010. In 2010
2nd International Conference on Computer Engineering and Technology (ICCET), volume 1 (April 2010), V1-160-V1-163.
[8] Silicon Labs, CP2102/9 datasheet. http://www.silabs.com/Support%20Documents/TechnicalDocs/CP2102-9.pdf
[9] Peltier device (SparkFun). https://www.sparkfun.com/products/10080
[10] Kooltronics, "Basic cooling methods."
[11] CodeVision AVR user manual. https://instruct1.cit.cornell.edu/courses/ee476/codevisionC/cvavrman.pdf
[12] Basic workings of temperature sensors. http://electronicsforu.com/electronicsforu/circuitarchives/view_article.asp?sno=1476&title%20=%20Working+With+Temperature+Sensors%3A+A+Guide&id=12364&article_type=8&b_type=new












Design and Testing of Solar Powered Stirling Engine
Alok Kumar1, Dinesh Kumar1, Ritesh Kumar1
1Scholar, Mechanical Engineering Department, N.I.T. Patna
Email- kumargaurav4321@gmail.com

Abstract- This report presents the different components of a Stirling engine and their various configurations, along with the feasibility of using solar energy as a potential heat source for driving the engine. In addition, it contains the design details of the various parts of the Stirling engine and of the materials used. The engine parts are of mild steel, aluminium and cast iron, so turning, facing, grinding, cutting, threading and tapping operations were used in the fabrication of the engine. Design calculations are performed for the different components of the Stirling engine and the parabolic dish: hot cylinder, hot (displacer) piston, cold cylinder, cold piston, connecting rod, flywheel and parabolic dish.
Keywords- Joint board, hot cylinder, displacer piston, cold cylinder, cold piston, connecting rod, flywheel, slider, crank, rotating disc, connecting pins, shaft, frame, dish, piston holder, sealing nipple.
1. Introduction
The energy crisis is a harsh reality in the present scenario. Conventional fossil fuels like coal, natural gas and petroleum products will be exhausted in the near future, and the prices of these fuels are increasing day by day. Pollution and global warming are further drawbacks of conventional fossil fuels, so the use of alternative sources which provide clean and green energy is important. This report demonstrates that the Stirling engine, an external heat engine, can be used as an efficient and clean way of producing energy with the help of a concentrating parabolic reflector. It is used in some very specialized applications, such as submarines and auxiliary power generators. The Stirling engine was first invented by Robert Stirling, a Scot, in 1816.
A Stirling engine is a heat engine operating by cyclic compression and expansion of the working fluid (air or another gas) at different temperature levels, such that there is a net conversion of heat energy to mechanical work. When the gas is heated, because it is in a sealed chamber, the pressure rises and acts on the power piston to produce a power stroke. When the confined gas is cooled, the pressure drops, and the piston then recompresses the gas on the return stroke, giving a net gain in the power available at the shaft. The working gas flows cyclically between the hot and cold heat exchangers. The Stirling engine contains a fixed amount of gas that is transferred back and forth between a cold end and a hot end. The displacer piston moves the gas between the two ends, and the power piston is driven by the change in internal volume as the gas expands and contracts. The engine presented here is an external combustion engine, designed so that the working gas (air) is generally compressed in the colder portion of the engine and expanded in the hotter portion, resulting in a net conversion of heat into work. A Stirling engine system therefore has at least one heat source, one heat sink and heat exchangers; heat is transmitted from the heat source to the working fluid through the heat exchangers and finally to the heat sink.
There are three types of Stirling engine, distinguished by the way they move the air between the hot and cold sides of the cylinder: alpha, beta and gamma. The engine used in this study is of the beta configuration: a beta Stirling has a single power piston arranged within the same cylinder, on the same shaft, as a displacer piston. The displacer piston shuttles the working gas from the hot heat exchanger to the cold heat exchanger. The displacer is a special-purpose piston, used in beta- and gamma-type Stirling engines, to move the working gas back and forth between the hot and cold heat exchangers. When the working gas is pushed to the hot end of the cylinder it expands and pushes the power piston. The displacer is large enough to insulate the hot and cold sides of the cylinder thermally and to displace a large quantity of gas.

2. Calculations
2.1 Hot cylinder calculations:
Assuming a pressure of 2 bar = 0.2 MN/m²
External diameter of hot cylinder (D_o) = 50 mm
Thickness of cylinder (T_hc) = P*D_o/(2*σ_t) = 0.2*50/(2*48)
T_hc = 0.104 mm, taken as 1.5 mm (due to standard size of tube)

Internal diameter of hot cylinder (D_i) = 50 − 2*1.5 = 47 mm
Length of hot cylinder (L_h) = 3*D_i = 141 mm ≈ 140 mm
2.2 Hot (displacer) piston calculations:
Diameter of hot piston (D_p) = 47 − 2 = 45 mm (1 mm clearance on each side)
Thickness of hot piston (T_hp) = 0.03*D_p = 1.35 mm, taken as 0.25 mm (due to standard size of aerosol bottle)
Length of hot piston (L_p) = 80 mm
2.3 Cold cylinder calculations:
Assuming a pressure of 2 bar = 0.2 MN/m²
External diameter of cold cylinder (d_o) = 32 mm
Thickness of cold cylinder (t_cc) = P*d_o/(2*σ_t) = 0.2*32/(2*68)
t_cc = 0.047 mm, taken as 1.5 mm (due to standard size of tube)
Internal diameter of cold cylinder (d_i) = 32 − 3 = 29 mm
Length of cold cylinder (l_c) = 67 mm
2.4 Cold piston calculations:
Diameter of cold piston (d_p) = 29 mm
Thickness of cold piston (t_cp) = 0.03*d_p = 0.87 mm ≈ 1 mm (due to standard thickness of tube)
Length of cold piston (l_p) = 35 mm
2.5 Connecting rod calculations:
Diameter of connecting rod (d_1) = 6 mm
Length of connecting rod part 1 (l_1) = 11.5 mm
Radius of gyration of the rod (k) = d/4 = 6/4 = 1.5 mm
Also, we have the constant K = 4/25000
Crippling stress on the rod (f_cr1) = f_c/[1 + K*(l/k)] = 213/[1 + (4/25000)*(11.5/1.5)]
= 212.7 MN/m² < 268 MN/m², the yield strength of mild steel.
Hence the design is safe.
Similarly, for connecting rod parts 2, 3 and 4 the lengths are as follows:
Length of connecting rod part 2 (l_2) = 8.5 mm
Length of connecting rod part 3 (l_3) = 5.5 mm
Length of connecting rod part 4 (l_4) = 4.8 mm
Crippling stress values for parts 2, 3 and 4 are as follows:
f_cr2 = 213/[1 + (4/25000)*(8.5/1.5)]

= 212.8 MN/m²
f_cr3 = 213/[1 + (4/25000)*(5.5/1.5)] = 212.8 MN/m²
f_cr4 = 213/[1 + (4/25000)*(4.8/1.5)] = 212.8 MN/m²
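As a quick cross-check of these four values, a small program (illustrative only, using the constants defined above; not part of the original design work) can evaluate the same relation for each rod segment:

/* Illustrative cross-check of the crippling-stress relation used above:
   f_cr = f_c / (1 + K*(l/k)), with f_c = 213 MN/m^2, K = 4/25000, k = 1.5 mm. */
#include <stdio.h>

int main(void)
{
    const double fc = 213.0;           /* MN/m^2 */
    const double K  = 4.0 / 25000.0;
    const double k  = 1.5;             /* mm, radius of gyration */
    const double lengths[] = { 11.5, 8.5, 5.5, 4.8 };   /* part lengths, mm */
    int i;
    for (i = 0; i < 4; i++)
        printf("f_cr%d = %.1f MN/m^2\n",
               i + 1, fc / (1.0 + K * (lengths[i] / k)));
    return 0;   /* all results stay below the 268 MN/m^2 yield strength */
}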
2.6 Calculations of flywheel:
Shaft diameter (D_s) = 15 mm
Diameter of the flywheel (D_f) = 118 mm
Width of the rim (B) = 25 mm
Thickness of the rim (t_f) = 5 mm
Hub diameter (d_h) = 2*D_s = 30 mm
Length of the hub (l_h) = 2*D_s = 30 mm
Taking a speed of 600 RPM, speed (n) = 600/60 = 10 rev/s
Change in energy E = C_E*P/n = 0.29*5/10 = 0.145 J
Weight of the flywheel = 0.75 kg
Velocity of the wheel = π*D_f*n = π*118*10 = 3707.1 mm/s = 3.71 m/s
Mass density of cast iron (ρ) = 7200 kg/m³
Centrifugal force on one half of the rim = 2*B*t_f*ρ*v²/10⁶ = 2*25*5*7200*3.71²/10⁶ = 24.78 N
Tensile stress at the rim section due to centrifugal force = ρ*v² = 7200*3.71² ≈ 99.1 kN/m²

2.7 Parabolic dish calculations:

f = (D * D) / (16 * c)
where f = focal length, c = depth of dish, D = diameter.
For D = 420 mm and c = 37 mm:
f = (420 * 420) / (16 * 37) = 297.97 mm ≈ 298 mm
Length of minor axis = 420 mm
Length of major axis = 525 mm
Area of dish = π*a*b = π*525*420 = 692721.2 mm² = 6927.2 cm²
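The focal-length relation is easy to evaluate programmatically; the short function below (an illustrative sketch, names ours) reproduces the 298 mm result for the dish used here.

/* Focal length of a parabolic dish: f = D*D / (16*c). Illustrative sketch. */
#include <stdio.h>

double focal_length(double diameter, double depth)
{
    return (diameter * diameter) / (16.0 * depth);
}

int main(void)
{
    /* D = 420 mm, c = 37 mm -> f = 297.97 mm, rounded to 298 mm */
    printf("f = %.2f mm\n", focal_length(420.0, 37.0));
    return 0;
}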

2.8 Calculation for direct radiation:
Latitude (l) = 30°
Hour angle = 0°
Reflectivity of the material = 0.96
Tilt angle Σ = 90°
Declination d = 23.5°
Altitude angle at solar noon β_max = 90 − (l − d) = 90 − (30 − 23.5) = 83.5°
At solar noon, solar azimuth angle γ = 180°
Wall azimuth angle α = 180 − (γ − ξ) = 0°
Incident angle θ overall = cos⁻¹(cos β * cos α) = cos⁻¹(cos 89.53° * cos 180°) = 90.47°
Direct radiation I_DN = A*exp(−B/sin β) = 1080*exp(−0.21/sin 83.5°) = 874 W/m²
I_DN * cos θ = 874 * cos 90.47° = −7.16 W/m²
Diffuse radiation:
View factor F_ws = (1 + cos Σ)/2
Diffuse radiation I_d = C*I_DN*F_ws = 0.135*874*0.5 = 58.99 W/m²
Reflected radiation (for ρ_g = 0.96): I_r = (I_DN + I_d)*ρ_g*F_wg = (874 + 58.99)*0.96*0.5 = 447.8 W/m²
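The direct-radiation figure can likewise be reproduced from the clear-sky relation I_DN = A*exp(−B/sin β) used above; the snippet below is an illustrative check with A = 1080 W/m² and B = 0.21.

/* Illustrative check of I_DN = A * exp(-B / sin(beta)) for beta = 83.5 deg. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979;
    const double A = 1080.0, B = 0.21;
    const double beta = 83.5 * PI / 180.0;   /* altitude angle in radians */
    printf("I_DN = %.0f W/m^2\n", A * exp(-B / sin(beta)));   /* ~874 */
    return 0;
}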

3. Fabrication Details
The fabrication details of different parts of the engine are given below with the detail of the operations performed


Fig.1 The Stirling engine
3.1 Joint Board - The joint board is a cast-iron rectangular slab in which two overlapping holes were made, one on each side. The holes were overlapped in order to provide a transition for the working fluid from the hot cylinder to the cold cylinder. The hole for the hot cylinder was 50 mm in diameter and that for the cold cylinder 32 mm. Tapping of M16 was performed on both holes to provide internal threads so that the cylinders could be fitted into them; M16 tapping gives a 2 mm thread pitch.
3.2 Hot Cylinder - The hot cylinder is a 140 mm long cylinder with a 50 mm external diameter and a wall thickness of 2 mm. External M16 threads were provided so that it could be fitted into its hole; threads were provided up to 50 mm from the front side. On the posterior side a circular aluminium plate, where heat absorption takes place, was welded to the cylinder.
3.3 Cold Cylinder - The cold cylinder is a 67 mm long mild steel cylinder with a 32 mm external diameter. Threading was provided on the side to be fitted into the joint board hole; the other side was for the piston, which was connected to the crank with a connecting pin.
3.4 Hot Piston - An aluminium pesticide bottle was used as the piston for the hot cylinder. The connecting rod was fixed in it with the help of an internally threaded teflon block. The piston was ground to a smooth surface finish so as to allow easy movement inside the hot cylinder.
3.5 Connecting Rod - Mild steel rod was used as the connecting rod for both pistons. For the hot cylinder the connecting rod was fitted with the help of threads, internal threads on the teflon block mating with external threads on the rod. For the cold piston the connecting rod was fitted to the piston with a movable 1 mm pin.
3.6 Final Assembly - All the components were then assembled on a board in proper alignment with the help of welding. The final assembly was then placed on a frame so that the parabolic dish could be focused on it properly.
3.7 Parabolic Dish - A parabolic dish of 420 mm minor axis, 525 mm major axis and 37 mm depth was used; its focal point is at 298 mm. The dish was first mended for minor flaws by hammering, then cleaned with emery paper, and a layer of reflective paper was placed on it for reflectivity. A convex lens of 6 inch focal length was further procured to obtain a better focus of the incident light on the hot cylinder.
4. Conclusion
A simple design analysis of a Stirling engine operating between two heat reservoirs with the help of solar energy has been presented. The shaft rotates when solar energy is imparted to the hot zone of the engine. This design achieves lower hot-side temperatures than a traditionally fired Stirling engine, so the overall efficiency is low. Reducing friction between mating parts and providing proper lubrication are also important for increasing the overall efficiency.













Wireless Sensor Network Protocol Implementation by Using Hybrid
Technology
Rupesh Raut1, Prof. Nilesh Bodne2
1Scholar, Rashtrasant Tukadoji Maharaj Nagpur University
2Faculty, Rashtrasant Tukadoji Maharaj Nagpur University
Email- me.rupesh_raut@rediffmail.com

ABSTRACT – A multi-hop wireless sensor network is composed of a large number of nodes and the links between them. A wireless sensor network normally consists of many distributed nodes. One of the main problems in a WSN is power, because every node is operated from a battery. To achieve a long network lifetime, all nodes need to minimize their power consumption. Each node carries only a small battery, so the available energy is very limited, and replacing or recharging batteries is impractical and costly. Hence, techniques are applied through which the power consumed by each node can be conserved. In this paper we propose a design for the implementation of a wireless sensor network protocol with low power consumption, using a power gating signal.

Keywords— Wireless sensor network, power consumption, node, battery, network lifetime, protocol, inactive state.

INTRODUCTION
The term "wireless" has become a generic and all-encompassing word used to describe communications in which
electromagnetic waves to carry a signal over part or the entire communication path. Wireless technology can able to reach virtually
every location on the surface of the earth. Due to tremendous success of wireless voice and messaging services, it is hardly surprising
that wireless communication is beginning to be applied to the domain of personal and business computing. [2].Ad- hoc and Sensor
Networks are one of the parts of the wireless communication. In ad-hoc network each and every nodes are allow to communicate with
each other without any fixed infrastructure. This is actually one of the features that differentiate between ad-hoc and other wireless
technology like cellular networks and wireless LAN which actually required infrastructure based communication like through some
base station. [3].
Wireless sensor networks are a category of ad-hoc network. Sensor networks are also composed of nodes; here the node has a specific name, "sensor", because the nodes are equipped with smart sensors [3]. A sensor node is a device that converts a sensed characteristic, such as temperature, vibration or pressure, into a form recognizable by users. Wireless sensor network nodes are less mobile than those of general ad-hoc networks, so mobility in the ad-hoc case is greater. In a wireless sensor network, data are requested depending on certain physical quantities, so a wireless sensor network is data-centric. A sensor node consists of a transducer, an embedded processor, a small memory unit and a wireless transceiver, and all these devices run on the power supplied by an attached battery [2].
Traditionally, wireless sensor network motes are developed on an SoC platform, but here we implement the protocol on an FPGA platform, and so we use the term hybrid technology.


Fig.1 Wireless Sensor Network

Battery Issues

The battery supplies power to the complete sensor node and hence plays a vital role in determining sensor node lifetime. Batteries are complex devices whose operation depends on many factors, including battery dimensions, the type of electrode material used, and the diffusion rate of the active materials in the electrolyte. In addition, several non-idealities can creep in during battery operation and adversely affect system lifetime. We describe the various battery non-idealities and discuss system-level design approaches that can be used to prolong battery lifetime [1].

Rated Capacity Effect

The most important factor that affects battery lifetime is the discharge rate, i.e., the amount of current drawn from the battery. Every battery has a rated current capacity, specified by the manufacturer. Drawing a higher current than the rated value leads to a significant reduction in battery life, because if a high current is drawn from the battery, the rate at which active ingredients diffuse through the electrolyte falls behind the rate at which they are consumed at the electrodes. If the high discharge rate is maintained for a long time, the electrodes run out of active materials, resulting in battery death even though active ingredients are still present in the electrolyte. Hence, to avoid battery life degradation, the amount of current drawn from the battery should be kept under tight check. Unfortunately, depending on the battery type (lithium-ion, NiMH, NiCd, alkaline, etc.), the minimum required current consumption of sensor nodes often exceeds the rated current capacity, leading to suboptimal battery lifetime [4].

Relaxation Effect

The effect of high discharge rates can be mitigated to a certain extent through battery relaxation. If the discharge current from
the battery is cut off or reduced, the diffusion and transport rate of active materials catches up with the depletion caused by the
discharge. This phenomenon is called the relaxation effect and enables the battery to recover a portion of its lost capacity. Battery
lifetime can be significantly increased if the system is operated such that the current drawn from the battery is frequently reduced to
very low values or is completely shut off [5].

Proposed Method for Implementation of the Wireless Sensor Network Protocol

The sensor node‘s radio enables wireless communication with neighboring nodes and the outside world. In general, radios can
operate in four distinct modes of operation: Transmit, Receive, Idle, and Sleep. An important observation in the case of most radios is
that operating in Idle mode results in significantly high power consumption, almost equal to the power consumed in the Receive mode
[6]. Thus, it is important to completely shut down the radio rather than transitioning to Idle mode when it is not transmitting or
receiving data. Another influencing factor is that as the radio‘s operating mode changes, the transient activity in the radio electronics
causes a significant amount of power dissipation. For example, when the radio switches from sleep mode to transmit mode to send a
packet, a significant amount of power is consumed for starting up the transmitter itself [7].
Our idea is therefore to keep the wireless sensor node in an inactive (shut-down) mode until it gets a power gating signal. To implement this idea, we first consider the transmitter and receiver design and then develop the node (i.e., transmitter plus receiver) using the power gating signal.


In the transmitter design, the transmitter is normally in the idle state, waiting for an RX_BEACON signal from the receiver. After receiving the RX_BEACON signal it transmits data and a cyclic redundancy check (CRC) to the receiver until it gets an acknowledgement from the receiver; after getting the ACK signal the transmitter re-enters idle mode. The flow diagram for the transmitter is shown in Fig. 2; an illustrative software model follows.

























Fig. 2. Transmitter flow diagram (idle → RX_BEACON = 1? → send DATA + CRC → RX_ACK = 1? → back to idle)


In the receiver design, when the receiver wants to receive data it sends an RX_BEACON signal to the transmitter; after receiving the data, it decides the status of the various devices (e.g. ON/OFF) depending on the data. It also sends an ACK signal after receiving the desired data. The flow diagram for the receiver is shown in Fig. 3.




















Fig. 3. Receiver flow diagram (start → send RX_BEACON → data received → if data > 128 bytes, device ON, else device OFF → send ACK)

Having considered the transmitter and receiver designs, we develop the node with a power gating signal. In this design the node is kept in inactive mode (shut-down mode); after getting the active-low POWER GATING signal it enters control mode, where it waits for the RX_BEACON signal from the receiver. On an active-high beacon it transmits data with CRC until it gets an acknowledgement from the receiver; depending on the nature of the data it commands the device, and after getting the acknowledgement it re-enters inactive mode. The flow diagram for the node is shown in Fig. 4; an illustrative C model follows.
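The same modelling style extends to the node with power gating; again this C sketch is ours (the real design is VHDL), with send_data_with_crc() a hypothetical stand-in.

/* Illustrative C model of the node with power gating: the node stays shut
   down until the active-low POWER GATING signal arrives, then behaves like
   the transmitter above. */
void send_data_with_crc(void);   /* hypothetical helper */

typedef enum { NODE_INACTIVE, NODE_CONTROL, NODE_SEND } node_state_t;

node_state_t node_step(node_state_t s, int power_gating_n,
                       int rx_beacon, int rx_ack)
{
    switch (s) {
    case NODE_INACTIVE:                  /* radio and logic shut down      */
        return (power_gating_n == 0) ? NODE_CONTROL : NODE_INACTIVE;
    case NODE_CONTROL:                   /* wait for the receiver's beacon */
        return rx_beacon ? NODE_SEND : NODE_CONTROL;
    case NODE_SEND:                      /* retransmit until acknowledged  */
        send_data_with_crc();
        return rx_ack ? NODE_INACTIVE : NODE_SEND;
    }
    return NODE_INACTIVE;
}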































Fig. 4. Flow diagram of node with power gating signal (inactive mode → POWER GATING SIGNAL = 0? → control mode → RX_BEACON = 1? → send DATA + CRC, set device status ON/OFF → RX_ACK = 1? → back to inactive mode)



CONCLUSION


This paper describes the challenges faced by wireless sensor networks and presents a design for a low-power transmitter. Presently available techniques are complicated and economically costly to implement; the design technique used in this paper is robust, low cost and easy to implement. The use of a power gating signal enables our system to meet the low-power requirements of a wireless sensor node. Writing VHDL code for the protocol implementation and evaluating its power after simulation yields a power as low as 20 µW; such power saving can lead to a significant enhancement in sensor network lifetime. Our approach to implementing the wireless sensor network protocol is therefore simple and cost-effective.

REFERENCES

[1] Vijay Raghunathan, Curt Schurgers, Sung Park, and Mani B. Srivastava, "Energy-aware wireless microsensor networks," IEEE Signal Processing Magazine, March 2002.
[2] Carlos de Morais Cordeiro and Dharma Prakash Agrawal, "Ad-hoc and sensor networks: theory and application," World Scientific, 2006.
[3] Paolo Santi, "Topology control in wireless ad-hoc and sensor networks," John Wiley and Sons, 2005.
[4] C. F. Chiasserini and R. R. Rao, "Pulsed battery discharge in communication devices," in Proc. Mobicom, 1999, pp. 88-95.
[5] S. Park, A. Savvides, and M. Srivastava, "Battery capacity measurement and analysis using lithium coin cell battery," in Proc. ISLPED, 2001, pp. 382-387.
[6] Y. Xu, J. Heidemann, and D. Estrin, "Geography-informed energy conservation for ad hoc routing," in Proc. Mobicom, 2001, pp. 70-84.
[7] A. Wang, S-H. Cho, C. G. Sodini, and A. P. Chandrakasan, "Energy-efficient modulation and MAC for asymmetric microsensor systems," in Proc. ISLPED, 2001, pp. 106-111.
















Detection and Recognition of Mixed Traffic for Driver Assistance System
Pradnya Meshram1, Prof. S.S. Wankhede2
1Scholar, Department of Electronics Engineering, G.H.Raisoni College of Engineering, Digdoh hill, Nagpur, India
2Faculty, Department of Electronics Engineering, G.H.Raisoni College of Engineering, Digdoh hill, Nagpur, India
E-mail- pmeshram111@gmail.com


ABSTRACT- Driver-assistance systems that monitor driver intent, warn drivers or assist in vehicle guidance are all being actively considered. This paper presents a computer vision system designed for recognizing the road boundary and a number of objects of interest, including vehicles, pedestrians, motorcycles and bicycles. The system is designed using the Hough transform and Kalman filters to improve the accuracy as well as the robustness of road environment recognition. A Kalman filter object can be configured for each physical object for multiple-object tracking; to use the Kalman filter, the moving object must be tracked. The results are then used as road contextual information for the following procedure, in which particular objects of interest, including vehicles, pedestrians, motorcycles and bicycles, are recognized using a multi-class object detector. Results in various typical but challenging scenarios show the effectiveness of the system.
Keywords— Computer vision toolbox, Video processing, Hough transform ,Kalman filters ,Region of interest, Object track ,Driver
assistance system , Intelligent vehicles.
INTRODUCTION

Within the last few years, research into intelligent vehicles has expanded into applications that work with or for the human user. A computer vision system should be able to detect the drivable road boundary and obstacles. For some higher-level functions, it is also necessary to identify particular objects of interest, such as vehicles, pedestrians, motorcycles and bicycles. The detection and recognition of such information is crucial for the successful deployment of future intelligent vehicular technologies in practical mixed traffic, in which intelligent vehicles have to share the road environment with all road users, such as pedestrians, motorbikes, bicycles and vehicles driven by human beings. Computer vision can deliver a great amount of information, making it a powerful means for sensing the structure of the road environment and recognizing on-road objects and traffic information. Therefore, computer vision is necessary and promising for road detection and other applications related to intelligent vehicular technologies.
The novelty of this paper lies in the following two aspects. First, we formulate the drivable road boundary detection using the Hough transform, which not only improves the accuracy but also enhances the robustness of the drivable road boundary estimate; the detected road boundaries are used to verify which ones need to be tracked and which do not. Second, we recognize particular objects of interest by using the Kalman filter, which is used to predict a physical object's future location and to reduce noise in the detected location. The system is developed to improve traffic safety with respect to road users. Such a framework can improve not only the accuracy but also the efficiency of road environment recognition.
REVIEW OF LITERATURE

Chunzhao Guo and Seiichi Mita, in their study, recognize a number of objects of interest in mixed traffic, in which the host vehicle has to drive inside the road boundary and interact with other road users. First, they formulate the drivable road boundary detection as a global optimization problem in a Hidden Markov Model (HMM) associated with a semantic graph of the traffic scene. Second, they recognize particular objects of interest by using the road contextual correlation based on the semantic graph with the detected road boundary. Such a framework can improve not only the accuracy but also the efficiency of road environment recognition.

Joel C. McCall and Mohan M. Trivedi , in their study, motivate the development of the novel ―video-based lane estimation and
tracking‖ (VioLET) system. The system is designed using steerable filters for robust and accurate lane-marking detection. Steerable

filters provide an efficient method for detecting circular-reflector markings, solid-line markings and segmented-line markings under varying lighting and road conditions. They help in providing robustness to complex shadowing, lighting changes from overpasses and tunnels, and road-surface variations, and they are efficient for lane-marking extraction because a wide variety of lane markings can be extracted by computing only three separable convolutions. There are three major objectives of this paper. The first is to present a framework for comparative discussion and development of lane-detection and position-estimation algorithms. The second is to present the novel "video-based lane estimation and tracking" (VioLET) system designed for driver assistance. The third is to present a detailed evaluation of the VioLET system.

Michael Darms, Matthias Komar and Stefan Lueke present an approach to estimating road boundaries based on static objects bounding the road. A map-based environment description and an interpretation algorithm identifying the road boundaries in the map are used. Two approaches are presented for estimating the map, one based on a radar sensor and one on a mono video camera; besides that, two fusion approaches are described. The estimated boundaries are independent of road markings and as such can be used as orthogonal information with respect to detected markings. Results of practical tests using the estimated road boundaries for a lane keeping system are presented.

Akihito Seki and Masatoshi Okutomi, in their study, observe that understanding the general road environment is a vital task for obstacle detection in complicated situations. That task is easier to perform for highway environments than for general roads, because road environments are well-established on highways and obstacle classes are limited. General roads, on the other hand, are not always well-established, and various small obstacles, as well as larger ones, must be detected. For the purpose of discerning obstacles and road patterns, it is important to determine the relative positions of the camera and the road surface. Their paper presents an efficient stereo-vision-based obstacle detection method for general roads: the relative position is estimated dynamically even without any clear lane markings, and obstacles are detected without applying explicit models. Experimental results demonstrate the effectiveness of the proposed method under various conditions.


Zehang Sun, George Bebis and Ronald Miller, in their study, present a review of recent vision-based on-road vehicle detection systems, focusing on systems where the camera is mounted on the vehicle rather than fixed, as in traffic/driveway monitoring systems. First, they discuss the problem of on-road vehicle detection using optical sensors, followed by a brief review of intelligent vehicle research worldwide. Then they discuss active and passive sensors to set the stage for vision-based vehicle detection. Methods aiming to quickly hypothesize the locations of vehicles in an image, as well as to verify the hypothesized locations, are reviewed next. Integrating detection with tracking is also reviewed, to illustrate the benefits of exploiting temporal continuity for vehicle detection.

Akihiro Takeuchi, Seiichi Mita and David McAllester, in their study, propose a novel method for vehicle detection and tracking using a vehicle-mounted monocular camera. In this method, features of vehicles are learned as a deformable object model through the combination of a latent support vector machine (LSVM) and histograms of oriented gradients (HOG). The vehicle detector uses both global and local features as the deformable object model. Detected vehicles are tracked using a particle filter with integrated likelihoods, such as the probability of vehicles estimated from the deformable object model and the intensity correlation between different picture frames.






SYSTEM DESIGN ARCHITECTURE:

The figure shows the diagram of the proposed system, which operates on two inputs: the right image and the left image. The system is designed using the Hough transform and the Kalman filter. With the Hough transform we find the drivable road boundary; the resulting road boundary is then used as road contextual information to enhance the performance of each processing step of the multi-object recognition, which detects the objects of interest using the Kalman filter.



Fig. flow diagram of the proposed work

A) ROAD BOUNDARY DETECTION AND TRACKING:
Computer vision-based methods are widely used for road detection and are more robust than the alternatives. For road boundary detection, the Hough transform is used: it detects the boundary in the current video frame and finally localizes the road boundary, marked in red and green. The goal is to find the edges of the road within which the human driver has to drive; using the Hough transform, the proposed approach finds these road edges. The detected road boundaries are used to verify which ones need to be tracked and which do not. As the figure shows, the approach can still find the accurate drivable road boundary robustly. A bare-bones sketch of the Hough voting step follows.
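To make the voting step concrete, here is a minimal Hough line accumulator in C (an illustrative sketch under our naming, not the library implementation the system uses): every edge pixel votes for all (theta, rho) line parameterizations passing through it, and peaks in the accumulator correspond to dominant edges such as road boundaries.

/* Bare-bones Hough line accumulator: each edge pixel votes for every
   (theta, rho) line through it; accumulator peaks give the dominant edges. */
#include <math.h>

#define N_THETA 180
#define N_RHO   400

void hough_vote(const unsigned char *edge, int w, int h,
                unsigned int acc[N_THETA][N_RHO])
{
    const double PI = 3.14159265358979;
    double rho_max = sqrt((double)(w * w + h * h));
    int x, y, t;
    for (y = 0; y < h; y++)
        for (x = 0; x < w; x++) {
            if (!edge[y * w + x]) continue;        /* only edge pixels vote */
            for (t = 0; t < N_THETA; t++) {
                double th  = t * PI / N_THETA;
                double rho = x * cos(th) + y * sin(th);
                int r = (int)((rho + rho_max) * (N_RHO / (2.0 * rho_max)));
                if (r >= 0 && r < N_RHO)
                    acc[t][r]++;                   /* cast one vote */
            }
        }
}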



Fig. 1. Road boundary detection.









B) MULTI-OBJECT RECOGNITION:

As mentioned previously, intelligent vehicles have to share the road environment with all road users, such as pedestrians, motorbikes, bicycles and other vehicles, and different types of detection systems are developed to improve traffic safety with respect to these road users. In the proposed system, particular objects of interest, including vehicles, pedestrians, motorcycles and bicycles, are recognized with the context information. Object identification is challenging in that objects present dramatic appearance changes according to camera viewpoint and environmental conditions. For object detection, a Kalman filter is used. The Kalman filter object is designed for tracking: it is used to predict a physical object's future location, to reduce noise in the detected location, and to help associate multiple physical objects with their corresponding tracks. A Kalman filter object can be configured for each physical object for multiple-object tracking.
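For concreteness, the predict/update cycle of such a filter can be written out directly. The sketch below is ours, in one spatial dimension with a constant-velocity model (the system itself relies on the MATLAB Computer Vision Toolbox object); it shows the two steps applied once per frame.

/* Illustrative 1-D constant-velocity Kalman filter. State: position x and
   velocity v; p11, p12, p22 are the entries of the symmetric 2x2 covariance. */
typedef struct {
    double x, v;             /* state estimate                  */
    double p11, p12, p22;    /* covariance entries (p21 == p12) */
} kf1d_t;

/* Predict one step of dt seconds; q is process noise on the position term. */
void kf_predict(kf1d_t *k, double dt, double q)
{
    k->x   += dt * k->v;
    k->p11 += dt * (2.0 * k->p12 + dt * k->p22) + q;
    k->p12 += dt * k->p22;
}

/* Fuse a position measurement z with measurement variance r. */
void kf_update(kf1d_t *k, double z, double r)
{
    double s  = k->p11 + r;                    /* innovation covariance */
    double kx = k->p11 / s, kv = k->p12 / s;   /* Kalman gains          */
    double y  = z - k->x;                      /* innovation            */
    k->x += kx * y;
    k->v += kv * y;
    k->p22 -= kv * k->p12;                     /* covariance shrinks    */
    k->p12 -= kv * k->p11;
    k->p11 -= kx * k->p11;
}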
The flowchart of this object detection system is shown in Fig. 2, and its main steps are discussed in the following section. The first step is to collect the database of video files. All images are masked and the foreground is detected; then block analysis is performed. After analysing all the blocks, the system reads all the image frames, deletes the previous frames and creates new ones, and then tracks the objects and predicts their positions in future frames.


Fig 2. Flow chart of the object detector (database of video file → foreground detection → block analysis → bounding box → object tracking)

Object tracking is often performed to avoid false detections over time and to predict future target positions; however, it is unnecessary to keep tracking targets which are out of the collision range. Particular objects of interest, including vehicles, pedestrians, motorcycles and bicycles, are recognized and provided to the behavioural and motion planning systems of the intelligent vehicle for high-level functions. Example results in various scenarios for different on-road objects are shown below, which substantiate that the proposed system can successfully detect objects of interest of various sizes, types and colours.

1) Vehicle detection:-

Original video Segmented video




2) Pedestrians detection :-


Original video Segmented video

3) Vehicle and pedestrian detection:-


Original video Segmented video


CONCLUSION:
We present a vision-based approach for estimating the road boundary and recognizing a number of road users. Our first contribution is road detection using the Hough transform, which allows us to verify which boundaries need to be tracked and which do not, and helps the human driver keep to a particular road boundary. Our second contribution is the use of road contextual correlation to enhance object recognition performance. The Kalman filter object is designed for tracking and is used to predict an object's future location. The system is developed to improve traffic safety with respect to road users. All of these contributions improve the accuracy as well as the robustness of road environment recognition.

REFERENCES:
[1] Chunzhao Guo, Member, IEEE, and Seiichi Mita, Member, IEEE ―Semantic-based Road Environment Recognition in
Mixed Traffic for Intelligent Vehicles and Advanced Driver Assistance Systems‖
[2] M.Nieto and L.Salgado , ― Real time vanishing point estimation in sequences using adaptive steerable filter bank ,‖in
proc. Advanced concepts for intelligent vision system ,LNCS 2007,pp.
[3] J.C.McCall and M.M.Trivedi , ―Video-Based Lane Estimation and Tracking for Driver Assistance: Survey, System,
and Evaluation,‖ IEEE Trans .Intell .Transp .Syst .,vol 7,no.1 ,pp. 20-37,Mar 2006
[4] J. McCall, D. Wipf, M. M. Trivedi, and B. Rao, ―Lane change intent analysis using robust operators and sparse
Bayesian learning,‖ in Proc. IEEE Int. Workshop Machine Vision Intelligent Vehicles/IEEE Int. Conf. Computer Vision
and Pattern Recognition, San Diego, CA, Jun. 2005, pp. 59–66.

[5] M. Darms, M. Komar, and S. Lueke, "Map based Road Boundary Estimation," in Proc. IEEE Intelligent Vehicles Symposium, 2010, pp. 609-614.
[6] A .seki and M. Okutomi ― Robust Obstacle Detection in General Road Environment Based on Road Extraction and
Pose Estimation ‖, in proc. IEEE Intelligent Vehicles Symposium, 2006, pp 437-444.
[7] S. Kubota , T. Nakano and Y. Okamoto , ―A Global Optimization Algorithm for Real-Time On-Board Stereo
Obstacle Detection Systems ‖, in proc. IEEE Intelligent Vehicles Symposium, 2007, pp 7-12
[8] F. Han, Y. Shan, R. Cekander, H. S. Sawhney, and R. Kumar, ―A two-stage approach to people and vehicle detection with HOG-
based SVM,‖ in Performance Metrics for Intelligent Systems 2006 Workshop, pp. 133-140, Aug. 2006
[9] K. Kluge, ―Performance evaluation of vision-based lane sensing: Some preliminary tools, metrics, and results,‖ in Proc. IEEE
Intelligent Transportation Systems Conf., Boston, MA, 1997, pp. 723–728.
[10] M.Bertozzi, A. Broggi, and A. Fascioli, ―Obstacle and Lane Detection on Argo Autonomous Vehicle,‖ IEEE Intelligent
Transportation Systems, 1997.
[11] Z. Sun, G. Bebis, and R. Miller, "On-Road Vehicle Detection: A Review," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 694-711, 2006
[12] D. Geronimo, A. M. Lopez, A. D. Sappa, and T. Graf, "Survey on pedestrian detection for advanced driver assistance systems," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 7, pp. 1239-1258, 2010
[13] A. Takeuchi, S. Mita, and D. McAllester, "On-road vehicle tracking using Deformable Object Model and Particle Filter with Integrated Likelihoods," in Proc. IEEE Intelligent Vehicles Symposium, 2010, pp. 1014-1021.
[14] K. Fürstenberg, D. Linzmeier, and K. Dietmayer, ―Pedestrian recognition and tracking of vehicles using a vehicle based
multilayer laser scanner,‖ Proc. of 10th World Congress on ITS 2003, Madrid, Spain, November 2003















Design & Implementation of ANFIS System for Hand Gesture to Devanagari
Conversion
Pranali K. Misal1, Prof. M. M. Pathan2
1Scholar, Department of Electronics Engineering, G.H.Raisoni College of Engineering, Digdoh hill, Nagpur, India
2Faculty, Department of Electronics Engineering, G.H.Raisoni College of Engineering, Digdoh hill, Nagpur, India
E-mail- missal.pranali20@gmail.com

ABSTRACT- Sign language mainly uses hand gestures for communication between vocally and hearing impaired people and other people, and is also used to share messages. This research work presents a simple sign language recognition system developed using ANFIS and a neural network. Devanagari is one of the most widely used scripts for writing. The system uses functions such as skin colour detection, convex hull, contour detection and identification of extrema points on the hand. The technology converts images obtained from video acquisition into the Devanagari spoken language. A database of all alphabets is created first, and features are then extracted from all the images; the neural network is trained on the computed centroid values. The algorithm for training the neural network is linear vector quantization (LVQ). After training on the extracted feature points, the network recognizes new hand gestures and translates the sign language into Devanagari alphabets. The system architecture comprises video acquisition, image processing, feature extraction and a neural network classifier. The system can recognize alphabets that are signed with one- and two-hand movements; it identifies the gesture and then translates the sign into the Devanagari language. This project aims to develop and test a new method for the recognition of Devanagari sign language. To do so, preprocessing and contour- and convex-hull-based feature extraction are performed; the method is evaluated on the database and proves to be superior to rule-based methods. To identify Devanagari alphabets of the sign language, morphological operations and skin colour detection are performed on the image. A MATLAB implementation of the complete algorithm is developed, and sign language is converted into the Devanagari spoken language with better accuracy.

Keywords— Hand gesture, sign language recognition, image processing, ANFIS, feature extraction, contour points, convex hull, Devanagari alphabets & numerals.
INTRODUCTION
The hand gesture technique is a way of communication between vocally and hearing impaired people and others. A person who knows sign language can converse with them properly, whereas untrained people cannot communicate with a mute person; communicating with impaired people requires training in sign language. A hand gesture to Devanagari voice system will help the vocally and hearing impaired communicate with other people more fluently. The proposed system converts sign language into spoken language; the aim of this research work is the conversion of hand gestures to Devanagari speech. The vocally and hearing impaired community has developed its own culture and communicates with ordinary people using sign language.
Hand gestures are basically physical actions of the hands and eyes through which we can communicate with deaf and dumb people. Gestures represent the ideas and actions of deaf and dumb people, who can express their feelings with different hand shapes, finger patterns and hand movements. Gestures vary greatly between cultures. Hand gestures are basically used for communication by people who are unable to speak with one another; for example, hearing impaired people cannot use the telephone, where the parties are unable to see each other, the way they can communicate face to face. These problems are overcome by a hand gesture recognition system which recognizes Devanagari alphabets and numerals. It is the demand of available advanced technology to recognize and classify various hand gestures and use them in a wide range of applications. The linear vector quantization algorithm is used for training the neural network; a sketch of its update rule is given below. Devanagari alphabets involve two-palm or both-hand movements, while numerals involve only one-hand movements. A white background is used for the images because colour gloves are very expensive; in this proposed work only a white background is used, for better feature extraction.
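The core of LVQ training is a simple prototype update. The sketch below is an illustration under our naming, not the MATLAB routine the system uses; it shows the LVQ1 rule, where the winning codebook vector moves toward the training sample when their classes match and away from it otherwise.

/* LVQ1 prototype update (illustrative): move the winning codebook vector w
   (n components) toward sample x if the classes match, away otherwise;
   lr is the learning rate. */
void lvq1_update(double *w, const double *x, int n, int same_class, double lr)
{
    int i;
    for (i = 0; i < n; i++)
        w[i] += (same_class ? lr : -lr) * (x[i] - w[i]);
}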





II. RELATED WORK ON HAND GESTURE RECOGNITION

Gesture recognition has become an important factor for sign language, and gesture recognition techniques have been developed for voice generation from hand gestures. Ullah [7] designed a system with 26 images, one representing each alphabet, used for training. Recognition of American Sign Language from static images using Cartesian genetic programming (CGP) has an accuracy of 90%. Miller and Thomson first gave the idea of CGP; CGP genes are represented by nodes, each with certain characteristics, which together represent the CGP chromosome or genotype. The accuracy of this system is reported on images of 47*27 pixel resolution, with too small a data set for testing and manual preprocessing of the training images. CGP-based systems are faster than conventional GP algorithms, but compared with recently developed neuro-evolutionary approaches CGP is slow; faster learning is offered by approaches like the Cartesian genetic programming evolved artificial neural network (CGPANN). Dr. Raed Abu Zaiter and Maraqa [3] developed a system for the recognition of Arabic sign language using a recurrent neural network. That work used colour-coded gloves to extract accurate features, and a recognition accuracy of 95% is reported. The images were captured by a colour camera and digitized into 256*256 pixel images, then converted into the HSI system, after which colour segmentation was done with MATLAB 6. The results show an improvement in the generalisability of the system when using a fully recurrent neural network rather than an Elman network.

Paulraj [5] developed a system that converts Malaysian sign language into voice signals. Feature extraction is performed with the Discrete Cosine Transform (DCT). Because the camera is sensitive to lighting and background conditions, skin color segmentation is applied to each gesture frame, and moments are then calculated from the segmented blob in each image frame. Using this skin color segmentation approach, a recognition rate of 92.85% is reported for their phoneme-based sign language recognition system. Akmeliawati [6] developed an automatic sign language translator that provides real-time English translation of Malaysian Sign Language (BIM). The translator can recognize both finger spelling and sign gestures involving static and motion signs, using a neural network to translate the signs into English; for many signers, English and Malay were learnt only as second languages. Data gloves are less comfortable for the signer and are very costly. The translator recognizes all 49 signs in the BIM vocabulary and achieves a recognition rate of over 90%.

Fang et al. [1] use three additional trackers in a hybrid system combining self-organizing feature maps (SOFM) and HMMs, achieving accuracy between 90% and 96%; the SOFM/HMM system increases recognition accuracy by about 5% over HMM alone. Their system recognizes sentences of 40 signs with a recognition rate of 91%, and with a strict grammar imposed the real-time accuracy reaches 97%. A self-adjusting recognition algorithm is proposed to improve SOFM/HMM discrimination. The aim of sign language recognition is an efficient and accurate mechanism to transcribe sign language into text or speech so that communication with deaf and hearing impaired people becomes easy; their system correctly recognizes 27 out of 31 ASL symbols. Memona Tariq, Ayesha Iqbal, Aysha Zahid, and Zainab Iqbal [2] presented machine translation of sign language into text. Such approaches rely on intrusive hardware, a webcam together with wired or colored gloves, and depend on accurate gestures in a specific language dialect. Functions such as skin-color-based thresholding, contour detection and convexity-defect detection are used to detect the hands and identify important points on them. After training, the system translates the trained sign language symbols; it recognizes one-handed signs covering 9 numerals and 24 alphabets of English, with a maximum average recognition accuracy of 77% on numerals and alphabets.



III. SYSTEM DESIGN ARCHITECTURE

The flowchart of this system is shown in Fig 1, and its main steps are discussed in the following section. The first step is to collect a database of all Devanagari alphabets; the database holds a large number of Devanagari alphabet images. Features are calculated by the convex hull, extrema points and contour point methods. The centroid values of these features are then stored and used to train the neural network, while a fuzzy system supplies the rule base. Training uses the linear vector quantization algorithm, and classification is then performed by the same algorithm.
A new image is then captured through video acquisition and compared with the database; after image processing, the sign is correctly converted into Devanagari voice for the corresponding alphabet.






Fig 1 - System Architecture

Features are extracted from the image based on the distances between the centroid, fingers and palm. These feature vectors are fed to the neural network.



Fig 2 - Flow chart of the system


A. SKIN COLOR DETECTION-

An image is captured from the video to identify the hand and determine the gesture. For identification of the hand gesture, the RGB and grayscale color models are required. Skin color detection, i.e. detecting only skin-colored pixels, is performed using morphological operations. Once the skin color is detected, the boundary of the hand is located by its points; the skin color detection technique thus yields the boundary enclosing the hand, and the convex hull is then used to collect features from the image.
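The paper's implementation is in MATLAB; as a minimal illustrative sketch of this step, the Python/OpenCV version below detects skin pixels in the YCrCb space and cleans the mask with morphological opening and closing. The threshold values are commonly used defaults, not the authors' exact parameters.

```python
import cv2
import numpy as np

def skin_mask(frame_bgr):
    # Skin chrominance range in YCrCb (illustrative values, assumed)
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, np.array([0, 133, 77]), np.array([255, 173, 127]))
    # Morphological opening removes speckle noise; closing fills small holes
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```

On a white background, as used here, the largest connected component of this mask can be taken directly as the hand region.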


B. FEATURE EXTRACTION-

The image is captured against a white background for better results. It is processed with the skin color function from the skin color detection step, and the contour and convex hull of the hand shape are then determined. The spatial moments, giving the position of the hand, are required for contouring: the contour is the boundary or outline of the curved shape, drawn around the hand. The hand can have different orientations in convex hull and contour point feature extraction, and the key information is contained in the fingers and palm. For hand gesture identification, the convex hull is built over the contour points, and the convexity defects of the contour are found from the joining points. The start points of the defects are marked and used for computing the feature vector. Defect points are unevenly distributed and vary in number from one frame to another, so they are filtered by examining all contour points. The distances of the contour points from the centroid are then determined; these distances are the features extracted from every hand gesture.
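A minimal sketch of this feature-extraction step is given below, again in Python/OpenCV rather than the paper's MATLAB. It finds the largest skin contour, its convex hull and convexity defects, and returns the defect-point-to-centroid distances as the feature vector; the exact defect-filtering rules used by the authors are not specified, so none are assumed here.

```python
import cv2
import numpy as np

def hand_features(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnt = max(contours, key=cv2.contourArea)           # assume largest blob = hand
    m = cv2.moments(cnt)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # hand centroid
    hull_idx = cv2.convexHull(cnt, returnPoints=False)
    defects = cv2.convexityDefects(cnt, hull_idx)      # rows: (start, end, far, depth)
    if defects is None:                                # fully convex shape: no defects
        return np.array([])
    feats = []
    for start, end, far, depth in defects[:, 0]:
        px, py = cnt[far][0]                           # deepest point of the defect
        feats.append(np.hypot(px - cx, py - cy))       # distance defect -> centroid
    return np.array(feats)
```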


Fig 3- Original Image Fig 4- Convex hull


Fig 5- Contour points



C. ANFIS-

The adaptive neuro-fuzzy inference system (ANFIS), proposed by Jang in 1993, implements a Sugeno fuzzy inference method. The ANFIS architecture contains a six-layer feed-forward neural network, as shown in Fig. 6. Layer 1 is the input layer that passes external crisp signals to Layer 2, the fuzzification layer, which determines the membership grade of each input under the given fuzzy membership functions. Layer 3 is the rule layer, which calculates the firing strength of each rule as the product of the membership grades.

Layer 4 computes the "normalized firing strengths": each neuron in this layer receives inputs from all neurons in Layer 3 and calculates the ratio of the firing strength of a given rule to the sum of the firing strengths of all rules. Layer 5 is the defuzzification layer
that yields the consequent part of each rule. A single node in Layer 6 calculates the overall output as the sum of all incoming signals. ANFIS training can use alternative algorithms to reduce the training error. The LVQ network is used as the classifier for sign recognition, with each neuron corresponding to a different category.
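Since the paper does not spell out the LVQ details, the sketch below shows the standard LVQ1 scheme that this description matches: each class is represented by one or more codebook (prototype) vectors, classification is nearest-prototype, and during training the winning prototype is attracted to correctly labelled samples and repelled from mislabelled ones. Feature dimensions, learning rate and prototype counts are assumptions, not taken from the paper.

```python
import numpy as np

class LVQ1:
    def __init__(self, prototypes, labels, lr=0.05):
        self.w = prototypes.astype(float)  # codebook vectors, one row per prototype
        self.labels = labels               # class label of each prototype
        self.lr = lr                       # learning rate (assumed value)

    def predict(self, x):
        # Nearest-prototype classification: each neuron = one category
        return self.labels[np.argmin(np.linalg.norm(self.w - x, axis=1))]

    def train_step(self, x, label):
        i = np.argmin(np.linalg.norm(self.w - x, axis=1))
        sign = 1.0 if self.labels[i] == label else -1.0  # attract or repel the winner
        self.w[i] += sign * self.lr * (x - self.w[i])
```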


Fig. 6 Adaptive Neuro-Fuzzy Inference System (ANFIS)[9]


IV.CONCLUSION

In this paper, we proposed, designed and tested a method for Devanagari sign language recognition using a neural network with an ANFIS classifier and features extracted from contour points and the convex hull. From the experiments, we conclude that slightly better results, around 90% accuracy, were obtained. The sign language recognition system is built using skin color detection and a neural network trained with the LVQ algorithm. The sign-language-to-voice system helps vocal and hearing impaired people communicate with other people more fluently. The recognition accuracy of the approach described in this paper is greater than 90%. Sign language is the most important language for vocal and hearing impaired people, and the aim of this research work is to convert it into spoken Devanagari. Although a lot of work has been done in this area previously, the present direction is to extend this system to recognize Devanagari alphabets that can be signed with one-hand movements.

REFERENCES

[1] G. Fang, W. Gao, J. Ma, "Signer-independent sign language recognition based on SOFM/HMM", Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, Proceedings, IEEE ICCV Workshop, pp. 90-95, 2001.

[2] Memona Tariq, Ayesha Iqbal, Aysha Zahid, and Zainab Iqbal, "Sign Language Localization: Learning to Eliminate Language Dialects", International Journal of Human Computer Interaction, 2012.

[3] Meenakshi Panwar, "Hand Gesture based Interface for Aiding Visually Impaired", International Conference on Recent Advances in Computing and Software Systems, 2012.

[4] M. Maraqa, R. Abu-Zaiter, "Recognition of Arabic Sign Language (ArSL) using recurrent neural networks", Applications of Digital Information and Web Technologies, ICADIWT, First International Conference, pp. 478-481, 2008.

[5] M. P. Paulraj, S. Yaacob, M. S. bin Zanar Azalan, R. Palaniappan, "A phoneme based sign language recognition system using skin color segmentation", Signal Processing and its Applications (CSPA), 6th International Colloquium, pp. 1-5, 2010.

[6] R. Akmeliawati, M. P-L. Ooi, Y. C. Kuang, "Real-Time Malaysian Sign Language Translation using Color Segmentation and Neural Network", IEEE Instrumentation and Measurement Technology Conference Proceedings, IMTC, pp. 1-6, 2007.

[7] F. Ullah, "American Sign Language recognition system for hearing impaired people using Cartesian Genetic Programming", Automation, Robotics and Applications (ICARA), 5th International Conference, pp. 96-99, 2011.

[8] Chunli Wang, Wen Gao, Shiguang Shan, "An Approach Based on Phonemes to Large Vocabulary Chinese Sign Language Recognition", Fifth IEEE International Conference on Automatic Face and Gesture Recognition (FGR'02), 2002.

[9] Jyh-Shing Roger Jang, "ANFIS: Adaptive-Network-Based Fuzzy Inference System", IEEE Transactions on Systems, Man, and Cybernetics, Vol. 23, No. 3, May/June 1993.

[10] H. Birk, T. B. Moeslund, and C. B. Madsen, "Real-time recognition of hand alphabet gestures using principal component analysis", in Scandinavian Conference on Image Analysis (SCIA), 1997, pp. 261-268.

[11] T. Starner and A. Pentland, "Visual recognition of American sign language using hidden Markov models", in Intl. Conf. on Automatic Face and Gesture Recognition, pp. 189-194, 1995.

[12] C. Vogler and D. Metaxas, "ASL recognition based on a coupling between HMMs and 3D motion analysis", in Proc. Intl. Conf. on Computer Vision, pp. 363-369, 1998.

[13] Y. Nam and K. Wohn, "Recognition of Space-Time Hand-Gestures Using Hidden Markov Model", in Proceedings of the ACM Symposium on Virtual Reality Software and Technology, pp. 51-58, Hong Kong, July 1996.

[14] B. Bauer, H. Hienz, and K.F. Kraiss, "Video-Based Continuous Sign Language Recognition Using Statistical Methods", in Proceedings of the International Conference on Pattern Recognition, pp. 463-466, Barcelona, Spain, September 2000.





Effects of Compression Ratio on Performance of a Single Cylinder 4-Stroke
Compression Ignition Engine using Blends of Neat Karanja Oil with Diesel in
Dual Fuel Mode
N.H.S. Ray¹, S.N. Behera¹, M.K. Mohanty²

¹Faculty, Department of Mechanical Engineering, CEB, Bhubaneswar, BPUT, Odisha, India
²Faculty, Department of FMP, CAEB, OUAT, Bhubaneswar, Odisha, India
E-mail: mohanty65m@gmail.com

ABSTRACT- The diesel engine is a major tool in the day-to-day life of modern society. Fossil fuel scarcity and pollutant emissions from diesel engines have become two important problems of the world today. One way to address the crisis is to find suitable substitutes for petroleum-based fuels. Biofuels have been gaining popularity as alternative fuels for diesel engines and can usually be used in any diesel engine without modification. In India, millions of tonnes of non-edible seeds such as Karanja go to waste; oil produced from these seeds can be used as an alternative fuel in diesel engines. The overall objective is to prevent waste, increase the value recovered from the resource as biofuel, and mitigate fossil fuel scarcity. Compared with diesel fuel, biodiesel from Karanja oil has the advantages of being renewable and non-toxic, and of reducing CO, HC and smoke emissions from the engine. It can be used in a CI engine blended with diesel, or directly without any engine modification. In the present study, the effects of compression ratio on the performance of a four-stroke, single-cylinder diesel engine using Karanja oil in dual fuel mode are investigated.

Key Words- Biodiesel, VCR engine, Karanja oil, Compression ratio, Alternate fuel, BSFC, BTE.


I. INTRODUCTION
Energy is one of the major drivers of economic development for any country, and India, as a developing country, requires a much higher level of energy to sustain its rate of progress. According to the International Energy Agency (IEA), hydrocarbons account for the majority of India's energy use: together, coal and oil represent about two-thirds of total energy use, while natural gas now accounts for a seven percent share that is expected to grow with the discovery of new gas deposits. India had approximately 5.7 billion barrels of proven oil reserves as of January 2011, the second-largest amount in the Asia-Pacific region after China. The combination of rising oil consumption and relatively flat production has left India increasingly dependent on imports to meet its petroleum demand. To combat the present energy crisis, one of the important strategies that needs to be adopted is to develop and promote appropriate technology for utilizing non-traditional energy resources to satisfy energy requirements. Hence, to overcome these problems, most combustion devices are modified to accept gaseous fuels in dual fuel mode.
For substituting the petroleum fuels used in internal combustion engines, fuels of bio-origin provide a feasible solution to the twin crises of fossil fuel depletion and environmental degradation. For diesel engines, a significant research effort has been directed towards using vegetable oils and their derivatives as fuels. Several research institutions are actively pursuing the utilization of non-edible oils for the production of biodiesel, additives for lubricating oils, saturated and unsaturated alcohols and fatty acids, and many other value-added products. Biodiesel has received a good response worldwide as an alternative to diesel: it is a cleaner-burning fuel because of its own molecular oxygen content, and, given the diminishing reserves of petroleum fuels and rising environmental awareness, it can replace diesel as the pilot fuel in dual fuel operation. Biodiesel is produced by transesterification of Karanja oil, a chemical reaction between an alcohol and the triglycerides of fatty acids in the presence of a suitable catalyst, leading to the formation of fatty acid alkyl esters (biodiesel) and glycerol. Biodiesel's viscosity is much closer to that of diesel fuel than that of raw vegetable oil. Although biodiesel has many advantages over diesel fuel, several problems need to be addressed, such as its lower calorific value, higher flash point, higher viscosity and poor cold flow properties. These can lead to poor atomization and mixture formation with air, resulting in slower combustion, lower thermal efficiency and higher emissions. To overcome such limitations, some researchers have studied the performance and emissions of diesel engines with increased injection pressure. Fuel injection pressure and fuel injection timing play a vital role in the ignition delay and combustion characteristics of the engine, as the temperature and pressure change significantly close to TDC. Fuel properties also play a significant role in increasing or decreasing exhaust pollutants. Various investigations have clearly reported that cetane number (CN) affects exhaust emissions. The CN also affects the combustion efficiency and ensures easy starting of the engine. However, if the CN is excessively higher than the normal value, the ignition delay will be too short for the fuel to spread into the combustion chamber; as a result, engine performance will be reduced and the smoke value will increase.
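For reference, the transesterification step mentioned above follows the standard overall stoichiometry (generic chemistry, not specific to this paper), with methanol as the usual alcohol:

$$\text{Triglyceride} + 3\,\text{CH}_3\text{OH} \xrightarrow{\text{catalyst}} 3\,\text{RCOOCH}_3 + \text{C}_3\text{H}_5(\text{OH})_3$$

That is, one triglyceride molecule reacts with three molecules of methanol to yield three fatty acid methyl esters (the biodiesel) and one molecule of glycerol.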




II. METHODOLOGY
2.1 Experimental set up




Fig.1 Actual Engine setup
The experimental investigations were carried out on a VCR diesel engine test rig consisting of a single-cylinder, 4-stroke, 3.5 kW at 1500 rpm diesel engine connected to an eddy current dynamometer in computerized mode, to study the performance, emissions and combustion of the engine by varying its compression ratio at different load conditions from 0 kg to 12 kg using various blends of Karanja oil and diesel. The detailed specification of the engine is given in Table 1. The engine performance analysis software package "EnginesoftLV" is employed for online performance analysis. A piezo sensor and a crank angle sensor, which measure the combustion pressure and the corresponding crank angle respectively, are mounted in the engine head. The output shaft of the eddy current dynamometer is fixed to a strain gauge type load cell for measuring the load applied to the engine. Type K Chromel (nickel-chromium alloy)/Alumel (nickel-aluminium alloy) thermocouples are used to measure gas temperatures at the engine exhaust, the calorimeter exhaust, the calorimeter water inlet and outlet, the engine cooling water outlet, and ambient. Fuel flow is measured using a 50 ml burette with level sensors and a stopwatch.

2.1.1 Engine specifications and attachments.

Table-1 Engine specifications
Make: Kirloskar
General details: VCR engine test setup, 1-cylinder, 4-stroke, water cooled, compression ignition
Rated power: 3.5 kW at 1500 rpm
Speed: 1500 rpm (constant)
Number of cylinders: Single cylinder
Compression ratio: 16:1 to 18:1 (variable)
Bore: 87.5 mm
Stroke: 110 mm
Ignition: Compression ignition
Loading: Eddy current dynamometer
Load sensor: Load cell, strain gauge type, 0-50 kg
Temperature sensor: Type RTD PT100 and thermocouple Type K
Cooling: Water
Air flow transmitter: Pressure transmitter, range (-)250 mm WC
Rotameter: Engine cooling 40-400 LPH; calorimeter 25-250 LPH
Software: "EnginesoftLV" engine performance analysis software
Propeller shaft: With universal joints
Air box: MS fabricated, with orifice meter and manometer
Fuel tank: Capacity 15 lit, with glass fuel metering column
Calorimeter: Pipe-in-pipe type
Piezo sensor: Range 5000 PSI, with low noise cable
Crank angle sensor: Resolution 1 deg, speed 5500 RPM, with TDC pulse
Data acquisition device: NI USB-6210, 16-bit, 250 kS/s
Piezo powering unit: Make Cuadra, Model AX-409
Digital millivoltmeter: Range 0-200 mV, panel mounted
Temperature transmitter: Two wire type, input RTD PT100, range 0-100 deg C, output 4-20 mA; and two wire type, input thermocouple, range 0-1200 deg C, output 4-20 mA

Load indicator: Digital, range 0-50 kg, supply 230 VAC
Pump: Monoblock type
Overall dimensions: W 2000 x D 2500 x H 1500 mm


2.1.2 Compression ratio adjustment



Fig.2 Compression ratio adjustment

The compression ratio is set by slackening the 6 Allen bolts provided for clamping the tilting block, loosening the lock nut on the adjuster, and rotating the adjuster as per the marking on the CR indicator. The lock nut on the adjuster and the 6 Allen bolts are then gently tightened. The centre distance between the two pivot pins of the CR indicator is noted down; after changing the compression ratio, the difference (Δ) can be used to determine the new CR.
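As a side note on the numbers involved, the compression ratio is the ratio of total cylinder volume to clearance volume, CR = (Vs + Vc)/Vc. The short sketch below uses the bore and stroke from Table 1 to compute the clearance volume implied by each CR setting of this rig; it is an illustrative calculation, not part of the test procedure.

```python
import math

BORE, STROKE = 0.0875, 0.110                 # m, from Table 1
V_SWEPT = math.pi / 4 * BORE**2 * STROKE     # swept volume, ~661 cc

def clearance_volume(cr):
    # CR = (Vs + Vc) / Vc  =>  Vc = Vs / (CR - 1)
    return V_SWEPT / (cr - 1)

for cr in (16, 17, 18):
    print(f"CR {cr}: clearance volume = {clearance_volume(cr) * 1e6:.1f} cc")
```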

2.1.3 Dynamometer



Fig.3 Dynamometer

An absorption-type, water-cooled eddy current dynamometer is used as the loading unit. The load is measured by a strain gauge type load cell.

2.1.4 Multi gas analyser

A multi-gas analyzer capable of measuring the CO, HC, CO2, O2 and NOx (optional) contents in the exhaust is used. The AVL-444 analyzer provides optimized analysis methods for different applications and can easily check the pollution level of various I.C. engines. The analyzer is easy to install and known for efficient functioning, and its range is tested against various parameters to meet the set industrial standards. The measurement ranges and accuracies for the various parameters are given in Table-2.

Table-2 Measurement range and accuracy of AVL 444 gas analyzer

Measured Quantity | Measuring Range | Resolution | Accuracy
CO | 0…10% vol. | 0.01% vol. | <0.6% vol: ±0.03% vol; >=0.6% vol: ±5% of ind. val.
CO2 | 0…20% vol. | 0.1% vol. | <10% vol: ±0.5% vol; >=10% vol: ±5% vol.
HC | 0…20000 ppm vol. | <=2000: 1 ppm vol; >2000: 10 ppm vol | <200 ppm vol: ±10 ppm vol; >=200 ppm vol: ±5% of ind. val.
O2 | 0…22% vol. | 0.01% vol. | <2% vol: ±0.1% vol; >=2% vol: ±5% vol.
NO | 0…5000 ppm vol. | 1 ppm vol. | <500 ppm vol: ±50 ppm vol; >=500 ppm vol: ±10% of ind. val.
Engine Speed | 400…6000 min⁻¹ | 1 min⁻¹ | ±1% of ind. val.
Oil Temperature | -30…125 °C | 1 °C | ±4 °C
Lambda | 0…9.999 | 0.001 | Calculated from CO, CO2, HC, O2



Fig.4 Multi gas analyser

2.1.5 Smoke meter




Fig.5 Smoke meter

Table-3 Measurement range and accuracy of Smoke meter

Measured Quantity | Measuring Range | Resolution | Accuracy
Opacity | 0…100% | 0.1% | ±% of full scale
Absorption | 0…99.99 m⁻¹ | 0.01 m⁻¹ | Better than ±0.1 m⁻¹
RPM | 400…6000 min⁻¹ | 1 min⁻¹ | ±10 min⁻¹
Oil Temperature | 0…150 °C | 1 °C | ±3 °C

2.2 Experimental layout


Fig.6 Layout of VCR engine

2.3 Experimental procedure

The variable compression ratio engine is started on standard diesel and run for 30 minutes; readings are taken once the engine is warmed up. The tests are conducted at the rated speed of 1500 rpm. Fuel consumption is measured with the measuring burette attached to the data acquisition system. In every test, the brake thermal efficiency, brake specific fuel consumption, exhaust gas temperature, mechanical efficiency and torque, the combustion parameters (combustion pressure, combustion temperature, ignition delay, net heat release rate and combustion duration), and the exhaust emissions (carbon monoxide (CO), carbon dioxide (CO2), hydrocarbons (HC), nitrogen oxides (NOx) and smoke opacity) are measured. From the initial measurements, the performance, combustion and emission parameters at compression ratios 16:1, 17:1 and 18:1 at 100% load are calculated and recorded for the different blends. The engine operating parameters with respect to different loads for different blends at compression ratio 18 are also measured and recorded. At each operating condition, the performance and combustion characteristics are processed and stored on a personal computer (PC) for further processing of results. The same procedure is repeated for the different blends of Karanja oil.
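To make the data reduction concrete, the sketch below shows how brake power, BSFC and BTE can be derived from the measurements described above (burette volume and time, load cell reading, speed). The dynamometer arm length and the fuel properties are assumed placeholder values, not figures from the paper.

```python
import math

def brake_power_kw(load_kg, rpm, arm_m=0.185):
    # Torque from the strain-gauge load cell reading; arm length is an assumption
    torque_nm = load_kg * 9.81 * arm_m
    return 2 * math.pi * rpm * torque_nm / 60e3          # kW

def bsfc_kg_per_kwh(vol_ml, time_s, specific_gravity, bp_kw):
    # Fuel mass from the 50 ml burette reading over the stopwatch interval
    fuel_kg_per_h = (vol_ml / 1e6) * specific_gravity * 1000 * 3600 / time_s
    return fuel_kg_per_h / bp_kw

def bte_percent(bsfc, cv_kj_per_kg):
    # BTE = brake energy / fuel energy = 3600 / (BSFC * CV), since 1 kWh = 3600 kJ
    return 3600 / (bsfc * cv_kj_per_kg) * 100
```

As a consistency check, the full-load diesel BSFC of 0.34 kg/kWh reported in Section 3.1.1, together with a typical diesel calorific value of about 42,500 kJ/kg, gives 3600/(0.34 x 42500) = 24.9%, matching the full-load diesel BTE reported in Section 3.1.2.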

The specific gravity of biodiesel fuels is lower than that of straight vegetable oil, so the specific gravity of a blend increases with the biodiesel concentration. Specific gravity also shows an inverse relationship with temperature: as the temperature increases, specific gravity decreases. The viscosity of biodiesel is likewise lower than that of straight vegetable oil, while the viscosity of a blend increases with increasing biodiesel fraction. Similar to the effect of temperature on specific gravity, viscosity also decreases roughly linearly with increasing temperature; this property helps in better atomisation and hence better fuel burning when biodiesels are used. Overall, the specific gravity and viscosity of the biodiesel blends increase with the biodiesel fraction, and both decrease for each blend as the temperature increases.


2.4 Fuel property testing at different blends and temperatures

2.4.1 Specific gravity of Karanja oil blends at different temperatures
The specific gravities of all fuel blends (neat Karanja oil, neat oil blended with diesel, 100% biodiesel and biodiesel blended with diesel) are measured as per standard ASTM D4052 at varying temperatures using a hydrometer.

Fig.7 Hydrometer

Referring to Table-4, it can be seen that specific gravity decreases for all blends with increasing temperature. The specific gravity of neat Karanja oil (K100) varies from 0.925 to 0.878 over the temperature range 30-100°C.

Table-4 Variation of specific gravity with temperature and blend (neat Karanja)

Blend \ Temp | 30°C | 40°C | 50°C | 60°C | 70°C | 80°C | 90°C | 100°C
K-10 | 0.827 | 0.8215 | 0.814 | 0.807 | 0.800 | 0.793 | 0.785 | 0.780
K-20 | 0.8375 | 0.832 | 0.825 | 0.820 | 0.812 | 0.8055 | 0.797 | 0.791
K-30 | 0.850 | 0.843 | 0.836 | 0.830 | 0.823 | 0.816 | 0.810 | 0.803
K-40 | 0.860 | 0.854 | 0.847 | 0.840 | 0.834 | 0.827 | 0.821 | 0.816
K-50 | 0.871 | 0.866 | 0.860 | 0.853 | 0.845 | 0.838 | 0.833 | 0.8265
K-60 | 0.883 | 0.877 | 0.870 | 0.866 | 0.859 | 0.852 | 0.846 | 0.840
K-70 | 0.892 | 0.886 | 0.880 | 0.873 | 0.866 | 0.860 | 0.853 | 0.846
K-80 | 0.905 | 0.898 | 0.891 | 0.884 | 0.877 | 0.871 | 0.866 | 0.858
K-90 | 0.916 | 0.913 | 0.904 | 0.895 | 0.887 | 0.881 | 0.874 | 0.870
K-100 | 0.925 | 0.919 | 0.914 | 0.907 | 0.901 | 0.891 | 0.885 | 0.878

2.4.2 Viscosity of blends of Karanja oil at different temperatures

When a fluid is subjected to external forces, it resists flow due to internal friction, and viscosity is the measure of this internal friction. The viscosity of the fuel affects atomization and fuel delivery rates: it is an important property because if it is too low or too high, atomization and the mixing of air and fuel in the combustion chamber are adversely affected. Viscosity studies are conducted for the different fuel blends (neat Karanja oil, neat oil blended with diesel, 100% biodiesel and biodiesel blended with diesel). The kinematic viscosities of the liquid fuel samples are measured at different temperatures and blends as per ASTM D445, using Cannon-Fenske viscometer tubes in a viscometer oil bath.
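In this type of measurement (standard practice for Cannon-Fenske tubes, not detailed in the paper), the kinematic viscosity follows directly from the efflux time:

$$\nu = C \, t$$

where $\nu$ is the kinematic viscosity in cSt, $C$ is the calibration constant of the particular viscometer tube (cSt/s), and $t$ is the measured efflux time in seconds at the test temperature.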

Fig.8 Viscometer
Referring to Table-5, it is observed that the viscosity of all blends decreases with increasing temperature. The viscosity of the K10 blend varies from 4.116 cSt at 30°C to 2.2912 cSt at 100°C.



Table-5 Variation of viscosity (cSt) with temperature and blend (neat Karanja)

Blend \ Temp | 30°C | 40°C | 50°C | 60°C | 70°C | 80°C | 90°C | 100°C
K-10 | 4.116 | 3.238 | 2.351 | 2.043 | 1.763 | 1.2572 | 2.0572 | 2.2912
K-20 | 4.976 | 4.354 | 3.421 | 2.724 | 2.332 | 2.052 | 1.842 | 1.7
K-30 | 8.061 | 6.563 | 4.5 | 3.695 | 2.852 | 2.612 | 2.316 | 2.005
K-40 | 10.344 | 8.382 | 6.420 | 5.25 | 4.189 | 3.586 | 2.789 | 2.556
K-50 | 14.821 | 11.948 | 8.917 | 6.920 | 4.683 | 3.933 | 3.494 | 3.147
K-60 | 18.678 | 14.789 | 10.898 | 9.024 | 7.384 | 5.305 | 4.372 | 4.189
K-70 | 27.167 | 20.686 | 16.562 | 11.185 | 9.667 | 7.633 | 6.563 | 4.226
K-80 | 34.513 | 24.602 | 17.645 | 12.475 | 9.956 | 8.989 | 8.097 | 7.491
K-90 | 36.478 | 30.665 | 23.436 | 16.753 | 12.785 | 10.636 | 9.631 | 8.561
K-100 | 58.1324 | 42.785 | 32.173 | 22.228 | 13.256 | 11.649 | 7.589 | 9.346

III. RESULTS AND DISCUSSION

3.1 Performance Analysis of Neat Karanja Oil

3.1.1 Brake specific fuel consumption

The brake specific fuel consumption decreases with increase in load, and K10 gives lower BSFC than K20 and diesel. From Fig 3.1.1.1 it can be seen that BSFC increases with the blend percentage of Karanja oil; this is due to the lower calorific value and higher density of Karanja oil in the higher blends. The BSFC at full load for diesel, K10 and K20 is found to be 0.34 kg/kWh, 0.33 kg/kWh and 0.35 kg/kWh respectively. From Fig 3.1.1.2 it can be observed that the brake specific fuel consumption decreases with increasing compression ratio. The BSFC of blend K20 is higher than that of diesel, while K10 shows lower BSFC than K20 and diesel at all compression ratios.



Fig 3.1.1.1 Variation of BSFC (kg/kWh) with load (%) for diesel, K10 and K20



Fig 3.1.1.2 Variation of BSFC (kg/kWh) with compression ratio for diesel, K10 and K20


3.1.2 Brake thermal efficiency

The variation of brake thermal efficiency (BTE) with load for the different fuels is given in Fig 3.1.2.1. There is a steady increase in efficiency with load for all the fuels, owing to the reduction in relative heat loss and the increase in power developed as load increases. The engine BTE at full load for diesel, K10 and K20 is 24.9%, 26.63% and 24.1% respectively. The BTE of blend K20 is slightly lower than that of diesel, while K10 is higher than diesel. This may be due to the higher viscosity of blend K20 resulting in a poorly formed fuel spray and reduced air entrainment, affecting combustion in the engine, and further to the lower volatility of the vegetable oil. The variation of BTE with compression ratio for the different blends is given in Fig 3.1.2.2. The BTE of blend K10 is higher than that of diesel at all compression ratios, and BTE increases with compression ratio for all the fuels tested.



Fig 3.1.2.1 Variation of BTE (%) with load (%) for diesel, K10 and K20



Fig 3.1.2.2 Variation of BTE (%) with compression ratio for diesel, K10 and K20


3.1.3 Mechanical efficiency

Fig 3.1.3.1 shows the variation of mechanical efficiency with load for the various blends. There is a steady increase in mechanical efficiency for diesel and the blends as the load increases. The maximum mechanical efficiency obtained for blend K10 at 50% load is 35.41%. The efficiency of the fuel blends is in general very close to that of diesel; the increase in efficiency for all the blends may be due to improved spray quality, high reaction activity in the fuel-rich zone, and reduced heat loss owing to the lower flame temperature of the blends compared with diesel. At full load, diesel gives the maximum mechanical efficiency compared to K10 and K20: the mechanical efficiencies at full load for diesel, K10 and K20 are 52.32%, 35.41% and 32.56% respectively. The variation of mechanical efficiency with compression ratio for the various blends is shown in Fig 3.1.3.2. Mechanical efficiency increases with compression ratio for all the blends, and the mechanical efficiency of diesel is higher than that of K10 and K20 at all compression ratios.
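For completeness, mechanical efficiency here relates brake power to indicated power in the usual way (standard definition, consistent with how such VCR test-rig software reports it):

$$\eta_{mech} = \frac{BP}{IP}, \qquad IP = BP + FP$$

where $BP$ is the brake power measured at the dynamometer, $IP$ the indicated power developed in the cylinder, and $FP$ the friction power; since $FP$ stays roughly constant while $BP$ rises with load, the ratio increases steadily with load, as noted above.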



Fig 3.1.3.1 Variation of mechanical efficiency (%) with load (%) for diesel, K10 and K20



Fig 3.1.3.2 Variation of mechanical efficiency (%) with compression ratio for diesel, K10 and K20

3.1.4 Exhaust gas temperature

The variation of exhaust gas temperature with applied load for the different blends is shown in Fig 3.1.4.1. The exhaust gas temperature increases with load, and is lower for the blends than for diesel. The highest temperature obtained is 324.53°C for diesel at full load, whereas it is only 317.93°C and 310.39°C for blends K10 and K20; this may be because the energy content of diesel is higher than that of K10 and K20. The variation of exhaust gas temperature with compression ratio for the different blends is shown in Fig 3.1.4.2. The exhaust gas temperature decreases with increasing compression ratio, and remains lower for the blends than for diesel. The reduction in exhaust gas temperature at increased compression ratio is due to the lower gas temperature at the end of expansion, since more work is extracted from the gas at higher compression ratios.




Fig 3.1.4.1 Variation of exhaust gas temperature (°C) with load (%) for diesel, K10 and K20



Fig 3.1.4.2 Variation of exhaust gas temperature (°C) with compression ratio for diesel, K10 and K20

3.2 Combustion analysis of neat Karanja
3.2.1 Combustion pressure
The variation of combustion pressure with load for the different blends is shown in Fig 3.2.1.1. Combustion pressure increases with load, with diesel giving the maximum pressure compared to K10 and K20, although the maximum pressures for diesel and the Karanja oil blends are almost the same at full load: 61.2 bar, 58.93 bar and 59.19 bar for diesel, K10 and K20 respectively. The peak pressure depends on the amount of fuel taking part in the uncontrolled phase of combustion, which is governed by the delay period and the spray envelope of the injected fuel. The variation of combustion pressure with compression ratio for the different blends is shown in Fig 3.2.1.2: combustion pressure increases with compression ratio, and diesel again gives the maximum pressure compared to K10 and K20.


Fig 3.2.1.1 Variation of maximum pressure (bar) with load (%) for diesel, K10 and K20




Fig 3.2.1.2 Variation of maximum pressure (bar) with compression ratio for diesel, K10 and K20

3.2.2. Combustion duration

It is difficult to define the combustion duration of a diesel engine exactly, as the total combustion process consists of rapid premixed combustion, mixing-controlled combustion and the late combustion of fuel present in the fuel-rich combustion products. Combustion duration in general increases with load. The variation of total combustion duration with load for the different blends is shown in Fig 3.2.2.1: at full load, the combustion durations for K10, K20 and diesel are 47, 77 and 19 °CA respectively. As the calorific value of the Karanja oil blends is lower than that of diesel, a larger quantity of fuel is consumed to keep the engine speed stable at each load, while a decrease in combustion duration indicates efficient combustion of the injected fuel. K20 gives a higher combustion duration than the others. Fig 3.2.2.2 shows the variation of combustion duration with compression ratio for the different blends; combustion duration increases with compression ratio. The oil blends show a longer combustion duration at low compression ratio and a shorter one at high compression ratio, with K20 again giving the highest combustion duration.



Fig 3.2.2.1 Variation of combustion duration (deg) with load (%) for diesel, K10 and K20


Fig 3.2.2.2 Variation of combustion duration (deg) with compression ratio for diesel, K10 and K20
3.2.3. Net Heat release rate

The variation of the net heat release rate with load for the different blends is shown in Fig 3.2.3.1. The heat release rate increases with load; the maximum heat release rates of diesel, K10 and K20 at full load are 53.2, 47.2 and 41.5 J/°CA respectively. The heat release rate is analysed from the measured cylinder pressure as a function of crank angle. The heat release rate of the Karanja oil blends is lower than that
of diesel at all loads; the higher heat release rate of diesel is attributed to its lower viscosity, whereas the blends suffer reduced air entrainment and lower fuel-air mixing rates. Fig 3.2.3.2 shows the variation of heat release rate with compression ratio for the different blends. The heat release rate increases at the lower compression ratios and decreases slightly at the higher compression ratio; this may be due to air entrainment, the lower air/fuel mixing rate and the viscosity of the blends. The heat release rate of diesel is higher than that of the oil blends due to its lower viscosity and better spray formation.




Fig 3.2.3.1 Variation of maximum net heat release rate (J/°CA) with load (%) for diesel, K10 and K20



Fig 3.2.3.2 Variation of maximum net heat release rate (J/°CA) with compression ratio for diesel, K10 and K20

3.2.4 Mass fraction burnt

The variation of the mass fraction burnt with crank angle for the Karanja oil blends and diesel at compression ratio 18 and full load is given in Fig 3.2.4.1. Due to the oxygen content of the blends, combustion is sustained in the diffusive combustion phase. Diesel gives a higher mass fraction burnt than the blends, and the highest rate of burning indicates the most efficient combustion. The engine operates on a rich mixture and approaches the stoichiometric region at higher compression ratio; more fuel accumulates in the combustion phase, causing rapid heat release.



Fig 3.2.4.1 Mass fraction burned (%) versus crank angle (deg) for diesel, K10 and K20

3.2.5 Ignition delay

The most vital parameter in combustion analysis is the ignition delay. The variation of ignition delay with load for the different blends is shown in Fig 3.2.5.1: the ignition delay of the Karanja oil blends decreases with increasing load. K20 gives a higher ignition delay than diesel, because more fuel must be injected owing to its lower calorific value. Fig 3.2.5.2 shows the variation of ignition delay with compression ratio for the different blends: ignition delay decreases with increasing compression ratio.




Fig 3.2.5.1 Variation of ignition delay (deg) with load (%) for the test fuels



Fig 3.2.5.2 Variation of ignition delay (deg) with compression ratio for diesel, K10 and K20

3.2.6 Maximum combustion temperature

The variation of the maximum combustion temperature with load for the different blends is given in Fig 3.2.6.1. Combustion temperature increases with load in all cases, with diesel giving a higher combustion temperature than the blends; the maximum combustion temperatures of diesel, K10 and K20 at full load are 1429.04, 1425.74 and 1400.01°C respectively. Fig 3.2.6.2 shows the variation of maximum combustion temperature with compression ratio for the different blends. Combustion temperature increases with compression ratio, and diesel gives a higher combustion temperature than all the other blends, due to more fuel accumulating in the combustion chamber.



Fig 3.2.6.1 Variation of maximum combustion temperature (°C) with load (%) for diesel, K10 and K20




Fig 3.2.6.2 Variation of maximum combustion temperature (°C) with compression ratio for diesel, K10 and K20



Fig 3.2.6.3 Mean gas temperature (°C) versus crank angle (deg) for diesel, K10 and K20

3.3 Emission analysis of Karanja oil

3.3.1 Carbon monoxide emission

Fig 3.3.1.1 shows the variation of carbon monoxide emission with load for the blends and diesel. CO emission is higher at low load, decreases with increasing load, and rises again at high load. The CO emissions of blends K10 and K20 are higher than those of diesel at low load, which may be due to their higher viscosity and improper spray pattern resulting in incomplete combustion; at full load, diesel gives the highest CO emission. Fig 3.3.1.2 shows the variation of carbon monoxide emission with compression ratio for the blends and diesel. CO emission decreases with increasing compression ratio, and the CO emission of diesel is the lowest compared to K10 and K20; this may be because air-fuel mixing is better at higher compression ratio.


Fig 3.3.1.1 Variation of CO emission (%) with load (%) for diesel, K10 and K20




Fig 3.3.1.2 Variation of CO emission (%) with compression ratio for diesel, K10 and K20

3.3.2 Carbon dioxide emission

The variation of carbon dioxide emission with load is shown in Fig 3.3.2.1. CO2 emission increases with load. Over the whole engine load range, the CO2 emission of diesel fuel is lower than that of the other fuels: because vegetable oil contains oxygen, its carbon content is relatively lower for the same volume of fuel consumed at the same engine load. A higher amount of CO2 is an indication of more complete combustion of the fuel in the combustion chamber. The CO2 emission of blend K20 is slightly higher than that of diesel at all loads, probably due to the higher oxygen availability. The variation of carbon dioxide emission with compression ratio is shown in Fig 3.3.2.2: the blends emit a higher percentage of CO2 than diesel at low compression ratios, and vice versa. The CO2 emitted by the combustion of biofuels can be absorbed by plants, so the carbon dioxide level in the atmosphere is kept roughly constant.



Fig 3.3.2.1 Variation of CO2 emission (%) with load (%) for diesel, K10 and K20




Fig 3.3.2.2 Variation of CO2 emission (%) with compression ratio for diesel, K10 and K20

3.3.3 Hydrocarbon emission

The variation of hydrocarbon emissions with load for the different blends is plotted in Fig 3.3.3.1. Increased HC emissions clearly indicate improper combustion in the engine, and increasing the blend percentage of Karanja oil increases the HC emissions. All blends show higher HC emissions at 50% load, which may be due to poor atomization of the blended fuel because of its higher viscosity; physical properties such as density and viscosity influence the HC emissions. Blend K10 has the higher HC emission at full load.

The variation of hydrocarbon emission with compression ratio for the different blends is given in Fig 3.3.3.2. The hydrocarbon emissions of the blends are lower at higher compression ratios. Blend K20 gives the higher HC emission at low compression ratio, but at high compression ratio K10 gives the higher value.






Fig 3.3.3.1 Variation of HC emission (ppm) with load (%) for diesel, K10 and K20




Fig 3.3.3.2 Variation of HC emission (ppm) with compression ratio for diesel, K10 and K20

3.3.4 Nitrogen oxides emission

Fig 3.3.4.1 shows the variation of nitrogen oxides (NOx) emission with load for the different blends. NOx emission increases with load, probably due to the higher combustion temperature in the engine cylinder at higher loads. It is also observed that NOx emission tends to decrease as the percentage of Karanja oil in the blend increases: the NOx emissions at full load for diesel, K10 and K20 are 550 ppm, 524 ppm and 493 ppm respectively. The limitation of the higher Karanja oil blends is their higher viscosity. The variation of NOx emission with compression ratio for the different blends is shown in Fig 3.3.4.2. The NOx emissions of diesel and the blends increase with compression ratio; diesel gives higher NOx emission than the blends, which closely follow it.




Fig 3.3.4.1 Variation of NOx emission (ppm) with load (%) for diesel, K10 and K20



Fig 3.3.4.2 Variation of NOx emission (ppm) with compression ratio for diesel, K10 and K20

3.3.5 Smoke opacity

Fig 3.3.5.1 shows the variation of smoke opacity with load for the different blends. Smoke opacity increases with load. K10 and K20 give higher smoke opacity than diesel at full load, although at nearly 70% load their smoke opacity is lower than that of diesel; hence K20 can be considered the better blend in that regime. The smoke opacities for diesel, K10 and K20 at full load are 88.7%, 96% and 97.8% respectively. The variation of smoke opacity with compression ratio for the different blends is shown in Fig 3.3.5.2: smoke opacity increases with compression ratio, and K20 gives higher smoke opacity than K10 and diesel.



Fig 3.3.5.1 Variation of smoke opacity (%) with load (%) for diesel, K10 and K20





Fig 3.3.5.2 Variation of smoke opacity (%) with compression ratio for diesel, K10 and K20


IV CONCLUSION

The performance, emission and combustion characteristics of a dual fuel variable compression ratio engine running on Karanja oil and diesel blends have been investigated and compared with those of diesel. The experimental results confirm that the BTE, SFC, exhaust gas temperature, mechanical efficiency and torque of the variable compression ratio engine are functions of the biodiesel blend, load and compression ratio. For similar operating conditions, engine performance reduces as the biodiesel percentage in the blend increases; however, by increasing the compression ratio the performance improves and becomes comparable with that of diesel. The following conclusions are drawn from this investigation:
- K10 gives lower BSFC than K20 and diesel. The engine BTE at full load for diesel, K10 and K20 is 24.9%, 26.63% and 24.1% respectively; the BTE of blend K20 is slightly lower than that of diesel, while K10 is higher than diesel.
- The highest exhaust gas temperature obtained is 324.53°C for diesel at full load, whereas it is only 317.93°C and 310.39°C for blends K10 and K20, which may be because the energy content of diesel is higher than that of K10 and K20.
- The CO and HC emissions of K10 and K20 are lower than those of diesel at full load, and their NOx emissions are also lower than those of diesel.




Innovation with TRIZ
N.U. Kakde¹, D.B. Meshram¹, G.R. Jodh¹, A.S. Puttewar¹

¹Faculty, Dr Babasaheb Ambedkar College of Engineering and Research, Nagpur

ABSTRACT- Today, the evolution of science and technology has reached a tremendous rate. Major breakthroughs in science, technology, medicine and engineering make our everyday life more and more comfortable. It is now nearly impossible to find an engineer who does not use complex mathematical tools for formal modeling of designed products, CAD systems for drawings, electronic handbooks and libraries, and the Internet to find necessary data, information and knowledge.

But what happens when we need to invent a radically new solution, to generate a new idea, or to solve a problem when no known problem-solving method produces results? What tools and methods do we have to cope with these situations? When it comes to producing new ideas, we still rely heavily on the thousands-of-years-old method of trial and error. It is fortunate when a brilliant and feasible new idea is born quickly, but what price do we pay most of the time? Wasted time, money and human resources. Can we afford this today, when competition is accelerating every day and the capability to innovate becomes a crucial factor of survival? Certainly not. But is there anything that can help?

Fortunately, the answer is "yes". To considerably improve the innovation process and avoid costly trial and error, leading innovators use TRIZ, a scientifically based methodology for innovation. Relatively little known outside the former Soviet Union before the 1990s, it rapidly gained popularity at world-leading corporations and organizations, among which are DSM, Hitachi, Mitsubishi, Motorola, NASA, Procter & Gamble, Philips, Samsung, Siemens and Unilever, just to name a few. This article presents a brief overview of TRIZ and some of its techniques, with a focus on technological applications of TRIZ.

TRIZ origins
TRIZ (a Russian acronym for the Theory of Inventive Problem Solving) was originated by the Russian scientist and engineer Genrich Altshuller. In the early 1950s, Altshuller started massive studies of patent collections. His goal was to find out whether inventive solutions were the result of chaotic and unorganized thinking, or whether there were certain regularities that governed the process of creating new inventions.

After scanning approximately 400,000 patent descriptions, Altshuller found that only 2% of all patented solutions were really new, meaning that they used some newly discovered physical phenomenon, such as the first radio receiver or photo camera. The remaining 98% of patented inventions used an already known physical principle but differed in its implementation (for instance, both a car and a conveyor may use the air-cushion principle). In addition, it appeared that a great number of inventions complied with a relatively small number of basic inventive principles. Therefore, 98% of all new problems can be solved by using previous experience, if that experience is presented in a certain form, for instance as principles or patterns. This discovery gave impetus to further studies, which led to the discovery of the basic principles of invention.

More than thirty years of research resulted in revealing and understanding the origins of the inventive process and in formulating general principles of inventive problem solving. At the same time, the first TRIZ techniques were developed.

Later, many researchers and practitioners worldwide united their efforts and largely extended Altshuller's approach with new methods and tools. Today, a number of companies and universities worldwide are involved in enhancing TRIZ techniques and putting them to practical use.

Modern TRIZ
TRIZ offers a number of practical techniques which help to analyze existing products and situations, extract core problems and generate new solution concepts in a systematic way. TRIZ fundamentally changes our view of solving inventive problems and innovative design, as shown in figure 1. Instead of randomly generating thousands of alternatives among which only one can work, TRIZ uses a systematic approach to generate new ideas.




Fig 1. Modern TRIZ
Modern TRIZ is a large body of knowledge. It includes such techniques as the Inventive Principles, patterns of standard solutions, Functional Analysis, databases of physical, chemical and geometrical effects, trends and patterns of technology evolution, and the Algorithm of Inventive Problem Solving, also known as ARIZ. TRIZ is not easy to learn; however, most of its techniques can be learned and applied independently, which simplifies learning and implementation. This is shown in Fig. 1.

Common Patterns of Inventions

Let us have a look at how TRIZ works by comparing two problems.

First problem: how to protect a hydrofoil moving at high speed from hydraulic cavitation, in which collapsing air bubbles destroy the metal surface of the foil? Second problem: how to prevent orange plantations from being eaten by apes if installing fences around the plantations would be too expensive?

Are these problems similar? At first glance, not at all. From the TRIZ point of view, however, they are similar, because both problems reduce to the same problem pattern: in both cases two components interact with each other, and the result of the interaction is negative.

In the first situation the water destroys the foil; in the second, an ape eats an orange. And there is no visible and simple way to improve either situation. To solve this type of problem, TRIZ recommends introducing a new component between the existing ones. Well, but how? We tried it, and it did not work: fences are still expensive. What did the best inventors do in this case? Analysis of the best inventions showed that this new component has to be a modification of one of the two existing components!

In TRIZ, the word "modification" is understood in broad terms. It can be a change of the aggregate state of a substance, or a change of color, structure, etc. What can a modification of the water be? Ice. A refrigerator is installed inside the foil and freezes the water, forming an ice layer over the foil surface. Now the cavitation destroys the ice, which is constantly rebuilt. What can be the "modification" of the orange? A lemon! The ape does not like the taste of the lemon, so it was proposed to surround the orange plantations with lemon trees.


As seen in figure 2, TRIZ offers recommendations for solving new problems according to guidelines drawn from previous experience of tackling similar problems in different areas of technology. Well-known psychological methods for activating thinking (brainstorming, for instance) and traditional design methods aim at finding a specific solution to a specific problem. This is difficult: too much information has to be browsed, and there is no guarantee that we move in the right direction. TRIZ organizes the translation of the specific problem into an abstract problem and then proposes a generic design principle or pattern relevant to that type of problem. By operating at the level of conceptual models, the search space is significantly reduced, which makes it much easier to find the needed solution concept among the patterns TRIZ offers (Fig. 2).






















[Fig. 2 diagram: a specific problem is generalized into an abstract problem; the PRINCIPLES OF TRIZ map the abstract problem to an abstract solution, which is then specialized into a specific solution, while TRIALS & ERRORS search wanders directly through the specific-problem search space.]

Fig. 2 Common Platform of Invention
INVENTION IS A RESULT OF SOLVING A CONTRADICTION
Another discovery of Altshuller was that every inventive solution results from eliminating a contradiction. A contradiction arises when two mutually exclusive design requirements are placed on the same object or system. For example, the walls of a space shuttle have to be lightweight to decrease the mass of the shuttle when bringing it to orbit. However, this cannot be achieved by simply decreasing the thickness of the walls, because of the thermal impact when re-entering the Earth's atmosphere. The problem is difficult because two contrary values of the same design parameter are required: according to existing solutions, the walls have to be both heavyweight and lightweight at the same time.

When a designer faces a contradiction that cannot be resolved by redesigning the product in a known way, he faces an inventive problem, and its solution resides outside the domain the product belongs to. One known method of handling contradicting demands is to find a compromise between the two conflicting parameters or values. But what if no optimum can be reached that solves the problem? TRIZ suggests solving such problems by removing the contradictions.

A comprehensive study of patent collections undertaken by TRIZ researchers, and thorough tests of TRIZ within industry, have shown that if a new problem is represented in terms of a contradiction, a relevant TRIZ principle can be used to find a way to eliminate it. The principle indicates how the same type of contradiction was eliminated in some area of technology before.

The collection of TRIZ inventive principles is the best-known and most widely used TRIZ problem-solving technique. Each principle in the collection is a guideline that recommends a certain method for solving a particular type of inventive problem. There are 40 inventive principles in the collection, organized systematically according to the type of contradiction that arises during attempts to solve the problem. Examples of the inventive principles are:

• Variability Principle: characteristics of the object (or external environment) should change so as to be optimal at each stage of operation; the object is to be divided into parts capable of movement relative to each other; if the object as a whole is immobile, make it mobile or movable.

• Segmentation Principle: divide the object into independent parts; make the object easy to take apart; increase the degree of the object's fragmentation (segmentation). Instead of non-fragmented objects, more fragmented objects can be used, as well as granules, powders, liquids, gases.

Access to the principles is provided through a matrix of 39 rows and 39 columns. Positive effects that have to be achieved (so-called "generalized requirements") are listed along the vertical axis, while negative effects that arise when attempting to achieve the positive effects are listed along the horizontal axis. Selecting a pair of positive and negative effects indicates which principles should be used to solve the problem.









Table 1
A matrix of principles for engineering contradiction elimination. Numbers indicate which principles have to be used: 1 - Segmentation; 2 - Removing; 10 - Preliminary action; 13 - Other way round; etc.

What to improve |              What gets worse as a result of improvement
                | Speed      | Force       | Stress      | ..... | Stability
Speed           | -          | 13,28,15,19 | 6,18,38,40  | ..... | 28,33,1
Force           | 13,28,15   | -           | 18,21,11    | ..... | 35,10,21
Stress          | 6,35,36    | 36,35,21    | -           | ..... | 35,2,40
.....           | .....      | .....       | .....       | ..... | .....
Stability       | 33,28      | 10,35,21    | 2,35,40     | ..... | -

For instance, suppose we need a device to hold an easily breakable part of complex shape. If we use a traditional vise with clamping teeth, the contradiction is the following: to hold the part reliably (positive effect), we have to apply sufficient force; however, the force is distributed non-uniformly and the part can be damaged (negative effect). Table 1 shows the matrix of principles used in the TRIZ tool.

To solve this type of contradiction TRIZ recommends the Segmentation Principle mentioned above, so we must segment the clamping teeth. This can be done by replacing the teeth with a chamber filled with small elastic cylinders and compressing the cylinders by moving the chamber wall, as shown in fig 3. As a result, the contradiction is eliminated: a part of almost any shape can be held by such a device, and the forces are distributed uniformly.
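To make the lookup mechanics concrete, the short Python sketch below models a few cells of the contradiction matrix of Table 1 as a dictionary keyed by (what to improve, what gets worse). Only the principle numbers and names actually given in this article are included; the full 39x39 matrix and the remaining principle names are omitted, and the helper names are our own illustration, not part of any standard TRIZ tool.

    # A few cells of Table 1: (what to improve, what gets worse) -> principles.
    MATRIX = {
        ("Speed", "Force"):      [13, 28, 15, 19],
        ("Speed", "Stress"):     [6, 18, 38, 40],
        ("Speed", "Stability"):  [28, 33, 1],
        ("Force", "Speed"):      [13, 28, 15],
        ("Stability", "Stress"): [2, 35, 40],
    }

    # Principle names given in this article; other numbers are left unnamed.
    NAMES = {1: "Segmentation", 2: "Removing", 10: "Preliminary action",
             13: "Other way round"}

    def recommend(improve, worsens):
        """Return (number, name) pairs of principles for a contradiction."""
        return [(p, NAMES.get(p, "?")) for p in MATRIX.get((improve, worsens), [])]

    # Improving speed while stability worsens suggests, among others,
    # principle 1 (Segmentation), the principle used in the vise example.
    print(recommend("Speed", "Stability"))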





Fig. 3 Segmentation of the clamping teeth
PHYSICS FOR INVENTORS
Sometimes the mere ability to see things differently is not enough. New breakthrough products often result from a synergy between a non-ordinary view of a problem and knowledge of the latest scientific advances. TRIZ suggests searching for new principles by defining what function is needed and then finding which physical principle can deliver that function.

Studies of the patent collections indicated that inventive solutions are often obtained by utilizing physical effects not previously used in the specific area of technology. Knowledge of natural phenomena often makes it possible to avoid the development of complex and unreliable designs. For instance, instead of a mechanical design with many parts for precise displacement of an object over a short distance, the effect of thermal expansion can be applied to control the displacement.

Finding a physical principle that would be capable of meeting a new design requirement is one of the most important tasks in the early
phases of design. However, it is nearly impossible to use handbooks on physics or chemistry to search for principles for new products.
The descriptions of natural phenomena available there present information on specific properties of the effects from a scientific point
of view, and it is unclear how these properties can be used to deliver particular technical functions.

TRIZ Catalogues of effects bridge the gap between technology and science. In the TRIZ Catalogues, each natural phenomenon is
identified with a number of technical functions that might be achieved on the basis of the phenomenon.

The search for an effect proceeds through formulating the problem in terms of a technical function. Each technical function indicates an operation that can be performed with respect to a physical object or field. Examples of technical functions are "move a loose body", "change density", "generate heat field", and "accumulate energy".
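A TRIZ effects catalogue is essentially an index from technical functions to candidate physical effects. The fragment below sketches such an index in Python; the only entry drawn from this article is "move a solid object", while the remaining entries and all helper names are assumed illustrations, not a reproduction of an actual TRIZ catalogue.

    # Function -> candidate physical effects.
    CATALOGUE = {
        "move a solid object": ["magnetostriction", "thermal expansion"],
        "generate heat field": ["induction heating"],   # assumed example entry
        "accumulate energy":   ["capacitive storage"],  # assumed example entry
    }

    def effects_for(function):
        """Look up which physical effects may deliver a required function."""
        return CATALOGUE.get(function, [])

    # The tape-recorder example below starts from exactly this query:
    print(effects_for("move a solid object"))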

Another example illustrates the use of the TRIZ Catalogue of physical effects. How can one accurately control the distance between a magnetic head and the surface of a tape in a special high-performance digital tape recorder, where the gap should differ between recording modes and the change must be produced very quickly?

In the TRIZ Catalogue of physical effects, the function "to move a solid object" refers to several effects. One of them is magnetostriction: a change in the dimensions and shape of a solid body (made of a specific metal alloy) when the intensity of an applied magnetic field changes. The effect is similar to thermal expansion, but it is caused by a magnetic field rather than a thermal field.

The magnetic head is fixed to a magnetostrictive rod as shown in figure 4. A coil generating a magnetic field is placed around the rod. A change of the magnetic field's intensity compresses or extends the rod to exactly the required distance between the head and the recording surface.

Picture A                 Picture B
Fig. 4
Solving a problem with the TRIZ pointer to physical effects. Picture A: old design with a screw; Picture B: new design with a magnetostrictive rod and an electromagnetic induction coil.
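As a rough plausibility check of the displacement range such a rod can provide, the following one-line computation uses an assumed magnetostrictive strain of 50 ppm and an assumed rod length of 20 mm; both numbers are illustrative, not taken from the article.

    # Stroke of a magnetostrictive rod: displacement = strain * length.
    strain = 50e-6   # assumed saturation magnetostriction (dimensionless)
    length = 20e-3   # assumed rod length in metres
    print(f"available stroke = {strain * length * 1e6:.1f} micrometres")  # -> 1.0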
Trends of the Technology Evolution
Altshuller also discovered that technology evolution is not a random process. Many years of study revealed a number of general trends governing technology evolution, no matter what area the products belong to.

The practical use of the trends is possible through specific patterns. Every pattern indicates a line of evolution containing particular transitions between old and new structures of a design product. In total, TRIZ presents nine trends of technology evolution. One of the trends, evolution of systems by transition to more dynamic structures, is shown in Table 2 below.

The significance of knowing the trends of technology evolution is that they can be used to estimate which phases of evolution a system has passed. As a consequence, it is possible to foresee what changes the system will experience and, more importantly, to produce the forecast in design terms.




















Table 2 Patterns of increasing the degree of system dynamics

Evolution phase                            | Example
Solid object                               | Traditional mobile phone
Solid object divided into two segments     | Mobile phone with a sliding part which
with a non-flexible link                   | contains a microphone
Two segments with a flexible link          | Flip-flop phone of two parts
Many segments with flexible links          | Phone made as a wrist watch: its bracelet
                                           | consists of segments which contain different
                                           | parts of the phone
Flexible object                            | A flexible liquid-crystal film which can be
                                           | rolled in and out and stored inside a plastic
                                           | cylinder (serves also as a mobile videophone)

Practical value of TRIZ
Today, TRIZ and TRIZ software are used in about 5000 companies and government organizations worldwide. For instance, designers at Eastman Kodak used TRIZ to develop a new solution for a camera's flash. The flash has to move precisely to change the angle of lighting. A traditional design includes a motor and a mechanical transmission, which complicates the whole design and makes it difficult to control the displacement precisely. A newly patented solution uses the piezoelectric effect and involves a piezoelectric linear motor, which is more reliable and easier to control.

In general, the use of TRIZ provides the following benefits:

1. A considerable increase of productivity in searching for new ideas and concepts to create new products or solve existing problems. As estimated by European TRIZ Association experts on the basis of industrial case studies, these processes are usually accelerated 5-10 times. Sometimes new solutions become possible only through the use of TRIZ.

2. An increase in the ratio of useful to useless ideas during problem solving, by providing immediate access to hundreds of unique innovative principles and thousands of scientific and technological principles stored in TRIZ knowledge bases.

3. A reduced risk of missing an important solution to a specific problem, owing to the broad range of generic patterns of inventive solutions offered by TRIZ.

4. The use of scientifically based trends of technology evolution to examine all possible alternatives for the future evolution of a specific technology or design product and to select the right direction of evolution.

5. Leveraging the intellectual capital of organizations by increasing the number of high-quality patented solutions.

6. Raising personal creativity by training personnel to approach and solve inventive and innovative problems in a systematic way.

TRIZ is the most powerful and effective practical methodology for creating new ideas available today. However, TRIZ does not replace human creativity; instead, it amplifies creativity and helps it move in the right direction. As long-term studies have shown, everyone can invent and solve non-trivial problems with TRIZ.
TRIZ IN THE WORLD

Today, TRIZ is widely recognized worldwide as a leading method for innovation. The Mitsubishi Research Institute, a leading Japanese research organization which unites the research efforts of 50 major Japanese corporations, invested US$14 million to bring TRIZ and TRIZ-related software to Japan.

In 1998, a TRIZ association was formed in France, with participants including Renault, Peugeot, EDF and Legrand. In South Korea, LG Electronics uses TRIZ to solve major inventive problems and develop new products. Motorola purchased 2000 packages of TRIZ software, while Unilever has released information about investing US$1.2 million in purchasing TRIZ software and using it as a major tool for achieving competitive leadership.

In 2000, the European TRIZ Association was established, with a global coordination group spanning 26 countries, including representatives from Japan, South Korea and the USA.

In 2004, Samsung Corporation recognized TRIZ as a best practice for innovation after a number of successful TRIZ projects, which resulted in total economic benefits of 1.5 billion euros over three years.

Small and medium-sized companies benefit from TRIZ as well. TRIZ helps to define and solve problems within a short time and with relatively small effort, thus avoiding large R&D investments in approaching solutions and finding new design concepts.











Analysis and Design of Low Voltage Low Power Dynamic Comparator with Reduced Delay and Power
Dinabandhu Nath Mandal¹, Niladri Prasad Mohapatra¹, Rajendra Prasad³, Ambika Singh¹
¹Research Scholar (M.Tech), Department of Electronics, KIIT University, Bhubaneswar, India
¹Assistant Professor, Department of Electronics, KIIT University, Bhubaneswar, India
Email: mandaldinbandhu@gmail.com
Abstract— High-speed devices such as ADCs and operational amplifiers are of great importance, and for such high-speed applications a major thrust is given to low-power methodologies. Reduction of power consumption in these devices can be achieved by moving towards smaller feature-size processes. Modern ADCs require low power dissipation, low noise, good slew rate, high speed, etc. Dynamic comparators are used extensively in today's A/D converters because they are fast, consume little power, have zero static power consumption, and provide a full-swing digital-level output voltage in a short time. The back-to-back inverters in these dynamic comparators provide a positive-feedback mechanism which converts a small voltage difference into a full-scale digital-level output. A pre-amplifier-based comparator can amplify a small input voltage difference to a voltage large enough to overcome the latch offset voltage, and can also reduce the kickback noise. However, pre-amplifier-based comparators suffer from large static power consumption, as well as from reduced intrinsic gain caused by the reduction of the drain-to-source resistance with continuous technology scaling. In this paper a delay analysis is presented for different dynamic comparators, and finally a proposed design is given in which the delay is reduced to 264 ps and the average power dissipation is reduced to 1.09 µW. The design has been simulated in 180 nm technology with a supply voltage of 0.8 V.

Keywords— High-speed analog-to-digital converters (ADCs), dynamic clocked comparator, low-power analog design, double-tail dynamic comparator, conventional dynamic comparator, preamplifier-based comparators

INTRODUCTION
The comparator is a fundamental building block in analog-to-digital converters (ADCs). The design of ADCs calls for comparators with high speed and low power consumption. Comparators in ultra-deep submicrometer (UDSM) technologies suffer from low supply voltage; hence the design of a high-speed comparator is a challenge when the supply voltage is low [1]. To achieve high speed in a given technology, more transistors are required, which costs more area and power. Techniques such as supply boosting [2], [3] and body-driven transistors [4], [5] have been developed to enable low-voltage design. To address switching problems and input range, two techniques, boosting and bootstrapping, are used. In this paper the delay is analyzed for various dynamic comparator architectures. Based on the double-tail architecture, a new dynamic comparator is presented whose delay is reduced compared with the earlier design and which does not require a boosted voltage. By adding a few transistors, the delay of the latch is reduced; as a result the modified design saves power and can be used for high-speed ADC design.
CLOCK REGENERATIVE COMPARATORS
Clock regenerative comparators are widely used in the design of high-speed ADCs, since this type of comparator makes fast decisions owing to the positive feedback in the latch stage. Many analyses investigate the behavior of such comparators in several respects, such as random decision errors [10], offset voltage [8], [9], noise [7] and kick-back noise [11]. In the following sections the delay analysis is presented: the delays of the conventional dynamic comparator and the conventional double-tail comparator are derived, and based on these the proposed comparator is presented.
I. CONVENTIONAL DYNAMIC COMPARATOR
The conventional dynamic comparator is the most widely used dynamic comparator in analog-to-digital converter design. It has rail-to-rail output swing, high input impedance and zero static power consumption. The schematic of the conventional dynamic comparator is shown in fig 1.1, and fig 1.2 shows its transient simulation.

fig 1.1 Schematic of conventional dynamic comparator

fig 1.2 Transient simulation of the conventional dynamic comparator for a voltage difference of 5 mV, Vcm = 0.7 V and supply voltage of 0.8 V
The delay of this comparator consists of two components, t0 and tlatch, where t0 is the discharging delay of the load capacitance CL and tlatch is the latching delay of the cross-coupled inverters. Hence the total delay (tdelay) of the comparator is given as

    tdelay = t0 + tlatch = 2 CL |Vthp| / Itail + (CL / gm,eff) ln[(VDD/2) / ΔV0]        (1)

where CL is the load capacitance, |Vthp| is the threshold voltage of the M2 transistor, gm,eff is the effective transconductance of the back-to-back inverters, VDD is the supply voltage, Itail is the current of the Mtail transistor, β1,2 is the current factor of the input transistors, and ΔVin is the input voltage difference. ΔV0, the voltage difference at the output nodes when latch regeneration begins, grows with ΔVin through β1,2. According to equation (1), the delay of the comparator is directly proportional to the load capacitance (CL) and inversely related to the input voltage difference (ΔVin).

The main advantages of this architecture are rail-to-rail swing at the output, good robustness against noise and mismatch, and zero static power consumption. The power plot of the conventional dynamic comparator is shown in fig 1.3 and the layout of the comparator is shown in fig 1.4.

Fig 1.3 Power plot of conventional dynamic comparator

Fig 1.4 Layout of the conventional dynamic comparator
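To give a feel for the relative size of the two terms in Eq. (1), the short Python sketch below evaluates the model with illustrative parameter values; none of the numbers are taken from the paper, and the factor relating ΔV0 to ΔVin is an assumption.

    import math

    # Illustrative evaluation of t_delay = t0 + t_latch from Eq. (1).
    CL     = 10e-15   # load capacitance (F), assumed
    Vthp   = 0.25     # |Vthp| of M2 (V), assumed
    Itail  = 50e-6    # tail current (A), assumed
    gm_eff = 200e-6   # effective latch transconductance (S), assumed
    VDD    = 0.8      # supply voltage (V), as in the paper's simulations
    dVin   = 5e-3     # input difference (V), as in fig 1.2
    gain0  = 2.0      # assumed ratio dV0/dVin at the start of regeneration

    t0      = 2 * CL * Vthp / Itail                                 # discharge term
    t_latch = (CL / gm_eff) * math.log((VDD / 2) / (gain0 * dVin))  # regeneration
    print(f"t0 = {t0*1e12:.0f} ps, t_latch = {t_latch*1e12:.0f} ps, "
          f"total = {(t0 + t_latch)*1e12:.0f} ps")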
II. CONVENTIONAL DOUBLE-TAIL DYNAMIC COMPARATOR
The schematic of the double-tail dynamic comparator is shown in fig 1.5. This topology has a larger number of transistors but less stacking, so it can operate at a lower supply voltage than the conventional dynamic comparator. Owing to the two tail transistors, the structure provides a large current in the latching stage (a wide Mtail2 is required for fast latching, independent of Vcm, the input common-mode voltage) and a small current in the input stage, required for low offset [6]. Fig 1.6 shows the transient simulation of the conventional double-tail dynamic comparator for an input voltage difference of ΔVin = 5 mV, Vcm = 0.7 V and VDD = 0.8 V.

fig 1.5 Schematic of conventional double-tail dynamic comparator

Fig 1.6 Transient simulation of the conventional double-tail dynamic comparator for an input voltage difference of ΔVin = 5 mV, Vcm = 0.7 V and VDD = 0.8 V

The delay of the double-tail dynamic comparator likewise comprises two components, t0 and tlatch, similar to the conventional dynamic comparator. Here t0 is the charging delay of the load capacitance CLout (at the outn and outp nodes) until the transistors M9/M10 turn on, at which point latch regeneration starts; this determines t0. The total delay of the comparator is given as

    tdelay = t0 + tlatch = t0 + (CLout / gm,eff) ln[(VDD/2) / ΔV0]

where gmR1,2 is the transconductance of the transistors MR1 and MR2, Itail2 is the current of the Mtail2 transistor, ΔVin is the voltage difference at the input, and ΔV0 is the output voltage difference at the start of regeneration, which increases with gmR1,2 and ΔVin. Fig 1.7 and fig 1.8 below show the power plot (for calculating power) and the layout (for determining area) of the double-tail dynamic comparator.

Fig 1.7-Power plot of double-tail dynamic comparator

Fig 1.8 - Layout of double tail dynamic comparator









III. PROPOSED DOUBLE-TAIL DYNAMIC COMPARATOR
The schematic of the proposed design, alongside the double-tail dynamic comparator, is shown in fig 1.9. In the proposed design the lower input stage is replaced by a differential amplifier with a PMOS load.

Fig 1.9 Schematic of the proposed comparator (right) with the double-tail dynamic comparator (left)

The delay of the proposed double-tail dynamic comparator is reduced in comparison with the double-tail dynamic comparator. The power plot and transient simulation of the proposed double-tail dynamic comparator for an input voltage difference of ΔVin = 5 mV, Vcm = 0.7 V and VDD = 0.8 V are shown in fig 2.1, and fig 2.2 shows the layout of the proposed double-tail dynamic comparator.

Fig 2.1 Power plot and transient simulation of the modified double-tail dynamic comparator for an input voltage difference of ΔVin = 5 mV, Vcm = 0.7 V and VDD = 0.8 V


fig 2.2 Layout of proposed double-tail dynamic comparator in 180 nm technology
SIMULATION RESULT
A comparison table is presented to compare the results of the proposed comparator with the conventional and double-tail dynamic comparators. The circuits were simulated in a 180 nm CMOS technology.
Comparator structure      | Conventional dynamic | Double-tail dynamic | Proposed double-tail
                          | comparator           | comparator          | dynamic comparator
No. of transistors used   | 9                    | 14                  | 16
Supply voltage (V)        | 0.8                  | 0.8                 | 0.8
Delay (ps)                | 898.2                | 293                 | 263
Energy                    | 1.108 µ              | 2.125 µ             | 866 n
Estimated area (µm x µm)  | 22.7 x 15.7          | 28 x 13             | 28.9 x 19.5





CONCLUSION
The proposed double-tail comparator shows better performance than the conventional dynamic and double-tail dynamic comparators. The delay of the proposed design is 263 ps, lower than that of the earlier designs, and the energy per conversion is reduced from 1.108 µ for the conventional dynamic comparator to 866 n for the proposed double-tail design. The proposed double-tail dynamic comparator can be used in the design of high-speed ADCs, since the reduced delay makes the operation faster. Because the proposed structure uses more transistors, its area is larger, which is one disadvantage of this comparator.

REFERENCES

[1] B. Goll and H. Zimmermann, "A comparator with reduced delay time in 65-nm CMOS for supply voltages down to 0.65 V," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 56, no. 11, pp. 810-814, Nov. 2009.
[2] S. U. Ay, "A sub-1 volt 10-bit supply boosted SAR ADC design in standard CMOS," Int. J. Analog Integr. Circuits Signal Process., vol. 66, no. 2, pp. 213-221, Feb. 2011.
[3] A. Mesgarani, M. N. Alam, F. Z. Nelson, and S. U. Ay, "Supply boosting technique for designing very low-voltage mixed-signal circuits in standard CMOS," in Proc. IEEE Int. Midwest Symp. Circuits Syst. Dig. Tech. Papers, Aug. 2010, pp. 893-896.
[4] B. J. Blalock, "Body-driving as a low-voltage analog design technique for CMOS technology," in Proc. IEEE Southwest Symp. Mixed-Signal Design, Feb. 2000, pp. 113-118.
[5] M. Maymandi-Nejad and M. Sachdev, "1-bit quantiser with rail to rail input range for sub-1V modulators," IEEE Electron. Lett., vol. 39, no. 12, pp. 894-895, Jan. 2003.
[6] B. Murmann et al., "Impact of scaling on analog performance and associated modeling needs," IEEE Trans. Electron Devices, vol. 53, no. 9, pp. 2160-2167, Sep. 2006.
[7] R. Jacob Baker, Harry W. Li, David E. Boyce, CMOS Circuit Design, Layout, and Simulation, IEEE Press Series on Microelectronic Systems, IEEE Press / Prentice Hall of India, Eastern Economy Edition, 2002.
[8] Meena Panchore, R. S. Gamad, "Low power high speed CMOS comparator design using 0.18 µm technology," International Journal of Electronic Engineering Research, vol. 2, no. 1, pp. 71-77, 2010.
[9] M. van Elzakker, A. J. M. van Tuijl, P. F. J. Geraedts, D. Schinkel, E. A. M. Klumperink and B. Nauta, "A 1.9 µW 4.4 fJ/conversion-step 10 b 1 MS/s charge-redistribution ADC," ISSCC Dig. Tech. Papers, pp. 244-245, February 2008.
[10] Heungjun Jeon and Yong-Bin Kim, "A novel low-power, low-offset and high-speed CMOS dynamic latched comparator," IEEE, 2010.
[11] Behzad Razavi, Design of Analog CMOS Integrated Circuits, New York: McGraw-Hill, 2001.
[12] Dinabandhu Nath Mandal, Sanjay Kumar, "High speed comparators for analog-to-digital converters," IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE), e-ISSN: 2278-1676, p-ISSN: 2320-3331, vol. 9, no. 2, ver. III (Mar-Apr 2014), pp. 56-61.





A Novel Blind Hybrid SVD and DCT Based Watermarking Schemes
Samiksha Soni¹, Manisha Sharma¹
¹Bhilai Institute of Technology, Durg, Chhattisgarh
Email: samiksha.soni786@gmail.com

ABSTRACT — In recent years SVD has gained wide importance in the field of digital watermarking. In this paper the fundamentals of SVD- and quantization-based watermarking algorithms are discussed and a modified hybrid algorithm is proposed. A cascade combination of DCT and SVD is applied to design a robust watermarking system, exploiting the features of both DCT and SVD. We implemented the algorithm in three variants, which differ in the embedding procedure for watermark bit '1'. Simulation results show that a minor change in the embedding formula has a significant impact on the robustness of the system. To check the robustness of the proposed work it is subjected to a variety of attacks, and robustness is measured in terms of normalized correlation and bit error rate.
Keywords— DCT, SVD, watermarking, quantization, embedding, extraction, singular value, diagonal, orthogonal.
INTRODUCTION
In today's era, the Internet has transformed the way we access information and share our ideas. The Internet provides an excellent means for sharing digital multimedia objects: it is inexpensive, eliminates warehousing and delivery, and is almost instantaneous. But with the advent of information technology comes the threat of duplication and the problem of authenticating multimedia data. Watermarking is a branch of information hiding used to embed proprietary information in digital multimedia. The conceptual model [1] of a watermarking system is explained in Fig. 1 and Fig. 2 and comprises two basic modules, an embedding module and an extraction module. The original image acts as the carrier which is to be secured. The watermark embedding module embeds a secondary signal into the original image; this secondary signal, which provides the sense of ownership or authenticity, is called the watermark. An optional key is used to enhance the security of the system. The extraction module estimates the hidden secondary signal from the received image, with the help of the key and, if required, the original image. Channel noise or illegitimate access may degrade the quality of the watermarked image during transmission, but the embedding system should be strong enough that no manipulation can detach the watermark from its cover except by the authentic user.



Fig. 1 Watermark Embedding Module




Fig. 2 Watermark Extraction Module



An effective watermarking scheme [2] should satisfy the following basic requirements:
- Transparency: the watermark embedded in the original signal should not be perceivable by the human eye, and the watermark should not distort the media being protected.
- Security: a watermarking scheme should ensure that no one can generate bogus watermarks, and should provide reliable evidence to protect rightful ownership.
- Robustness: the watermark should survive various attacks such as filtering, geometric transformations, noise addition, etc.

Image watermarking techniques proposed so far can be broadly categorized by how the watermark is embedded. The first category comprises spatial-domain techniques [3], which add the digital watermark directly to the image according to a certain algorithm. The second category comprises transform-domain techniques, which embed the watermark into a transformed image [4-6]. The former techniques have simpler algorithms and faster computation, but weaker robustness; the latter have better robustness and are resilient to image compression, common filtering and noise, but are slower to compute. Because of their better robustness, transform-domain techniques have gradually come to dominate digital watermarking development and research.
In recent years, singular value decomposition (SVD) based watermarking techniques and their variations have been proposed. SVD is a mathematical technique used to extract algebraic features from an image. The core idea behind SVD-based approaches is to apply the SVD to the whole cover image or, alternatively, to small blocks of it, and then modify the singular values to embed the watermark. Gorodetski et al. [7] proposed a simple SVD-domain watermarking scheme that embeds the watermark into the singular values of the image to achieve better transparency and robustness; the method is not image-adaptive, however, and fails to maintain transparency across different images. Liu et al. [8] presented a scheme where a watermark is added to the singular value matrix of the image in the spatial domain. This scheme offers good robustness against manipulation for protecting rightful ownership; but since it is designed for rightful-ownership protection, where robustness against manipulation is desired, it is not suited to authentication. Makhloghi et al. [9] present a blind robust digital image watermarking scheme based on singular value decomposition and the discrete wavelet transform, in which the wavelet coefficients of the host image are modified by inserting bits of the singular values of the watermark image.

In [10] a digital image watermarking scheme based on singular value decomposition using a genetic algorithm (GA) is proposed. The scheme optimizes the quantization step size with the GA to improve the quality of the watermarked image and the robustness of the watermark. The method of Zhu et al. [11] can handle rectangular matrices directly and extract better-quality watermarks; it takes little time to embed and extract the watermark in large images, and avoids disadvantages such as the distortion caused by computing errors when extracting the watermark in the diagonal direction. Modaghegh et al. [12] proposed an adjustable watermarking method based on SVD whose parameters are tuned by a GA in consideration of image complexity and attack resistance; by changing the fitness function, the method can be converted to a robust, fragile or semi-fragile type. Abdulfetah et al. [13] proposed a robust quantization-based digital image watermarking scheme for copyright protection in the DCT-SVD domain. The watermark is embedded by applying quantization index modulation to the largest singular values of image blocks in the DCT domain; to avoid visual degradation, they designed an adaptive quantization model based on block statistics of the image.

Horng et al. [14] proposed an efficient blind watermarking scheme for e-government document images through a combination of the discrete cosine transform (DCT) and the singular value decomposition (SVD) based on a genetic algorithm (GA). DCT, in this case, is applied to the entire image and mapped in a zigzag manner to four areas from the lowest to the highest frequencies. SVD is then applied in each area, and the singular values of the DCT-transformed host image are modified in each area with a quantizing value tuned by the GA to increase visual quality and robustness. The host image is not needed for watermark extraction, which makes the scheme more useful than non-blind ones in real-world applications.
SVD BASED WATERMARKING ALGORITHM
Sun et al. [15] proposed an SVD- and quantization-based watermarking scheme that exploits the properties of the diagonal matrix: the largest coefficient of the diagonal matrix of each block is selected and modified by means of quantization to embed the watermark. The inverse SVD transformation is then performed to reconstruct the watermarked image. Because the largest coefficients of the diagonal matrix can survive general image processing, the embedded watermark is not greatly affected; and since the quality of the watermarked image is determined by the quantization step, that quality can be maintained. To extract the watermark, the SVD transformation is applied again and the largest coefficients in the S components are examined; after that, the watermark is extracted.

The watermark embedding and extracting procedures can be described as follows.
- Watermark embedding procedure
In the first step, partition the host image into blocks. In the second step, perform the SVD transformation on each block. In the third step, extract the largest coefficient Si(1,1) from each S component and quantize it using a predefined quantization coefficient Q:

    Yi = Si(1,1) mod Q

In the fourth step, embed the watermark bit as follows.

When Wi = 0:
    if Yi < 3Q/4, then S'i(1,1) = Si(1,1) + Q/4 - Yi
    else S'i(1,1) = Si(1,1) + 5Q/4 - Yi

When Wi = 1:
    if Yi < 3Q/4, then S'i(1,1) = Si(1,1) - Q/4 + Yi
    else S'i(1,1) = Si(1,1) + 3Q/4 - Yi

In step five, perform the inverse SVD transformation with the modified S matrix and the U, V matrices of the original image to reconstruct the watermarked image.
- Watermark extraction procedure
In the first step, partition the watermarked image into blocks. In the second step, perform the SVD transformation. In the third step, extract the largest coefficient S'(1,1) from each S component and quantize it using the predefined quantization coefficient Q: let Z = S'(1,1) mod Q. In the fourth step, check Z: if Z < Q/2, the extracted watermark bit is 0; otherwise, it is 1.

In the proposed work we implemented three variants of quantization-based blind embedding [] which differ only slightly from one another: the difference lies in the embedding step for watermark bit '1'. This minor difference creates a significant change in robustness.
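To make the mod-Q rule concrete, here is a minimal Python sketch of one singular-value embed/extract round trip. It implements the variant in which bit 0 always leaves a residue of Q/4 and bit 1 a residue of 3Q/4 (the distortion-minimizing form used as the first embedding procedure below); the function names are ours, not from the paper.

    def embed_bit(s, bit, Q):
        """Shift the largest singular value s so that (s mod Q) encodes `bit`."""
        y = s % Q
        if bit == 0:                      # target residue Q/4
            return s + (Q/4 - y) if y < 3*Q/4 else s + (5*Q/4 - y)
        else:                             # target residue 3Q/4
            return s - (Q/4 + y) if y < Q/4 else s + (3*Q/4 - y)

    def extract_bit(s, Q):
        """Blind extraction: only Q is needed, not the original image."""
        return 0 if (s % Q) < Q/2 else 1

    Q = 16.0
    for s in (57.0, 100.3, 113.0):
        for bit in (0, 1):
            assert extract_bit(embed_bit(s, bit, Q), Q) == bit
    print("round trip OK for all test values")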
PROPOSED SCHEME
In the proposed work we modify the existing method by cascading it with the DCT. The DCT is performed on the original image to obtain its frequency components, the DCT components are reordered in a zigzag manner, a blockwise SVD operation is performed on the scanned DCT coefficients, and the watermark is embedded into the largest singular value of each block.

- Watermark embedding procedure:
In the first step, convert the original color image to gray scale. Then apply the 2-D DCT to the gray-scale image and perform the zigzag scanning operation on the DCT coefficients, as shown in Eq. (1) and Eq. (2). Let the gray-scale image be A:

    Ad = DCT2(A)            (1)
    Zd = Zigzag(Ad)         (2)

In the next step a two-dimensional matrix is formed from the zigzag-scanned vector:

    M = Con2_matrix(Zd)     (3)
Matrix M is then partitioned into smaller blocks according to the payload size, m1, m2, ..., mn = divi(M), where n equals the watermark length; then, using Eq. (4), the SVD operation is performed on these blocks:

    Ui Si Vi = svd(mi)      (4)

where i = 1, 2, 3, ..., n.

After applying the DCT and SVD operations to the original image, the binary watermark is inserted as follows. Modify the largest singular value of each block as

    Yi = Si(1,1) mod Q

where Q is a predefined quantizing value. Q must be selected to suit the image, both to obtain maximum resistance to attacks and to obtain minimum perceptibility.

- First Embedding Procedure:
When Wi = 0:
    if Yi < 3Q/4, then S'i(1,1) = Si(1,1) + Q/4 - Yi
    else S'i(1,1) = Si(1,1) + 5Q/4 - Yi
When Wi = 1:
    if Yi < Q/4, then S'i(1,1) = Si(1,1) - Q/4 - Yi
    else S'i(1,1) = Si(1,1) + 3Q/4 - Yi
- Second Embedding Procedure:
When Wi = 0:
    if Yi < 3Q/4, then S'i(1,1) = Si(1,1) + Q/4 - Yi
    else S'i(1,1) = Si(1,1) + 5Q/4 - Yi
When Wi = 1:
    if Yi < 3Q/4, then S'i(1,1) = Si(1,1) - Q/4 + Yi
    else S'i(1,1) = Si(1,1) + 3Q/4 - Yi
- Third Embedding Procedure:
When Wi = 0:
    if Yi < 3Q/4, then S'i(1,1) = Si(1,1) + Q/4 - Yi
    else S'i(1,1) = Si(1,1) + 5Q/4 - Yi
When Wi = 1:
    if Yi < 3Q/4, then S'i(1,1) = Si(1,1) - Q/4 + Yi
    else S'i(1,1) = Si(1,1) + 3Q/4 + Yi

The next step is to perform the inverse SVD operation on the blocks to obtain the modified DCT coefficients, m'i = ISVD(Ui S'i Vi); the smaller blocks are recombined by M' = merg(m'1, m'2, ..., m'n), after which the inverse zigzag operation is performed on M' to map the DCT coefficients back to their positions, A'd = IZigzag(M'). The last step is to perform the inverse DCT operation on A'd to obtain the watermarked image A'.
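Under the assumptions that divi/merg denote a regular square tiling, that the payload is a square number of bits, and that Zigzag/IZigzag are mutually inverse scans, the whole embedding chain of Eqs. (1)-(4) can be sketched in a few lines of Python (NumPy/SciPy). It reuses embed_bit from the earlier sketch; the zigzag direction convention and the helper names are ours, and only consistency between the forward and inverse scans matters.

    import numpy as np
    from scipy.fft import dctn, idctn

    def zigzag_indices(n):
        """(row, col) visit order of an n x n zigzag scan (one fixed convention)."""
        cells = [(r, c) for r in range(n) for c in range(n)]
        cells.sort(key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))
        return np.array(cells)

    def embed(image, bits, Q=16.0):
        n = image.shape[0]
        idx = zigzag_indices(n)
        Ad = dctn(image, norm="ortho")                   # Eq. (1)
        M = Ad[idx[:, 0], idx[:, 1]].reshape(n, n)       # Eqs. (2)-(3)
        b = n // int(np.sqrt(len(bits)))                 # block side from payload
        for k, bit in enumerate(bits):
            r, c = divmod(k, n // b)
            U, S, Vt = np.linalg.svd(M[r*b:(r+1)*b, c*b:(c+1)*b])   # Eq. (4)
            S[0] = embed_bit(S[0], bit, Q)               # rule from the sketch above
            M[r*b:(r+1)*b, c*b:(c+1)*b] = (U * S) @ Vt   # inverse SVD
        Ad2 = np.empty_like(Ad)
        Ad2[idx[:, 0], idx[:, 1]] = M.ravel()            # inverse zigzag
        return idctn(Ad2, norm="ortho")                  # inverse DCT -> watermarked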



- Watermark extraction procedure
The first step of the watermark-extraction process is to apply the DCT to the watermarked image, as shown in Eq. (5):

    A'dr = DCT2(A')           (5)

In step two, using Eq. (6), scan the DCT coefficients in the zigzag manner:

    Zdr = Zigzag(A'dr)        (6)

After that, a two-dimensional matrix is formed from the scanned vector using Eq. (7):

    Mr = Con2_matrix(Zdr)     (7)

In step three, matrix Mr is partitioned into smaller blocks according to the payload size, mr1, mr2, ..., mrn = divi(Mr), where n equals the watermark length; the SVD operation is then performed on these blocks as shown in Eq. (8):

    Uri Sri Vri = svd(mri)    (8)

where i = 1, 2, 3, ..., n. In step four, take the largest singular value from each block and extract the watermark:

    Yri = Sri(1,1) mod Q

- Extraction mechanism for the first and second embedding procedures:
If Yri < Q/2 then Wri = 0, else Wri = 1; these extracted bit values are used to construct the extracted watermark.
- Extraction mechanism for the third procedure:
If Yri <= Q/2 then Wri = 0, else Wri = 1; these extracted bit values are used to construct the extracted watermark.
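Mirroring the embedding sketch given earlier, blind extraction can be sketched as follows. It reuses zigzag_indices and extract_bit from the previous sketches, and extract_watermark is our illustrative name, not the paper's.

    # Blind extraction: DCT -> zigzag -> block SVD -> mod-Q decision (Eqs. (5)-(8)).
    def extract_watermark(img, nbits, Q=16.0):
        n = img.shape[0]
        idx = zigzag_indices(n)
        Mr = dctn(img, norm="ortho")[idx[:, 0], idx[:, 1]].reshape(n, n)
        b = n // int(np.sqrt(nbits))
        bits = []
        for k in range(nbits):
            r, c = divmod(k, n // b)
            S = np.linalg.svd(Mr[r*b:(r+1)*b, c*b:(c+1)*b], compute_uv=False)
            bits.append(extract_bit(S[0], Q))   # first/second-procedure rule
        return np.array(bits)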
EXPERIMENTAL RESULTS
To verify the performance of the proposed watermarking algorithm, the MATLAB platform is used and a number of experiments are performed on different images of size 512x512 and binary logos of size 64x64. Here we provide comparative results for the host image Lena and the binary logo shown in Fig. 3(a) and Fig. 3(b). The watermarks extracted by the three procedures are shown in Fig. 4(a) (first procedure), Fig. 4(b) (second procedure) and Fig. 4(c) (third procedure). The watermarked image quality is measured using the PSNR (peak signal-to-noise ratio) given by Eq. (9). To verify the presence of the watermark, two parametric measures are used to show the similarity between the original watermark and the extracted watermark: normalized correlation and bit error rate, given by Eq. (10) and Eq. (11).

    PSNR = 10 log10 [ Σi Σj (A'(i,j))² / Σi Σj (A(i,j) - A'(i,j))² ]                     (9)

    NC = Σi Σj (w(i,j) - wmean)(w'(i,j) - w'mean)
         / sqrt[ Σi Σj (w'(i,j) - w'mean)² · Σi Σj (w(i,j) - wmean)² ]                   (10)

    BER = [ Σi Σj w(i,j) ⊕ w'(i,j) ] / (N x N)                                           (11)

where w(i,j) is the original watermark image, w'(i,j) is the extracted watermark, and the sums run over i, j = 1, ..., N.
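The three measures translate directly into NumPy; the sketch below transcribes Eqs. (9)-(11) as stated above (note that Eq. (9) as written uses the watermarked-image energy in its numerator). The function names are ours.

    import numpy as np

    def psnr(A, Aw):
        """Eq. (9): 10*log10( sum(Aw^2) / sum((A - Aw)^2) )."""
        A, Aw = A.astype(float), Aw.astype(float)
        return 10 * np.log10((Aw**2).sum() / ((A - Aw)**2).sum())

    def nc(w, we):
        """Eq. (10): normalized correlation between original and extracted marks."""
        a = w.astype(float) - w.mean()
        b = we.astype(float) - we.mean()
        return (a * b).sum() / np.sqrt((b**2).sum() * (a**2).sum())

    def ber(w, we):
        """Eq. (11): fraction of watermark bits that differ (XOR count / N*N)."""
        return np.logical_xor(w, we).sum() / w.size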


(a)          (b)
Fig. 3 Host image and watermark image

(a)          (b)          (c)
Fig. 4 Extracted watermarks

In order to check the robustness of the proposed watermarking scheme, the watermarked image is subjected to a variety of attacks, namely average and median filtering, Gaussian noise, random noise, JPEG compression, cropping, resizing, rotation and blur. After these attacks on the watermarked image, the extracted logo is compared with the original one.
- Filtering
The most common manipulation of digital images is filtering. Here the watermarked image is attacked by applying mean (3x3), median (3x3) and Gaussian low-pass (5x5) filters.
- Addition of noise
Noise addition to the watermarked image is another way of checking the robustness of the system. Noise addition degrades and distorts the image, which affects the quality of the extracted watermark. Here robustness is checked against salt-and-pepper noise and random noise.
- JPEG compression
Another common manipulation of digital images is compression. To check robustness against image compression, the watermarked image is tested with JPEG100 and JPEG2000 compression attacks.
- Cropping and resizing
Cropping is the process of selecting and removing a portion of an image to create focus or strengthen its composition; it is done by either hiding or deleting rows or columns. In the proposed work three variants of cropping are performed: row-column blanking, row-column copying, and cropping 25% of the area (right bottom corner). To fit an image to a desired size, enlargement or reduction is commonly performed, resulting in information loss including loss of the embedded watermark. For this attack, the watermarked image is first reduced to 256x256 and then brought back to its original size 512x512.


- Rotation
In this work the watermarked image is subjected to very minor rotations of 0.2 and 0.3 degrees and results are obtained. When a rotation of larger degree is applied, the watermark fails to resist the attack; however, if the effect of the rotation is reverted in some way, the watermark can be successfully extracted.
- General image processing attacks
We employed motion blur with pixel length 3 and angle 45° on the watermarked image to check its robustness.
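Robustness testing then amounts to attacking the watermarked image and re-scoring the recovered mark. The fragment below applies the 3x3 median-filter attack (one of the attacks listed above) with scipy.ndimage.median_filter and reuses extract_watermark, nc and ber from the earlier sketches; the driver function is our own illustration.

    from scipy.ndimage import median_filter

    def robustness_under_median(watermarked, mark_bits, Q=16.0):
        """Attack with a 3x3 median filter, then re-extract and score blindly."""
        attacked = median_filter(watermarked, size=3)
        recovered = extract_watermark(attacked, mark_bits.size, Q)
        return nc(mark_bits, recovered), ber(mark_bits, recovered)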
TABLE I
NORMALIZED CORRELATION VALUES OF THE THREE IMPLEMENTED SCHEMES

Types of attack        | First Embedding | Second Embedding | Third Embedding
                       | Procedure       | Procedure        | Procedure
Without attack         | 0.9927          | 0.8956           | 0.5352
Random noise           | 0.5930          | 0.4070           | 0.3028
Low-pass filtering     | 0.5218          | 0.3854           | 0.2160
Rotation               | 0.6316          | 0.4624           | 0.2951
Blurred                | 0.6831          | 0.5229           | 0.3064
Average filtering      | 0.6004          | 0.4499           | 0.2630
Median filtering       | 0.7333          | 0.5805           | 0.3437
Crop                   | 0.7396          | 0.5904           | 0.1906
JPEG 100               | 0.9546          | 0.7606           | 0.4903
JPEG2000               | 0.9912          | 0.9004           | 0.5340
Salt & pepper          | 0.7439          | 0.5786           | 0.3917
Row-column blanking    | 0.7550          | 0.6306           | 0.4232
Row-column copying     | 0.7984          | 0.7535           | 0.4320
Resizing               | 0.8328          | 0.5130           | 0.4185
TABLE II
PSNR VALUES (dB) OF THE THREE IMPLEMENTED SCHEMES

Types of attack        | First Embedding | Second Embedding | Third Embedding
                       | Procedure       | Procedure        | Procedure
Without attack         | 47.5090         | 47.5671          | 38.8217
Random noise           | 33.9449         | 33.9422          | 32.8873
Low-pass filtering     | 33.6067         | 32.5978          | 32.1208
Rotation               | 37.4609         | 37.4382          | 35.5961
Blurred                | 35.4630         | 35.4561          | 34.3489
Average filtering      | 32.7986         | 32.7889          | 32.2431
Median filtering       | 35.9154         | 35.9007          | 34.6384
Crop                   | 11.4074         | 11.8670          | 11.3010
JPEG 100               | 44.0591         | 44.3251          | 38.2187
JPEG2000               | 46.3709         | 47.2910          | 38.5917
Salt & pepper          | 32.0481         | 32.1717          | 31.4888
Row-column blanking    | 24.1098         | 26.2981          | 23.9875
Row-column copying     | 28.9098         | 33.3551          | 27.0656
Resizing               | 34.5344         | 37.9656          | 33.5479



TABLE III
BER VALUES OF THE THREE IMPLEMENTED SCHEMES

Types of attack        | First Embedding | Second Embedding | Third Embedding
                       | Procedure       | Procedure        | Procedure
Without attack         | 0.0037          | 0.0510           | 0.4304
Random noise           | 0.1848          | 0.2253           | 0.4614
Low-pass filtering     | 0.2261          | 0.2607           | 0.5028
Rotation               | 0.1768          | 0.2146           | 0.4695
Blurred                | 0.1528          | 0.1868           | 0.4983
Average filtering      | 0.1951          | 0.2275           | 0.5029
Median filtering       | 0.1328          | 0.1599           | 0.4870
Crop                   | 0.2554          | 0.1406           | 0.5012
JPEG 100               | 0.0225          | 0.0896           | 0.4579
JPEG2000               | 0.0044          | 0.0496           | 0.4255
Salt & pepper          | 0.1240          | 0.1587           | 0.4475
Row-column blanking    | 0.1277          | 0.1365           | 0.4412
Row-column copying     | 0.1030          | 0.0923           | 0.4380
Resizing               | 0.0840          | 0.2795           | 0.4882
Conclusion
In this paper three variants of a quantization-based blind watermarking scheme are discussed. Experimental results show that the performance of the first embedding procedure is the best in terms of NC, PSNR and BER. The proposed technique shows resilience against a variety of attacks, but it fails to withstand histogram equalization, contrast enhancement and rotations of larger degree. The embedding procedure for inserting watermark bit '0' is common to all three procedures and the variation exists only in the insertion of watermark bit '1'; this variation has a significant impact on watermark retrieval, as is clearly identified by the NC, BER and PSNR of the three embedding procedures shown in Tables I, II and III.
REFERENCES:
[1] C. I. Podilchuk and E. J. Delp, "Digital watermarking: algorithms and applications," IEEE Signal Process. Magazine, pp. 33-46, July 2001.
[2] Fernando Pérez-González and Juan R. Hernández, "A tutorial on digital watermarking," IEEE, 1999.
[3] Dipti Prasad Mukherjee, Subhamoy Maitra, Scott T. Acton, "Spatial domain digital watermarking of multimedia objects for buyer authentication," IEEE Transactions on Multimedia, vol. 6, no. 1, February 2004.
[4] J. R. Hernández, M. Amado, and F. Pérez-González, "DCT-domain watermarking techniques for still images: detector performance analysis and a new structure," IEEE Trans. Image Process., vol. 9, pp. 55-68, Jan. 2000.
[5] I. J. Cox, J. Kilian, T. Leighton, and T. Shamoon, "Secure spread spectrum watermarking for multimedia," IEEE Trans. Image Processing, vol. 6, pp. 1673-1687, Dec. 1997.
[6] P. Meerwald, "Digital watermarking in the wavelet transform domain," Master's thesis, Dept. Sci. Comput., Univ. Salzburg, Austria, 2001.
[7] V. I. Gorodetski, L. J. Popyack, and V. Samoilov, "SVD-based approach to transparent embedding data into digital images," in Proc. International Workshop MMM-ACNS, St. Petersburg, Russia, pp. 263-274, May 2001.
[8] R. Liu and T. Tan, "A SVD-based watermarking scheme for protecting rightful ownership," IEEE Trans. on Multimedia, vol. 4, pp. 121-128, March 2002.
[9] M. Makhloghi, F. Akhlaghian, H. Danyali, "Robust digital image watermarking using singular value decomposition," in IEEE International Symposium on Signal Process. and Information Technology, pp. 219-224, 2010.
[10] B. Jagadeesh, S. Srinivas Kumar, K. Raja Rajeswari, "Image watermarking scheme using singular value decomposition, quantization and genetic algorithm," International Conf. on Signal Acquisition and Process., IEEE Computer Society, pp. 120-124, 2010.
[11] Xinzhong Zhu, Jianmin Zhao and Huiying Xu, "A digital watermarking algorithm and implementation based on improved SVD," The 18th International Conf. on Pattern Recognition, 2006.
[12] H. Modaghegh, R. H. Khosravi, T. Akbarzadeh, "A new adjustable blind watermarking based on GA and SVD," Proceeding of International Conf. on Innovations in Information Technology, pp. 6-10, 2009.
[13] A. Abdulfetah, X. Sun and H. Yang, "Quantization based robust image watermarking in DCT-SVD domain," Research Journal of Information Technology, vol. 1, pp. 107-114, 2009.
[14] Shi-Jinn Horng, Didi Rosiyadi, Tianrui Li, Terano Takao, Minyi Guo, Muhammad Khurram Khan, "A blind image copyright protection scheme for e-government," Pattern Recognition Letters, pp. 1099-1105, 2013.
[15] Sun, R., Sun, H., Yao, T., "A SVD- and quantization-based semi-fragile watermarking technique for image authentication," Proc. IEEE International Conf. Signal Process., pp. 1592-1595, 2002.






















Low Power Design of Pre Computation-Based Content-Addressable Memory
SK. Khamuruddeen¹, S.V. Devika¹, V. Rajath², Vidhan Vikram Varma²
¹Associate Professor, Department of ECE, HITAM, Hyderabad, India
²Research Scholar (B.Tech), Department of ECE, HITAM, Hyderabad, India

ABSTRACT - Content-addressable memory (CAM) is a special type of computer memory used in certain very high speed searching applications. It is also known as associative memory, associative storage, or associative array. CAM is frequently used in applications such as lookup tables, databases, associative computing, and networking that require high-speed searches, due to its ability to improve application performance by using parallel comparison to reduce search time. Although the use of parallel comparison results in reduced search time, it also significantly increases power consumption. In this paper, we propose a Block-XOR approach to improve the efficiency of the low-power precomputation-based CAM (PB-CAM). Compared with the ones-count PB-CAM system, the experimental results show that our proposed approach achieves on average a 30% power reduction and a 32% power-performance reduction. The major contribution of this paper is that it presents practical proofs to verify that the proposed Block-XOR PB-CAM system can achieve greater power reduction without the need for a special CAM cell design. This implies that our approach is more flexible and adaptive for general designs.

Keywords— Content-addressable memory, Block-XOR, precomputation-based CAM

I. INTRODUCTION

1.1 Existing System:
A CAM is a functional memory with a large amount of stored data that compares the input search data with the stored data. Once
matching data are found, their addresses are returned as output. The vast number of comparison operations required by CAMs
consumes a large amount of power.
1.2 Proposed System:
The proposed approach can reduce the number of comparison operations by a minimum of 909 and a maximum of 2339 relative to the ones-count approach. We propose a new parameter extractor, called Block-XOR, which achieves this requirement.
II. CAM OVERVIEW

Content addressable memory (CAM) compares input search data against a table of stored data, and returns the address of the
matching data [1]–[5]. CAMs have a single clock cycle throughput making them faster than other hardware- and software-based
search systems. CAMs can be used in a wide variety of applications requiring high search speeds, and a CAM is a good choice for implementing lookup operations due to its fast search capability.
However, the speed of a CAM comes at the cost of increased silicon area and power consumption, two design parameters that designers strive to reduce. As CAM applications grow, demanding larger CAM sizes, the power problem is further exacerbated. Reducing power consumption, without sacrificing speed or area, is the main thread of recent research in large-capacity CAMs. Development in the CAM area has been surveyed at two levels: the circuit level and the architecture level. We can compare CAM to the inverse of
RAM. When read, RAM produces the data for a given address. Conversely, CAM produces an address for a given data word. When
searching for data within a RAM block, the search is performed serially. Thus, finding a particular data word can take many cycles.
CAM searches all addresses in parallel and produces the address storing a particular word. CAM supports writing "don't care" bits into
words of the memory. The don't care bit can be used as a mask for CAM comparisons; any bit set to don't care has no effect on
matches.
The output of the CAM can be encoded or unencoded. The encoded output is better suited for designs that ensure duplicate data are never written into the CAM; if duplicate data are written into two locations, the CAM's encoded output will not be correct. If the CAM contains duplicate data, the unencoded output is a better solution, since a CAM with unencoded outputs can distinguish multiple data
locations. We can pre-load the CAM with data during configuration, or we can write into the CAM during system operation. In most cases, two clock cycles are required to write each word into the CAM. When don't care bits are used, a third clock cycle is required.
2.1 Operation of CAM:

Fig. 1 Conceptual view of a content-addressable memory containing w words
Fig.1 shows a simplified block diagram of a CAM. The input to the system is the search word that is broadcast onto the
search lines to the table of stored data. The number of bits in a CAM word is usually large, with existing implementations ranging
from 36 to 144 bits. A typical CAM employs a table size ranging from a few hundred to 32K entries, corresponding to an address space ranging from 7 to 15 bits.
Each stored word has a match line that indicates whether the search word and stored word are identical (the match case) or
are different (a mismatch case, or miss). The match lines are fed to an encoder that generates a binary match location corresponding to
the match line that is in the match state. An encoder is used in systems where only a single match is expected.
In addition, there is often a hit signal (not shown in the figure) that flags the case in which there is no matching location in the
CAM. The overall function of a CAM is to take a search word and return the matching memory location. One can think of this
operation as a fully programmable arbitrary mapping of the large space of the input search word to the smaller space of the output
match location. The operation of a CAM is like that of the tag portion of a fully associative cache. The tag portion of a cache
compares its input, which is an address, to all addresses stored in the tag memory. In the case of a match, a single match line goes high, indicating the location of the match. Many circuits are common to both CAMs and caches; however, we focus here on large-capacity CAMs rather than on fully associative caches, which target smaller capacity and higher speed.
Today's largest commercially available single-chip CAMs are 18 Mbit implementations, although the largest CAMs reported in the literature are 9 Mbit in size. As a rule of thumb, the largest available CAM chip is usually about half the size of the largest available SRAM chip. This rule of thumb comes from the fact that a typical CAM cell consists of two SRAM cells.
2.2 Simple CAM architecture:
Content Addressable Memories (CAMs) are fully associative storage devices. Fixed-length binary words can be stored in any
location in the device. The memory can be queried to determine if a particular word, or key, is stored, and if so, the address at which it
is stored. This search operation is performed in a single clock cycle by a parallel bitwise comparison of the key against all stored
words.

Fig 2. Simple schematic of a model CAM with 4 words having 3 bits each.

We now take a more detailed look at CAM architecture. A small model is shown in Fig. 2. The figure shows a CAM
consisting of 4 words, with each word containing 3 bits arranged horizontally (corresponding to 3 CAM cells). There is a match line
corresponding to each word (ML0, ML1, etc.) feeding into match line sense amplifiers (MLSAs), and there is a differential search line
pair corresponding to each bit of the search word. A CAM search operation begins with loading the search-data word into the search-data registers, followed by precharging all match lines high, putting them all temporarily in the match state.
Next, the search line drivers broadcast the search word onto the differential search lines, and each CAM core cell compares its stored bit against the bit on its corresponding search lines. Match lines on which all bits match remain in the precharged-high state; match lines that have at least one bit that misses discharge to ground. The MLSA then detects whether its match line has a matching condition or a miss condition. Finally, the encoder maps the match line of the matching location to its encoded address.
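As an illustration of the search operation just described, the following short Python sketch models a small CAM behaviorally, including the don't-care masking mentioned earlier. The table contents are hypothetical, and a real CAM performs all of these comparisons in parallel hardware in a single cycle:

def cam_search(table, key, width=3):
    """Compare `key` against every stored word and return all matching
    addresses. `table` maps address -> (word, mask); a 1 bit in `mask`
    marks a "don't care" position that never causes a mismatch."""
    matches = []
    for addr, (word, mask) in table.items():
        # A bit that differs and is not masked out makes the match line
        # "discharge" (a miss); otherwise the line stays precharged high.
        if ((word ^ key) & ~mask & ((1 << width) - 1)) == 0:
            matches.append(addr)
    return matches  # the encoder would reduce this list to one address

# 4-word, 3-bit CAM in the spirit of Fig. 2; entry 2 masks its lowest bit.
table = {0: (0b101, 0b000), 1: (0b010, 0b000),
         2: (0b110, 0b001), 3: (0b101, 0b000)}
print(cam_search(table, 0b101))  # -> [0, 3]
print(cam_search(table, 0b111))  # -> [2]  (don't care absorbs the miss)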
2.3 LOW POWER PB-CAM
Since CAM is frequently used in applications that require high-speed searches, and because its parallel comparison improves application performance by reducing search time, it is attractive; however, parallel comparison also significantly increases power consumption. The main CAM design challenge is therefore to reduce the power consumption associated with the large amount of parallel active circuitry, without sacrificing speed or memory density.
2.3.1 Power saving CAM architecture:
One architectural technique for saving power that applies to binary CAM is precomputation. Precomputation stores some extra information along with each word, which is used in the search operation to save power. These extra bits are derived from the stored word and are used in an initial search before searching the main word. If this initial search fails, the CAM aborts the subsequent search, thus saving power.

2.4 PB-CAM Architecture:

Fig.3 Memory organization of PB-CAM architecture
Fig. 3 shows the memory organization of the PB-CAM architecture, which consists of data memory, parameter memory, and a parameter extractor, where k << n. To reduce the massive number of comparison operations for data searches, the operation is divided into two parts. In the first part, the parameter extractor extracts a parameter from the input data, which is then compared in parallel with the parameters stored in the parameter memory. If no match is returned in the first part, the input data cannot match any stored data associated with those parameters. Otherwise, the data related to the matching stored parameters are compared in the second part. It should be noted that although the first part must access the entire parameter memory, the parameter memory is far smaller than the data memory. Moreover, since the comparisons made in the first part have already filtered out the unmatched data, the second part only needs to compare the data that matched in the first part.
The PB-CAM exploits this characteristic to reduce the number of comparison operations, thereby saving power. The parameter extractor is therefore critical, since it determines the number of comparison operations required in the second part; its design goal is to filter out as many unmatched data as possible, minimizing the comparisons needed in the second part. Two parameter extractors are discussed below (a behavioral sketch of the two-part search follows this paragraph): the ones-count parameter extractor and the Block-XOR parameter extractor.
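A minimal behavioral sketch of the two-part PB-CAM search, written as a hypothetical Python model with the ones-count extractor of Section 2.5 standing in as the parameter extractor:

def ones_count_param(word):
    return bin(word).count("1")          # ones-count parameter extractor

def pbcam_search(data_memory, key):
    key_param = ones_count_param(key)
    # Part 1: compare against the (small) parameter memory in parallel.
    candidates = [a for a, w in enumerate(data_memory)
                  if ones_count_param(w) == key_param]
    # Part 2: full data comparison only for the surviving candidates;
    # len(candidates) is the number of power-hungry comparisons needed.
    hits = [a for a in candidates if data_memory[a] == key]
    return hits, len(candidates)

data = [0b01001101, 0b11110000, 0b00000001, 0b01001101]
print(pbcam_search(data, 0b01001101))   # -> ([0, 3], 3): entry 2 filtered out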


2.5 Ones-count approach:
For the ones-count approach, with an n-bit data length, there are n+1 types of ones count (from 0 ones to n ones). Further, it is necessary to add an extra type of ones count to indicate the availability of stored data. Therefore, the minimal bit length of the parameter is ⌈log₂(n+2)⌉. Fig. 4 shows the conceptual view of the ones-count approach. The extra information holds the number of ones in the stored word. For example, in Fig. 4, when searching for the data word 01001101, the precomputation circuit counts the number of ones (which is four in this case). The number four is compared on the left-hand side to the stored ones counts. Only match lines PML5 and PML7 match, since only they have a ones count of four. In the data-memory stage of Fig. 4, only two comparisons actively consume power, and only match line PML5 results in a match. The 14-bit ones-count parameter extractor is implemented with full adders as shown in Fig. 5.

Fig. 4 Conceptual view of the ones-count approach

2.6 Mathematical Analysis:
For a 14-bit input data length there are 2^14 possible input words, and the number of input data related to the same parameter for the ones-count approach is the binomial coefficient C(14, n) = 14!/(n!(14-n)!), where n is the ones count (from 0 to 14 ones). The average probability that a given parameter occurs can then be computed as

P(n) = C(14, n) / 2^14 (1)
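The counts and probabilities used in this analysis can be reproduced in a few lines of Python; the uniform per-parameter count ⌈2^14/15⌉ = 1093 appearing in the Block-XOR comparison of Section 2.7 is included as well:

from math import comb

N = 14
total = 2 ** N                                   # number of possible words
counts = {n: comb(N, n) for n in range(N + 1)}   # words per ones count
probs = {n: counts[n] / total for n in counts}   # Equation (1)

uniform = -(-total // 15)                # ceil(2^14 / 15) = 1093
heavy = range(5, 10)                     # parameters with >2000 comparisons
print(round(sum(probs[n] for n in heavy), 4))  # -> 0.8204 (the 82% of cases)
print([counts[n] - uniform for n in heavy])    # -> [909, 1910, 2339, 1910, 909]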


Fig. 5 14-bit ones-count parameter extractor
TABLE I
NUMBER OF DATA RELATED TO THE SAME PARAMETERS AND AVERAGE PROBABILITIES FOR THE ONES COUNT
APPROACH

Table I lists the number of data related to the same parameter and their average probabilities for 14-bit input data. For example, if a match occurs in the first part of the comparison with the parameter 2, the maximum number of required comparison operations for the second part is C(14, 2) = 91. With conventional CAMs, the comparison circuit must compare all stored data, whereas with the ones-count PB-CAM a large amount of unmatched data can be filtered out initially, reducing the comparison operations and hence the power consumption in some cases. However, the average probabilities of some parameters, such as 0, 1, 2, 12, 13, and 14, are less than 1%.
In Table I, the parameters with over 2000 comparison operations range between 5 and 9, and the sum of the average probabilities of these parameters is close to 82%. Although the number of comparison operations required by the ones-count PB-CAM is smaller than that of a conventional CAM, the ones-count PB-CAM fails to reduce the number of comparison operations in the second part when the parameter value is between 5 and 9, thereby consuming a large amount of power. From Table I we can also see that random input patterns give the ones-count approach a Gaussian distribution characteristic. This Gaussian distribution limits any further reduction of the comparison operations in PB-CAMs.
2.7 Block-XOR approach:
The key idea behind this method is to reduce the number of comparison operations by eliminating the Gaussian distribution. For 14-bit input data, if we could distribute the input data uniformly over the 15 usable parameter values (the code "1111" is reserved as the valid bit, as explained below), the number of input data related to each parameter would be 2^14/15 ≈ 1093, and the maximum number of required comparison operations in the second part would be 1093 for each case. Compared with the ones-count approach, this reduces the comparison operations by a minimum of 909 and a maximum of 2339 (i.e., for parameter values from 5 to 9) in 82% of the cases. Based on these observations, a new parameter extractor called Block-XOR, shown in Fig. 6, is used to achieve this requirement.


Fig. 6 Concept of the n-bit Block-XOR block diagram.

In this approach, we first partition the input data bits into several blocks, and an output bit is computed from each block using an XOR operation. The output bits are then combined to become the input parameter for the second part of the comparison process. To compare with the ones-count approach, we set the bit length of the parameter to ⌈log₂(n+2)⌉, where n is the bit length of the input data; the number of blocks is therefore ⌈n/⌈log₂(n+2)⌉⌉. Taking the 14-bit input length as an example, the bit length of the parameter is ⌈log₂(14+2)⌉ = 4 bits, and the number of blocks is ⌈14/4⌉ = 4. Accordingly, all the blocks contain 4 bits except the last one, which contains the remaining 2 bits, as shown in the upper part of Fig. 6.
However, the basic Block-XOR approach does not provide a valid bit for checking whether a stored entry is valid; hence it cannot be applied to the PB-CAM directly. For this reason, the modified architecture shown in the lower part of Fig. 6 is used to provide a valid bit and to preserve the uniform distribution property of the Block-XOR approach. A multiplexer is added to select the correct parameter.

Fig. 7: Structure of Block-XOR approach with valid bit.
The select signal is defined as

S = A3·A2·A1·A0 (2)

According to (2), if the parameter is in the range "0000" to "1110" (S = "0"), the multiplexer transmits the i0 input as the output; in other words, the parameter does not change. Otherwise (A3A2A1A0 = "1111", S = "1"), the first block of the input data becomes the new parameter, and "1111" can then be used as the valid-bit code. The case where the first block is itself "1111" need not be considered, because a "1111" block produces a "0" for its parameter bit, so the overall parameter cannot be "1111".
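A behavioral Python sketch of this extractor follows; the LSB-first bit ordering within the word is an illustrative assumption, while the partitioning into 4-, 4-, 4-, and 2-bit blocks matches Fig. 6:

def block_xor_param(data, n=14, k=4):
    """Reduce each block to one bit with XOR, then remap parameter 1111
    so that code can serve as the 'entry invalid' marker."""
    bits = [(data >> i) & 1 for i in range(n)]     # LSB-first bits
    param = 0
    for j, lo in enumerate(range(0, n, k)):        # blocks of 4,4,4,2 bits
        p = 0
        for b in bits[lo:lo + k]:
            p ^= b                                 # XOR tree per block
        param |= p << j
    if param == 0b1111:            # S = A3.A2.A1.A0 = 1: substitute the
        param = data & 0b1111      # first block (it cannot itself be 1111
    return param                   # here, since a 1111 block XORs to 0)

print(bin(block_xor_param(0b01001101000011)))      # -> 0b1010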
2.8 Comparison between Two Approaches:
To eliminate the Gaussian distribution, we distribute the input data uniformly over the parameters. However, as can be seen from Tables III and IV, when the parameter is 0, 1, 2, 3, 4, 10, 11, 12, 13, or 14, the number of comparison operations required by the ones-count approach is smaller than that of the Block-XOR PB-CAM. Although the Block-XOR PB-CAM is better than the ones-count PB-CAM only for parameters between 5 and 9, the probability that these parameters occur is 82%. For example, when the parameter is 7, there is a 20.95% chance that the Block-XOR PB-CAM performs more than 2280 fewer comparison operations than the ones-count approach. Compared with the ones-count approach, we can reduce the number of comparison operations by more than 1000 in most cases. In other words, the ones-count approach is better than the Block-XOR approach in only 18% of the cases.
The number of comparison operations required for different input bit lengths (4, 8, 14, 16, and 32 bits) is shown in Fig. 8. As can be seen from Fig. 8, the Block-XOR PB-CAM becomes more effective in reducing the number of comparison operations as the input bit length increases. This implies that the longer the input bit length, the fewer the comparison operations required (i.e., the greater the power reduction). Therefore, the Block-XOR PB-CAM is more suitable for wide-input CAM applications. In addition, the Block-XOR parameter extractor can compute the parameter bits in parallel with three XOR gate delays for any input bit length, hence a short, constant delay. In contrast, as the input bit length increases, the delay of the ones-count parameter extractor increases significantly.

Fig. 8. Comparison operations for different input bit lengths.
III. Gate-Block Selection Algorithm:
To make the parameter extractor of the Block-XOR PB-CAM more useful for specific data types, we take the different characteristics of logic gates into account and synthesize parameter extractors for different data types. If the number of input bits of each partition block is set to l, the bit length of the parameter (i.e., the number of blocks) will be ⌈n/l⌉, where n is the bit length of the input data, and the number of levels in each partition block equals ⌈log₂ l⌉. We observe that when the number of input bits of each partition block decreases, the mismatch rate and the number of comparison operations in each data comparison decrease (because the number of possible parameter values increases). Although increasing the parameter bit length decreases the mismatch rate and the number of comparison operations, the parameter memory size must also be increased; in other words, it increases the power consumption of the parameter memory as well. As stated earlier, when the PB-CAM performs a data search, it must compare the entire parameter memory. To avoid wasting a large amount of power in the parameter memory, we set the input of each partition block to 8 bits. Fig. 9 shows the proposed parameter extractor architecture. We first partition the input data bits into several blocks; G0-G6 in each block stand for different logic gates, from which an output bit is computed using the synthesized logic operation for each block. The output bits are then combined to become the parameter for the data comparison process.
The objective of our work is to select the proper logic gates in Fig. 9 so that the parameter (Pk-1, Pk-2, ..., P0) reduces the number of data comparison operations as much as possible.

Fig. 9: n-bit block diagram of the proposed parameter extractor architecture.
In our proposed parameter extractor, the bit length of the parameter is set to ⌈n/8⌉, and the number of levels in each partition block equals ⌈log₂ 8⌉ = 3. Suppose that we use the basic logic gates (AND, OR, XOR, NAND, NOR, and XNOR) to synthesize a parameter extractor for a specific data type; there are (6^7)^⌈n/8⌉ different logic combinations for the proposed parameter extractor (each 8-bit block contains a tree of seven 2-input gates, each chosen from six gate types). Obviously, the optimal combination of the parameter extractor cannot be found in polynomial time.
To synthesize a proper parameter extractor in polynomial time for a specific data type, we propose a gate-block selection algorithm to find an approximately optimal combination. We illustrate how to select proper logic gates to synthesize a parameter extractor for a specific data type using the mathematical analysis below.
3.1 Mathematical Analysis:
For a 2-input logic gate, let p be the probability that the output signal Y is in the one state. The probability mass function of the output signal Y is then

P(Y = y) = p^y (1 - p)^(1-y), y ∈ {0, 1} (3)

Assuming the inputs are independent, if we use a 2-input logic gate as the parameter extractor to generate the parameter for 2-bit data, the average number of comparison operations required by the PB-CAM in each data search can be formulated as

N_avg = (1 - p)·N0 + p·N1 (4)
where N0 is the number of zero entries and N1 is the number of one entries for the generated parameters. To illustrate, we use Table II as an example.

TABLE II


Suppose that a 2-input AND gate is used to generate the parameter. Substituting the output probability of the AND gate and the N0 and N1 values of Table II into Equation (4), the average number of comparison operations in each data search for the PB-CAM works out to 4.33. In other words, when we use a 2-input AND gate to generate the parameter for this 2-bit data, the average number of comparison operations required for each data search in the PB-CAM is 4.33. According to Equation (4), Table II also gives the average number of comparison operations for the six basic logic gates. For this case, the OR and NOR gates are the best selection, because they require the smallest average number of comparison operations (which is 3). Moreover, when we use inverse pairs of logic gates (AND/NAND, OR/NOR, and XOR/XNOR) to generate the parameter, the average number of comparison operations per data search is the same. To reduce the complexity of the proposed algorithm and to improve the performance of the parameter extractor, our approach selects only NAND, NOR, and XOR gates to synthesize the parameter extractor in our implementation, because NAND and NOR are better than AND and OR in terms of area, power, and speed. Based on this mathematical analysis, we propose our gate-block selection algorithm; the gate-evaluation step it builds on is sketched below.
Note that when the input is random, the synthesized result is the same as the Block-XOR approach; in other words, the Block-XOR approach is a subset of the proposed algorithm. To better understand the proposed approach, consider a simple example in which 4-bit data are assigned as the input. Because the input data are only 4 bits in this example, we set the number of input bits of each partition block to 4, and the number of levels in each partition block equals ⌈log₂ 4⌉ = 2.
IV. RESULTS:

Fig. 5.11 VHDL output showing the data write into the CAM


Fig. 5.12 VHDL output showing the data read from the CAM

Fig. 5.13 VHDL output showing the address read from the CAM

V. CONCLUSION:
In this work, a 14-bit low-power precomputation-based content-addressable memory (PB-CAM) was simulated in VHDL. Mathematical analysis and simulation results confirmed that the Block-XOR PB-CAM can effectively save power by reducing the number of comparison operations in the second part of the comparison process. In addition, it takes less area than the ones-count parameter extractor. This PB-CAM takes data as input and returns the address of the matching data exactly one clock cycle later, so it is flexible and adaptive for low-power, high-speed search applications.
In addition, a gate-block selection algorithm was proposed, which can synthesize a proper parameter extractor of the PB-CAM for a specific data type. Mathematical analysis and simulation results confirmed that the proposed PB-CAM effectively saves power by reducing the number of comparison operations in the data comparison process. Furthermore, the proposed parameter extractor computes the parameter bits in parallel with only three logic-gate delays for any input bit length (i.e., a constant search-operation delay).

REFERENCES
[1] K. Pagiamtzis and A. Sheikholeslami, "Content-addressable memory (CAM) circuits and architectures: A tutorial and survey," IEEE J. Solid-State Circuits, vol. 41, no. 3, pp. 712-727, Mar. 2006.
[2] H. Miyatake, M. Tanaka, and Y. Mori, "A design for high-speed low-power CMOS fully parallel content-addressable memory macros," IEEE J. Solid-State Circuits, vol. 36, no. 6, pp. 956-968, Jun. 2001.
[3] I. Arsovski, T. Chandler, and A. Sheikholeslami, "A ternary content-addressable memory (TCAM) based on 4T static storage and including a current-race sensing scheme," IEEE J. Solid-State Circuits, vol. 38, no. 1, pp. 155-158, Jan. 2003.
[4] I. Arsovski and A. Sheikholeslami, "A mismatch-dependent power allocation technique for match-line sensing in content-addressable memories," IEEE J. Solid-State Circuits, vol. 38, no. 11, pp. 1958-1966, Nov. 2003.
[5] Y. J. Chang, S. J. Ruan, and F. Lai, "Design and analysis of low power cache using two-level filter scheme," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 11, no. 4, pp. 568-580, Aug. 2003.
[6] K. Vivekanandarajah, T. Srikanthan, and S. Bhattacharyya, "Dynamic filter cache for low power instruction memory hierarchy," in Proc. Euromicro Symp. Digit. Syst. Des., Sep. 2004, pp. 607-610.
[7] R. Min, W. B. Jone, and Y. Hu, "Location cache: A low-power L2 cache system," in Proc. Int. Symp. Low Power Electron. Des., Apr. 2004, pp. 120-125.
[8] K. Pagiamtzis and A. Sheikholeslami, "Using cache to reduce power in content-addressable memories (CAMs)," in Proc. IEEE Custom Integr. Circuits Conf., Sep. 2005, pp. 369-372.
[9] C. S. Lin, J. C. Chang, and B. D. Liu, "A low-power precomputation-based fully parallel content-addressable memory," IEEE J. Solid-State Circuits, vol. 38, no. 4, pp. 622-654, Apr. 2003.
[10] K. H. Cheng, C. H. Wei, and S. Y. Jiang, "Static divided word matching line for low-power content addressable memory design," in Proc. IEEE Int. Symp. Circuits Syst., May 2004, vol. 2, pp. 23-26.
[11] S. Hanzawa, T. Sakata, K. Kajigaya, R. Takemura, and T. Kawahara, "A large-scale and low-power CAM architecture featuring a one-hot-spot block code for IP-address lookup in a network router," IEEE J. Solid-State Circuits, vol. 40, no. 4, pp. 853-861, Apr. 2005.
[12] Y. Oike, M. Ikeda, and K. Asada, "A high-speed and low-voltage associative co-processor with exact Hamming/Manhattan-distance estimation using word-parallel and hierarchical search architecture," IEEE J. Solid-State Circuits, vol. 39, no. 8, pp. 1383-1387, Aug. 2004.
[13] K. Pagiamtzis and A. Sheikholeslami, "A low-power content-addressable memory (CAM) using pipelined hierarchical search scheme," IEEE J. Solid-State Circuits, vol. 39, no. 9, pp. 1512-1519, Sep. 2004.
[14] D. K. Bhavsar, "A built-in self-test method for write-only content addressable memories," in Proc. 23rd IEEE VLSI Test Symp., 2005, pp. 9-14.

Design of Low Power S-Box at the Architecture Level using GF
N. Shanthini¹, P. Rajasekar¹, Dr. H. Mangalam¹
¹Asst. Professor, Department of ECE, Kathir College of Engg, Coimbatore
E-mail: rajasekarkpr@gmail.com
Abstract - Information security has become an important issue in the modern world, and technology is advancing very fast. Data encryption and decryption methods are widely used for real-time secure communication applications, and for this purpose AES has been proposed. One of the most critical problems in AES is power consumption. This paper presents an optimized composite field arithmetic based S-Box implemented in a four-stage pipeline. We mainly concentrate on the power consumption of the S-Box, which is the most power-consuming block in AES. The construction procedure for implementing a Galois Field (GF) combinational-logic-based S-Box is presented: the S-Box operation is divided into GF-based multiplication and inversion operations and illustrated in a step-by-step manner. The XC2VP30 Xilinx FPGA device is used to validate the power of the VHDL code for the proposed architecture. Power consumption has been measured with the XPower Analyzer tool in the ISE 14.7 design suite.

Keywords - AES, S-Box, composite field arithmetic, GF, pipelining, FPGA, VHDL.
INTRODUCTION
One of the most important things in the modern world is information, because without information we cannot do anything. The evolution of information technology, and in particular the increase in the processing speed and power consumption of devices, has made it necessary to reconsider the cryptographic algorithms used; it is therefore necessary to encrypt and decrypt our information. Encryption hides the original message in a form unreadable to anyone else, while decryption changes the unreadable form back into a readable form for the intended person. A cipher system is a security mechanism to protect information from unauthorized or public access. Cipher systems are usually subdivided into block ciphers and stream ciphers: block ciphers encrypt groups of characters simultaneously, whereas stream ciphers operate on the individual characters of a plaintext message one at a time. There are two types of encryption algorithms, private (symmetric key) and public: a private-key algorithm uses one key for both encryption and decryption, while a public-key algorithm uses two keys, one for encryption and another for decryption. Substitution-permutation networks (SPNs) are natural constructions for symmetric key cryptosystems that realize confusion and diffusion through substitution and permutation operations, respectively. In SPNs the only non-linear operation is the substitution step, commonly referred to as an S(ubstitution)-box; the construction of the S-Box is difficult and important in AES.
Cryptographically strong block ciphers must be resilient to common attacks, including linear and differential cryptanalysis as well as algebraic attacks. Claude Shannon's two properties of confusion and diffusion strengthen a symmetric key cryptosystem: confusion can be defined as the complexity of the relationship between the secret key and the ciphertext, and diffusion as the degree to which the influence of a single input plaintext bit is spread throughout the resulting ciphertext. The National Institute of Standards and Technology of the United States (NIST), in cooperation with industry and the cryptographic community, worked to create a new cryptographic standard, and the symmetric block cipher Rijndael was standardized by NIST as the AES in November 2001. AES is an Advanced Encryption Standard that provides high security compared with other encryption techniques. When introducing the AES, NIST publicly called for nominees for the new standard; in total 15 algorithms were submitted, from which 5 finalists were chosen based on presentation, analysis, and testing, and finally the Rijndael cipher, a symmetric key encryption standard proposed by the two Belgian cryptographers Vincent Rijmen and Joan Daemen, was adopted. In the Advanced Encryption Standard (AES) symmetric-key block cipher, the construction of cryptographically strong S-boxes with efficient hardware and software implementations has become a topic of critical research. The basic difference between the standardized AES and the original Rijndael is that AES fixes the block length to 128 bits and supports key lengths of 128, 192, and 256 bits, whereas the Rijndael block and key lengths can be independently fixed to any multiple of 32 bits, ranging from 128 to 256 bits. In this paper we investigate a design methodology for a low-power S-Box, because the S-Box is the only non-linear operation in AES. The FPGA implementation of the architecture is carried out along with a comparison with some existing systems.
The remainder of this paper is organized as follows: Section II describes the AES operation. The S-Box construction method is described in Section III. Section IV contains the proposed S-Box architecture. The simulation results and conclusions are drawn in Sections V and VI, respectively.

AES Encryption algorithm
Previously, DES was used, but it supports only a 56-bit key. The AES is a symmetric block cipher, which uses the same key for both encryption and decryption. It has been broadly used in different applications, such as smart cards, cellular phones, web servers, and automated teller machines. Similar to other symmetric ciphers, AES applies round operations iteratively to the plaintext to generate the ciphertext. There are four transformations in a round operation: SubBytes, ShiftRows, MixColumns, and AddRoundKey. SubBytes is a non-linear operation in which one byte is substituted for another according to the substitution algorithm in use. In the ShiftRows operation, data are shifted within rows: row 0 is not shifted, row 1 is shifted by 1 byte, and so on. The MixColumns operation performs mixing of data within columns. Key mixing is performed in the AddRoundKey function, where each byte of the state is XORed with the subkey.
The AES process can be defined in three variants based on the length of the key used for generating the ciphertext: AES-128, AES-192, and AES-256. The AES cipher maintains an internal 4-by-4 matrix of bytes called the state, which consists of four rows of Nb bytes each, where Nb is the block length divided by 32 (the corresponding key-length parameter is 4 for a 128-bit key, 6 for a 192-bit key, and 8 for a 256-bit key). The key length also determines the number of rounds: 10 rounds for a 128-bit key, 12 rounds for a 192-bit key, and 14 rounds for a 256-bit key. The last round differs from the previous rounds in that it has no MixColumns transformation. The AES encryption and decryption operations are shown in Fig. 1, and a sketch of two of the round transformations follows the figure.


Fig 1: AES encryption and decryption algorithm
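As a minimal illustration of the round structure described above, the following Python sketch implements ShiftRows and AddRoundKey on a 4x4 byte state (state[row][column]). SubBytes is omitted here because the S-Box construction is the subject of the following sections, and MixColumns is omitted for brevity:

def shift_rows(state):
    """Row r of the state is rotated left by r bytes (row 0 unshifted)."""
    return [[state[r][(c + r) % 4] for c in range(4)] for r in range(4)]

def add_round_key(state, round_key):
    """Every state byte is XORed with the corresponding subkey byte."""
    return [[state[r][c] ^ round_key[r][c] for c in range(4)]
            for r in range(4)]

state = [[r * 4 + c for c in range(4)] for r in range(4)]
print(shift_rows(state))            # row 1 rotated by 1, row 2 by 2, ...
print(add_round_key(state, state))  # XOR with itself -> all-zero state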

S-Box Transformation
The SubBytes transformation is a nonlinear byte substitution that operates independently on each byte of the state using a substitution table (S-box). This S-box is invertible, and it can be constructed using two methods:
1. Look-up table
2. Composite field arithmetic
In the look-up-table method all values are predefined in ROM, so the area, memory access cost, and latency are high. Our method is therefore based on composite field arithmetic, which contains two main operations:
(1) Perform the multiplicative inverse in GF(2^8).
(2) Perform the affine transformation over GF(2).
GF stands for Galois Field. Arithmetic in a finite field (Galois Field) differs from standard integer arithmetic: a finite field contains a limited number of elements. The finite field with p^n elements is denoted GF(p^n), where p is a prime number called the characteristic of the field and n is a positive integer. A particular case is GF(2), which has only two elements (1 and 0), where addition is exclusive OR (XOR) and multiplication is AND. The element 0 is never invertible, while the element 1 is always invertible and is its own inverse. Therefore, the only invertible element in GF(2) is 1, and since the multiplicative inverse of 1 is also 1, division is an identity function.
The individual bits in a byte representing a GF(2^8) element can be viewed as the coefficients of the power terms of a GF(2^8) polynomial. For instance, {10001011}₂ represents the polynomial q^7 + q^3 + q + 1 in GF(2^8). From [2], any arbitrary polynomial can be represented as bx + c, given an irreducible polynomial x^2 + Ax + B. Thus, an element of GF(2^8) may be represented as bx + c, where b is the most significant nibble and c is the least significant nibble. The multiplicative inverse can then be constructed using the equation

(bx + c)^(-1) = b(b^2·B + bc·A + c^2)^(-1)·x + (c + bA)(b^2·B + bc·A + c^2)^(-1)

where A = 1 and B = λ, so that the equation becomes

(bx + c)^(-1) = b(b^2·λ + bc + c^2)^(-1)·x + (c + b)(b^2·λ + bc + c^2)^(-1) (1)
Proposed S-Box Design Method
In this section the multiplicative inverse computation is covered first, and the affine transformation then follows, completing the methodology for constructing the S-Box for the SubBytes operation. For the InvSubBytes operation, the multiplicative inversion module can be reused and combined with the inverse affine transformation. The multiplicative inverse is constructed using Equation (1).

Fig 2: Block diagram of the S-Box
The building blocks of the S-Box are:
δ = isomorphic mapping to the composite field
X^2 = squarer in GF(2^4)
×λ = multiplication with the constant λ in GF(2^4)
⊕ = addition operation in GF(2^4)
X^(-1) = multiplicative inversion in GF(2^4)
× = multiplication operation in GF(2^4)
δ^(-1) = inverse isomorphic mapping to GF(2^8)
Affine Transform
The affine transformation is the second building block of the composite field arithmetic based S-Box. The proposed affine transform and inverse affine transform are as follows:

AT(q)_i = q_i ⊕ q_((i+4) mod 8) ⊕ q_((i+5) mod 8) ⊕ q_((i+6) mod 8) ⊕ q_((i+7) mod 8) ⊕ d_i (2)

where d = {01100011}₂ and i = 0 to 7, and

AT^(-1)(q)_i = q_((i+2) mod 8) ⊕ q_((i+5) mod 8) ⊕ q_((i+7) mod 8) ⊕ d_i (3)

where d = {00000101}₂ and i = 0 to 7.
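Equations (2) and (3) can be checked directly in software. The following Python sketch implements both and reproduces the well-known value S-Box(0x00) = 0x63 (the multiplicative inverse of 0x00 is defined as 0x00, so only the affine step acts on it):

def affine(q, taps, d):
    """Bitwise affine map: out_i = d_i XOR q_((i+t) mod 8) over the taps t."""
    out = 0
    for i in range(8):
        bit = (d >> i) & 1
        for t in taps:
            bit ^= (q >> ((i + t) % 8)) & 1
        out |= bit << i
    return out

def aes_affine(q):       # Equation (2): taps 0,4,5,6,7 and d = {01100011}
    return affine(q, (0, 4, 5, 6, 7), 0b01100011)

def aes_inv_affine(q):   # Equation (3): taps 2,5,7 and d = {00000101}
    return affine(q, (2, 5, 7), 0b00000101)

print(hex(aes_affine(0x00)))                  # -> 0x63 = S-Box(0x00)
print(hex(aes_inv_affine(aes_affine(0xca))))  # -> 0xca (round trip)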
Isomorphic and Inverse Isomorphic Mapping
Computation of the multiplicative inverse in composite fields cannot be applied directly to an element of GF(2^8), so the more complex GF(2^8) must be decomposed into the lower-order fields GF(2), GF(2^2), GF((2^2)^2), and GF(((2^2)^2)^2). To accomplish this, the following irreducible polynomials are used:

GF(2^2) over GF(2): x^2 + x + 1
GF((2^2)^2) over GF(2^2): x^2 + x + φ
GF(((2^2)^2)^2) over GF((2^2)^2): x^2 + x + λ

where φ = {10}₂ and λ = {1100}₂.
An element of GF(2^8) is mapped to its composite field representation via the isomorphic function δ. After performing the multiplicative inversion, the result must be mapped back to its equivalent in GF(2^8) via the inverse isomorphic function δ^(-1). Let q be an element of GF(2^8), with q7 the most significant bit and q0 the least significant bit; δ and δ^(-1) can then be represented as 8x8 matrices, whose bit equations are:

δ × q:
q7' = q7 ⊕ q5
q6' = q7 ⊕ q6 ⊕ q4 ⊕ q3 ⊕ q2 ⊕ q1
q5' = q7 ⊕ q5 ⊕ q3 ⊕ q2
q4' = q7 ⊕ q5 ⊕ q3 ⊕ q2 ⊕ q1
q3' = q7 ⊕ q6 ⊕ q2 ⊕ q1
q2' = q7 ⊕ q4 ⊕ q3 ⊕ q2 ⊕ q1
q1' = q6 ⊕ q4 ⊕ q1
q0' = q6 ⊕ q1 ⊕ q0

δ^(-1) × q:
q7' = q7 ⊕ q6 ⊕ q5 ⊕ q1
q6' = q6 ⊕ q2
q5' = q6 ⊕ q5 ⊕ q1
q4' = q6 ⊕ q5 ⊕ q4 ⊕ q2 ⊕ q1
q3' = q5 ⊕ q4 ⊕ q3 ⊕ q2 ⊕ q1
q2' = q7 ⊕ q4 ⊕ q3 ⊕ q2 ⊕ q1
q1' = q5 ⊕ q4
q0' = q6 ⊕ q5 ⊕ q4 ⊕ q2 ⊕ q0
Arithmetic Operations in the Composite Field
In the composite field, an element q can be split as q = qH·x + qL, i.e., into its higher- and lower-order terms.

Addition in GF(2^4)
Addition of two elements in a Galois Field of characteristic 2 translates to a simple bitwise XOR operation between the two elements.

Squaring in GF(2^4)
Let k = q^2, where k and q are elements of GF(2^4) represented by the binary numbers {k3 k2 k1 k0}₂ and {q3 q2 q1 q0}₂ respectively. From this, kH = {k3 k2}, kL = {k1 k0}, qH = {q3 q2}, and qL = {q1 q0}, so

kH·x + kL = (qH·x + qL)^2

Using the irreducible polynomial x^2 + x + 1 and setting x^2 = x + 1, the higher- and lower-order terms are given by

kH = q3(x + 1) + q2, i.e., k3·x + k2 = q3·x + (q2 + q3) (4)
kL = q3(1) + q2·x + q1(x + 1) + q0, i.e., k1·x + k0 = (q2 + q1)·x + (q3 + q1 + q0) (5)

From equations (4) and (5), the formulas for computing the squaring operation in GF(2^4) are:

k3 = q3
k2 = q3 ⊕ q2
k1 = q2 ⊕ q1
k0 = q3 ⊕ q1 ⊕ q0
Multiplication with the constant λ
Let k = qλ, where k = {k3 k2 k1 k0}₂ and q = {q3 q2 q1 q0}₂ are elements of GF(2^4) and λ = {1100}₂. Expanding k = (qH·x + qL)·λ with x^2 = x + φ and the GF(2^2) operations below, we obtain

k3 = q2 ⊕ q0
k2 = q3 ⊕ q2 ⊕ q1 ⊕ q0
k1 = q3
k0 = q2

GF(2^4) Multiplication
Let k = qw, where k = {k3 k2 k1 k0}₂, q = {q3 q2 q1 q0}₂, and w = {w3 w2 w1 w0}₂ are elements of GF(2^4). Then

k = kH·x + kL = (qH·wH + qH·wL + qL·wH)·x + qH·wH·φ + qL·wL

GF(2^2) Multiplication
Let k = qw, where k = {k1 k0}₂, q = {q1 q0}₂, and w = {w1 w0}₂ are elements of GF(2^2). Then

k1 = q1w1 ⊕ q0w1 ⊕ q1w0
k0 = q1w1 ⊕ q0w0
Multiplication with the constant φ
Let k = qφ, where k = {k1 k0}₂ and q = {q1 q0}₂ are elements of GF(2^2) and φ = {10}₂. Then

k1 = q1 ⊕ q0
k0 = q1
Multiplicative Inversion in GF(2^4)
Let q be an element of GF(2^4) with multiplicative inverse q^(-1) = {k3 k2 k1 k0}₂. The bits of the inverse can be computed as follows:

k3 = q3 ⊕ q3q2q1 ⊕ q3q0 ⊕ q2
k2 = q3q2q1 ⊕ q3q2q0 ⊕ q3q0 ⊕ q2 ⊕ q2q1
k1 = q3 ⊕ q3q2q1 ⊕ q3q1q0 ⊕ q2 ⊕ q2q0 ⊕ q1
k0 = q3q2q1 ⊕ q3q2q0 ⊕ q3q1 ⊕ q3q1q0 ⊕ q3q0 ⊕ q2 ⊕ q2q1 ⊕ q2q1q0 ⊕ q1 ⊕ q0
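The bit-level formulas above can be cross-checked in software. The following Python sketch implements the GF(2^2) and GF(2^4) building blocks exactly as written out above and verifies that q · q^(-1) = 1 for every nonzero q, and that the squaring and ×λ formulas agree with general multiplication:

def bits4(q):
    return [(q >> i) & 1 for i in range(4)]          # [q0, q1, q2, q3]

def mul_gf4(q, w):                                   # GF(2^2) multiply
    q0, q1 = q & 1, (q >> 1) & 1
    w0, w1 = w & 1, (w >> 1) & 1
    k1 = (q1 & w1) ^ (q0 & w1) ^ (q1 & w0)
    k0 = (q1 & w1) ^ (q0 & w0)
    return (k1 << 1) | k0

def mul_phi(q):                                      # GF(2^2) multiply by phi
    q0, q1 = q & 1, (q >> 1) & 1
    return ((q1 ^ q0) << 1) | q1

def mul_gf16(q, w):                  # GF(2^4) multiply via q = qH x + qL
    qH, qL, wH, wL = q >> 2, q & 3, w >> 2, w & 3
    kH = mul_gf4(qH, wH) ^ mul_gf4(qH, wL) ^ mul_gf4(qL, wH)
    kL = mul_phi(mul_gf4(qH, wH)) ^ mul_gf4(qL, wL)
    return (kH << 2) | kL

def square_gf16(q):                                  # squaring formulas above
    q0, q1, q2, q3 = bits4(q)
    return (q3 << 3) | ((q3 ^ q2) << 2) | ((q2 ^ q1) << 1) | (q3 ^ q1 ^ q0)

def mul_lambda_gf16(q):                              # x lambda = {1100}2
    q0, q1, q2, q3 = bits4(q)
    return ((q2 ^ q0) << 3) | ((q3 ^ q2 ^ q1 ^ q0) << 2) | (q3 << 1) | q2

def inv_gf16(q):                     # bitwise inversion formulas from above
    q0, q1, q2, q3 = bits4(q)
    k3 = q3 ^ (q3 & q2 & q1) ^ (q3 & q0) ^ q2
    k2 = (q3 & q2 & q1) ^ (q3 & q2 & q0) ^ (q3 & q0) ^ q2 ^ (q2 & q1)
    k1 = q3 ^ (q3 & q2 & q1) ^ (q3 & q1 & q0) ^ q2 ^ (q2 & q0) ^ q1
    k0 = ((q3 & q2 & q1) ^ (q3 & q2 & q0) ^ (q3 & q1) ^ (q3 & q1 & q0)
          ^ (q3 & q0) ^ q2 ^ (q2 & q1) ^ (q2 & q1 & q0) ^ q1 ^ q0)
    return (k3 << 3) | (k2 << 2) | (k1 << 1) | k0

assert all(square_gf16(q) == mul_gf16(q, q) for q in range(16))
assert all(mul_lambda_gf16(q) == mul_gf16(q, 0b1100) for q in range(16))
assert all(mul_gf16(q, inv_gf16(q)) == 1 for q in range(1, 16))
print("GF(2^4) squaring, x lambda, and inversion verified")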
The above operations constitute the composite field arithmetic based S-Box. Our proposed method implements this S-Box in a four-stage pipeline, so that the area, delay, and power are reduced. The structure is shown below.

Fig 3: Proposed pipelined implementation of the S-Box






Comparison Result
The S-Box design is based on the composite field arithmetic method. The proposed method is coded in the VHDL hardware description language. The XC2VP30 Xilinx FPGA device is used to validate the power of the VHDL code for the proposed architecture, and the power is analyzed using the XPower Analyzer in Xilinx ISE 14.7. Table 1 shows the comparison of power, delay, and slices for the conventional and proposed methods; Fig. 5 shows the power report for the proposed method.


Table 1: Comparison result for conventional and proposed architectures

Implementation               | No. of 4-input LUTs | No. of occupied slices | Dynamic power (W) | Delay (ns)
Conventional structure (C.S) | 76 | 40 | 8.278 | 19.866
C.S in 2-stage pipeline      | 83 | 43 | 5.072 | 15.76
C.S inv replace equ          | 74 | 40 | 8.278 | 18.986
C.S inv rep equ in 2 (pipe)  | 81 | 43 | 5.076 | 14.412
C.S inv rep equ in 4 (pipe)  | 82 | 43 | 5.064 | 6.275
C.S inv replace mux          | 76 | 39 | 8.277 | 18.863
C.S inv rep mux in 2 (pipe)  | 83 | 44 | 5.16  | 14.627
C.S inv rep mux in 4 (pipe)  | 88 | 49 | 8.36  | 6.275
Operand based S-Box (OP)     | 75 | 39 | 8.278 | 18.366
OP in 2 (pipe)               | 86 | 45 | 5.012 | 18.608
OP in 4 (pipe)               | 77 | 40 | 5.061 | 6.318
OP inv replace equ           | 75 | 39 | 8.277 | 18.318
OP inv rep equ in 4 (pipe)   | 79 | 40 | 5.098 | 6.318
OP inv rep mux               | 74 | 39 | 8.278 | 18.318
OP inv rep mux in 2 (pipe)   | 79 | 40 | 5.066 | 16.869
OP inv rep mux in 4 (pipe)   | 76 | 40 | 5.78  | 6.318
Proposed architecture        | 85 | 44 | 5.053 | 6.275



Fig 4: Simulation Result for Proposed structure


Fig 5: Power report for the proposed architecture
Conclusion
The main aim of this paper is the design and implementation of a composite field arithmetic based S-Box. The proposed method is based on combinational logic, so its power and delay are very low, and it employs a pipelining technique: a four-stage pipeline is used in the S-Box design. The proposed S-Box design uses only XOR, AND, NOT, and OR logic gates. The pipelined S-Box has lower power and higher speed than the conventional structure.



Acknowledgment
The authors would like to thank Kathir College of Engineering for the lab facilities and network resources used to complete this paper on time. The suggestions and comments of the anonymous reviewers have greatly helped to improve the quality of this paper.
REFERENCES:
[1]. Sumio Morioka, Akashi Satoh, "An Optimized S-Box Circuit Architecture for Low Power AES Design", Springer-Verlag Berlin Heidelberg, 2003.
[2]. Joon-Ho Hwang, "Efficient Hardware Architecture of SEED S-Box for the Application of Smart Cards", journal, December 2004.
[3]. P. Noo-intara, S. Chantarawong, S. Choomchaay, "Architectures for MixColumn Transform for the AES", ICEP 2004.
[4]. George N. Selimis, Athanasios P. Kakarountas, Apostolos P. Fournaris, Odysseas Koufopavlou, "A Low Power Design for S-Box Cryptographic Primitive of AES for the Mobile End User", journal, 2007.
[5]. Xing Ji-Peng, Zou Xue-cheng, Guo Xu, "Ultra-Low Power S-Boxes Architecture in the AES Method", journal, March 2008.
[6]. L. Thulasimani, M. Madheswaran, "A Single Chip Design & Implementation of AES-128/192/256 Encryption Algorithm", international journal, 2010.
[7]. Mohammad Amin Amiri, Sattar Mirzakuchaki, Mojdeh Mahdavi, "LUT-Based QCA Realization of a 4x4 S-Box in the AES Method", journal, April 2010.
[8]. Yong-Sung Jeon, Young-Jin Kim, Dong-Ho Lee, "A Compact Memory-Free Architecture for the AES Algorithm Using RS Methods", journal, 2010.
[9]. Muhammad H. Rais, Mohammad H. Al-Mijalli, "Reconfigurable Implementation of S-Box using Virtex-5, Virtex-6, Virtex-7 Based Reduced Residue of Prime Numbers".
[10]. Tomoyasu Suzaki, Kazuhiko Minematsu, Sumio Morioka, Eita Kobayashi, "TWINE: A Lightweight Block Cipher for Multiple Platforms".
[11]. Vincent Rijmen, "Efficient Implementation of the Rijndael S-Box", Katholieke Universiteit Leuven, Dept. ESAT, Belgium.
[12]. Akashi Satoh, Sumio Morioka, Kohji Takano and Seiji Munetoh, "A Compact Rijndael Hardware Architecture with S-Box Optimization", Springer-Verlag Berlin Heidelberg.
[13]. Saurabh Kumar, V. K. Sharma, K. K. Mahapatra, "Low Latency VLSI Architecture of S-Box for AES Encryption".
[14]. Saurabh Kumar, V. K. Sharma, K. K. Mahapatra, "An Improved VLSI Architecture of S-Box for AES Encryption".
[15]. Sliman Arrag, Abdellatif Hamdoun, Abderrahim Tragha, Salaheddine Khamlich, "Implementation of Stronger AES by Using Dynamic S-Box Dependent of Master Key", Journal of Theoretical and Applied Information Technology, 20th July 2013, vol. 53, no. 2.
[16]. Cheng Wang, "Performance Characterization of Pipelined S-Box Implementation for the AES", journal, January 2014.

Single Phase d-q Transformation as an Indirect Control Method for Shunt Active Power Filter
Sachi Sharma¹
¹Research Scholar (M.E), LDRP-ITR College, Gandhinagar, India
E-mail: spark_sachi@yahoo.com
Abstract— A single-phase shunt active power filter is used mainly for the elimination of harmonics in single-phase AC networks. In this paper a single-phase shunt active power filter based on an indirect control technique is designed and simulated. This control technique is achieved by phase shifting the input signal (voltage/current) by π/2. The overall action of the shunt active power filter in eliminating the harmonics created by a non-linear load on the source side is discussed, and the output of the shunt active power filter is verified using MATLAB/Simulink software.

Keywords— Harmonics, Single Phase Shunt Active Power Filter

1. Introduction
Because of their tremendous advantages, power electronic based devices and equipment play a vital role in modern power processing. However, these devices draw non-sinusoidal current from the utility side due to their nonlinearity, so in addition to supplying reactive power, a typical distribution system has to take care of harmonics as well [C]. These power quality concerns have made power engineers think about devices that reduce the harmonics in the supply line [E,F]. Such devices are known as active power filters or power conditioners, and they are capable of current/voltage harmonic compensation. Active power filters are classified into shunt, series, and hybrid active power filters, which can deal with various power quality issues [A,E]. One of the major advantages of APFs is that they are adaptable to changes in the network and to load fluctuations, and they consume less space compared with conventional passive filters [H]. Nowadays power quality issues in single-phase systems are greater than in three-phase systems due to the large-scale use of non-linear loads and to the increase in newly developed distributed generation systems, such as solar photovoltaic and small wind energy systems, in single-phase networks [A,G]. Reactive power and current harmonics are significant when considering a single-phase network and are major concerns for a power distribution system, because these issues lead to other power quality troubles. In this paper a single-phase shunt active power filter based on an indirect control technique for generating the reference signal is used. Section (2) details the single-phase shunt active power filter, section (3) gives an idea about the indirect control strategy, and these are followed by the simulation study and conclusions.
2. Single-Phase Shunt Active Power Filter
In this topology the active power filter is connected in parallel with the utility and the non-linear load. Pulse width modulated voltage source inverters are used in the shunt active power filter, acting as a current-controlled voltage source. The compensation for current harmonics in a shunt active power filter is performed by injecting an equal and opposite harmonic compensating current (180 degrees phase shifted). As a result the harmonics in the line are cancelled out, and the source current becomes sinusoidal and in phase with the source voltage. With the help of control strategies, reference signals are generated, which are then compared with the source current to produce the gating signals for the switches. For reference signal generation there are different control strategies, such as the instantaneous active-reactive power theory (pq theory) developed by Akagi [K] and Park's d-q or synchronous reference frame theory [D].

These control strategies are mainly focused on three-phase systems [I]. The three-phase pq theory was made applicable to single-phase systems by the work of Liu [J], by phase shifting an imaginary variable, similar to the voltage or current signal, by 90 degrees. Later this concept was extended to the single-phase synchronous d-q reference frame by Zhang [B].

Figure 1: Principle of shunt active power filter.

3. Indirect Control Technique
3.1 Single-phase d-q transformation

Figure 3: Reference signal generation using single-phase d-q transformation.

A single-phase system can be converted directly into the αβ frame without any matrix transformation. An imaginary variable is obtained by shifting the original signal (voltage/current) by 90 degrees, so that the original signal and the imaginary signal together represent the load current in αβ coordinates:

i_α(t) = i_L(ωt), i_β(t) = i_L(ωt − π/2)

Rotating these coordinates into the synchronous d-q frame gives

i_d = i_α sin(ωt) − i_β cos(ωt)
i_q = i_α cos(ωt) + i_β sin(ωt)

From i_d and i_q we can derive the fundamental active, fundamental reactive, harmonic active, and harmonic reactive components by using appropriate filters. Each axis current consists of a DC part and an AC part, i_d = ī_d + ĩ_d and i_q = ī_q + ĩ_q: the DC components ī_d and ī_q are obtained by using an LPF, and the AC components ĩ_d and ĩ_q are obtained by using an HPF.

Here we use the DC component for the generation of the reference current; hence this is called the indirect method. The load requires only the fundamental active part of the source current, so the fundamental active component is reconstructed as

i_s(t) = ī_d · sin(ωt)

In order to obtain a constant DC voltage across the active filter, a regulating term i_dc, obtained from the DC-link voltage controller, is added to the above equation. Therefore the reference signal is

i_s*(t) = (ī_d + i_dc) · sin(ωt)

The generated reference current is used to produce the gating pulses for the inverter switches, which then inject the compensating current into the line.
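The following Python sketch walks through this reference generation numerically. The load model and the one-cycle moving-average low-pass filter are illustrative assumptions of this sketch, not the paper's simulation settings, and the DC-link regulation term is held at zero:

import numpy as np

f, fs = 60.0, 60.0 * 256                 # 60 Hz line, 256 samples per cycle
t = np.arange(0.0, 0.2, 1.0 / fs)
wt = 2 * np.pi * f * t
# distorted load current: fundamental plus 3rd and 5th harmonics
i_l = 10 * np.sin(wt) + 3 * np.sin(3 * wt) + 2 * np.sin(5 * wt)

shift = int(fs / f / 4)                  # quarter-cycle delay ~ 90 degrees
i_alpha = i_l                            # original signal
i_beta = np.roll(i_l, shift)             # imaginary 90-degree-shifted signal

i_d = i_alpha * np.sin(wt) - i_beta * np.cos(wt)    # rotate into d-q
cycle = int(fs / f)
kernel = np.ones(cycle) / cycle          # one-cycle moving average as LPF
i_d_dc = np.convolve(i_d, kernel, mode="same")      # DC part of i_d

i_dc = 0.0                               # DC-link regulation term (from a
i_s_ref = (i_d_dc + i_dc) * np.sin(wt)   # PI controller; held at 0 here)
i_comp = i_s_ref - i_l                   # compensating current to inject
print(round(float(i_d_dc[len(t) // 2]), 2))  # ~10.0, fundamental amplitude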

Figure 4: Simulink model of proposed shunt active power filter.

4. Simulation Study
The proposed single-phase shunt active power filter using the indirect control strategy is simulated with the SimPowerSystems toolbox in MATLAB. A 60 Hz source is connected to a non-linear diode rectifier load. Due to the non-linearity of the load, the source current is distorted, and its THD is about 38.90%. When the shunt active power filter is connected between the source and the load, it injects the opposing harmonic compensating current into the line; the source current regains its sinusoidal nature, the power factor is much better than without the filter, and the THD is improved to 9.65%.

Figure 5: FFT analysis of distorted source current.

Figure 6: FFT analysis of source current after compensation.

Figure 7: FFT analysis of source voltage.

Table 1: Performance of indirect control technique for 1Ø SAPF.




5. ACKNOWLEDGEMENT
A colossal number of people have directly or indirectly helped me at different stages of this work to accomplish it successfully. It is their relentless support and care that has got me through this memorable journey, and here I would like to express my sincere gratitude towards them.
I am grateful to all faculty members of the Electrical Engineering Department, LDRP Institute of Technology & Research. I also thank our Head of the Electrical Engineering Department, Prof. H. N. Prajapati, for providing the necessary infrastructure to carry out the paper work at the institute. Last but not least, I would like to express thanks to the Almighty GOD and my family, without whose blessings the successful accomplishment of this work could not have been possible. I will always remember and cherish everyone who has helped me bring this work to this level.
6. Conclusion

A single-phase shunt active power filter based on an indirect control technique is used in this paper. Using this control strategy, the reference signal is generated successfully. The shunt active power filter is found effective in injecting the harmonic compensating current, thereby reducing the source current THD and improving the power factor of the line; the THD is reduced from 38.90% to 9.65% after compensation. It is also noticed that a constant voltage appears across the DC-link capacitor, which helps the smooth functioning of the voltage source inverter. The shunt active power filter output is verified successfully with the help of MATLAB software.
REFERENCES:
[1] V. Khadkikar, A. Chandra and B. N. Singh (2009), "Generalised single-phase p-q theory for active power filtering: simulation and DSP-based experimental investigation", IET Power Electronics, vol. 2, no. 1, pp. 67-78.
[2] R. Zhang, M. Cardinal, P. Szczesny and M. Dame (2002), "A grid simulator with control of single-phase power converters in D-Q rotating frame," in Proc. IEEE Power Electronics Specialists Conference (PESC), pp. 1431-1436.
[3] M. Gonzalez, V. Cardenas and F. Pazos (2004), "D-Q transformation development for single-phase systems to compensate harmonic distortion and reactive power," in Proc. IEEE Power Electronics Congress, pp. 177-182.
[4] S. Golestan, M. Joorabian, H. Rastegar, A. Roshan and J. M. Guerrero (2009), "Droop based control of parallel-connected single-phase inverters in D-Q rotating frame," in Proc. IEEE Industrial Technology, pp. 1-6.
[5] B. Singh, K. Al-Haddad and A. Chandra (1999), "A Review of Active Power Filters for Power Quality Improvement", IEEE Transactions Ind. Electron., vol. 45, no. 5, pp. 960-971.
[6] M. El-Habrouk, M. K. Darwish and P. Mehta (2000), "Active power filters: a review", in Proc. of IEE Elect. Power Appl., vol. 147, no. 5, pp. 403-413.
[7] Kunjumuhammed L. P., Mishra M. K. (2006), "Comparison of single phase shunt active power filter algorithms", in Proc. Annu. Conf. IEEE Power India.
[8] Mohammad H. Rashid (2007), "Power Electronics Handbook: Devices, Circuits and Applications", Elsevier, 2e.
[9] H. Akagi, Y. Kanazawa and A. Nabae (1984), "Instantaneous reactive power compensators comprising switching devices without energy storage components", IEEE Trans. Ind. Appl., vol. 20, no. 3, pp. 625-630.
International Journal of Engineering Research and General Science Volume 2, Issue 3, April-May 2014
ISSN 2091-2730

285 www.ijergs.org

[10] J Liu, J Yang and Z Wang(1999),‖A new approach for single –phase harmonic current detecting and its application in a
hybrid active power filter,‖inproc.Annu.Conf.IEEE.Indist.Electronics.Soc(IECON99),vol 2,pp,849-854.
[11] A Vectorial Approach for Generation of Optimal Current References for Multiphase Permanent -Magnet Synchronous
Machines in Real Time Xavier Kestelyn, Member, IEEE, and Eric Semail, Member, IEEE
[12] A Vectorial Approach for Generation of Optimal Current References for Multiphase Permanent -Magnet Synchronous
Machines in Real Time Xavier Kestelyn, Member, IEEE, and Eric Semail, Member, IEEE



















International Journal of Engineering Research and General Science Volume 2, Issue 3, April-May 2014
ISSN 2091-2730

286 www.ijergs.org

A Comparative Study on Feature Extraction Techniques for Language
Identification
Varsha Singh¹, Vinay Kumar Jain², Dr. Neeta Tripathi³
¹Research Scholar, Department of Electronics & Telecommunication, CSVTU University
²Associate Professor, Department of Electronics & Telecommunication, CSVTU University
³Principal, SSITM, CSVTU University, FET, SSGI, SSTC, Junwani, Bhilai, C.G., India
E-mail: varshasingh.40@gmail.com
ABSTRACT— This paper presents a brief survey of feature extraction techniques used in language identification (LID) systems. The objective of a language identification system is to automatically identify the specific language from a spoken utterance; the LID system must also perform quickly and accurately. To fulfill these criteria, extracting the features of the acoustic signal is an important task, because LID depends mainly on language-specific characteristics. The efficiency of the feature extraction phase is important since it strongly affects the performance and quality of the system. Features commonly used in LID include cepstral coefficients, MFCC, PLP, RASTA-PLP, etc.

Keywords— LID (Language Identification), feature extraction, LPC, Cepstral analysis, MFCC, PLP, RASTA-PLP.
INTRODUCTION
Speech is an important and natural form of communication. Over the past three decades there has been tremendous development in the area of speech processing. Applications of speech processing include speech/speaker recognition, language identification, etc. The objective of an automatic speaker recognition system is to extract, characterize and recognize information about the speaker's identity [1]. A language identification system automatically identifies the specific language from a spoken utterance. Automatic language identification is therefore an essential component of, and usually the first gateway in, a multi-lingual speech communication/interaction scenario. There are many potential applications of LID. In telephone-based information services, including customer service, phone banking, phone ordering, information hotlines and other call-centre/Interactive Voice Response (IVR) services, LID systems would be able to automatically transfer an incoming call to the corresponding agent, recorded message, or speech recognition system. A LID system can be made efficient by extracting language-specific characteristics. In this paper we focus mainly on the language-specific characteristics used for language identification. Spectral features characterize the short-time spectrum of the speech signal; because the signal's properties vary with time, such features are computed over short frames within which the signal is assumed to be short-time stationary.
LITERATURE REVIEW
Feature extraction is a process of reducing data while retaining speaker-discriminative information. The amount of data generated during speech production is quite large, while the essential characteristics of the speech process change relatively slowly and therefore require less data [2]. Requirements that should be taken into account when selecting appropriate speech-signal features include [3, 4]:
- large between-speaker and small within-speaker variability
- not changing over time or being affected by the speaker's health
- being difficult to impersonate/mimic
- not being affected by background noise nor depending on the specific transmission medium
- occurring naturally and frequently in speech.
No single feature is likely to meet all of the criteria listed above. Thus, a large number of features can be extracted and combined to improve the accuracy of the system.

The pitch and formant features of the speech signal are extracted and used to detect three different emotional states of a person [5]. Pitch originates from the vocal cords: when air flows from the glottis through the vocal cords, the vibration of the vocal folds produces pitch harmonics. The rate at which the vocal folds vibrate is the frequency of the pitch; so, when the vocal folds oscillate 300 times per second, they are said to be producing a pitch of 300 Hz. Pitch is useful for differentiating speaker genders: in males, the average pitch falls between 60 and 120 Hz, while a female's pitch typically lies between 120 and 200 Hz [2]. The cepstral analysis method is used for pitch extraction and the LPC analysis method is used to extract the formant frequencies. Formants are defined as the spectral peaks of the sound spectrum of a person's voice. In speech science and phonetics, formant frequencies refer to the acoustic resonances of the human vocal tract; they are often measured as amplitude peaks in the frequency spectrum of the sound wave. Formant frequencies are very important in the analysis of a person's emotional state, and the linear predictive coding (LPC) technique has been used for estimating them [5].
LPC is a feature extraction method based on the source-filter model of speech production. B. S. Atal in 1976 [3] used a linear prediction model for a parametric representation of speech-derived features. The predictor coefficients and other speech parameters derived from them, such as the impulse response function, the autocorrelation function, the area function, and the cepstrum function, were used as input to an automatic speaker recognition system, and the cepstrum was found to provide the best results for speaker recognition.
Reynolds in 1994 [6] compared different features useful for speaker recognition, such as Mel-frequency cepstral coefficients (MFCCs), linear-frequency cepstral coefficients (LFCCs), linear predictive cepstral coefficients (LPCCs) and perceptual linear prediction cepstral coefficients (PLPCCs). From the experiments conducted, he concluded that, of these features, MFCCs and LPCCs give better performance than the others. Revised perceptual linear prediction was proposed by Kumar et al. [7] and Ming et al. [8] for the purpose of identifying the spoken language; Revised Perceptual Linear Prediction coefficients (RPLP) are obtained from a combination of MFCC and PLP.
Of all the various spectral features, MFCC, LPCC and PLP are the most recommended features carrying information about the resonance properties of the vocal tract [9].
METHODOLOGY
In this section a comprehensive review of several feature extraction methods for language identification is presented.
LPC: This is one of the important methods for speech analysis because it can provide an estimate of the poles of the vocal tract transfer function (and hence the formant frequencies produced by the vocal tract). LPC (Linear Predictive Coding) analyzes the speech signal by estimating the formants, removing their effects from the speech signal, and estimating the intensity and frequency of the remaining buzz. The process of removing the formants is called inverse filtering, and the remaining signal is called the residue [1]. The basic idea behind LPC coding is that each sample can be approximated as a linear combination of a few past samples. The linear prediction method provides a robust, reliable, and accurate way of estimating the parameters, and the computation involved in LPC processing is considerably less than for cepstrum analysis.
Fig. 1 Block diagram of the LPC algorithm: digital speech signal → pre-emphasis → frame blocking → windowing → autocorrelation analysis → LPC coefficients
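To make the pipeline of Fig. 1 concrete, the following is a short Python sketch of LPC analysis via the autocorrelation method and the Levinson-Durbin recursion; the sampling rate, frame length, pre-emphasis coefficient and model order are illustrative assumptions, not values prescribed by this paper.

import numpy as np

def lpc_coefficients(frame, order=12):
    # Autocorrelation analysis of one pre-emphasized, windowed frame
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):             # Levinson-Durbin recursion
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                        # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)
    return a                                  # a[0] = 1; predictor a[1..order]

fs = 16000                                    # assumed sampling rate (Hz)
x = np.random.randn(fs)                       # stand-in for real speech
pre = np.append(x[0], x[1:] - 0.97 * x[:-1])  # pre-emphasis
frame = pre[:400] * np.hamming(400)           # frame blocking + windowing
print(lpc_coefficients(frame, order=12))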
Cepstral Analysis: This analysis is a very convenient way to model the spectral energy distribution. Cepstral analysis operates in a domain in which the glottal frequency is separated from the vocal tract resonances. The low-order coefficients of the cepstrum contain information about the vocal tract, while the higher-order coefficients contain primarily information about the excitation (actually, the higher-order coefficients contain both types of information, but the frequency of periodicity dominates). The word cepstrum was derived by reversing the first syllable of the word spectrum; the cepstrum exists in a domain referred to as quefrency (a reversal of the first syllable of frequency), which has units of time. The cepstrum is defined as the inverse Fourier transform of the logarithm of the power spectrum; it is thus the spectrum of a spectrum, and has certain properties that make it useful in many types of signal analysis [10]. Cepstrum coefficients are calculated in short frames over time. Only the first M cepstrum coefficients are used as features (all coefficients together model the precise spectrum; the coarse spectral shape is modeled by the first coefficients, precision is selected by the number of coefficients taken, and the first coefficient (energy) is usually discarded). The cepstrum is calculated in two ways: the LPC cepstrum, obtained from the LPC coefficients, and the FFT cepstrum, obtained from an FFT. The most widely used parametric representation for speech recognition is the FFT cepstrum derived on a Mel scale [11]. A drawback of the standard cepstral coefficients is the linear frequency scale: perceptually, the frequency ranges 100-200 Hz and 10-20 kHz should be approximately equally important, and the standard cepstral coefficients do not take this into account; a logarithmic frequency scale would be better. Mimicking perception is necessary because we typically want to classify sounds according to perceptual similarity or dissimilarity, and perceptually relevant features often lead to robust classification too. It is desirable that a small change in the feature vector corresponds to a small perceptual change (and vice versa). The Mel-frequency cepstral coefficients fulfill this criterion.

Speech Cepstrum
Signal
Fig. 2 Cepstral analysis
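A minimal NumPy sketch of the definition just given (inverse Fourier transform of the log power spectrum), including the classic cepstral pitch estimate mentioned earlier; the pitch search range is an illustrative assumption.

import numpy as np

def real_cepstrum(frame):
    spectrum = np.fft.fft(frame)
    log_power = np.log(np.abs(spectrum) ** 2 + 1e-12)   # avoid log(0)
    return np.fft.ifft(log_power).real                  # quefrency domain (samples)

def cepstral_pitch(frame, fs, fmin=60.0, fmax=400.0):
    c = real_cepstrum(frame)
    qmin, qmax = int(fs / fmax), int(fs / fmin)          # pitch-period search range
    q = qmin + np.argmax(c[qmin:qmax])                   # dominant quefrency peak
    return fs / q                                        # pitch estimate (Hz)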
MFCC: This technique is considered one of the standard methods for feature extraction and is accepted as the baseline. MFCCs are based on the known variation of the human ear's critical bandwidths with frequency: filters spaced linearly at low frequencies and logarithmically at high frequencies are used to capture the phonetically important characteristics of speech. This is expressed in the Mel-frequency scale (the Mel scale was used by Mermelstein and Davis [11] to extract features from the speech signal for improving recognition performance). MFCCs are the result of the short-term energy spectrum expressed on a Mel-frequency scale [1]. MFCCs have proved more efficient, with better anti-noise ability, than other vocal tract parameters such as LPC. The steps to calculate MFCC are shown in the figure below:

Speech Signal

Vectors of MFCC
Fig. 3 Block diagram of MFCC processor
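The steps of Fig. 3 can be sketched in a few lines of NumPy; the filter count, FFT size and the mel mapping m = 2595 log10(1 + f/700) follow common practice and are assumptions rather than this paper's prescription.

import numpy as np

def hz_to_mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_to_hz(m):  return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs):
    # Triangular filters spaced linearly on the mel scale
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising slope
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling slope
    return fb

def mfcc(frame, fs, n_filters=26, n_coeffs=13, n_fft=512):
    spec = np.abs(np.fft.rfft(frame, n_fft))          # DFT + absolute value
    energies = mel_filterbank(n_filters, n_fft, fs) @ spec
    log_e = np.log(energies + 1e-12)                  # log of filterbank energies
    n = np.arange(n_filters)
    # DCT-II of the log filterbank energies gives the cepstral coefficients
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return dct @ log_e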
LFCC speech features (LFCC-FB40): The methodology of LFCC [11] is the same as for MFCC; the only difference is that the Mel-frequency filter bank is replaced by a linear-frequency filter bank. The desired frequency range is implemented by a filter bank of 40 equal-width and equal-height linearly spaced filters; the bandwidth of each filter is 164 Hz, and the whole filter bank covers the frequency range [133, 6857] Hz.
Fig. 4 LFCC implementation: speech signal → sampling & pre-emphasis → framing & windowing → DFT → absolute value → linear-frequency filter bank → log → DCT → LFCC coefficients
HFCC-E of Skowronski & Harris: Skowronski & Harris [12] introduced the Human Factor Cepstral Coefficients (HFCC-E). In the HFCC-E scheme the filter bandwidth is decoupled from the filter spacing; this is in contrast to earlier MFCC implementations, where these were dependent variables. Another difference from MFCC is that in HFCC-E the filter bandwidth is derived from the equivalent rectangular bandwidth (ERB), based on the critical-band concept of Moore and Glasberg's expression, rather than on the Mel scale [11]. Still, the centre frequency of the individual filters is computed using the Mel scale. Furthermore, in the HFCC-E scheme the filter bandwidth is further scaled by a constant, which Skowronski and Harris labelled the E-factor. Larger values of the E-factor, E = {4, 5, 6}, were reported [12] to contribute to improved noise robustness.

Fig. 5 HFCC implementation: speech signal → sampling & pre-emphasis → framing & windowing → DFT → absolute value → human-factor filter bank → log → DCT → HFCC coefficients
PLP: The Perceptual Linear Predictive (PLP) speech analysis technique is based on the short-term spectrum of speech. PLP is a popular representation in speech recognition and is designed to find smooth spectra consisting of resonant peaks [13]. PLP parameters are the coefficients that result from standard all-pole modeling [14], which is effective in suppressing speaker-specific details of the spectrum; in addition, the PLP order is smaller than that typically needed by LPC-based speech recognition systems. PLP models human speech based on the concept of the psychophysics of hearing [13]: the speech spectrum is modified by a set of transformations based on models of the human auditory system. The PLP computation steps are critical-band spectral resolution, the equal-loudness hearing curve, and the intensity-loudness power law of hearing. Once the auditory-like spectrum is estimated, it is converted to autocorrelation values by a Fourier transform. The resulting autocorrelations are used as input to a standard linear predictive analysis routine, whose output is perceptually based linear prediction coefficients; typically these coefficients are then converted to cepstral coefficients via a standard recursion [14].
Fig. 6 PLP implementation: speech signal → DFT → critical-band spectral resolution → equal-loudness hearing curve → intensity-loudness power law of hearing → IFFT → autoregressive coefficients to LPC → LPC to cepstral coefficients → PLP cepstral coefficients
RASTA-PLP: A popular speech feature representation is RASTA-PLP, an acronym for Relative Spectral Transform - Perceptual Linear Prediction. PLP was originally proposed by H. Hermansky as a way of warping spectra to minimize the differences between speakers while preserving the important speech information [13]. The term RASTA comes from the words RelAtive SpecTrA. RASTA filtering is often coupled with PLP for robust speech recognition: RASTA is a separate technique that applies a band-pass filter to the energy in each frequency sub-band in order to smooth over short-term noise variations and to remove any constant offset resulting from static spectral coloration in the speech channel, e.g. from a telephone line [15]. In essence, RASTA filtering serves as a modulation-frequency band-pass filter, emphasizing the modulation-frequency range most relevant to speech while discarding lower and higher modulation frequencies.
Fig. 7 RASTA-PLP model: speech signal → DFT → logarithm & RASTA filtering → inverse logarithm → equal-loudness curve → power law of hearing → IDFT → solution of a set of linear equations (Durbin) → cepstral recursion → RASTA-PLP cepstral coefficients
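A compact sketch of the band-pass filtering step described above, applied along the time (frame) axis of log band energies; the transfer-function coefficients follow the RASTA filter commonly cited from [15] and should be read as an illustration rather than the exact implementation used by any particular system.

import numpy as np
from scipy.signal import lfilter

def rasta_filter(log_energies):
    # Band-pass each band's log-energy trajectory over time (axis 0 = frames).
    # Commonly cited RASTA transfer function:
    #   H(z) = 0.1 * (2 + z^-1 - z^-3 - 2 z^-4) / (1 - 0.98 z^-1)
    num = 0.1 * np.array([2.0, 1.0, 0.0, -1.0, -2.0])
    den = np.array([1.0, -0.98])
    return lfilter(num, den, log_energies, axis=0)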


CONCLUSION
MFCC, PLP and LPC are the most commonly proposed acoustic features used in language identification. The accuracy and speed of a LID system can be enhanced by combining more features of the speech signal. The following table summarizes the concluding highlights of the feature extraction techniques discussed above.

Table No. 1 Concluding highlights of the different types of feature extraction methods

S. No. | Method | Property | Comments
1. | Linear Predictive Coding | Static feature extraction method; 10 to 16 lower-order coefficients. | The LP algorithm is a practical way to estimate the formants of the speech signal, especially at high frequencies. It is used for feature extraction at lower order.
2. | Cepstral Analysis | Static feature extraction method; power spectrum. | The cepstrum is a practical way to extract the fundamental frequency of the speech signal. The cepstral algorithm shows some limitations in the localization of formants, especially at high frequencies.
3. | Mel Frequency Cepstral Coefficients | The result of the short-term energy spectrum expressed on the Mel scale, which has linear frequency spacing below 1000 Hz and logarithmic spacing above 1000 Hz. | MFCC reduces the frequency information of the speech signal to a small number of coefficients. It is easy and relatively fast to compute.
4. | Linear Frequency Cepstral Coefficients | Uses a bank of equal-bandwidth filters with linear spacing of the centre frequencies. | The equal bandwidth of all filters renders unnecessary the effort of normalizing the area under each filter.
5. | Human Factor Cepstral Coefficients | Uses Moore and Glasberg's expression for critical bandwidth (ERB), a function only of centre frequency, to determine filter bandwidth. | Larger values of the E-factor contribute to improved noise robustness.
6. | Perceptual Linear Predictive Analysis | The short-term spectrum is modified by psychophysically based transformations. | Lower-order analysis results in better estimates of recognition parameters for a given amount of training data.
7. | RASTA-PLP | Applies a band-pass filter to each spectral component in the critical-band spectrum estimate. | These features are best used when there is a mismatch in the analog input channel between the development and fielded systems.

REFERENCES:
[1] Vibha Tiwari, "MFCC and its applications in speaker recognition", International Journal on Emerging Technologies, 1(1): 19-22, 2010.
[2] Premakanthan P. and Mikhael W. B., "Speaker Verification/Recognition and the Importance of Selective Feature Extraction: Review", MWSCAS, Vol. 1, pp. 57-61, 2001.
[3] B. S. Atal, "Automatic Recognition of Speakers from their Voices", Proceedings of the IEEE, Vol. 64, pp. 460-475, 1976.
[4] Douglas A. Reynolds and Richard Rose, "Robust Text Independent Speaker Identification using Gaussian Mixture Speaker Models", IEEE Transactions on Speech and Audio Processing, Vol. 3, No. 1, January 1995.
[5] Bageshree V. Sathe-Pathak and Ashish R. Panat, "Extraction of Pitch and Formants and its Analysis to identify 3 different emotional states of a person", International Journal of Computer Science Issues, Vol. 9, Issue 4, No. 1, July 2012.
[6] D. A. Reynolds, "Experimental evaluation of features for robust speaker identification", IEEE Transactions on Speech and Audio Processing, Vol. 2(4), pp. 639-643, Oct. 1994.

[7] Kumar P., A. N. Astik Biswas and M. Chandra, "Spoken Language identification using hybrid feature extraction methods", J. Telecomm., 1: 11-15, 2010.
[8] Ming J., Hazen T., Glass J. and Reynolds D., "Robust speaker recognition in noisy conditions", IEEE Transactions on Audio, Speech and Language Processing, 15: 1711-1723, DOI: 10.1109/TASL.2007.899278, 2007.
[9] Hassan Ezzaidi and Jean Rouat, "Pitch and MFCC dependent GMM models for speaker identification systems", CCECE, IEEE, 2004.
[10] Childers D. G., Skinner D. P. and Kemerait R. C., "The cepstrum: A guide to processing", Proceedings of the IEEE, Vol. 65, Issue 10, pp. 1428-1443, Oct. 1977.
[11] Mermelstein P. and Davis S., "Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences", IEEE Trans. on ASSP, pp. 357-366, Aug. 1980.
[12] Skowronski M. D. and Harris J. G., "Exploiting independent filter bandwidth of human factor cepstral coefficients in automatic speech recognition", J. Acoust. Soc. Am., 116(3): 1774-1780, 2004.
[13] H. Hermansky, "Perceptual linear predictive (PLP) analysis for speech", J. Acoust. Soc. Am., pp. 1738-1752, 1990.
[14] L. Rabiner and R. Schafer, "Digital Processing of Speech Signals", Prentice Hall, Englewood Cliffs, NJ, 1978.
[15] H. Hermansky and N. Morgan, "RASTA Processing of Speech", IEEE Transactions on Speech and Audio Processing, Vol. 2, pp. 578-589, Oct. 1994.


















Design of Reconfigurable FFT/IFFT for Wireless Application
Preeti Mankar¹, L. P. Thakare¹, A. Y. Deshmukh¹
¹Scholar, Department of Electronics Engineering, GHRCE, Nagpur
E-mail: preetimankar414@gmail.com

ABSTRACT— Communication is one of the important aspects of life. The field of communication has seen fast growth with advancing technology and growing demands. The digital domain is now being used for the transfer of signals in place of the analog domain, and single-carrier waves are being replaced by multi-carrier ones for better transmission. Multi-carrier systems such as CDMA and OFDM are nowadays implemented commonly. The orthogonal frequency division multiplexing (OFDM) modulation format has been proposed for a variety of digital communication applications, such as DVB-T and wideband wireless communication systems. OFDM requires the use of the FFT and IFFT for conversion of the signal from the time domain to the frequency domain and vice versa.
The number of FFT/IFFT points required changes from application to application, and from this arises the concept of reconfiguration. Reconfiguration can be used to make the system applicable to various specifications. This paper discusses the use of a reconfigurable FFT in wireless systems to reduce the complexity, cost and power consumption of the system.

Keywords— OFDM, FFT/IFFT, Floating point representation, Complex Multiplier, Reconfigurable FFT/IFFT

INTRODUCTION
OFDM can be seen as either a modulation technique or a multiplexing technique. One of the main reasons to use OFDM is to increase
the robustness against frequency selective fading or narrowband interference. Error correction coding can then be used to correct for
the few erroneous subcarriers. The concept of using parallel data transmission and frequency division multiplexing was published in
the mid-1960s [1, 2]. Some early development is traced back to the 1950s [3]. OFDM has been adopted as a standard for various
wireless communication systems such as wireless local area networks, wireless metropolitan area networks, digital audio broadcasting,
and digital video broadcasting. It is widely known that OFDM is an attractive technique for achieving high data transmission rate in
wireless communication systems and it is robust to the frequency selective fading channels.


Figure 1. A basic diagram of an OFDM transceiver
There are many types of FFT architectures used in OFDM systems, mainly categorized into three types: the parallel architecture, the pipeline architecture and the shared-memory architecture. The high performance of parallel and pipelined architectures is achieved by having more butterfly processing units, but they consume a larger area than the shared-memory architecture. On the other hand, the shared-memory architecture requires only one butterfly processing unit and has the advantage of area efficiency.
The rest of the paper is organized as follows. In Section II, the FFT algorithm is reviewed. Section III is a comparative study of various methods and architectures available for reconfiguring FFTs for wireless systems. Section IV gives a tabular comparison of all the methods reviewed. Finally, a conclusion is given in Section V.


FFT ALGORITHM

The fast Fourier transform (FFT) has been playing an important role in digital signal processing and wireless communication systems. The choice of FFT size is decided by different operating standards, so it is desirable to make the FFT size changeable according to the operating environment. The Fourier transform is a very useful operator for image and signal processing; it has been studied extensively and the literature on the subject is very rich. The Discrete Fourier Transform (DFT) is used for digital signal processing, and its expression is given below:

$X(f) = \sum_{n=-\infty}^{+\infty} x[n]\, e^{-j 2\pi f n}$   ---(1)

It is obvious that this expression cannot be computed in a finite time due to the infinite bounds. Hence, the usually computed expression is the N-point fast Fourier transform, given below:

$X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j 2\pi k n / N}, \quad k = 0, 1, \ldots, N-1$   ---(2)

The expression of the FFT is bounded and computable with a finite algorithmic complexity. This complexity is expressed as an order of multiplications and additions: computing an N-point transform without any simplification requires O(N²) multiplications and O(N²) additions, where O denotes the "order of" (the exact number of additions is N(N−1), which is O(N²)). This reduction of complexity is, however, not sufficient for the large FFT sizes used in many digital communications standards.
FFT and IFFT methods are of three types: fixed-radix FFT, mixed-radix FFT and split-radix FFT [4]. Fixed-radix decompositions are algorithms in which the same decomposition is applied repeatedly to the DFT equation. The most common decompositions are radix-2, radix-4, radix-8 and radix-16; an algorithm of radix r reduces the order of computational complexity to O(N log_r(N)). Mixed-radix refers to using a variety of radices in succession; one application of this method is to calculate FFTs of irregular sizes. Mixed-radix can also refer to a computation that uses multiple radices with a common factor, such as a combination of the radices 2, 4, and 8; these can be ordered so as to simplify and optimize calculations of specific sizes, or to increase the efficiency of computing FFTs of variable-sized inputs. The split-radix algorithm is a method of blending two or more radix sizes and reordering the sequence of operations in order to reduce the number of computations while maintaining accuracy: split-radix FFT algorithms apply two or more parallel radix decompositions in every decomposition stage to fully exploit the advantages of different fixed-radix FFT algorithms.
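To make the complexity reduction concrete, the following is a minimal recursive radix-2 decimation-in-time FFT in Python; it is an illustrative sketch (power-of-two lengths only), not the hardware architecture discussed in this paper.

import cmath

def fft_radix2(x):
    # Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2.
    # Achieves O(N log2 N) operations versus O(N^2) for the direct DFT of eq. (2).
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft_radix2(x[0::2]), fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)   # twiddle factor W_N^k
        out[k] = even[k] + w * odd[k]           # butterfly, upper half
        out[k + n // 2] = even[k] - w * odd[k]  # butterfly, lower half
    return out

print(fft_radix2([1, 1, 1, 1, 0, 0, 0, 0])[0])  # DC bin = (4+0j)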
FLOATING POINT REPRESENTATION

Floating point numbers are one possible way of representing real numbers in binary format; the IEEE 754 standard defines two different floating point formats, the binary interchange format and the decimal interchange format. Fig. 2 shows the IEEE 754 single precision binary format: it consists of a one-bit sign (S), an eight-bit exponent (E), and a twenty-three-bit fraction (M, or mantissa) [5]. If the exponent is greater than 0 and smaller than 255, and there is a 1 in the MSB of the significand, then the number is said to be a normalized number; in this case the real number is represented by

$\text{value} = (-1)^S \times 1.M \times 2^{E-127}$

Figure 2. IEEE single precision floating point format


Sign bit: This bit represents whether the number is positive or negative; 0 denotes a positive number and 1 denotes a negative number.
Exponent: This field represents both positive and negative exponents. This is done by adding a bias to the actual exponent in order to get the stored exponent; for IEEE 754 single precision this bias is 127.
Mantissa: This field is also known as the significand and represents the precision bits of the number. It comprises an implicit leading bit and the fraction bits.
Table I

In the proposed work, the BCD input is first converted into floating point format. The addition, subtraction and multiplication in the middle stage, i.e. the complex multiplier, take place in floating point format only. At the end, the floating point output is converted back into BCD. The system can process signed, unsigned and decimal numbers, thereby increasing the range.
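The paper's converters are hardware blocks; as a software analogue, this Python sketch only mirrors the S/E/M bit layout described above.

import struct

def float_to_ieee754_fields(x):
    # Pack into IEEE 754 single precision and split the 32-bit word
    # into sign (1 bit), exponent (8 bits, biased by 127), mantissa (23 bits).
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

def fields_to_float(sign, exponent, mantissa):
    bits = (sign << 31) | (exponent << 23) | mantissa
    return struct.unpack(">f", struct.pack(">I", bits))[0]

# A normalized number is (-1)^S * 1.M * 2^(E-127):
s, e, m = float_to_ieee754_fields(-6.25)   # -6.25 = -1.5625 * 2^2
print(s, e - 127, m)                       # 1, 2, fraction bits of 1.5625
print(fields_to_float(s, e, m))            # -6.25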

Reconfigurable architecture
A reconfigurable FFT architecture can be implemented by cascading several radix-2 stages in order to accommodate different FFT sizes. The signal-flow graphs for radix-2 to radix-2⁴ butterflies are shown in Fig. 3.

Fig. 3 Various butterfly operations
Radix-2
The radix-2 PE applies one stage of radix-2 butterfly computations to its data. It is used when the size of the frame to be processed is 32 or 128 points [7]. The radix-2 PE is realized as a simplified radix-4 PE: the butterfly core is replaced with the simpler radix-2 butterfly network, consisting of two complex adders/subtractors and one complex multiplier. This circuit is then optimized further. In the split-radix 128- and 32-point FFT computations, the twiddle factors for all radix-2 butterflies have the constant value 1 + 0j. Plugging into the radix-2 butterfly equations we obtain X = A + W·B = A + B and Y = A − W·B = A − B. Consequently, the complex multiplier (in the butterfly core) and the twiddle generator blocks are omitted.
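As a minimal illustration of this simplification (not the authors' hardware implementation), a radix-2 butterfly in Python shows that with W = 1 + 0j the complex multiplication disappears:

def radix2_butterfly(a, b, w=1 + 0j):
    # General butterfly: X = A + W*B, Y = A - W*B.
    # With w = 1 + 0j both outputs reduce to a + b and a - b,
    # so the complex multiplier can be omitted in hardware.
    return a + w * b, a - w * b

print(radix2_butterfly(1 + 2j, 3 - 1j))   # ((4+1j), (-2+3j))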
International Journal of Engineering Research and General Science Volume 2, Issue 3, April-May 2014
ISSN 2091-2730

295 www.ijergs.org

General fixed-radix algorithms decompose the FFT by letting r = r₁ = r₂ = … = rₘ. The r-point DFT is called the butterfly, which is the basic operation unit. Higher-radix decomposition is often preferred because it reduces the computational complexity by reducing the number of complex multiplications required; the trade-off is that the hardware complexity of the butterfly grows as the radix becomes higher. However, a fixed-radix algorithm is sometimes found deficient due to its limitation on FFT size (a power of r): as we prefer higher-radix algorithms to reduce the computational complexity, the flexibility of the FFT size becomes limited. Therefore, the mixed-radix algorithm is adopted in our design to keep the architecture flexible while using a high-radix algorithm, as sketched below.
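A hypothetical helper illustrating the mixed-radix idea: factoring the FFT size into butterfly stages, preferring the highest available radix. This is an assumption-level sketch of the decomposition strategy, not the paper's scheduler.

def plan_stages(n, radices=(8, 4, 2)):
    # Greedily factor an FFT size into butterfly stages, highest radix first.
    stages = []
    for r in radices:
        while n % r == 0:
            stages.append(r)
            n //= r
    if n != 1:
        raise ValueError("size not factorable by the supported radices")
    return stages

print(plan_stages(128))   # [8, 8, 2]: two radix-8 stages plus one radix-2 stage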
SIMULATION RESULTS

1. BCD to Floating Point Representation:










2. FLOATING POINT REPRESENTATION TO BCD:



3. 2- POINT FFT:





4. 4-POINT FFT:


5. 4-POINT IFFT


CONCLUSIONS

This paper presents various methods for programmable FFT/IFFT processor design for OFDM applications, covering low-power, reduced-complexity, and low-cost reconfigurable FFT/IFFT approaches. The method used shows that, by using the floating-point format, the FFT/IFFT of signed, unsigned and decimal numbers can be obtained efficiently. Also, by using a reconfigurable architecture, the system itself can switch to the radix algorithm appropriate for the provided input and produce the correct computation result. Using Vedic mathematics for the complex computations helps to increase the speed of the computations and provides efficient results.



REFERENCES:

[1] Baig I., Jeoti V., "DCT precoded SLM technique for PAPR reduction", Intelligent and Advanced Systems International Conference, 15-17 June 2010.
[2] S. P. Vimal, K. R. Shankar Kumar, "A New SLM Technique for PAPR Reduction in OFDM Systems", European Journal of Scientific Research, ISSN 1450-216X, Vol. 65, No. 2, 2011.
[3] Guillermo Acosta, "OFDM Simulation using Matlab", Smart Research Laboratory, faculty advisor Dr. Mary Ann Ingram, Aug. 2000.
[4] Md Nooruzzaman Khan, M. Mohamed Ismail, Dr. P. K. Jawahar, "An Efficient FFT/IFFT Architecture for Wireless Communication", ICCSP '12.
[5] Preethi Sudha Gollamudi, M. Kamaraju, "Design of High Performance IEEE-754 Single Precision (32 bit) Floating Point Adder Using VHDL", International Journal of Engineering Research & Technology (IJERT), Vol. 2, Issue 7, July 2013.
[6] Sharon Thomas, V. Sarada, "Design of Reconfigurable FFT Processor With Reduced Area And Power", ISSN (Print): 2320-8945, Volume 1, Issue 4, 2013.
[7] Konstantinos E. Manolopoulos, Konstantinos G. Nakos, Dionysios I. Reisis, Nikolaos G. Vlassopoulos, "Reconfigurable Fast Fourier Transform Architecture for Orthogonal Frequency Division Multiplexing Systems", Electronics Laboratory, Department of Physics, National and Capodistrian University of Athens.
[8] Anuj Kumar Varshney, Vrinda Gupta, "Power-Time Efficient Algorithm for Computing Reconfigurable FFT in Wireless Sensor Network", International Journal of Computer Science & Engineering Technology (IJCSET).
SPNA071A, November 2006, Implementing Radix-2.














A New Technique for Protecting Confidential Information Using Watermarking
Gayathri. M¹, Pushpalatha. R¹, Yuvaraja. T²
¹PG Scholar, Department of ECE, Kongunadu College of Engineering and Technology, Tamilnadu, India
²Assistant Professor, Department of ECE, Kongunadu College of Engineering and Technology, Tamilnadu, India
E-mail: mgayathri01@gmail.com
ABSTRACT— A new approach to image watermarking based on the RSA encryption technique for lossless medical images is proposed. This paper presents a strategy for attaining maximum embedding capacity in an image: to determine the amount of information to be added to each pixel, the maximum possible number of neighboring pixels is analyzed for their frequencies. The technique provides a seamless insertion of an image into a carrier video, and reduces the error assessment and artifact insertion required to a minimum. Two or more bits in each pixel can be used to embed the message, which increases the embedding capacity but carries a high risk of detectability and image degradation. The RSA technique may use a significant-bit insertion scheme, in which the number of bits of data added to each pixel remains constant, or a variable least-significant-bit insertion, in which the number of bits added to each pixel varies with the surrounding pixels to avoid degrading the image fidelity.
Keywords: watermarking, mean square error, encryption, decryption, SPIHT, wavelet, RSA algorithm.
1. INTRODUCTION
A watermark is a recognizable image or pattern in paper that appears as various shades of lightness/darkness when viewed by transmitted light, caused by density or thickness variations in the paper. Watermarks have been used on currency, postage stamps, and other government documents to discourage counterfeiting, and they are often used as security features of passports, banknotes, postage stamps, and other documents. Encoding an identifying code into digitized video, music, pictures, or other files is known as digital watermarking.

A watermark is made by impressing a water-coated metal stamp or dandy roll onto the paper during manufacturing. Artists can copyright their work by hiding their name within the image, and the idea is also applicable to other media, such as digital video and audio. There are a number of possible applications for digital watermarking technologies, and this number is increasing rapidly. For example, in data security, watermarks may be used for authentication, certification, and conditional access. Certification is a vital issue for official documents, such as identity cards or passports.

2. RSA Algorithm
RSA is an algorithm for public-key cryptography that is based on the presumed difficulty of factoring large integers (the factoring problem). A user of RSA creates, and then publishes, the product of two large prime numbers, together with an auxiliary value, as their public key; the prime factors must be kept secret. Anyone can use the public key to encrypt a message, but with currently published methods, if the public key is large enough, only someone with knowledge of the prime factors can feasibly decode the message. Whether breaking RSA encryption is as hard as factoring is an open question referred to as the RSA problem.
The RSA algorithm involves three steps, given below:
• Key generation
• Encryption
• Decryption


2.1 Key generation
RSA involves a public key and a private key. The public key is known by everybody and is used for encrypting messages. Messages encrypted with the public key can only be decrypted in a reasonable amount of time by using the private key.
2.2 Encryption
For example, Alice transmits her public key (n, e) to Bob and keeps the private key secret. Bob then wishes to send a message M to Alice. He first turns M into an integer m, such that 0 ≤ m < n, by using an agreed-upon reversible protocol known as a padding scheme. He then computes the ciphertext c as

$c \equiv m^e \pmod{n}$
2.3 Decryption
Alice can recover m from c by using her private-key exponent d via computing

$m \equiv c^d \pmod{n}$

Given m, she can recover the original message M by reversing the padding scheme.
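A toy numerical illustration of the three steps above, with deliberately tiny primes that would be insecure in practice (real keys use primes of hundreds of digits plus a proper padding scheme); the modular inverse requires Python 3.8+.

from math import gcd

# Key generation
p, q = 61, 53
n = p * q                      # modulus, part of the public key
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi

# Encryption and decryption
m = 42                         # message already padded so that 0 <= m < n
c = pow(m, e, n)               # c = m^e mod n
assert pow(c, d, n) == m       # m = c^d mod n recovers the message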

3. DISCRETE WAVELET TRANSFORM
It permits image decomposition into several types of coefficients while preserving the image information. Such coefficients, coming from different images, can be suitably combined to obtain new coefficients, so that the information in the original images is collected appropriately. In the discrete wavelet transform (DWT), a two-channel filter bank is employed; after decomposition, the approximation and detail components are separated. The 2-D discrete wavelet transform (DWT) converts the image from the spatial domain to the frequency domain.
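A brief sketch of a one-level 2-D DWT using the PyWavelets library; the Haar wavelet and the random stand-in image are illustrative assumptions, since the paper does not specify the mother wavelet.

import numpy as np
import pywt  # PyWavelets

image = np.random.rand(256, 256)            # stand-in for a medical image
# One-level 2-D DWT: approximation plus horizontal/vertical/diagonal details
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
# A watermark could be blended into a detail subband here, e.g. cD
reconstructed = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print(np.allclose(image, reconstructed))    # perfect reconstruction: True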

4. PEAK SIGNAL TO NOISE RATIO
PSNR is most typically used to measure the quality of reconstruction of lossy compression codecs (e.g., for image compression). The signal in this case is the original data, and the noise is the error introduced by compression. When comparing compression codecs, PSNR is an approximation of the human perception of reconstruction quality. Although a higher PSNR generally indicates that the reconstruction is of higher quality, in some cases it may not.

PSNR is most easily defined via the mean squared error (MSE).

Given a noise-free m×n monochrome image I and its noisy approximation K, MSE is defined as:

$\mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left[I(i,j) - K(i,j)\right]^2$

The PSNR is defined as:

$\mathrm{PSNR} = 20\log_{10}(\mathrm{MAX}_I) - 10\log_{10}(\mathrm{MSE})$

Here, MAX_I is the maximum possible pixel value of the image; when the pixels are represented using eight bits per sample, this is 255.

For color images with three RGB values per pixel, the definition of PSNR is the same, except that the MSE is the sum over all squared value differences divided by the image size and by three. Alternately, for color images the image is converted to a different color space and PSNR is reported against each channel of that color space.
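The two formulas above translate directly into a few lines of Python; max_i = 255 assumes 8-bit samples, as stated.

import numpy as np

def psnr(original, noisy, max_i=255.0):
    # PSNR in dB from the MSE definition above (monochrome images)
    mse = np.mean((original.astype(np.float64) - noisy.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 20.0 * np.log10(max_i) - 10.0 * np.log10(mse)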


4.1 Testing Topology
Depending on the information that is made available to the algorithm, video quality check algorithms may be divided into three categories:
1. A "Full Reference" (FR) algorithm has access to, and makes use of, the original reference sequence for a comparison (i.e. a difference analysis). It can compare each pixel of the reference sequence to the corresponding pixel of the degraded sequence. FR measurements give the highest accuracy and repeatability but tend to be processing-intensive.
2. A "Reduced Reference" (RR) algorithm uses a reduced side channel between the sender and the receiver which is not capable of transmitting the complete reference signal. Instead, parameters are extracted at the sending side that help predict the quality at the receiving side. RR measurements may offer reduced accuracy and represent a working compromise if the bandwidth for the reference signal is restricted.
3. A "No Reference" (NR) algorithm only uses the degraded signal for the quality estimation and has no information about the original reference sequence. NR algorithms are of low accuracy, as the originating quality of the source reference is completely unknown. A common variant of NR algorithms does not analyze the decoded video at the pixel level but works only on an analysis of the digital bit stream at the IP packet level; the measurement is consequently restricted to a transport stream analysis.
Peak signal-to-noise ratio (PSNR) is a ubiquitously used image processing function for comparing two images. It is the most rudimentary estimate of the difference between two images and is based on the mean squared error (MSE).

5. BLOCK DIAGRAM

Fig-1 Block diagram of the system
To hide an image in the carrier video, the image is encoded using SPIHT and the discrete wavelet transform is then applied. Watermarking is used to hide that image in the video; after the image is hidden, there is no visible difference between the input video and the watermarked video. The image can be recovered using SPIHT decoding and the inverse wavelet transform.

The output is given below

Fig-2 output
6. CONCLUSION
Watermarking is used in covert communication to transport secret information. To hide a secret message in an image, the message is embedded into a smaller matrix of size 8x8 and inserted into the input image. In this paper the RSA algorithm is used to hide an image in a video, with the video used as the carrier. An improvement of this application would be extending its functionality to support hiding data in video files of other formats.




Design of Substrate Integrated Waveguide Bandpass Filter of CSRRs in the Microstrip Line
DAMOU Mehdi¹,², NOURI Keltouma¹,², Taybe Habib Chawki BOUAZZA¹, Meghnia Feham²
¹Laboratoire de Technologies de Communications LTC, Faculté de Technologie, Université Dr Moulay Tahar, BP 138, Ennasr, Saida, Algérie
²Laboratoire de recherche Systèmes et Technologies de l'Information et de la Communication STIC, Faculté des Sciences, Université de Tlemcen, BP 119, Tlemcen, Algérie
E-mail: bouazzamehdi@yahoo.fr
Abstract— A novel band-pass Substrate Integrated Waveguide (SIW) filter based on Complementary Split Ring Resonators (CSRRs) is presented in this work: an X-band wideband bandpass filter based on a novel SIW-to-CSRR (SIW-CSRR) cell. In the cell, the CSRRs are etched on the top plane of the SIW with high accuracy, so that the performance of the filter is kept as good as possible. The filter, consisting of three cascaded cells, is designed for compact size; three different CSRR cells are etched in the top plane of the SIW for transmission-zero control. A demonstration band-pass filter is designed, and it agrees well with the simulated results. The structure is designed with the Method of Moments (MoM) using CST on a single substrate of RT/Duroid 5880. Simulated results are presented and discussed.
Index Terms— Substrate Integrated Waveguide, Complementary split ring resonators (CSRRs), band-pass, via, SIW, simulation

Introduction: Very recently, complementary split ring resonator (CSRR) elements have been proposed for the synthesis of negative-permittivity and left-handed (LH) metamaterials in planar configuration [1] (see Fig. 1). As explained in [2], CSRRs are the dual counterparts of split ring resonators (SRRs), also depicted in Fig. 1, which were proposed by Pendry in 1999. It has been demonstrated that CSRRs etched in the ground plane or in the conductor strip of planar transmission media (microstrip or CPW) provide a negative effective permittivity to the structure, and signal propagation is precluded (stopband behavior) in the vicinity of their resonant frequency [2]. CSRRs have been applied to the design of compact band-pass filters with high performance and controllable characteristics [3]. Recently, the concept of the Substrate Integrated Waveguide (SIW) has attracted much interest in the design of microwave and millimeter-wave integrated circuits. The SIW is synthesized by placing two rows of metallic via-holes in a substrate; the field distribution in an SIW is similar to that in a conventional rectangular waveguide. Hence, it has the advantages of low cost, high Q-factor, etc., and can easily be integrated into microwave and millimeter-wave integrated circuits [4]. This technology is also feasible for waveguides in low-temperature co-fired ceramic (LTCC). SIW components such as filters, multiplexers, and power dividers have been studied in [5]. In this paper, a band-pass SIW filter based on CSRRs is proposed for the first time. The filter consists of the input and output coupling lines with the CSRR-loaded SIW; using the high-pass characteristic of the SIW and the band-stop characteristic of the CSRRs, a bandpass SIW filter is designed. We will also carry out a detailed investigation of CSRR-based stop-band filters: starting with a single CSRR etched in the microstrip line, finding its stop-band characteristics and quality factor, and then investigating the effect of the number of CSRR etchings and their periodicity on the stop-band filter performance.
ANALYSIS OF SIW-CSRRs CELL

The proposed SIW-CSRR cell is shown in Fig. 1. Since the CSRRs are etched into the top metal cover of the SIW, it is quite convenient for system integration. The bandpass function of this proposed SIW-CSRR cell is of the composite high-low (Hi-Lo) type, i.e., it is a combination of the high-pass guided-wave function of the SIW and the bandgap function of the CSRRs.
PARAMETER DESIGN OF SIW
The SIW is constructed from the top and bottom metal planes of the substrate and two arrays of via holes in both side walls, as shown in Fig. 2. Each via hole must be shorted to both planes in order to provide vertical current paths; otherwise the propagation characteristics of the SIW will be significantly degraded. Since the vertical metal walls are replaced by via holes, the propagating modes of the SIW are very close to, but not exactly the same as, those of a rectangular waveguide [6].
By using the equivalent resonance frequency, the size of the SIW cavity is determined from [7]:

$f_{101} = \frac{c}{2\sqrt{\varepsilon_r}}\sqrt{\left(\frac{1}{w_{\mathrm{eff}}}\right)^2 + \left(\frac{1}{l_{\mathrm{eff}}}\right)^2}$   (1)

Fig. 2 Topology of the substrate integrated waveguide

Fig. 1 Geometries of the CSRRs and the SRRs; grey zones represent the metallization.



This is to ensure that the SIW filter is able to support the TE10 mode in the operating frequency range. The TE-field distribution in the SIW is just like that in a conventional rectangular waveguide. The effective width and length of the SIW cavity can be determined from:

$w_{\mathrm{eff}} = w - \frac{d^2}{0.95\,p}, \qquad l_{\mathrm{eff}} = l - \frac{d^2}{0.95\,p}$   (2)

where w and l are the real width and length of the SIW cavity, D is the via diameter, and P is the pitch, i.e. the centre-to-centre distance between adjacent via holes, as shown in Fig. 3.





Via holes form a main part of the SIW, realizing the bilateral edge walls; the shrinking and large-scale integration of electronic devices place remarkable demands on multilayer geometries, which are also important for discontinuities in multilayered circuits. The diameter and pitch are given by:

$d < \lambda_g / 5$   (3)
$p \le 2d$   (4)
In order to minimize the leakage loss between adjacent holes, the pitch needs to be kept as small as possible, based on (3) and (4) above. The diameter of the via holes also contributes to the losses. As a consequence, the ratio d/p becomes more critical than the pitch size itself, because the pitch and diameter are interrelated and can degrade the return loss of the waveguide section seen from its input port [21, 11]. SIW components can be initially designed using the equivalent rectangular waveguide model in order to diminish design complexity. The effective width of the SIW can be defined by:

$w_{\mathrm{eff}} = w - \frac{d^2}{0.95\,p}$   (5)

Figure 3: Via hole (diameter D and pitch P)

Substrate Integrated Waveguide
The SIW features high-pass characteristics; it was demonstrated in [8] that a TE10-like mode in the SIW has dispersion characteristics that are almost identical to the mode of a dielectric-filled rectangular waveguide of equivalent width. This equivalent width is the effective width of the SIW and can be approximated as follows:

$a_{\mathrm{eqv}} = a - \frac{d^2}{0.95\,p}$   (6)

Then, the cutoff frequency of the SIW can be defined as $f_c = c/(2\,a_{\mathrm{eqv}}\sqrt{\varepsilon_r})$, in which c is the velocity of light in vacuum. Based on this property, existing design techniques for rectangular waveguide can be used in a straightforward way to analyze and design various components just by knowing a_eqv of the SIW. In this case, the SIW geometry size can be initially designed from (6).
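A small Python sketch of this initial-sizing procedure, using the equivalent-width approximation of eq. (6) and the cutoff-frequency expression above. The numerical values are the SIW dimensions quoted later in this paper, and the formula is the commonly used approximation rather than a full-wave result.

import math

C0 = 299_792_458.0                          # speed of light in vacuum (m/s)

def siw_effective_width(a, d, p):
    # Equivalent-width approximation of eq. (6): a_eqv = a - d^2 / (0.95 p)
    return a - d * d / (0.95 * p)

def siw_cutoff_te10(a, d, p, eps_r):
    # Cutoff frequency of the TE10-like mode: fc = c / (2 a_eqv sqrt(eps_r))
    a_eqv = siw_effective_width(a, d, p)
    return C0 / (2.0 * a_eqv * math.sqrt(eps_r))

# a = 14 mm, D = 0.8 mm, P = 1.6 mm, RT/Duroid 5880 with eps_r = 2.2
print(siw_cutoff_te10(14e-3, 0.8e-3, 1.6e-3, 2.2) / 1e9, "GHz")  # ~7.4 GHz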
CSRR-Loaded SIW
Fig. 4 shows the layout of an SIW with CSRRs etched in the top substrate.

Let us now analyze the CSRR-loaded SIW. Since the CSRRs are etched in the centre of the top layer and are mainly excited by the electric field induced by the SIW, this coupling can be modeled by connecting the SIW capacitance to the CSRRs. Accordingly, the proposed lumped-element equivalent circuit for the CSRR-loaded SIW is that depicted in Fig. 5. As long as the electrical size of the CSRRs is small, the structure can be described by means of lumped elements. In these models, L is the SIW inductance and C is the coupling capacitance between the SIW and the CSRR. The resonator is described by means of a parallel tank [9], Lc and Cc being the reactive elements, and R accounting for losses.




Figure 4. Layout of an SIW with CSRRs etched in the top substrate side: (a) top layer

Fig. 5 The equivalent circuit model of the CSRR-loaded SIW


In order to demonstrate the viability of the proposed technique, we have applied it to the determination of the electrical parameters of
the single cell CSSRs loaded SIW.
First Design Example
The specifications for the design example are:
• Frequency band: 2 to 15 GHz
• Substrate: Duroid (εr = 2.2, h = 0.254 mm)

The dimensions of the SIW are: a = 14 mm. The equivalent width of the microstrip line is w = 0.8 mm, and the taper of the microstrip line has a length equal to 5.5 mm; the SIW dimensions are a = 14 mm, D = 0.8 mm and P = 1.6 mm, respectively. The width of the access lines is 0.76 mm. The S-parameters of the structure of Fig. 5, simulated using CST Microwave Studio, are shown in Fig. 6. It can be clearly seen that these structures exhibit similar characteristics. Excellent results are also obtained for this transition, as shown in Fig. 7.

RESULTS AND DISCUSSION
A CSRR structure is designed to resonate at 9.17 GHz in the X-band microwave frequency region. The dimensions of the CSRR structure are c = 4 mm, d = 2 mm, f = 0.3 mm, s = 0.2 mm and g = 0.4 mm. The dependence of the resonant frequency on the dimensions of the CSRR structure is observed as follows: with an increase of the ring width (c) and gap width (d), the resonant frequency increases. The CSRR structure is placed in the microstrip line exactly below the center of a ground plane of width 2.89 mm on an RT/Duroid 5880 substrate (dielectric constant εr = 2.22, thickness h = 0.254 mm and tan δ = 0.002), as shown in Fig. 6. The same substrate is used for all other later designs. All the designs are simulated using the CST Microwave software [8]. The simulation results for a single CSRR etched in a microstrip line are shown in Fig. 7.
Fig. 6 Topology of the substrate Integrated Waveguide

The scattering parameters versus frequency (GHz) show a narrow stop band at the CSRR resonant frequency of 8.3 GHz. By placing a single CSRR structure in the strip line, a narrow stop band with a very low insertion-loss level is obtained, which is not possible with conventional microstrip resonators; it is difficult to achieve such a good narrow stop-band response with a single element of a conventional resonator. The stop bandwidth of the above single-CSRR-loaded microstrip line filter is approximately 456 MHz at the resonant frequency of 9.17 GHz.
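These quoted figures correspond to a loaded Q of roughly 20; a one-line check, under the assumption Q = f0/BW:

    f0_mhz, bw_mhz = 9170.0, 456.0
    print(f0_mhz / bw_mhz)        # ~20.1: a fairly selective single-cell notch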
Design of proposed transition
In order to combine SIW and microstrip technologies, SIW-to-microstrip transitions are essential [10]-[11]. The SIW filter and tapered transition shown in Fig. 8 have been studied. The substrate used in the filter is RT/Duroid 5880, with permittivity 2.22 and height 0.254 mm; the distance between the rows of via centres is w = 15 mm, the diameter of the metallic vias is D = 0.8 mm, and the via period is P = 1.6 mm. The taper width Wt is 1.72 mm, its length Lt is 5.5 mm, and the thickness of the ground plane and microstrip line is t = 0.035 mm.
Fig. 7. Simulated frequency response corresponding to the basic cell.

Fig. 8. Configuration of the proposed SIW filter.



Table 1: Dimensions of the CSRR and SIW structures
Here our concern is to enhance the stop-band filter characteristics by increasing the number of CSRR structures in the ground plane. This is achieved by placing more CSRRs with the same resonant frequency periodically. Such a stop-band filter structure is shown in Fig. 8; it has three CSRR structures in the strip line, all resonating at the same frequency of 8.3245 GHz. The distance between the centres of any two adjacent CSRRs is known as the period, and it is 6 mm for this filter. The simulation results are shown in Fig. 9; a circuit-level sketch of this cascading effect is given after the dimension tables below.
CSRR 1 dimensions
Symbol   Value (mm)   Symbol   Value (mm)
c        3.7          f        0.3
d        1.85         s        0.2
f        0.3          g        0.4

CSRR 2 dimensions
Symbol   Value (mm)   Symbol   Value (mm)
c        4            f        0.3
d        2            s        0.2
f        0.3          g        0.4

CSRR 3 dimensions
Symbol   Value (mm)   Symbol   Value (mm)
c        3.8          f        0.3
d        1.9          s        0.2
f        0.3          g        0.4

SIW dimensions
Symbol   Value (mm)   Symbol   Value (mm)
Lt       5.5          Wt       1.72
WSIW     0.8          LSIW     1.9
D        0.8          P        1.6
a        14           L        32
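The sketch promised above: each CSRR cell is idealized here as a shunt series-RLC notch on a 50-ohm line, which is not the paper's full model; cascading three cells with slightly staggered resonances (the three frequencies below are hypothetical stand-ins for the detuning implied by the c = 3.7/4.0/3.8 mm rings of Table 1) widens the composite stop band, illustrating why periodic loading broadens the notch:

    import numpy as np

    Z0 = 50.0                                   # reference impedance, ohm
    f = np.linspace(6e9, 11e9, 2001)            # frequency sweep, Hz
    w = 2 * np.pi * f

    def mat_mul(A, B):
        """Frequency-wise product of two 2x2 ABCD matrices."""
        return np.array([[A[0,0]*B[0,0] + A[0,1]*B[1,0], A[0,0]*B[0,1] + A[0,1]*B[1,1]],
                         [A[1,0]*B[0,0] + A[1,1]*B[1,0], A[1,0]*B[0,1] + A[1,1]*B[1,1]]])

    def cell(f0, L=1.0e-9, r=0.5, length=6e-3, eps_eff=1.8):
        """One idealized cell: a shunt series-RLC notch at f0, then a 6 mm line."""
        C = 1.0 / ((2 * np.pi * f0)**2 * L)                 # tune the notch to f0
        Y = 1.0 / (r + 1j*w*L + 1.0 / (1j*w*C))             # shunt-branch admittance
        bl = w * np.sqrt(eps_eff) / 3e8 * length            # electrical length of the line
        ones, zeros = np.ones_like(w) + 0j, np.zeros_like(w) + 0j
        shunt = np.array([[ones, zeros], [Y, ones]])
        line = np.array([[np.cos(bl) + 0j, 1j * Z0 * np.sin(bl)],
                         [1j * np.sin(bl) / Z0, np.cos(bl) + 0j]])
        return mat_mul(shunt, line)

    # three cells with slightly staggered resonances (hypothetical frequencies)
    M = cell(8.3e9)
    for f0 in (8.7e9, 9.0e9):
        M = mat_mul(M, cell(f0))

    A, B, C, D = M[0,0], M[0,1], M[1,0], M[1,1]
    s21_db = 20 * np.log10(np.abs(2.0 / (A + B/Z0 + C*Z0 + D)))
    band = f[s21_db < -10.0]
    print(f"composite stop band (|S21| < -10 dB): {band[0]/1e9:.2f} to {band[-1]/1e9:.2f} GHz")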


The simulation results depicted in Fig. 9 show a stop band at 8.3245 GHz with a stop bandwidth of approximately 1.75 GHz.
Fig. 9. Simulation results for the proposed SIW-CSRR filter cell with different parameter values.




Fig. 10. Simulated S11 of the proposed SIW-CSRR filter cell for t = 0.015, 0.025 and 0.035 mm.



Fig. 11. Simulated S21 of the proposed SIW-CSRR filter cell for t = 0.015, 0.025 and 0.035 mm.



In order to achieve a low-loss broadband response, the transition is designed by simultaneously considering both impedance matching and field matching. Thus, owing to the electric-field distribution in the SIW, each transition is connected at the centre of the SIW width, where the electric field of the fundamental mode is maximum [8]. The transition is then optimized by means of electromagnetic simulations, varying the dimensions (Lt, Wt) of the tapered geometry. After optimization, the retained dimensions are Wt = 1.72 mm and Lt = 5.5 mm.
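The optimization loop itself can be sketched as below; the cost function here is a smooth stand-in (a toy quadratic centred on the dimensions quoted above), since in practice each evaluation would be a CST electromagnetic simulation returning the in-band reflection:

    from scipy.optimize import minimize

    def cost(x):
        """Stand-in for the EM objective (e.g. worst-case |S11| over the band).
        This toy bowl is centred on the quoted optimum and is NOT a model of
        the physical transition."""
        Lt, Wt = x                                   # taper length and width, mm
        return (Lt - 5.5)**2 / 10.0 + (Wt - 1.72)**2

    res = minimize(cost, x0=[4.0, 1.2], bounds=[(2.0, 8.0), (0.5, 3.0)],
                   method="L-BFGS-B")
    print(res.x)                                     # -> approximately [5.5, 1.72]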
The distribution of the electric field is given in Fig. 12.
Fig. 12. Electric field distribution of the proposed filter with three cascaded SIW-CSRR cells: (a) bottom layer, (b) top layer.

DESIGN OF SIW FILTER
Filter Configuration
Fig. 13 shows the proposed filter design, which includes two tapered microstrip transitions and four SIW resonator cavities.
Table 2: Dimensions of the CSRR and SIW structures of the proposed filter
Fig. 13. Configuration of the proposed SIW filter: d = 2 mm, s = 0.2 mm, g = 0.4 mm, a = 14 mm, D = 0.8 mm and P = 1.6 mm.
CSRR dimensions
Symbol   Value (mm)   Symbol   Value (mm)
c        1.5          f        0.3
d        1            s        0.1
f        0.15         g        0.2
L        4            x        2

SIW dimensions
Symbol   Value (mm)   Symbol   Value (mm)
Lt       5.5          Wt       1.72
WSIW     0.8          LSIW     1.9
D        0.8          P        1.6
a        14           L        32


Since the field distribution of the SIW mode has dispersion characteristics similar to those of the corresponding mode of a conventional dielectric-filled waveguide, the proposed SIW band-pass filter makes use of the same design method as a dielectric waveguide filter, and the filter can be designed according to the specifications [9]-[10]. Fig. 14 shows the simulation results of the band-pass filter structure shown in Fig. 13. The scattering parameters (S11 and S12) are plotted against frequency; the results show a mid-band frequency of 10 GHz, with the band extending from approximately 8 GHz to 12 GHz (about 4 GHz wide). The period of the CSRR-based stop-band filter is changed to 6 mm, and the number of CSRRs in the ground plane is the same as in the previous design.
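A hedged sketch of this cavity-based design step: the TE101 resonance of one dielectric-filled SIW cavity from the standard rectangular-cavity formula, applying the same effective-width correction as before to both the width and the cavity length (the square cavity length used here is illustrative, not taken from the paper):

    import math

    def te101_ghz(a_mm, l_mm, d_mm, p_mm, eps_r):
        """TE101 resonance of a dielectric-filled SIW cavity (approximate)."""
        corr = d_mm**2 / (0.95 * p_mm)               # via-row correction
        a_eff, l_eff = a_mm - corr, l_mm - corr
        c = 299.792458                               # speed of light, mm·GHz
        return (c / (2.0 * math.sqrt(eps_r))) * math.hypot(1.0 / a_eff, 1.0 / l_eff)

    # a 14 mm x 14 mm cavity on the same substrate resonates near the 10 GHz band centre
    print(te101_ghz(14.0, 14.0, 0.8, 1.6, 2.2))      # ~10.5 GHz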
The simulated S-parameters are shown in Fig. 14. From the simulated results, the filter has a central frequency of 10 GHz, a fractional bandwidth of 72%, and a return loss better than 20 dB over the whole passband.
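Quick bookkeeping on the quoted figures, assuming the usual definition FBW = (f2 − f1)/f0:

    f0, fbw = 10.0, 0.72              # GHz, quoted centre frequency and FBW
    bw = fbw * f0                     # absolute bandwidth, GHz
    print(f0 - bw / 2, f0 + bw / 2)   # implied band edges: 6.4 and 13.6 GHz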
Fig. 14. Scattering parameters of the stop-band filter having three CSRRs in the strip line.
Fig. 15. Electric field distribution of the proposed filter with three cascaded SIW-CSRR cells: (a) top layer, (b) bottom layer.



CONCLUSION
In this paper, a Substrate Integrated Waveguide (SIW) filter based on complementary split-ring resonators (CSRRs), the sub-wavelength resonator components of left-handed metamaterials, has been designed for X-band applications; CSRRs enable more compact planar microstrip stop-band filters. The structure was simulated using CST software. This type of filter is suitable for high-density integrated microwave and millimeter-wave applications. The design method has been discussed, and the effect of the coupling aperture width on coupling and isolation has been studied. By using SIW techniques, the CSRR-based filter is compact and easy to integrate with other planar circuits compared with conventional waveguide implementations. A single CSRR particle in the microstrip line gives a very narrow stop band at its resonant frequency with an extremely high Q factor, whereas periodically placing these CSRR structures gives wide stop bands. This is especially beneficial for the growing number of microwave circuits required in compact integrated circuits (ICs) for wireless communications.

REFERENCES:
[1] David M. Pozar, "Microwave Engineering," Third Edition, John Wiley & Sons Inc., 2005.
[2] T. Djerafi and K. Wu, "Super-compact substrate integrated waveguide cruciform directional coupler," IEEE Microwave and Wireless Components Letters, Vol. 17, No. 11, pp. 757-759, Nov. 2007.
[3] Peng Chen, Guang Hua, De Ting Chen, Yuan Chun Wei, and Wei Hong, "A double layer crossed over substrate integrated waveguide wideband directional coupler," Asia-Pacific Microwave Conference (APMC 2008), pp. 1-4, 16-20 Dec. 2008.
[4] J. B. Pendry, A. J. Holden, D. J. Robbins, and W. J. Stewart, "Magnetism from conductors and enhanced nonlinear phenomena," IEEE Trans. Microw. Theory Tech., Vol. 47, No. 11, Nov. 1999.
[5] F. Falcone, T. Lopetegi, J. D. Baena, R. Marqués, F. Martín, and M. Sorolla, "Effective negative-ε stop-band microstrip lines based on complementary split ring resonators," IEEE Microw. Wireless Compon. Lett., Vol. 14, No. 6, pp. 280-282, Jun. 2004.
[6] S. N. Burokur, M. Latrach, and S. Toutain, "Analysis and design of waveguides loaded with split-ring resonators," Journal of Electromagnetic Waves and Applications, Vol. 19, No. 11, pp. 1407-1421, 2005.
[7] W. Xu, L. W. Li, H. Y. Yao, T. S. Yeo, and Q. Wu, "Left-handed material effects on waves modes and resonant frequencies: filled waveguide structures and substrate-loaded patch antennas," Journal of Electromagnetic Waves and Applications, Vol. 19, No. 15, pp. 2033-2047, 2005.
[8] J. Bonache, I. Gil, J. García-García, and F. Martín, "Novel microstrip bandpass filters based on complementary split-ring resonators," IEEE Trans. Microw. Theory Tech., Vol. 54, No. 1, pp. 265-271, Jan. 2006.
[9] J. Bonache, F. Martín, I. Gil, J. García-García, R. Marqués, and M. Sorolla, "Microstrip bandpass filters with wide bandwidth and compact dimensions," Microw. Opt. Technol. Lett., Vol. 46, No. 4, pp. 343-346, Aug. 2005.
[10] Y. Cassivi, L. Perregrini, P. Arcioni, M. Bressan, K. Wu, and G. Conciauro, "Dispersion characteristics of substrate integrated rectangular waveguide," IEEE Microw. Wireless Compon. Lett., Vol. 12, No. 9, pp. 333-335, Sep. 2002.
[11] J. H. Lee, S. Pinel, J. Papapolymerou, J. Laskar, and M. M. Tentzeris, "Low-loss LTCC cavity filters using system-on-package technology at 60 GHz," IEEE Trans. Microw. Theory Tech., Vol. 53, No. 12, pp. 3817-3824, Dec. 2005.

Comparative Study on Hemispherical Solar Still with Black Ink Added
Ajayraj S. Solanki¹, Umang R. Soni², Palak Patel¹
¹Research Scholar (M.E.), Department of Mechanical Engineering, Sardar Patel Institute of Technology, Piludara, Mehsana
²Research Scholar (Ph.D.), Department of Mechanical Engineering, PAHER, Udaipur, Rajasthan; Gayatrinagar Society, Gujarat
E-mail: soniur@gmail.com
Abstract— Water is a basic need for sustaining life on the earth. Over time, technological usage and waste disposal, together with human negligence, have caused water pollution, which has led the world towards water scarcity. Solar distillation is one of the best available techniques to address this problem, but its low productivity has kept it from being commercial in the market, so a great deal of work is being done to improve the efficiency and productivity of solar stills. The present experimental work measures the effect of black ink on a hemispherical solar still. Operation at different water depths with a constant proportion of ink, and at the same depth with increasing proportions of ink, has been compared with a simple hemispherical solar still. From this experimental study it is observed that the productivity of the hemispherical solar still increases as the water depth decreases. With 1.25% black ink added, the productivity of the hemispherical still increased by 17% to 20%, and with 2% black ink it increased by up to 25%.
Keywords— passive solar still, hemispherical solar still, black ink, polycarbonate glass, condensing glass cover, active solar still, absorbing material
INTRODUCTION
Water is a basic need for sustaining life on the earth. Over time, technological usage, waste disposal and human negligence have caused water pollution, which has led the world towards water scarcity. Due to water pollution, surface and underground water reservoirs are now highly contaminated. Many human diseases are due to brackish-water problems: around 1.5 to 2 million children die and 35 to 40 million people are affected by water-borne diseases. Moreover, increasing industrial activity may lead to a situation in which countries need to reconsider their options with respect to the management of their water resources. Only around 3% of the world's water is potable, and this amount is not evenly distributed on the earth, so both developed and under-developed countries suffer from a shortage of potable water.
Distillation is one of the oldest techniques to convert brackish or salty water into potable water. Various desalination technologies have been invented from time to time and accepted by people without knowledge of the future environmental consequences. Desalination techniques such as vapour-compression distillation, reverse osmosis and electrolysis use electricity as input energy. In recent years, however, most countries in the world have been significantly affected by the energy crisis because of heavy dependency on conventional energy sources (coal power plants, fossil fuels, etc.), which has directly affected the environment and economic growth of these countries. The changing climate is one of the major challenges the entire world faces today: the gradual rise in global average temperature, the increase in sea level, and the melting of glaciers and ice sheets have underlined the immediate need to address the issue.
These problems can be addressed through efficient and effective utilization of renewable energy resources such as solar, wind, biomass, tidal and geothermal energy. An alternative solution is the solar distillation system; a device that uses solar energy to distil water is called a solar still. A solar still is very simple to construct, but due to its low productivity and efficiency it is not widely used commercially. A solar still works on sunlight, which is free of cost, but it requires more space.
SOLAR DISTILLATION SYSTEM
G. N. Tiwari et al. reviewed the present status of solar distillation systems for both passive and active models. A large group of authors in this field has reported that the passive solar distillation system is a slow process for purification of brackish water. The yield of such a still is about 2 L/day per m² of still area, which is low and may not be economically useful. However, the yield can be increased by integrating a solar collector into the basin; such systems are generally referred to as active solar stills. The collector may be a flat-plate collector, a solar concentrator or an evacuated collector, producing temperatures in the range of 80-120°C depending on the collector type. However, the temperature within the still is reduced to about 80°C due to the high heat capacity of the water mass in the basin. Hence such active systems find a practical application in extracting the essence of medicinal plants placed under the solar still at about 80°C, and systems used for this purpose have become economical. [1]
Salah Abdallah et al. measured the effect of various absorbing materials on the thermal performance of solar stills. From this experiment they found a strong need to improve the thermal performance of the single-slope solar still and increase the production rate of distilled water. Different types of absorbing materials were used to examine their effect on the yield of solar stills. These absorbing materials are of two types: coated and uncoated porous media (called metallic wiry sponges) and black volcanic rocks. Four identical solar stills were manufactured using locally available materials. The first three solar stills contained black-coated and uncoated metallic wiry sponges made from AISI 430 steel and black rocks collected from the Mafraq area in north-eastern Jordan. The fourth still was used as a reference and contained no absorbing materials (only a black-painted basin). The results showed that the uncoated sponge gave the highest water collection during daytime, followed by the black rocks and then the coated metallic wiry sponges. Taking the overnight water collection into consideration, the overall average gains in collected distilled water were 28%, 43% and 60% for coated metallic wiry sponges, uncoated metallic wiry sponges and black rocks, respectively. [2]

V. K. Dwivedi et al. compared the internal heat-transfer coefficients obtained from different thermal models of passive solar stills through experimental validation. An attempt was made to evaluate the internal heat-transfer coefficient of single- and double-slope passive solar stills in summer as well as winter climatic conditions for three different water depths (0.01, 0.02 and 0.03 m) using various thermal models. The experimental validation of distillate yield was carried out for the composite climate of New Delhi, India (latitude 28°35′N, longitude 77°12′E). By comparing theoretical values of hourly yield with experimental data, it was observed that Dunkle's model gives the best agreement between theory and experiment, and Dunkle's model was therefore used to evaluate the internal heat-transfer coefficient for both single- and double-slope passive solar stills. With the increase in water depth from 0.01 m to 0.03 m there was only a marginal variation in the convective heat-transfer coefficients. It was also observed that, on an annual basis, the output of a single-slope solar still (499.41 l/m²) is better than that of a double-slope solar still (464.68 l/m²). [3]
Sangeeta Suneja et al. measured the effect of water depth on the performance of an inverted-absorber double-basin solar still, presenting a transient analysis of the still. Explicit expressions were derived for the temperatures of the various components of the inverted-absorber double-basin solar still and for its efficiency, and the effect of water depth in the lower basin on system performance was investigated comprehensively. To illustrate the analytical results, numerical calculations were made using meteorological parameters for a typical winter day in Delhi. They observed that the daily yield of an inverted-absorber double-basin solar still increases with the water depth in the lower basin for a given water mass in the upper basin. [4]
G. N. Tiwari et al. worked on computer modelling of passive/active solar stills using the inner glass temperature. Expressions for water and glass temperatures, hourly yield and instantaneous efficiency for both passive and active solar distillation systems were derived, based on the basic energy balance for both systems. A computer model was developed to predict the performance of the stills based on both the inner and the outer glass temperatures. Two sets of values of C and n (C_inner, n_inner and C_outer, n_outer), obtained from the experimental data of January 19, 2001 and June 16, 2001 under Delhi climatic conditions, were used. It was concluded that (i) the operating temperature range has a significant effect on the internal heat-transfer coefficients, and (ii) considering the inner glass-cover temperature gives reasonable agreement between the experimental and the predicted theoretical results. [5]
Bhagwan Prasad and G. N. Tiwari presented an analysis of a double-effect active solar distillation unit, incorporating the effects of climatic and design parameters. Based on an energy balance in a quasi-steady condition, an analytical expression for the hourly yield of each effect was derived. Numerical computations were carried out for a typical day in Delhi, and the results were compared with a single-effect active solar distillation unit. A significant improvement in performance was observed for a minimum flow rate of water in the upper basin. [6]
T. Arunkumar et al. carried out an experimental study on a hemispherical solar still, reporting a new still design with a hemispherical top cover for water desalination, operated with and without water flowing over the cover. The daily distillate output of the system was increased by lowering the cover temperature with the flowing water. The fresh-water production performance of this new still was observed at Sri Ramakrishna Mission Vidyalaya College of Arts and Science, Coimbatore (11°N, 77°E), India. The efficiency was 34%, increasing to 42% with the top-cover cooling effect. Diurnal variations of several important parameters, such as water temperature, cover temperature, air temperature, ambient temperature and distillate output, were observed during the field experiments, and the solar radiation incident on the still is also discussed. [7]
Basel I. Ismail presented the design and performance of a transportable hemispherical solar still. A simple transportable hemispherical still was designed and fabricated, and its performance was experimentally evaluated under the outdoor climatic conditions of Dhahran. Over the hours of daytime testing, the daily distilled-water output from the still ranged from 2.8 to 5.7 l/m² per day. The daily average efficiency of the still reached as high as 33%, with a corresponding conversion ratio near 50%. It was also found that the average efficiency of the still decreased by 8% when the saline-water depth was increased by 50%. [8]

S. Siva Kumar et al. worked on a single-basin double-slope solar still made of mild-steel plate with different sensible-heat storage materials, such as quartzite rock, red brick pieces, cement concrete pieces, washed stones and iron scraps. Of the different energy-storing materials used, 3/4 in. quartzite rock was the most effective. [9]


Yousef H. Zurigat et al. worked on a regenerative solar still. They observed that insulation has a greater effect on the regenerative still than on a simple still, and that productivity increases by up to 50% when the wind speed increases from 0 to 10 m/s. [10]

Hiroshi Tanaka et al. presented a theoretical analysis of a basin-type solar still with an internal reflector (two side walls and the back wall). They observed that the benefit of a vertical external reflector would be small or even negligible; the daily productivity with an inclined external reflector was 16% greater than that with the vertical external reflector. [11]

Badshah Alam et al. presented a comparative evaluation of the annual performance of single-slope passive and hybrid photovoltaic-thermal (PVT) active solar stills. A higher yield was obtained from the active solar still, with the ratio depending on the climatic conditions during the year. The active solar still achieved an efficiency of 9.1-19.1%, while the passive solar still performed at 9.8-28.4% during the year. [12]
EXPERIMENTAL STUDY OF SOLAR STILL
Experimental measurements were performed to evaluate the performance of the solar still under the outdoor climatic conditions of Mehsana (latitude 23°13′N, longitude 72°39′E), within the campus area. The entire assembly was made airtight with the help of silicone gel. The basin of the solar still was constructed from 14-gauge galvanized iron sheet, and the condensing cover was made of clear polycarbonate, 2 mm thick. The basin liner was painted with black oil paint on the inner surface of the basin, giving an effective absorber area of 0.08 m² in a flat-based circular section. The insulation was 10 mm thick on each side, with thermocol used as the insulating material to minimize heat loss through the sides of the basin. One water inlet, one condensate outlet and one excess-water outlet were provided in the basin. After black-coating the basin, a scale was fixed to the basin wall with adhesive to measure the water depth. Thermocouples were inserted through the water-inlet hole and located at different places in the still before the cover was fixed; they record the inner surface temperature of the cover, the basin-water temperature, the vapour temperature inside the still and the atmospheric temperature outside the still.


Fig. 2. Diagram of the hemispherical polycarbonate condensing cover.
Before the commencement of each test, the basin was filled with saline water through the inlet port and the hemispherical cover was cleaned of dust. The water depth was kept at 0.5 cm, 1 cm and 1.5 cm, respectively, and ink was added in a proportion of 1.25% of the water for the same depths of 0.5 cm, 1 cm and 1.5 cm. The experiments were carried out on sunny days in March and April, from 9 AM to 5 PM. The temperatures of the cover, the vapour, the water inside the still and the atmosphere were measured with J-type thermocouples and recorded, and the daily solar radiation was measured with a solarimeter in W/m². The distilled water was collected hourly in a measuring jar; the maximum amount of potable water was collected between 1:00 pm and 2:30 pm. The results of the simple hemispherical still can then be compared with those obtained with ink added to the water.
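A minimal sketch of how such measurements are typically reduced to a daily efficiency, assuming the standard definition η = m·h_fg / (A·Σ I·Δt); the latent-heat value and the hourly numbers below are illustrative assumptions, not measured data from this study:

    H_FG = 2.358e6       # latent heat of vaporization near 60 °C, J/kg (assumed)
    AREA = 0.08          # effective basin area, m^2 (from the setup above)

    def daily_efficiency(distillate_ml, insolation_wm2, dt_s=3600.0):
        """Hourly distillate (ml) and insolation (W/m^2) -> daily still efficiency."""
        m_kg = sum(distillate_ml) / 1000.0                 # 1 ml of water ~ 1 g
        energy_in = AREA * sum(insolation_wm2) * dt_s      # J received by the basin
        return m_kg * H_FG / energy_in

    # illustrative hourly values for a 9:00-17:00 run totalling 255 ml
    ml = [10, 20, 35, 45, 50, 40, 30, 15, 10]
    sun = [350, 520, 640, 720, 750, 700, 600, 450, 270]
    print(f"{daily_efficiency(ml, sun):.0%}")              # ~42%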

PHOTOGRAPH OF EXPERIMENTAL SETUP

Fig. 3. Photograph of the experimental setup (labelled components: measuring jar, temperature indicator, solar power meter, hemispherical glass cover, stand).
RESULT AND DISCUSSION
Typical variations of the saline-water temperature, cover temperature and ambient temperature were measured during a representative day of testing. The temperature difference between the water and the cover follows a similar trend: it increases during the morning hours to a maximum value around noon and starts to decrease late in the afternoon, owing to the increase of solar irradiance in the morning and its decrease after 2:00 pm. After assembling the solar still, a set of experiments was performed to test its efficiency and its hourly and daily productivity. The experiments were carried out on bright sunny days, and the amount of water and the temperature values were measured from 9:00 AM to 5:00 PM in the campus area. The factors affecting an active solar still include solar radiation intensity, ambient temperature, wind velocity, humidity, condensing-cover inclination, solar-collector inclination, solar-collector area and absorber material.
The quantity of fresh water obtained from the solar still was 1.5 l/m². On 2 March, with a still area of 0.08 m² and a water depth of 1.5 cm, the total mass of water obtained was 255 ml, corresponding to 3.18 L/m². On 9 March, with the same 0.08 m² still area and a water depth of 0.5 cm, the total mass of water gained was 270 ml. Figures 5.1 to 5.8 show the hourly variation of solar radiation and of the mass of distilled water (ml) during 2 and 9 March 2014. The solar radiation is maximum between 12:00 and 14:00 and the ambient temperature is maximum between 13:00 and 14:00; abrupt changes in the solar radiation show the effect of the weather.
A hemispherical solar still has been fabricated and tested with and without black ink. Black ink is an absorptive material, so mixing it with the basin water in a suitable proportion helps to increase the productivity of the solar still; it was the best absorbing material used in terms of water productivity, and an enhancement of about 60% is hoped for.

WITHOUT ABSORBER INK

Fig. 4. Hourly variations in temperature and productivity during the day for a water depth of 0.5 cm.

Fig. 5. Hourly variations in temperature and productivity during the day for a water depth of 1 cm.
[Plot data for Figs. 4 and 5: temperature (°C, 0-80, left axis) and productivity (ml/m², 0-800, right axis) versus time of day (9:00 am to 5:00 pm), with curves Tv, Tw, Tg, Ta and productivity; recorded on 9 March 2014 (Fig. 4) and 1 March 2014 (Fig. 5).]