International Journal of Engineering Research and General Science, Volume 2, Issue 5, August – September 2014

ISSN 2091-2730





Table of Contents

Chief Editor Board
Message from Associate Editor
Research Papers Collection





















CHIEF EDITOR BOARD
1. Dr Gokarna Shrestha, Professor, Tribhuwan University, Nepal
2. Dr Chandrasekhar Putcha, Outstanding Professor, University of California, USA
3. Dr Shashi Kumar Gupta, Professor, IIT Roorkee, India
4. Dr K R K Prasad, Professor and Dean, K.L. University, India
5. Dr Kenneth Derucher, Professor and Former Dean, California State University, Chico, USA
6. Dr Azim Houshyar, Professor, Western Michigan University, Kalamazoo, Michigan, USA
7. Dr Sunil Saigal, Distinguished Professor, New Jersey Institute of Technology, Newark, USA
8. Dr Hota GangaRao, Distinguished Professor and Director, Center for Integration of Composites into Infrastructure, West Virginia University, Morgantown, WV, USA
9. Dr Bilal M. Ayyub, Professor and Director, Center for Technology and Systems Management, University of Maryland College Park, Maryland, USA
10. Dr Sarâh Benziane, Associate Professor, University of Oran, Algeria
11. Dr Mohamed Syed Fofanah, Head, Department of Industrial Technology & Director of Studies, Njala University, Sierra Leone
12. Dr Radhakrishna Gopala Pillai, Honorary Professor, Institute of Medical Sciences, Kyrgyzstan
13. Dr P. V. Chalapati, Professor, K.L. University, India
14. Dr Ajaya Bhattarai, Professor, Tribhuwan University, Nepal
ASSOCIATE EDITOR IN CHIEF
1. Er. Pragyan Bhattarai, Research Engineer and Program Co-ordinator, Nepal
ADVISORY EDITORS
1. Mr Leela Mani Poudyal, Chief Secretary, Government of Nepal, Nepal
2. Mr Sukdev Bhattarai Khatry, Secretary, Central Government, Nepal

3. Mr Janak Shah, Secretary, Central Government, Nepal
4. Mr Mohodatta Timilsina, Executive Secretary, Central Government, Nepal
5. Dr. Manjusha Kulkarni, Associate Professor, Pune University, India
6. Er. Ranipet Hafeez Basha (PhD Scholar), Vice President, Basha Research Corporation, Kumamoto, Japan
Technical Members
1. Miss Rekha Ghimire, Research Microbiologist, Nepal section representative, Nepal
2. Er. A.V. A Bharat Kumar, Research Engineer, India section representative and program co-ordinator, India
3. Er. Amir Juma, Research Engineer, Uganda section representative and program co-ordinator, Uganda
4. Er. Maharshi Bhaswant, Research Scholar (University of Southern Queensland), Research Biologist, Australia

















Message from Associate Editor In Chief
Let me first of all take this opportunity to wish all our readers a very happy, peaceful and
prosperous year ahead.
This is the fifth issue of the second volume of International Journal of Engineering Research and
General Science. A total of 90 research articles are published, and I sincerely hope that each one
of them provides some significant stimulation to a reasonable segment of our community of
readers.
In this issue, we have focused mainly on recent technology and its implementation in research. We also
welcome more research-oriented ideas in our upcoming issues.
The authors' response for this issue was truly inspiring. We received many papers from more than 15 countries, but
our technical team and editorial members accepted only a small number of research papers for publication. We have
provided editorial feedback for every rejected as well as accepted paper, so that authors can work on the weaknesses
and we may accept their papers in the near future. We apologize for the inconvenience caused to rejected authors,
but I hope our editors' feedback helps you discover new horizons for your research work.
I would like to take this opportunity to thank every writer for their contribution, and to thank the entire
International Journal of Engineering Research and General Science (IJERGS) technical team and editorial members for their
hard work toward the development of research in the world through IJERGS.
Last, but not least, my special thanks and gratitude go to all our fellow friends and supporters. Your help is
greatly appreciated. I hope our readers find our papers educational as well as entertaining. Our team has done a good
job; however, this issue may still have some shortcomings, and therefore constructive suggestions for further
improvement are warmly welcomed.



Er. Pragyan Bhattarai,
Assistant Editor-in-Chief, P&R,
International Journal of Engineering Research and General Science
E-mail: Pragyan@ijergs.org
Contact no: +977 9841549341



The Influence of Length of the Stem of Klutuk Banana (Musa balbisiana)
Toward Tensile Strength: A Review of the Mechanical Properties

Achmad Choerudin¹, Singgih Trijanto¹

¹Lecturer, Academy of Technology AUB Surakarta, Central Java, Indonesia
Abstract - This study tests the effect of fibre length on the tensile strength and strain of klutuk banana stem fibre composite specimens, with fibre lengths of 10 cm, 7 cm, 5 cm and 3 cm. The results show tensile strengths of 25.38 N/mm² for the 10 cm specimen, 14.47 N/mm² for the 7 cm specimen, 17.27 N/mm² for the 3 cm specimen and 12.08 N/mm² for the 5 cm specimen. The strains and extensions obtained were 20.14 for the 10 cm specimen, 9.95 for the 7 cm specimen, 5.49 for the 5 cm specimen and 5.39 for the 3 cm specimen. The conclusions of this study are (1) longer banana stem fibres in the composite further improve the tensile strength of the specimen, other factors being equal, (2) longer banana stem fibres show smaller elongation than shorter fibres, and (3) longer banana stem fibres give higher values of the strain that occurs.

Keywords: fibre length, banana stem, tensile strength

INTRODUCTION
Waste from the banana crop after harvest is an agricultural residue whose potential has not been widely used. A composite is a material formed from a combination of two or more materials and whose mechanical properties derive from those of its constituents. A composite consists of two parts, namely the matrix, which binds and protects the composite, and the filler. Natural fibre as a composite filler is a good alternative for a wide range of polymer composites because of its advantages over synthetic fibres: natural fibres are easily obtained at low prices, are easy to process, have low density, are environmentally friendly and are biodegradable (Kusumastuti, 2009).
Fibre obtained from the banana stem has good mechanical properties: a density of 1.35 g/cm³, 63-64% cellulose, 20% hemicellulose, 5% lignin, an average tensile strength of 600 MPa, an average tensile modulus of 17.85 GPa and an elongation of 3.36% (Lokantara, 2007). The reported diameter of banana stem fibre is 5.8%, whereas the length is around 30.92-40.92 cm.
Suwanto (2006) observed the influence of post-curing temperature on the tensile strength of epoxy resin composites reinforced with woven banana fibres. The maximum tensile strength, 42.82 MPa, occurred in composites post-cured at 100 °C, an increase in tensile strength of 40.26% compared with the composite without heating. The tensile strength of the composite is smaller than the tensile strengths of its two constituent materials. This can be caused by a high degree of porosity in fibre composites, non-uniform conditions, the onset of delamination between fibre and matrix, and low surface bonding between fibre and matrix.
Surani (2010) examined the utilization of banana fibre as a raw material for fibre boards with thermo-mechanical treatment, carried out through wet-process mat forming. The best fibre board quality was obtained at a boiling treatment temperature of 100 °C for the flakes, without the use of synthetic adhesives. Syaiful Anwar (2010) examined banana stems to determine the influence of banana stem fibre lengths of 10 mm, 20 mm, 30 mm and 40 mm on the tensile strength of banana stem fibre composites with a polyester matrix.

The fibres studied were banana stem fibres with a 50% volume fraction, with fibre lengths of 10 mm, 20 mm, 30 mm and 40 mm.
The standard reference for the manufacture and testing of the specimens was ASTM D 638-03 Type I for the tensile test. The study concluded that specimens with longer fibres are more durable in carrying tensile load, because long fibres installed along the axis of the fibre have a more perfect structure and fewer internal defects than short fibres. Evi Indrawati (2010) states that the banana stem is the part of the banana plant that consists of a collection of sheaths and grows erect. Fibre obtained from the banana is strong, has high storage capacity and has cellular tissue with interconnected pores. Against this background, the present research addresses the influence of the fibre length of the klutuk banana on tensile strength, which has not been explored for many banana varieties. The expected result is a qualified natural material of good quality.

LITERATURE REVIEW
Banana Fibre
The banana stem yields fibre of good quality and is one of the potential alternative materials that can be used as a filler in the manufacture of polyvinyl chloride (PVC) composites. Banana stem waste can thus be used as a source of fibre with economic value. Rahman (2006) states that the fresh weights of the leaves, stems and fruit of the banana are in the ratio 63, 14 and 23%, respectively. The banana stem has a specific weight of 0.29 g/cm³, a fibre length of 4.20-5.46 mm and a lignin content of 33.51% (Syafrudin, 2004).
Klutuk Banana
The klutuk banana is distinctive not because it tastes sweet, but because its flesh is filled with black seeds. The seeds have a rough texture and a hard shell. The klutuk banana tree grows up to 3 metres tall, with a trunk circumference ranging from 60 cm to 70 cm. The stem is green, with or without patches of spots. The leaf is usually about 2 metres long and 0.6 metres wide. Examined in detail, its thin wax layer is unique, and the leaf does not rip easily like other types of banana leaves.
Composite
A composite is a combination of two or more different materials, made to acquire properties that are better than those obtainable from the respective constituents alone (Fajriyanto and Firdaus, 2007). Composites consist of a matrix as the continuous phase and a filler as the second phase, separated by an interface. The properties of the resulting composite material depend on the matrix and filler materials used. Each composite made with different materials will have different properties, depending on the type of filler and matrix materials used (Hanafi, 2004).
Tensile Testing
Tensile testing is used to determine the mechanical properties of a material, such as the maximum tensile strength. Test objects are solid and come in several shapes: cylindrical, sheet, plate and pipe, in a variety of sizes. A specimen is gripped between the two grips of the test machine, which is equipped with various controls so that specimens can be tested.

MATERIALS AND METHODS
The tools and materials used in this research were a Universal Testing Machine with a maximum capacity of 500 kg and automated test control, and banana stem fibre as the composite reinforcement. The research method uses hand lay-up and includes the preparation of molds, coating, alignment, and drying.

Fibre was prepared by selecting banana stems still in good condition, moist and starting to dry out; the leaves on the stem were discarded, the stem that had already dried was cut, and the outer skin of the stem was peeled off. Drying was carried out in a place not exposed to direct sunlight.
Samples were then made. Composite manufacture follows the ASTM D-3039 standard. A homogeneous mixture is poured into the tension-test mold and levelled with a brush.








Figure 1: Sample for the tensile strength test
For the tensile strength test samples, the banana stem fibres were arranged lengthwise above the mold, parallel to and matching the size of the mold. These steps were repeated for samples 1, 2, 3 and 4, with fibre lengths of 10 cm, 7 cm, 5 cm and 3 cm. After the samples were made, they were drained for 24 hours, and the mechanical properties of the composite were then characterized by strength testing.

RESULTS AND DISCUSSION
The Results
This research use the equipment Universal Testing Machine. Tensile test specimen made in the form of a composite plate
manufactured by the method of hand lay-up. The geometry and dimensions of specimen tensile test customized standard ASTM D
3039. Set-up tool test on static tensile tests tailored to the holder of the specimen on the tension testing machine. Tensile loading
provided parallel to the axis of the axial and is assumed to be uniform in every point of testing.
Tensile test specimen holder are designed in accordance with the test tool holder to be used as a specimen holder shaped plates. In
order to be considered the holder of the specimen must be capable of holding the specimen with strong and attempted to slip does not
occur. Tensile measurement of tensile specimen based on the theory of Hooke's Law. The theory states that a materials behave in an
elastic and showed a relationship between stress and strain liniear called elastic finite. Variables that will be observed in this study i.e.
tensile strenght so that the maximum tensile stresses get value added, the length of which indicates the strain that occurs.
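As a minimal sketch (not the authors' data-reduction code; the gauge length is an assumed parameter), the quantities observed here follow the standard engineering definitions, tensile strength σ = F_max/A and strain ε = ΔL/L₀. Treating the areas in Table 2 as 25.0 mm² and 124.4 mm² reproduces the tensile strengths listed there:

```python
# Engineering stress and strain as used in the tables below.
def tensile_strength(max_force_n: float, area_mm2: float) -> float:
    """sigma = F_max / A, in N/mm^2 (equivalent to MPa)."""
    return max_force_n / area_mm2

def strain(added_length_mm: float, gauge_length_mm: float) -> float:
    """epsilon = delta_L / L0 (dimensionless); gauge length is assumed."""
    return added_length_mm / gauge_length_mm

# First 10 cm tests of Table 2: 1935.6 N over 25.0 mm^2 -> ~77 N/mm^2,
# and 1311.6 N over 124.4 mm^2 -> ~10.54 N/mm^2, matching the table.
print(round(tensile_strength(1935.6, 25.0), 2))    # 77.42
print(round(tensile_strength(1311.6, 124.4), 2))   # 10.54
```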

Table 1: The Results of Tensile Testing

No.  Specimen  Max Area (mm²)  Max Force (N)  Tensile Strength (N/mm²)  Break Force (N)
1.   10 cm     25.000          1935.6         38.71                     689.90
               124.400         1311.6         10.30                     585.97
               124.400         1932.9         15.17                     910.62
               124.400         1598.5         12.55                     759.11
               124.400         1317.4         10.77                     600.88
2.   7 cm      25.000          849.9          17.00                     420.45
               124.400         1075.7         8.44                      400.64
               124.400         1570.4         12.33                     777.04
               124.400         1165.3         9.15                      503.61
               124.400         935.7          7.35                      432.40
3.   5 cm      25.000          937.6          18.75                     468.20
               124.400         954.7          7.49                      439.86
               124.400         547.4          4.30                      272.35
               124.400         557.6          4.38                      261.76
               124.400         793.9          6.23                      378.33
4.   3 cm      25.000          1280.4         25.61                     591.94
               124.400         1175.8         9.23                      467.86
               124.400         981.8          7.71                      489.15
               124.400         1192.5         9.36                      514.65
               124.400         1024.2         8.04                      445.49
(Source: primary data, 2014)











(Source: primary data, 2014)
Figure 2: Comparison of tensile strength for the 10 cm, 7 cm, 5 cm and 3 cm specimens

Discussion
The test results show that the tensile strengths of the banana stem fibre composites were, for the respective specimens: 25.38 N/mm² (10 cm), 14.47 N/mm² (7 cm), 17.27 N/mm² (3 cm) and 12.08 N/mm² (5 cm), with the 10 cm specimen the highest. The results for the extension and the strain that occurred are, in order: 20.14 (10 cm), 9.95 (7 cm), 5.49 (5 cm) and 5.39 (3 cm).

Table 2: Analysis of Tensile Strength Results

No.  Specimen  Max Area (mm²)  Max Force (N)  Tensile Strength (N/mm²)  Break Force (N)  Elongation  Strain
1.   10 cm     25.000          1935.6         77                        689.90           1.26        20.14
               124.400         1311.6         10.54                     585.97           1.27
               124.400         1932.9         15.53                     910.62           1.26
               124.400         1598.5         12.85                     759.11           1.25
               124.400         1317.4         11.02                     600.88           1.27
     Mean (1)                                 25.38                                      1.26
2.   7 cm      25.000          849.9          33.99                     420.45           1.42        9.95
               124.400         1075.7         8.65                      400.64           1.43
               124.400         1570.4         12.63                     777.04           1.47
               124.400         1165.3         9.37                      503.61           1.46
               124.400         935.7          7.52                      432.40           1.48
     Mean (2)                                 14.43                                      1.45
3.   5 cm      25.000          937.6          37.5                      468.20           2.0         5.49
               124.400         954.7          7.67                      439.86           2.1
               124.400         547.4          4.40                      272.35           2.3
               124.400         557.6          4.48                      261.76           2.4
               124.400         793.9          6.38                      378.33           2.1
     Mean (3)                                 12.08                                      2.2
4.   3 cm      25.000          1280.4         51.23                     591.94           3.2         5.39
               124.400         1175.8         9.45                      467.86           3.1
               124.400         981.8          7.89                      489.15           3.5
               124.400         1192.5         9.58                      514.65           3.3
               124.400         1024.2         8.23                      445.49           3.2
     Mean (4)                                 17.27                                      3.2
(Source: primary data, 2014)








(Source: primary data, 2014)

Figure 3: Comparison of strain, elongation and stress for the 10 cm, 7 cm, 5 cm and 3 cm specimens

Based on the results of these tests, longer banana stem fibres give higher tensile stresses, although there is still an anomaly between the 5 cm and 3 cm specimens; this is due to factors outside the testing, such as imperfections in making the specimens and differing densities across specimen batches. It can be seen from the strain that occurs that longer banana stem fibres further increase the strain of the specimens, so the fibre length factor affects the values of both the stress and the strain that occur. In addition, comparing the added lengths (elongation), the banana stem fibre length is inversely proportional to the extension that took place: the longer the fibre, the smaller the elongation compared with short fibres.
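As a quick consistency check (an illustrative sketch, not part of the original analysis), averaging the five tensile strength values of each group in Table 2 reproduces the reported group means:

```python
# Per-group mean tensile strengths (N/mm^2) from the five tests in Table 2.
table2_strengths = {
    "10 cm": [77.0, 10.54, 15.53, 12.85, 11.02],
    "7 cm": [33.99, 8.65, 12.63, 9.37, 7.52],
    "5 cm": [37.5, 7.67, 4.40, 4.48, 6.38],
    "3 cm": [51.23, 9.45, 7.89, 9.58, 8.23],
}
for length, values in table2_strengths.items():
    print(length, round(sum(values) / len(values), 2))
# -> 25.39, 14.43, 12.09, 17.28; Table 2 prints 25.38, 14.43, 12.08 and
#    17.27, i.e. the same means with the last digit truncated.
```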


CONCLUSION
Based on the research that has been done, the following can be concluded regarding the tensile strength obtained:
1. Longer banana stem fibres in the composite further improve the tensile strength of the specimen, other factors being equal.
2. Longer banana stem fibres show increasingly small elongation compared with shorter fibres.
3. Longer banana stem fibres give higher values of the strain that occurs.

ACKNOWLEDGEMENT
1. Directorate General of Higher Education, Ministry of Education and Culture of the Republic of Indonesia, for the research grant Penelitian Dosen Pemula (Beginner Lecturer Research), 2014.
2. Laboratory of Sciences and Laboratory of Materials Technology, Sebelas Maret University, Surakarta, Indonesia, 2014.
3. Academy of Technology AUB Surakarta, Central Java, Indonesia.

REFERENCES:
[1] ASTM D 3039, 2005, Standard Test Method for Tensile Properties of Plastics, American Society for Testing Materials, Philadelphia, PA.
[2] Bramayanto, A., 2008, Pengaruh Konsentrasi terhadap Sifat Mekanik Material Komposit Poliester Serat Alam, Fakultas
Teknik, University of Indonesia, Indonesia.
[3] Fajriyanto dan Firdaus, F., 2007. Karakteristik Mekanik Panel Dinding dari Komposit Sabut Kelapa (Coco Fiber) - Sampah
Plastik (Thermoplastics), Logika, Vol. 4, No. 1, Januari 2007, Fakultas Teknik Sipil dan Perencanaan UII Yogyakarta
[4] Hanafi, I., 2004. Komposit Polimer Diperkuat Pengisi dan Gentian Pendek Semula Jadi, Universiti Sains, Malaysia.
[5] Hardoyo, K., 2008, Karakterisasi Sifat Mekanis Komposit Partikel SiO2 dengan Matrik Resin Polyester, Tesis FMIPA, Program Studi Ilmu Material, University of Indonesia, Indonesia.
[6] Kusumastuti, A., 2009, Aplikasi Serat Sisal sebagai Komposit Polimer, Jurusan Teknologi Jasa dan Produksi, Universitas Negeri
Semarang, Jurnal Kompetensi Teknik Vol. 1, No. 1, 27 November 2009, Indonesia.
[7] Lokantara, P., 2012, Analisis Kekuatan Impact Komposit Polyester-Serat Tapis Kelapa dengan Variasi Panjang dan Fraksi
Volume Serat yang diberi Perlakuan NaOH, Fakultas Teknik, Universitas Udayana, Kampus Bukit Jimbaran, Bali, Indonesia.
[8] Rahman, H., 2006. Pembuatan Pulp dari Batang Pisang Uter (Musa paradisiaca Linn. var uter) Pascapanen dengan Proses Soda.
Fakultas Kehutanan. Yogyakarta : Universitas Gadjah Mada, Indonesia.
[9] Syafrudin, 2004. Pengaruh Konsentrasi Larutan dan Waktu Pemasakan Terhadap Rendemen dan Sifat Fisis Pulp Batang
Pisang Kepok (Musa spp) Pascapanen. Fakultas Kehutanan. Yogyakarta: Universitas Gadjah Mada, Indonesia.
[10] Schwartz, MM., 1984, Composite Materials Handbook, McGraw-Hill Book Co, New York.
[11] Surrani, L., 2010, Pemanfaatan Batang Pisang (Musa Sp.) sebagai Bahan Baku Papan Serat dengan Perlakuan Termo-Mekanis, Balai Penelitian Kehutanan, Manado, Indonesia.
[12] Suwanto, B.,2006, Pengaruh Temperatur Post-Curing terhadap Kekuatan Tarik Komposit Epoksi Resin yang diperkuat Woven
Serat Pisang, Jurusan Teknik Sipil Politeknik Negeri Semarang, Semarang, Indonesia

Technical Competency of Engineer Expert in Brazil and the USA Approach

Alexandre A. G. Silva¹, Pedro L. P. Sanchez²

¹Lawyer, Master and Ph.D. student in Electrical Engineering at the Polytechnic School of the University of São Paulo. Auditor and responsible for legal affairs of the Innovation Agency in the Federal University of ABC.
alexandre.silva@ufabc.edu.br
prof.alealberto@gmail.com

²Lawyer, electrical engineer, Ph.D. and Associate Professor in Electrical Engineering at the Polytechnic School of the University of São Paulo.
pedro.sanchez@poli.usp.br
Abstract—This article discusses the system for choosing experts, especially engineering experts, in Brazil and the United States. Despite the different legal systems, there are common issues that have different solutions in the two countries. First, the Brazilian legal system is described, along with how experts are chosen in the judiciary. Next, the American legal system and how experts are chosen within it are described. Finally, possible solutions to the problems of the Brazilian system, based on the American system, are pointed out.

Keywords—engineer expert; technical competency; forensic; Brazilian judiciary; legal system; forensic science;
1. INTRODUCTION
The purpose of this paper is to compare the system adopted by Brazilian courts with the system adopted by the
United States for the use of legal experts.
Before discussing reforms based on American common law procedures, this article will examine the role of the expert
in the Brazilian civil law system, in the special case of engineering.
American procedures differ in many respects from the Brazilian system for choosing the expert and for the expert's
performance in court proceedings. These differences become very important when judges in Brazil have to choose an
expert who will do his job using scientific methods.
Although the legal systems differ, the issues related to the "quality" of the applied science as well as professional
qualification are common concerns in both countries.
In this regard, a recent pioneering report by the National Academy of Sciences (NAS) in the United States acknowledged
that part of forensic science is not based on established science. The report notes that many disciplines, such as hair microscopy, bite
mark comparison, fingerprint analysis, firearms testing and tool mark analysis, were developed just to solve criminal cases, being used
in the context of individual cases with significant variations in research and expertise. These have not gone through rigorous
experimental scrutiny, as there are no standards in the United States or anywhere else that can validate these methods consistently,
with the sole exception of DNA testing. [1]
2. BRAZIL’S LEGAL SYSTEM
The forensic engineer is a professional engineer who deals with the engineering aspects of legal problems. Activities
associated with forensic engineering include determination of the physical or technical causes of accidents or failures, preparation of
reports, and presentation of testimony or advisory opinions that assist in resolution of related disputes. The forensic engineer may also
be asked to render an opinion regarding responsibility for the accident or failure.[2]
It is also the application of the art and science of engineering in the judiciary, including the investigation of the
physical causes of accidents and other types of claims and litigation, preparation of engineering reports, testimony at hearings and
trials in administrative or judicial proceedings, and interpretation of advisory opinions to assist the resolution of disputes affecting life
or property.
The first skill that an expert must have is competency in his specialized engineering discipline. This competency must
be acquired through education and experience, so a professional with extensive professional experience will be better than an engineer
who lacks such experience, even with the same education.
Another very important skill is knowledge of legal procedures and of the vocabulary used in courts, so as not to
cause trouble or misunderstanding during the process.
Brazil is a federal republic formed by the indissoluble union of the states, municipalities and the Federal District.

The government is composed of the legislative, executive and judiciary. The country adopts the system of Civil Law, which has its
origin in Roman law and was introduced by the Portuguese colonizers. The system is based on codes and laws enacted by the federal
legislature, as well as by state and local legislatures.[3] [4]
Federal legislative power is exercised by Congress, which is composed of the Chamber of Deputies and the Federal
Senate, through the legislative process. The President and the Ministers of State make up the Executive Branch; and the Supreme
Court, the National Council of Justice, the Superior Court of Justice, the Federal Courts, the Labour Courts, the Electoral Courts, the
Military Courts, and the state courts make up the Judiciary.
The Federal Supreme Court is the highest court and is entrusted with the responsibility of safeguarding the
Constitution, as well as functioning as a court of review. The Federal Supreme Court also has original jurisdiction to try and decide
direct actions of unconstitutionality of a federal or state law or normative act, or declaratory actions of constitutionality of a federal
law or normative act, which somewhat resemble the issuance of advisory opinions. This situation is not allowed in the Supreme Court of
the United States, for example.
Brazil does not follow the doctrine of stare decisis; only after the amendment of the Federal Constitution in
2004 did the Supreme Court start to adopt, in special situations, binding decisions.
The Common Law finds its origins in precedents from previous cases as sources of law. The principle is
known as stare decisis and recommends that once a court has answered a question, the same question in other cases must elicit the
same response from the same court or from lower courts in that jurisdiction. In turn, the Civil Law system attaches great importance to
codes, laws and the opinions of jurists.
The principle of the judge's free conviction is what guides all Brazilian judicial decisions, and it must be
bounded by the law.
In Brazil there are two types of forensic experts: criminal and non-criminal. The first are public employees
in most cases, and the others are hired by the parties involved for all other types of judicial cases. Obviously, exceptions exist in
both cases, but they are not relevant for these observations.
There are criminal forensic experts only in two spheres of government, namely at the federal level and in state
government, with no such experts acting at the district or county level. In Brazil there is no specific jurisdiction for the city or
district; municipal issues are resolved in state courts.
Forensic experts hired by the government go through a public examination to evaluate their knowledge.
On the other hand, non-criminal experts are hired for their expertise, but there is no effective way of measuring the level of such
knowledge. The judges pick their experts in non-criminal cases from among professionals inscribed on a prior list in state and federal
courts.
Likewise, there is no specific government agency that regulates forensic science so as to bring regulations to both
categories of experts, nor to oversee the "quality of the science" applied in the cases by the experts.
Criminal expertise is regulated by the Code of Criminal Procedure [5] and non-criminal expertise is covered by
the Code of Civil Procedure. [6]
The Code of Criminal Procedure provides in Article 158 that when the offense leaves traces, an examination of
corpus delicti, direct or indirect, is indispensable, and the confession of the accused cannot replace it. The examination of
corpus delicti and other expert examinations must be conducted by an official expert holding a college degree.
In civil investigations, the judge chooses the expert from among those previously enrolled in that jurisdiction. The
parties have five days to submit their questions for the expert. Article 436 of the Code of Civil Procedure provides that the judge is
not bound by the expert report and may form his conviction from other elements or facts proven in the case.
Specifically for engineering, professional regulation in Brazil is the responsibility of the professional supervisory
board for engineering (CONFEA), which was created by federal law and has the duty to care for and regulate the engineering profession.
Considered a landmark in the history of professional and technical regulation in Brazil, the Brazilian Confederation
of Engineering and Agronomy came up with that name on December 11, 1933, through Federal Decree No. 23,569. In its current design,
the Federal Council of Engineering and Agronomy (CONFEA) is governed by Law 5,194 of 1966 and also represents geographers,
geologists, meteorologists, technologists in these fields, and industrial and agricultural technicians and their specializations, totaling
hundreds of professional titles.
The CONFEA system has in its records about one million professionals who account for about 70% of Brazil's
GDP. It is a demanding job in expertise and knowledge of technology, fueled by intensely technical and scientific findings. It
takes charge of the social and human interests of the whole society and, on that basis, regulates and supervises professional practice in
these areas. The Federal Council is the highest level to which a professional can appeal with regard to the regulation of professional practice
[7]. The Council also gives permission for the expert activity of the engineering professional, but this is just an administrative issue.
The principles adopted for expert opinions are the state of the art and good practices in each specialty, as well as,
eventually, what is regulated by the Brazilian Association of Technical Standards (ABNT), a private institution that aims to promote
the development of technical standards and promote their use in scientific, technical, industrial, commercial and agricultural fields, among
others, keeping them updated, drawing on the best technical expertise and laboratory work, as well as encouraging and promoting the
participation of communities in technical research, development and dissemination of technical standardization in the country.

ABNT also collaborates with the state in the study and solution of problems relating to technical
standardization in general, and mediates, before the public authorities, the interests of civil society with regard to matters of technical
standardization. [8]
A recurring drawback of the lack of a mechanism or official body to control expert activity is that registered
professionals sometimes do not have adequate technical knowledge to perform certain work, or do not use a methodology consistent with
the needs of the expertise.
In cases where the expert lacks scientific or technical knowledge and, without reasonable cause, fails to comply
with the charge given to him, he may be replaced in accordance with Article 424 of the Code of Civil Procedure. Normative Decision
69 of CONFEA also foresees this hypothesis and treats it as an ethical infraction.
The Penal Code also provides, in Articles 343 and 344, punishments ranging from two to four years
and a fine for "perjury or false expertise", but these crimes must be intentional. [9]
When talking about professional assignment, it is necessary to distinguish between academic ability, legal
authorization and professional qualification, since there is a relationship of dependency between them: while distinct, each
arises from the other.
On completing the graduation course, one acquires academic ability, but it is not yet possible to practice the profession, which
happens only with enrollment in the respective professional Council, that is, the legal authorization; professional qualification is
acquired only through constant training and experience.
Professional assignments and technical knowledge are not necessarily associated in the field of engineering.
Professional assignments are conferred by CONFEA resolutions, differentiated for each type of professional.
Mere registration with the professional class organ is not sufficient for exercising the charge of expert, as
that charge depends on the professional's technical and scientific knowledge.
This knowledge is built from the knowledge acquired during graduation and in specific courses. The classic
example is the newly graduated engineer who receives his title, properly registered with the class organ, which enables expert
activity; but he lacks the expertise and the knowledge of the legal aspects inherent to it, because the technical knowledge applied in
judicial expertise is not included in undergraduate courses and requires further depth of knowledge. This is a distortion of the Code of
Civil Procedure as to the operationalization of expert activity.
The major problem is that unqualified practitioners are rarely punished for their actions, as the parties
rarely file accusations with the Council; neither do the judges, who simply stop requesting the services of experts who do not meet their
expectations.
Another issue is that the judge is a layman: he has no knowledge with which to evaluate the scientific quality of the
expertise, which also hampers the punishment of bad experts, and there is no legal instrument or procedure that can be used
to make this review.
3. THE LEGAL SYSTEM IN U.S.
In the United States, the U.S. Constitution establishes a federal system of government and gives specific powers to the
federal government; all powers not delegated to the federal government are left to the states. The fifty states have
their own constitutions, government structures, legal codes and judicial systems.
The legal system adopted has Anglo-Saxon origin and is based on the study of judicial precedents (Common Law).
The Judicial Branch of the federal government is also established by the Constitution, which specifies its authority. Federal courts have
exclusive jurisdiction only in certain types of cases, such as cases involving federal laws, disputes between states and cases
involving foreign governments. There are cases in which federal courts share jurisdiction with the states, for example when a federal and a state
court can each decide a dispute between two parties that reside in different states. The state courts have exclusive jurisdiction over the
vast majority of cases.
The parties have the right to trial by jury in all criminal cases and in most civil cases. The jury usually consists of
twelve citizens who hear the evidence and, applying the law as determined by the judge, reach a decision based on the facts that the
jury itself determines to be true from the evidence presented during the trial. [10]
As measures to ensure the reliability of the expert's opinion, before the presentation of expert evidence at trial in a
U.S. federal court, the expert goes through some essential preliminary steps. The expert is selected and retained by the party;
evaluates the source materials; issues a report; and gives testimony. Admissibility may be evaluated under standards applied by the
trial judge. [11]
Initially, to present an expert opinion, the proposed expert must be qualified and able to meet the admissibility
requirements established by the Supreme Court in the 1990s in the Daubert, Joiner and Kumho cases. The most recent of these
decisions, Kumho, again confirmed and clarified that judges should act as "gatekeepers" in determining the
admissibility of expert evidence, and must be sure that the testimony is relevant and reliable. [12]

The Frye test of "general acceptance" was laid down, in the context of the admissibility of a lie detector test, over 80 years
ago in Frye v. United States, a case which generated controversy over what standard a court should apply in evaluating expert
evidence.
In Frye, the defendant was subjected to a scientific test designed to determine innocence or guilt based on the
variation of blood pressure when the subject is questioned about facts related to the crime of which he was accused. The defendant objected to the
methodology and results based on the novelty of the technique. In stating the rule, the Court argued that it is difficult to know when a
scientific principle or discovery crosses the line between the experimental and demonstrable stages. The probative force of the principle must
be recognized at some point, but until this occurs, the deduction must be sufficiently established to have gained general acceptance in the
particular field to which it belongs.
In light of the new rule, the Frye Court held that the blood pressure test at issue had not yet gained such standing and
scientific recognition as to justify admitting the expert testimony at hand. The Frye "general acceptance" test was applied by federal
courts for more than 50 years, exclusively to expert testimony based on new scientific techniques. The Frye test was
also adopted and applied by many state courts, some of which still apply it today.
The Frye test was the rule until 1975, when Congress passed the Federal Rules of Evidence (FRE), which seemed to
create a new standard for the courts to assess the admissibility of expert evidence. FRE 104 gave the trial court the power to
determine the qualification of a witness: preliminary questions concerning the qualification of a person to be a witness, the existence
of a privilege, or the admissibility of evidence shall be determined by the court, subject to the provisions of the subdivision pertaining to
conditional admissions.
FRE 702 seemed to bring new parameters for the courts to assess the admissibility of expert testimony. The rule
provided that if scientific, technical, or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact
in issue, a witness qualified as an expert by knowledge, skill, experience, training, or education may testify thereto in the form of an
opinion or otherwise. The coexistence of the two rules created great confusion and divergent lines of decision in the courts applying the rules.
The uncertainty was then clarified by the Supreme Court in Daubert v. Merrell Dow Pharmaceuticals. The plaintiffs
were trying to introduce expert testimony supporting the claim that birth defects occurred due to ingestion of the drug Bendectin by mothers.
The Court held that FRE 702 superseded Frye and that the general acceptance test was not a precondition for the admissibility of
scientific evidence under the Federal Rules of Evidence, treating the court of first instance as a "gatekeeper" that determines the
admissibility of expert evidence, ensuring that it rests on a reliable foundation and is relevant to the issue under examination.
Under Daubert, the trial judge must consider two issues: relevance and the scientific basis. To determine relevance,
the trial judge must ensure that the expert will help the trier of fact to understand or determine a fact in issue. As for trust in the scientific
basis, the Court listed several factors to consider, among them: whether the methodology can be and has been tested; whether the methodology has
been subject to peer review or publication; whether error rates are known; whether standards of control and
operation exist; and whether the theory has obtained general acceptance in the relevant scientific community.
The Court noted that this list is not definitive. It acknowledged that peer review or publication is not always a
determinative consideration, because it does not always correlate with reliability, some propositions being too particular, too recent or of too
limited interest to be published.
The facts of each case must be considered in determining admissibility. Even if weak evidence is
found admissible by the trial court, the court and the parties still have other means of reaching the truth.
In General Electric Company v. Joiner the Supreme Court clarified some issues. The plaintiff alleged that
exposure in the workplace to polychlorinated biphenyls (PCBs) and their derivatives had caused his small cell lung cancer; the plaintiff
admitted to being a smoker.
The district court granted summary judgment for the defendant, relying on the fact that there was no causal link
between exposure to PCBs and the plaintiff's illness. The district court also found that the testimony of the plaintiff's expert used
subjective arguments or unsupported speculation.
On appeal, the Eleventh Circuit Court of Appeals reversed the decision, stating that the Federal Rules of Evidence
relating to experts favor admissibility, and that therefore the reviewing courts should adopt a strict standard of review for the exclusion of
expert testimony by the trial judge.
The Supreme Court reversed the decision of the Eleventh Circuit Court, contending that the appellate courts should
give the trial courts deference in admitting or excluding expert testimony. Analyzing the admissibility of the expert evidence in question, it found
that the studies presented differed from the case presented by the plaintiff and thus did not provide an adequate basis for the allegations.
The Joiner Court held that, although Daubert requires courts to focus only on principles and methodology, not on the conclusions
generated, they are not required to admit evidence where there is a major gap between the data presented and the opinion offered.
There remained the problem of non-scientific expert evidence, which was not resolved by Daubert, since that case treated
only scientific expert evidence. The question remained whether the trial courts should also act as gatekeepers in these cases.

In Kumho Tire Company v. Carmichael, the Court held that Daubert applies to all types of expert evidence.
The case was about a motor vehicle accident caused by a tire blowout that killed one person and injured
others. The claim was that the tire had a manufacturing fault, based on studies done by failure analysis engineers.
The Court confirmed that, in evaluating engineering evidence, the trial judge may consider the Daubert factors to
the extent they are relevant, and that the application of these factors should depend on the nature of the issue, the expert's area of
specialization, and the subject under discussion. The Daubert factors should be used where they are helpful and are not immutable; the
responsibility of gatekeeping is to evaluate each individual case, and the trial judge may go beyond the Daubert factors to assess
relevance and reliability, ensuring that the techniques used were rigorous.
A national survey of judges in the USA on judging expert evidence in the post-Daubert era makes explicit the belief that
guarding against "junk science" is the intent of the decision. [13]
It also found that judges have difficulty operationalizing and applying the Daubert criteria, especially with
regard to falsifiability and error rate. Judges also have some difficulty understanding the scientific meaning of these criteria.
Another point is that the validity and reliability of approaches and procedures for forensic analysis should be tested.
In this sense, there is an effort in the practicing community to achieve this goal. [14]
Certification programs for individuals and accreditation of educational programs and crime labs are voluntary and
are not supervised by the American Academy of Forensic Sciences (AAFS), which has a council to examine existing certification
bodies. [15]
Randall K. Noon defines forensic engineering as "the application of engineering principles and methodologies to
answer questions of fact. These questions of fact are usually associated with accidents, crimes, catastrophic events, degradation of
property, and various types of failure". [16]
As in Brazil, forensic engineers in the U.S. are experts who use engineering disciplines to assist in legal matters. They
work in all areas of engineering. At least a bachelor's degree in engineering is necessary; most practitioners are licensed as
professional engineers, and this license may be required for some practice areas. Some forensic engineers have master's or doctorate degrees
too. Most full-time experts are in private practice or small private companies. There are also academics who occasionally do
consultancy. Many forensic engineers are engaged in the reconstruction of traffic accidents (car, train, airplane, etc.) and may
be involved in cases of material failures, construction or other structural collapses, and other failures. [17]
The duty of the engineer appears in the following instruments: the contract for engineering services; the laws governing
engineering licensure; recommendations for good practice and codes of ethics promulgated by professional societies; and case law,
which is the law based on judicial decisions and precedents.
Case law has established that engineers have a duty to provide their services in a manner consistent with the standard
of care of their profession. The standard jury instruction dealing with the duty of a professional provides that, when performing
professional services for a client, a professional has a duty to have that degree of learning and skill ordinarily possessed by reputable
professionals practicing in the same locality and under similar circumstances.
When performing professional services for a client, the professional must have the degree of learning and skill ordinarily possessed
by reputable professionals. It is his duty to use the skill and care ordinarily used in similar cases by other reputable professionals in the same
locality and under similar circumstances; to use reasonable diligence and his best judgment in the exercise of his
professional skill and in applying his knowledge; and to strive to fulfill the purpose for which he was employed. Failure to fulfill any of these
duties is negligence.
In this way, four main obligations are presented by the jury instruction: to have knowledge and skill; to use care and
skill; to use reasonable diligence and one's best judgment; and to strive to achieve the purpose for which one was hired.
The level of learning, skill and care that the professional engineer must possess and use is that possessed and used by
respected engineers in similar situations and locations. The requirement of "reasonable diligence" means that the engineer must apply
a balanced level of effort to complete his tasks. The effort must involve or result in a serious, careful examination, without
exceeding the bounds of reason. [18]
4. CONCLUSION
In the legal system of the United States, it can be said that the quality of engineering professionals is
determined by the "gatekeepers" at the time of the admissibility of evidence, unlike the Brazilian system, where the expert is chosen by the
judge.
Despite the failures the system may have, there are standards that, although not decisive, serve to guide
the court as to the admissibility of particular evidence, and the system is always subject to improvement by a new decision,
because the American system is based on the history of judicial decisions.
The focus in Brazil is on the expert and his technical expertise, in other words on the professional himself, while in
the U.S. the focus is on the result of the expert's work: whether it is usable, whether the work has credibility, or whether
"junk science" was used, which therefore demonstrates that the professional is not a good expert.

The biggest problem of the Brazilian system of choosing experts is that the choice rests solely with the
discretion of the judge. The judge does not have standards like those adopted in the United States for the admissibility of expert
evidence, and the technical assistants who could act as "supervisors" have little or no influence on the final results of the forensic
analysis.
The result is that experts with little technical knowledge, or who use "junk science", influence the court decision,
because the judge believes in his assistant and has no effective parameters for assessing the quality of the professional's work.
A vetted list of experts available to judges does not exist, and the quality of professionals is not evaluated
objectively. The scientific method is also not evaluated, which often leads the judge to take decisions based on
unreliable information.
The work of technical assistants could be valued more, so that they act as beacons of expert performance,
functioning as critics of the expertise performed, almost like the cross-examination used in America.
Despite the different judicial systems, with the idea of controlling the professionals who act as experts and reviewing
the results of their investigations, in other words checking whether the expert reports are scientifically sound and not based on junk science,
one can obtain much better expert activity and more useful results for society.
Given the Brazilian legal system, achieving this goal requires changes in the existing
legislation, which will only happen with a mobilization of experts and judges.

REFERENCES:
[1] Peter Neufeld, Barry Scheck, "Making forensic science more scientific", Nature, Volume 464, Page 351, Mar 2010.
[2] Carper, Kenneth L., "Forensic Engineering", 2nd ed., Boca Raton: CRC Press, 2000.
[3] In: http://www.planalto.gov.br/ccivil_03/Constituicao/Constituicao.htm
[4] In: http://www.loc.gov/law/help/legal-research-guide/brazil-legal.php?loclr=bloglaw#t9
[5] In: http://www.planalto.gov.br/ccivil_03/Decreto-Lei/Del3689.htm
[6] In: http://www.planalto.gov.br/ccivil_03/Leis/L5869.htm
[7] In: http://www.confea.org.br/cgi/cgilua.exe/sys/start.htm?sid=906
[8] In: http://www.abnt.org.br/IMAGENS/Estatuto.pdf
[9] In: http://www.planalto.gov.br/ccivil_03/Decreto-Lei/Del2848.htm
[10] In: http://www.fjc.gov/public/pdf.nsf/lookup/U.S._Legal_System_English07.pdf/$file/U.S._Legal_System_English07.pdf
[11] Jurs, Andrew W., "Balancing Legal Process with Scientific Expertise: Expert Witness Methodology in Five Nations and Suggestions for Reform of Post-Daubert U.S. Reliability Determinations", Marquette Law Review [0025-3987], Vol. 95, Issue 4, Pages 1329-1415, 2012.
[12] Patrick J. Sullivan, Franklin J. Agardy, Richard K. Traub, "Practical environmental forensics: process and case histories", John Wiley & Sons, Inc., NY, 2001.
[13] Sophia I. Gatowski, Shirley A. Dobbin, James T. Richardson, Gerald P. Ginsburg, Mara L. Merlino, and Veronica Dahir, "Asking the Gatekeepers: A National Survey of Judges on Judging Expert Evidence in a Post-Daubert World", Law and Human Behavior, Vol. 25, No. 5, October 2001.
[14] Geoffrey Stewart Morrison, "Distinguishing between forensic science and forensic pseudoscience: Testing of validity and reliability, and approaches to forensic voice comparison", Science and Justice, Volume 54, Issue 3, Pages 245-256, May 2014.
[15] Virginia Gewin, "Forensic Evidence", Nature, Volume 458, Page 663, Apr 2009.
[16] Randall K. Noon, "Forensic engineering investigation", Boca Raton: CRC Press, 2001, ISBN 0-8493-0911-5.
[17] R. E. Gaensslen, "How do I become a forensic scientist? Educational pathways to forensic science careers", Anal Bioanal Chem (2003) 376: 1151-1155. DOI 10.1007/s00216-003-1834-0.
[18] Joshua B. Kardon, "The elements of care in engineering", in "Forensic engineering: diagnosing failures and solving problems: Proceedings of the 3rd International Conference on Forensic Engineering", edited by Brian S. Neale, Taylor & Francis Group, London, 2005, ISBN 9780415395236.





High Speed CPL Adder for Digital Biquad Filter Design

Neva Agarwala¹

¹Lecturer, Department of EEE, Southeast University, Dhaka, Bangladesh
E-mail: mnagarwala@seu.ac.bd

Abstract— This project presents a comprehensive explanation of how to minimize the overall delay of a digital biquad filter by
comparing the time delay performance of different adders. An 8-bit CPL adder is used in the design methodology of
the biquad filter because of its excellent timing performance. In the end, the design was found to be fully
functional, and its time delay was low compared to the alternatives.
Keywords—Biquad Filter, CPL Adder, CMOS Adder, ROM, Register, D Flip-Flop, nMOS, pMOS, XOR.

1. INTRODUCTION
1.1 Review of full adder design of two different cmos logic style

Several variants of static CMOS logic styles have been used to implement low-power 1-b adder cells [1]. In general, they can be
broadly divided into two major categories: the complementary CMOS and the pass-transistor logic circuits. The complementary
CMOS full adder (C-CMOS) of Fig. 2 is based on the regular CMOS structure with pMOS pull-up and nMOS pull-down transistors.
The series transistors in the output stage form a weak driver. Therefore, additional buffers at the last stage are required to provide the
necessary driving power to the cascaded cells. [2]

The complementary pass-transistor logic (CPL) full adder with swing restoration is shown in Fig. 3. The basic difference between the
pass-transistor logic and the complementary CMOS logic styles is that the source side of the pass-transistor network is connected
to some input signals instead of the power lines. The advantage is that one pass-transistor network (either pMOS or nMOS) is sufficient
to implement the logic function, which results in a smaller number of transistors and a smaller input load. [3]
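To make the dual-rail idea concrete, the following behavioural sketch (a boolean model with illustrative names, not the transistor netlist of Fig. 3) shows how a CPL cell produces a function and its complement from complementary input pairs, which is how the XOR/XNOR pair needed by the full adder comes out of a single pass network:

```python
# Dual-rail (CPL-style) XOR/XNOR: every signal travels as a pair (x, x_bar),
# and both output polarities are generated together.
def cpl_xor_xnor(a: int, a_bar: int, b: int, b_bar: int):
    """Model B/B' selecting between A and A', as the pass network does."""
    y = (a & b_bar) | (a_bar & b)      # XOR rail
    y_bar = (a & b) | (a_bar & b_bar)  # XNOR rail
    return y, y_bar

# Truth-table check of both rails.
for a in (0, 1):
    for b in (0, 1):
        y, y_bar = cpl_xor_xnor(a, 1 - a, b, 1 - b)
        assert y == a ^ b and y_bar == 1 - (a ^ b)
```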

1.2 Aims and Objectives

The general objective of our work is to build a faster 8-bit adder and to investigate the area and power-delay performance of 1-bit and 8-bit full adder cells in two different CMOS logic styles. Here, we compare the CMOS and CPL 1-bit and 8-bit full adders, and we use the CPL full adder in the biquad filter because its delay is lower than that of the CMOS adder; a sketch of where those adders sit in the filter follows.
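As a reminder of how the filter exercises the adders, below is a minimal Python sketch of a direct-form I biquad difference equation. The coefficients are hypothetical placeholders, not taken from this design; each addition in the loop corresponds to one of the hardware adders this work speeds up.

# Direct-form I biquad: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
# Coefficients below are hypothetical illustrations only.
def biquad(x, b0, b1, b2, a1, a2):
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = xn, x1, yn, y1   # shift the delay registers
        y.append(yn)
    return y

print(biquad([1.0, 0.0, 0.0, 0.0], b0=0.5, b1=0.0, b2=-0.5, a1=-0.2, a2=0.3))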

1.3 One bit full adder

The one-bit full adder used is a three-input, two-output block. The inputs are the two bits to be summed, A and B, and the carry bit Cin, which derives from the calculations of the previous digits. The outputs are the result of the sum operation, S, and the resulting value of the carry bit, Co. More specifically, the sum and carry outputs are given by

S = A xor B xor Cin ------------------------------ (1)

Co = AB + (A + B)Cin ------------------------------ (2)

From (2) it is evident that if A = B, the carry output Co is equal to their common value. If A differs from B (the full adder is said to be in propagate mode), Co = Cin and, hence, the full adder has to wait for the incoming carry before Co can be computed. [4]
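To make the logic of (1) and (2) concrete, here is a minimal bit-level Python sketch; it models only the logic, not the CPL or CMOS transistor implementation.

# Bit-level model of the 1-bit full adder of equations (1) and (2).
def full_adder(a: int, b: int, cin: int):
    s = a ^ b ^ cin                  # equation (1): S = A xor B xor Cin
    co = (a & b) | ((a | b) & cin)   # equation (2): Co = AB + (A + B)Cin
    return s, co

# Exhaustive check of all eight input combinations against integer addition:
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            assert full_adder(a, b, cin) == ((a + b + cin) & 1, (a + b + cin) >> 1)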











Fig 1: A full adder [4]

2. DESIGN AND SPECIFICATION

The sizing used is based on the inverter size (nMOS = 3:2 and pMOS = 6:2). The details of the 8-bit CPL adder modules are:
- 8-bit CPL adder
- Reference sizing from an inverter with size 3:2 for the nMOS transistor and 6:2 for the pMOS transistor
- Built from the 1-bit CPL adder
- 1-bit CPL adder sizing: 6:2 for all nMOS transistors and 6:2 for all pMOS transistors


3. RESULTS AND ANALYSIS

We simulated the 1-bit and 8-bit full adders using IRSIM and obtained different delays for the two adders; the delay can be read directly from the simulation waveforms.

For the 1-bit full adders:

CPL: 0.179 ns (schematic), 0.537 ns (layout)
CMOS: 0.536 ns (schematic), 0.893 ns (layout)

For the 8-bit adder:

CPL: 2.024 ns (schematic), 2.268 ns (layout)

From the above results we see that the CPL 1-bit adder is faster than the CMOS adder. For this reason we use the CPL adder in the biquad filter design, to minimize the delay of the whole design and obtain better performance. A rough consistency check on the 8-bit figure follows.
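Below is a first-order ripple-carry estimate in Python; the assumption that the per-stage carry delay equals the 1-bit adder delay is ours, not the paper's.

# First-order ripple-carry estimate: the 8-bit worst-case delay is roughly the
# 1-bit carry-path delay repeated once per stage. The per-stage figure below is
# an assumption (the paper reports only the full 1-bit adder delay).
N_BITS = 8
t_carry_cpl = 0.179e-9  # s, taken equal to the 1-bit CPL schematic delay

t_8bit_estimate = N_BITS * t_carry_cpl
print(f"estimated 8-bit ripple delay: {t_8bit_estimate*1e9:.3f} ns")  # ~1.432 ns
# The measured schematic delay is 2.024 ns; the extra time comes from
# interconnect loading and the sum-output path, which this crude model ignores.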



A. SCHEMATIC DIAGRAM
- CMOS 1-bit Adder






Fig 2: CMOS 1 bit Adder (Schematic)

- CPL 1-bit Adder






Fig 3: CPL 1 bit Adder (Schematic)
- CPL 8-bit Adder







Fig 4: CPL 8 bit adder (Schematic)



B. LAYOUT VIEW

- CMOS 1-bit Adder







Fig 5: CMOS 1 bit Adder (Layout)

- CPL 1-bit Adder






Fig 6: CPL 1 bit Adder (Layout)
- CPL 8-bit Adder







Fig 7: CPL 8 bit adder (Layout)



C. TIMING SIMULATION:
1) CMOS 1-bit Adder
a. Schematic
∆d = 0.536ns





Fig 8: Simulation of CMOS 1 bit Adder (Schematic)
b. Layout
∆d = 0.893ns






Fig 9: Simulation of CMOS 1 bit Adder (Layout)
2) CPL 1-bit Adder
a. Schematic
∆d = 0.179ns






Fig 10: Simulation of CPL 1 bit Adder (Schematic)



b. Layout
∆d = 0.537ns







Fig 11: Simulation of CPL 1 bit Adder (Layout)
3) CPL 8-bit Adder
a. Schematic
∆d = 2.024ns






Fig 12: Simulation of CPL 8 bit adder (Schematic)
b. Layout
∆d = 2.268ns





Fig 13: Simulation of CPL 8 bit adder (Layout)



4. OUTPUT
TABLE 1: 8-BIT FULL ADDER

A0-A7      B0-B7      Cin   S0-S7      Cout
00000000   00000000   0     00000000   0
00000000   11111111   0     11111111   0
00000000   11111111   1     00000000   1
11111111   00000000   0     11111111   0
11111111   00000000   1     00000000   1
01010101   01010101   0     00101010   1
01010101   01010101   1     10101010   1
10101010   10101010   0     01010101   0
10101010   10101010   1     11010101   0
11111111   11111111   1     11111111   1

(Bits are listed least significant first, i.e. A0 through A7.)
5. CONCLUSION

For full adder cell design, pass-transistor logic is expected to dissipate less power and occupy a smaller area because it uses fewer transistors. Thus, the CPL adder is expected to perform better than the C-CMOS adder. The SPICE netlists generated for all modules were compared, and the schematic and layout versions were found to match. The same holds for the timing simulations run using the built-in IRSIM. The delays found for the layouts are greater than for the schematics but still within the acceptable range. The observed delays are tabulated below:


TABLE 2: TIME DELAY

Module             Delay (Schematic)   Delay (Layout)
1-bit CMOS Adder   0.536 ns            0.893 ns
1-bit CPL Adder    0.179 ns            0.537 ns
8-bit CPL Adder    2.024 ns            2.268 ns

ACKNOWLEDGEMENT

I would like to thank Dr. P.Y.K Cheung for his enormous support while doing this work.

REFERENCES
[1] S. Wairya, R. K. Nagaria, S. Tiwari, "New Design Methodologies for High-Speed Mixed-Mode CMOS Full Adder Circuits", International Journal of VLSI Design & Communication Systems, Vol. 2, No. 2, pp. 78-98, June 2011.
[2] S. Wairya, R. K. Nagaria, S. Tiwari, S. Pandey, "Ultra Low Voltage High Speed 1-Bit CMOS Adder", IEEE Conference on Power, Control and Embedded Systems (ICPCES), pp. 1-6, December 2010.
[3] R. Zimmermann, W. Fichtner, "Low-Power Logic Styles: CMOS Versus Pass-Transistor Logic", IEEE Journal of Solid-State Circuits, Vol. 32, No. 7, pp. 1-12, July 1997.
[4] Fordham University, "The Binary Adder", Fordham College Lincoln Center, Spring 2011.















WiTricity: A Wireless Energy Solution Available at Anytime and Anywhere
Shahbaz Ali Khidri¹, Aamir Ali Malik², Shakir Hussain Memon³
Department of Electrical Engineering, Sukkur Institute of Business Administration, Sukkur, Sindh, Pakistan
¹shahbazkhidri@outlook.com, ²aamir.malik@iba-suk.edu.pk, ³shakir.hussain@iba-suk.edu.pk
Abstract – Electrical power is vital to everyone: it is a clean and efficient energy source that is easy to transmit over long distances and easy to control. Generally, electrical power is transmitted from one place to another with the help of wires, which introduce losses, and a significant amount of power is wasted in this way; as a result, the efficiency of the power system is significantly reduced. In order to overcome these problems, a low-cost, reliable, efficient, secure, and environmentally friendly wireless energy solution is presented in this research paper. The concept of transferring power wirelessly in 3D space was first pursued by Nikola Tesla, who proposed transmitting power without wires over large distances using the earth's ionosphere. In this research paper, a magnetic resonance method, which is non-radiative in nature, is introduced for wireless power transmission, and electrical power is transmitted wirelessly over a distance of 10 feet with an overall efficiency of 80%. The method introduced in this paper is environmentally friendly and has negligible interaction with exterior forces/objects.
Keywords – Electrical power, energy source, long distance power transmission, wireless power transmission, magnetic resonance,
non-radiative, power system efficiency.
I. INTRODUCTION
An interesting aspect of energy in electrical form is that it is neither available directly from nature nor required to be consumed in that form [1]. Still, it is the most popular form of energy, since it can be used cleanly in any home, work place, or factory [2]. Generally, electrical power is transmitted from one place to another with the help of conventional copper cables and current-carrying wires, which introduce significant losses, and much power is wasted in this way. As a result, the efficiency of the power system is strongly affected and its overall performance is degraded. The efficiency of the conventional power transmission system can be improved by using higher-quality materials, but this significantly increases the cost. As the world has become a global village through technological advancements, people do not want to interact all the time with the conventional wired power system to charge their electrical/electronic devices and for other purposes, because it is complicated, time consuming, and dangerous: there is always a chance of electric shock. The conventional wired power system is shown in figure 1.

Figure 1 Conventional Wired Power System
In order to overcome these problems and hurdles, an alternative solution must be created which is efficient, reliable, safe, cost-effective, and environmentally friendly. Nikola Tesla was the first person to propose transmitting electrical power over large distances using the earth's ionosphere, without the help of conventional copper cables and current-carrying wires [3].
Nikola Tesla designed a magnifying transmitter to implement wireless energy transmission by means of the disturbed charge of
ground and air method [4]. The magnifying transmitter is shown in figure 2.

Figure 2 Magnifying Transmitter
In this research paper, a low-cost, reliable, efficient, secure, and environmentally friendly wireless energy solution is presented, based on the magnetic resonance method, which is non-radiative in nature. The electrical power is transmitted wirelessly over a distance of 10 feet, and an overall efficiency of 80% is achieved using this technique.
This paper is organized in five sections. Section II reviews the existing techniques and methods for wireless power transmission. Section III describes the methods and techniques we have used for wireless power transmission and our contribution. Section IV presents the results obtained from the research work. Section V concludes the paper with important suggestions and factual findings from the research work.
II. LITERATURE REVIEW
Several techniques and methods are available for wireless power transmission. The common methods are as follows:
1. Wireless Power Transmission using Magnetic Induction
This method of wireless power transmission is non-radiative in nature and works on the principle of mutual induction, which states that when two coils are inductively coupled and electrically isolated, a uniform change of current in one coil induces an electromotive force in the other coil [5]. In this way, energy can be transmitted from one place to another without using conventional wires. However, there are limitations, and this is not a proper method for general wireless power transmission because of several factors, including the short range (a few mm, if any), low overall efficiency, and tight coupling [6]. Care must be taken in positioning the coils for proper operation. Many industries use this method in their products: magnetic induction is widely used in electric toothbrushes, wireless cell phone chargers, and pacemakers [7, 8]. The efficiency and operating range of this method can be improved considerably by enhancing the resonance.
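As a back-of-the-envelope illustration of this principle, here is a minimal Python sketch; all component values are hypothetical, not taken from any cited product.

import math

# Induced EMF in the secondary of an inductively coupled pair:
# e2 = M * dI1/dt, with mutual inductance M = k * sqrt(L1 * L2).
# k, L1, L2 and the current slope are hypothetical example values.
k, L1, L2 = 0.8, 10e-6, 10e-6      # tight coupling, 10 uH coils
M = k * math.sqrt(L1 * L2)         # mutual inductance, H
dI1_dt = 1.0 / 1e-6                # primary current ramping 1 A per microsecond
e2 = M * dI1_dt                    # induced EMF, V
print(f"M = {M*1e6:.1f} uH, e2 = {e2:.1f} V")  # 8.0 uH, 8.0 V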
2. Wireless Power Transmission using Electromagnetic Radiations
This method of wireless power transmission is radiative in nature and is not widely used for power, because the transmitted power is dissipated in all directions and only an insufficient amount reaches the receiver. It is, however, widely used for transmitting information wirelessly over large distances.
3. Wireless Power Transmission using Optical Techniques
This method of wireless power transmission uses lasers to transmit energy from one place to another. The energy to be transferred is in the form of light, which is converted into electrical form at the receiver end. This method uses directional electromagnetic waves, so the energy can be transmitted over large distances [9]. It is not suitable when the receiver is mobile, because a proper line of sight is needed; for proper operation, no object should be placed between transmitter and receiver. Complicated tracking techniques can be used under mobility conditions, but they increase the cost of the power system significantly.
4. Wireless Power Transmission using Microwaves
This method of wireless power transmission uses microwave frequencies to transmit energy from one place to another. The energy can be transmitted over large distances using radiative antennas [10]. The efficiency of this power system at greater distances is higher than that of other wireless power transmission systems, but the method is not environmentally friendly, and it is unsafe and complicated because microwave frequencies at higher power levels can potentially harm people. Proper care must be taken when using this method at higher power levels. Energy of tens of kilowatts has been transmitted wirelessly using this method [11]. In 1964, a model of a microwave-powered helicopter was presented by Brown [12]. In 1997, this method was utilized for wireless power transmission on Reunion Island [13].
5. Wireless Power Transmission using Electrodynamic Induction
This method of wireless power transmission is non-radiative in nature and environmentally friendly. Two resonant objects can exchange energy efficiently when they possess the same resonant frequency [14]. Higher efficiency can be achieved at medium transmitting range. This is a popular method for wireless power transmission because no precise alignment of transmitter and receiver is needed, giving a high placement freedom. In 2007, researchers from the Massachusetts Institute of Technology (MIT) utilized this method and powered a 60 W light-bulb wirelessly at a distance of 7 feet with an overall efficiency of 40% [15]. In 2008, Intel used the same method and powered a 60 W light-bulb wirelessly at a shorter distance with an overall efficiency of 75% [16]. Also in 2008, Lucas Jorgensen and Adam Culberson of Cornell College performed a successful wireless power transmission experiment at a shorter distance [17]. In 2011, Mandip Jung Sibakoti and Joey Hambleton of Cornell College performed the same experiment and powered a 40 W light-bulb wirelessly at a shorter distance [18].
III. IMPLEMENTATION OF WIRELESS POWER TRANSMISSION USING MAGNETIC RESONANCE
The block diagram of wireless power transmission using magnetic resonance is shown in figure 3.

Figure 3 Block Diagram of WPT using Magnetic Resonance
Magnetic resonance is a low-cost, reliable, efficient, secure, and environmentally friendly method for wireless power transmission. Energy in electrical form can be transmitted from one place to another over medium range with the help of the magnetic field when the frequencies of the source resonator and the device resonator are equal. This method is non-radiative in nature and has negligible interaction with exterior forces/objects. The steps involved in magnetic resonance based wireless power transmission are shown in figure 3.
The chain operates as follows:
1. An alternating current (AC) supply, usually 240 V, feeds the power system.
2. The AC is converted into direct current (DC) using rectifiers. This step is skipped when a DC supply is provided. For high-power applications a power factor corrector may also be needed.
3. The DC obtained from the rectifier is converted into a radio frequency (RF) voltage waveform, because the source resonator operates on an RF voltage waveform. This conversion is done using a high speed, highly efficient operational amplifier with a very high frequency response.
4. An impedance matching network (IMN) couples the amplifier output efficiently to the source resonator.
5. The source resonator generates the magnetic field.
6. The generated magnetic field excites the device resonator and an energy build-up process takes place; the energy is transferred without wires, through the magnetic field.
7. A second impedance matching network (IMN) couples the device resonator efficiently to the load.
8. The RF voltage waveform is converted back into DC using rectifiers, because the load operates on a DC supply.
9. The load is powered with the DC supply. Thus the energy is efficiently transmitted wirelessly from the source to the load with the help of magnetic resonance.
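For context, coupled-mode theory gives a standard closed-form bound for a link of two coupled resonators (source s, device d); this is textbook background, not the authors' derivation. With coupling coefficient k and quality factors Qs and Qd, the figure of merit is U = k·sqrt(Qs·Qd), and the best achievable transfer efficiency is

eta_opt = U^2 / (1 + sqrt(1 + U^2))^2

High-Q resonators and reasonable coupling therefore translate directly into high end-to-end efficiency, which motivates the design choices below.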
In this research work, a successful experiment of wireless power transmission over a distance of 10 feet was performed with an overall efficiency of 80%. Because of their oscillating nature, resonators can store a large amount of energy, so even a weak excitation force can build up a useful amount of stored energy. The efficiency of a resonator is characterized by its quality factor, often represented by Q, which depends on the resonant frequency and the internal losses of the resonator. A resonator with lower losses therefore has a higher quality factor and a higher efficiency. A simple electromagnetic resonator is shown in figure 4.

Figure 4 Electromagnetic Resonator
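To make the role of Q concrete, here is a minimal Python sketch of a series RLC resonator; the component values are hypothetical illustrations, not the paper's coils.

import math

# Series RLC resonator: resonant frequency and quality factor.
L, C, R = 5.8e-6, 43e-12, 2.0   # inductance (H), capacitance (F), loss (ohm); assumed values

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))   # resonant frequency, Hz
Q = (1.0 / R) * math.sqrt(L / C)                # quality factor of the loop
print(f"f0 = {f0/1e6:.2f} MHz, Q = {Q:.0f}")    # ~10 MHz; low loss gives high Q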
For this research work, resonators with high quality factors are used in order to obtain the desired efficiency. Resonators can exchange energy through the magnetic field when they are placed close to each other. Two coupled resonators exchanging energy are shown in figure 5.

Figure 5 Coupled Resonators
The coils used for wireless power transmission in this research work have a radius of approximately 74 cm and are designed to have a resonant frequency range of 8.5 MHz to 12.5 MHz. For frequency matching, a tunable high frequency oscillator with a tunable range of 5.5 MHz to 14.5 MHz is designed using operational amplifiers. Along with the oscillator, a power amplifier is used to ensure that a reasonable amount of power is transferred to the load at the receiver side.
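A rough sizing cross-check for a coil of this radius, as a minimal Python sketch: the wire radius and the 10 MHz target are assumptions; only the 74 cm radius and the 8.5-12.5 MHz range come from the text above.

import math

MU0 = 4e-7 * math.pi
R_loop, a_wire = 0.74, 1.5e-3   # loop radius (from the paper); wire radius is assumed, m

# Classical single-turn loop inductance: L = mu0 * R * (ln(8R/a) - 2)
L = MU0 * R_loop * (math.log(8 * R_loop / a_wire) - 2.0)

f0 = 10e6                                   # target frequency inside the quoted range
C = 1.0 / ((2 * math.pi * f0) ** 2 * L)     # capacitance that resonates with L at f0
print(f"L = {L*1e6:.1f} uH, C = {C*1e12:.0f} pF")  # ~5.8 uH, ~43 pF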
The creative visualization of wireless power transmission using magnetic resonance is shown in figure 6, figure 7, and figure 8.

Figure 6 Powering the Source Resonator

Figure 7 Energy Build Up Process in the Device Resonator due to the Source Resonator

Figure 8 Powering the Load
The creative visualization shows that the energy is transmitted from the source to the load in three steps. In the first step, as shown in figure 6, the source resonator is powered from an alternating current (AC) supply. In the second step, as shown in figure 7, the energy build-up process takes place through the magnetic field when the source resonator and the device resonator, having the same frequency, are coupled. In the third step, as shown in figure 8, the load is powered by a direct current (DC) supply derived from the energy transmitted wirelessly from the source.
IV. RESULTS AND DISCUSSIONS
In this research work, we were able to power a 40 W light-bulb wirelessly over a distance of 10 feet with an overall efficiency of 80%. A significant change in the efficiency of the wireless power transmission system was observed as the distance between the source resonator and the device resonator was varied: the intensity of the light decreased as the distance increased. Nevertheless, the overall efficiency of the designed system remained satisfactory. The results obtained from the designed system in the parallel and perpendicular configurations are shown as charts in figure 9 and figure 10, respectively.

Figure 9 Power vs. Distance Chart for Parallel Configuration

Figure 10 Power vs. Distance Chart for Perpendicular Configuration
Different values of power in watts with respect to distance in feet are shown in figure 9 and figure 10 for the parallel and perpendicular configurations, respectively. These power values show that the intensity of the light decreases as the distance increases. However, a sufficient amount of power is obtained wirelessly over a distance of 10 feet with an overall efficiency of 80%. A shift in the resonant frequency was also observed as the distance was gradually increased, due to the imperfect match between the resonant frequencies of the coils; the frequency was therefore readjusted at each measurement interval to obtain maximum power and better efficiency. Overall, the results obtained from the research work were satisfactory.
The charts in figure 11 and figure 12 show the relationship between the efficiency of the designed wireless power system and the distance for the parallel and the perpendicular configuration, respectively.

Figure 11 Efficiency vs. Distance Chart for Parallel Configuration

Figure 12 Efficiency vs. Distance Chart for Perpendicular Configuration
Both charts, in figure 11 and figure 12, show the efficiency decaying as the distance increases. The performance of the designed system is best when the source resonator and the device resonator are close to each other, and it degrades as the distance between them increases, for the parallel as well as the perpendicular configuration. This wireless power transmission system is therefore suited to medium transmitting ranges, where better efficiency can be achieved.
V. CONCLUSIONS AND FUTURE RECOMMENDATIONS
A successful experiment of wireless power transmission over a distance of 10 feet with an overall efficiency of 80% was carried out in this research work. The designed wireless power transmission system is low-cost, reliable, efficient, secure, and environmentally friendly, and has negligible interaction with exterior forces/objects. It can be used in various areas of application. In consumer electronics, the designed system can wirelessly power home and industrial appliances, including televisions, cell phones, room lighting, laptops, propeller displays, clocks, etc. In the medical field, the presented system can power heart assist pumps, pacemakers, infusion pumps, etc. It can also be used to charge electric vehicles efficiently, and in the military to power robots, vehicles, and other necessary equipment of a soldier.
In future, significant research can be carried out in the area of wireless power. Smaller wireless power transmission systems with better efficiency over larger distances can be developed. Efficient systems can be designed to transmit tens to thousands of kilowatts of power over hundreds of miles with maximum efficiency and performance.
ACKNOWLEDGMENT
We would like to express our appreciation to our beloved parents for their unconditional love and support that let us through the
toughest days in our life.
REFERENCES:
[1] Bakshi U. A. and Bakshi V. U., ―Electrical Machines – I‖, Pune, Technical Publications Pune, v1, n1, ISBN: 978-8-18-
431535-6, 2009.
[2] Chapman Stephen J., ―Electrical Machinery Fundamentals‖, New York, McGraw-Hill Companies, Inc., v1, n5, ISBN: 987-0-
07-352954-7, 2012.
[3] Tesla Nikola, ―The transmission of electrical energy without wires‖, Electrical World and Engineer, March 1905.
[4] Corum K. L. and Corum J. F., ―Nikola Tesla and the Diameter of the Earth: A Discussion of One of the Many Modes of
Operation of the Wardenclyffe Tower‖, 1996.
[5] Syed A. Nasar, ―Schaum‘s Outline of Theory and Problems of Electric Machines and Electro mechanics‖, New York,
McGraw-Hill Companies, Inc., v1, n2, ISBN: 0-07-045994-0, 1998.
[6] Dave Baarman and Joshua Schwannecke, ―Understanding Wireless Power‖, December 2009, Web,
http://ecoupled.com/pdf/eCoupled_Understanding_Wireless_Power.pdf, last visited on July 25, 2014.
[7] The Economist, ―Wireless charging, Adapter dies‖, November 2008, Web,
http://www.economist.com/science/tq/displayStory.cfm?story_id=13174387, last visited on July 25, 2014.
[8] Fernandez J. M. and Borras J. A., ―Contactless battery charger with wireless control link‖, U. S. Patent: 6,184,651, February
2001.
[9] Sahai A. and Graham D., "Optical wireless power transmission at long wavelengths", 2011 IEEE International Conference on Space Optical Systems and Applications (ICSOS), ISBN: 978-1-4244-9686-0, June 2011.
[10] Landis G. A., ―Applications for Space Power by Laser Transmission‖, SPIEOEOLC 1994, Conference on SPIE Optics,
Electro-optics, and Laser, v2121, p252-55, January 1994.
[11] Space Island Group, ―Space Solar Energy Initiative‖, Web, http://www.spaceislandgroup.com/solarspace.html, last visited on
July 27, 2014.
[12] Brown W. C., Mims J. R., and Heenan N. I., "An Experimental Microwave-Powered Helicopter", 1965 IEEE International Convention Record, Vol. 13, Part 5, pp. 225-235, 1965.
[13] Lan Sun Luk J. D., Celeste A., Romanacce, Chane Kuang Sang L., and Gatina J. C., "Point-to-Point Wireless Power Transportation in Reunion Island", 48th International Astronautical Congress (IAC 1997), October 1997.
[14] Karalis A., Joannopoulos J. D., and Soljacic M., ―Efficient Wireless Non-Radiative Mid-range Energy Transfer‖, Annals of
Physics, 323, 2008, p34-48, April 27, 2007.
[15] EetIndia.co.in, ―MIT lights 60W light-bulb by wireless power transmission‖, Web,
http://www.eetindia.co.in/ART_8800467843_1800005_NT_4ba623b8.HTM, last visited on July 27, 2014.
[16] TG Daily, ―Intel imagines wireless power for your laptop‖, August 2008, http://www.tgdaily.com/content/view/39008/113/,
last visited on July 27, 2014.
[17] Lucas Jorgensen and Adam Culberson, ―Wireless Power Transmission Using Magnetic Resonance‖, 2008, Web,
http://www.cornellcollege.edu/physics/courses/phy312/Student-Projects/Magnetic-Resonance/Magnetic-Resonance.html, last
visited on July 27, 2014.
[18] Mandip Jung Sibakoti and Joey Hambleton, ―Wireless Power Transmission Using Magnetic Resonance‖, December 2011,
Web, www.cornellcollege.edu/physics/files/mandip-sibakoti.pdf, last visited on July 27, 2014

Removal of phenol from Effluent in Fixed Bed: A Review
Sunil J. Kulkarni¹
¹Chemical Engineering Department, Datta Meghe College of Engineering, Airoli, Navi Mumbai, Maharashtra, India
E-mail: suniljayantkulkarni@gmail.com

Abstract— Phenol removal from wastewater is a very widely studied area of research. The practical approach to phenol removal by adsorption involves the study of batch adsorption and, more importantly, of fixed bed operation. In the present study, various aspects of fixed bed adsorption of phenol are discussed and the research carried out on this topic is reviewed. Phenol removal in fixed beds has been carried out using adsorbents, biosorbents, and aerobic and anaerobic biological mechanisms. In most of the investigations, fixed bed adsorption was found to be satisfactory in terms of removal efficiency and time. The nature of the breakthrough curve was justified using various models, and the experimental data were in agreement with the model results. In most cases, the equilibrium capacity increased with increasing influent concentration and bed height, and decreased with increasing flow rate.
Keywords— Adsorption, saturation time, isotherms, kinetics, flow rate, concentration, removal efficiency.
I. INTRODUCTION
Industrial effluent is a major source of pollution discharged to rivers, land and other reservoirs. One of the major pollutants of great environmental concern is phenol. Wastewater from industries such as paper and pulp, resin manufacturing, tanning, textile, plastic, rubber, pharmaceutical and petroleum contains different types of phenols. Phenolic compounds are harmful to organisms even at low concentrations, and many have been classified as hazardous pollutants because of their potential harm to human health. Various methods used for phenol removal from wastewater include abiotic (non-biological) processes such as adsorption, photodecomposition, volatilization, coupling to soil humus and thermal degradation. Removal of phenol by adsorption is a very effective treatment method, and the use of a fixed bed for phenol removal offers many advantages, such as flexibility, adaptability and high removal efficiency. In the present study, the work done in this field is summarized. The studies and research carried out include isotherm, kinetic and breakthrough curve studies. Batch data were used for the isotherm and kinetic studies, and attempts have also been made to use batch data to predict fixed bed parameters. Various models were used by various researchers to justify the nature of the breakthrough curve.
II. PHENOL REMOVAL IN FIXED BED
Studies of the removal of aqueous phenol using activated carbon prepared from sugarcane bagasse in a fixed bed adsorption column were carried out by Karunarathne and Amarasinghe [1]. They prepared the adsorbent from fine bagasse pith collected from a leading local sugar manufacturing factory in Sri Lanka. The bagasse was washed and dried in an oven at 70 °C for 24 h. Activated carbon (AC) was prepared by heating a bagasse sample at 600 °C for 1 hour in a muffle furnace in the absence of air. AC particles between 1 and 2 mm in size were used for all the experiments. They conducted column experiments using a glass tube of 3 cm diameter and 55 cm height, varying the weight of activated carbon at an initial solution concentration of 20 mg/l. Many parameters are involved in evaluating the performance of a fixed bed column, such as initial solution concentration, flow rate, amount of adsorbent used and particle size of the adsorbent. The results show that increasing the adsorbent dose in the column enhances the adsorbent capacity of the bed. Furthermore, the percentage of the length of the unused bed relative to the original bed height decreases as the amount of adsorbent used increases.

Anisuzzaman et al. investigated phenol adsorption in an activated carbon packed bed column with emphasis on dynamic simulation [2]. Their main study was aimed at the dynamic simulation of phenol adsorption within a packed bed column filled with activated carbon derived from date stones. Parameters such as column length, inlet liquid flow rate, initial phenol concentration of the feed liquid and characteristics of the activated carbon were investigated based on dynamic simulation using Aspen Adsorption V7.1. However, based on the simulation, they concluded that the adsorption column is not feasible for a conventional water treatment plant.

A review on the removal of phenol from wastewater in packed bed and fluidized bed columns was done by Girish and Murty [3]. Their study provided a bird's-eye view of the packed and fluidized bed columns used for treatment of wastewater containing phenol, the different operational conditions, and their performance. They concluded that novel efforts in reactor design are indispensable to enhance the performance of reactors for phenol adsorption.
Gayatri and Ahmaruzzaman studied adsorption techniques for the removal of phenolic compounds from wastewater using low-cost natural adsorbents [4]. Though activated carbon is an effective adsorbent, its widespread use is restricted by its high cost and the substantial losses during regeneration. Their study indicated that fixed bed adsorption is an efficient method for phenol removal, and the data obtained during the investigation agree with the models available to relate the breakthrough time and the breakthrough curve.

Ekpete et al. used fluted pumpkin and commercial activated carbon for fixed bed adsorption of chlorophenol [5]. They compared the chlorophenol removal efficiency of fluted pumpkin stem waste with that of a commercial activated carbon. The fixed bed experiments studied flow rate (2-4 ml/min), initial concentration (100-200 mg/l) and bed height (3-9 cm). Column bed capacity and exhaustion time increased with increasing bed height, while the bed capacity decreased with increasing flow rate. They observed that the column performed best at the lowest flow rate of 2 ml/min. It was also observed that an increase in flow rate decreased the breakthrough time, exhaustion time and uptake capacity of chlorophenol, due to insufficient residence time of the chlorophenol in the column.

Li et al. developed a mathematical model for a multicomponent competitive adsorption process to describe the mass transfer kinetics in a fixed-bed adsorber packed with activated carbon fibers (ACFs) [6]. They analyzed the effects of competitive adsorption equilibrium constants, axial dispersion, external mass transfer, and intraparticle diffusion resistances on the breakthrough curves for weakly-adsorbed and strongly-adsorbed components. It was observed during the analysis that the effects of intrafiber and external mass transfer resistances on the breakthrough curves can be neglected for a fixed-bed adsorber packed with ACFs; axial dispersion was confirmed to be the main parameter controlling the adsorption kinetics.

El-Ashtoukhy et al. investigated the removal of phenolic compounds from petroleum waste by electrocoagulation using a fixed bed electrochemical reactor [7]. The removal of phenolic compounds was studied in batch mode in terms of various parameters, namely pH, operating time, current density, initial phenol concentration, addition of NaCl, temperature and the effect of phenol structure (effect of functional groups). Their study revealed that the optimum conditions for the removal of phenolic compounds were a current density of 8.59 mA/cm2, pH = 7, NaCl concentration of 1 g/L and a temperature of 25 °C. From this research, electrocoagulation of phenolic compounds using Al Raschig rings connected together as a fixed bed sacrificial anode appears to be a very efficient method: 100% removal of the phenol compound after 2 hrs was achieved for a 3 mg/l phenol concentration in real refinery wastewater at the optimum conditions.

Sorour et al. studied the application of an adsorption packed-bed reactor model for phenol removal [8]. They conducted experiments to determine the Langmuir equilibrium coefficients (α and Xm) and the bulk sorbate solution concentration at different adsorption column depths and times. The model equations, a combination of particle kinetics and transport kinetics, were used to predict the relations between sorbate concentration and flow rate as variables with column depth at any time. Granular activated carbon [AquaSorb 2000] and filtration anthracite [ANSI/AWWA B100-96] were used as sorbents, with phenol as sorbate, over a range of phenol concentrations (100-300 mg/l). The results of the model were in good agreement with the experimental data.

The investigation of removal of phenol and lead from synthetic wastewater by adsorption onto granular activated carbon in fixed bed adsorbers was carried out by Sulaymon et al. [9]. They used fixed bed adsorbers for the removal of phenol and lead (II) onto granular activated carbon (GAC) in single and binary systems. A general rate multi-component model, which considers both external and internal mass transfer resistances as well as axial dispersion with a non-linear multi-component isotherm, was utilized to predict the fixed bed breakthrough curves for the dual-component system. The results showed that the general rate model describes the dynamic behavior of the GAC adsorber column satisfactorily.

Research on fixed bed column studies of the sorption of para-nitrophenol from aqueous solutions using a cross-linked starch based polymer was conducted by Sangeeta et al. [10]. The column experiments on cross-linked starch showed that the adsorption efficiency increased with increasing influent concentration and bed height, and decreased with increasing flow rate. The experimental data were well fitted by the Yoon-Nelson model (a sketch of this breakthrough model is given at the end of this section). It was concluded that the adsorbent prepared by cross-linking starch with HMDI was effective for the removal of para-nitrophenol (pNP) from wastewater. A maximum equilibrium capacity of 42.64 mg/g for pNP was observed at an influent concentration of 100 mg/L, a bed height of 7.5 cm and a flow rate of 4 ml/min.

Bakhshi et al. used an upflow anaerobic packed bed (UAPB) reactor for phenol removal [11]. The operating conditions were a hydraulic retention time (HRT) of 24 h under mesophilic (30±1 °C) conditions. The operation was split into four phases, with phenol concentrations in phases 1, 2, 3 and 4 of 100, 400, 700 and 1000 mg/l, respectively. The reactor reached steady state conditions on the 8th day, with a phenol removal efficiency of 96.8% and a biogas production rate of 1.42 l/d in phase 1. An increase of the initial phenol concentration in phase 2 resulted in a slight decrease in phenol removal efficiency. Phases 3 and 4 of the startup followed the same trend.
In phases 3 and 4, the phenol removal efficiencies at steady state were 98.4% and 98%, respectively. A sudden decrease in biogas production was observed with each stepwise increase of the phenol concentration.

Dynamic studies of nitrophenol sorption on perfil in a fixed-bed column were carried out by Yaneva et al. [12]. They investigated the adsorption of two substituted nitrophenols, namely 4-nitrophenol (4-NP) and 2,4-dinitrophenol (2,4-DNP), from aqueous solutions onto perfil in a fixed bed. They applied the theoretical solid diffusion control (SDC) model, which describes single-solute adsorption in a fixed bed based on the linear driving force (LDF) kinetic model, to the investigated systems, and used the Biot number as an indicator of intraparticle diffusion. The Biot number was found to decrease with increasing bed depth, indicating that the film resistance increased or the intraparticle diffusion resistance decreased.

Coated sand (CS) filter media were used by Al-Obaidy to remove phenol and 4-nitrophenol from aqueous solutions in batch experiments [13]. The influence of process variables, represented by solution pH, contact time, initial concentration and adsorbent dosage, on the removal efficiency of phenol and 4-nitrophenol was studied.

The adsorption of phenol from aqueous solution onto natural zeolite was studied in a fixed bed column by Ebrahim [14]. Experiments were carried out to study the effect of influent concentration, flow rate, bed depth and temperature on the performance of the fixed bed. The study indicated good matching between experimental and predicted data in the batch experiments using the surface diffusion method. It was also observed that the Homogeneous Surface Diffusion Model (HSDM), which includes film mass transfer and surface diffusion resistance, provides a good description of the adsorption process. With increasing concentration the breakthrough curve became steeper, because of the increase in the driving force.

The investigation of adsorption of phenol, p-chlorophenol and mercuric ions from aqueous solution onto activated carbon in fixed bed columns was done by McKay and Bino [15]. It was observed that parameters like bed depth, solution flow rate and pollutant concentration affect the breakthrough curve and breakthrough time. The bed depth service time approach was used to analyze the data, and the experimental data agreed with the model.

In the case of modeling, insufficient models are available to describe and predict fixed-bed or column adsorption; mathematical models proposed to describe batch adsorption in terms of isotherm and kinetic behavior can be used for the study of fixed beds. The review done by Xu et al. indicates that the general rate models (and "general rate type" models) and the linear driving force (LDF) model generally fit the experimental data well in most cases, but are relatively time-consuming [16]. It was also found in the review that the Clark model is suitable to describe column adsorption obeying the Freundlich isotherm, but does not show conspicuously better accuracy than the above models.

Research on biological degradation of chlorophenols in packed-bed bioreactors using mixed bacterial consortia was carried out by Zilouei and Guieysse [17]. For the continuous treatment of a mixture of 2-chlorophenol (2CP), 4-chlorophenol (4CP), 2,4-dichlorophenol (DCP) and 2,4,6-trichlorophenol (TCP), two packed-bed bioreactors filled with foamed glass bead carriers were tested at 14 °C and 23 °C. The results presented in their study represent some of the highest chlorophenol volumetric removal rates reported, even in comparison with the rates achieved in well homogenized systems such as fluidized bed and air-lift reactors. Maximum removal of up to 99 percent was achieved, with outlet concentrations below 0.1 mg/l.

Nakhli et al. investigated biological removal of phenol from saline wastewater using a moving bed biofilm reactor containing acclimated mixed consortia [18]. It was observed that the performance of the reactor depends on parameters such as inlet phenol concentration (200-1200 mg/L), hydraulic retention time (8-24 h), inlet salt content (10-70 g/L), phenol shock loading, hydraulic shock loading and salt shock loading. The aerobic moving bed biofilm reactor (MBBR) was able to remove up to 99% of the phenol. They concluded that an MBBR system with a high concentration of active mixed biomass can play a prominent role in treating saline wastewaters containing phenol very efficiently in industrial applications.
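As a closing illustration of the breakthrough-curve modelling referred to above, here is a minimal Python sketch of the Yoon-Nelson model; the rate constant and 50%-breakthrough time are hypothetical, not fitted to any cited study.

import numpy as np

# Yoon-Nelson breakthrough model: C/C0 = 1 / (1 + exp(k * (tau - t))),
# where tau is the time required for 50% breakthrough.
k_YN = 0.05    # rate constant, 1/min (hypothetical)
tau = 120.0    # 50% breakthrough time, min (hypothetical)

t = np.linspace(0, 300, 7)                      # elapsed time, min
ct_c0 = 1.0 / (1.0 + np.exp(k_YN * (tau - t)))  # effluent/influent concentration ratio

for ti, r in zip(t, ct_c0):
    print(f"t = {ti:5.0f} min  C/C0 = {r:.3f}")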
III. CONCLUSION
Phenol removal by fixed bed operation is a very promising method of treatment; percentage removals of the order of 99 to 100 percent have been reported. The nature of the breakthrough curve is affected by parameters such as initial concentration, bed depth and flow rate: the equilibrium adsorption capacity increases with initial concentration and bed depth, and decreases with flow rate. The nature of the breakthrough curve was justified in most cases by the available models. There is still scope for developing models for the fixed bed, as the available models in a few studies were not able to explain the fixed bed adsorption phenomenon completely in terms of breakthrough time, saturation time and retention time.

REFERENCES:
1. H.D.S.S. Karunarathne, B.M.W.P.K. Amarasinghe, "Fixed Bed Adsorption Column Studies for the Removal of Aqueous Phenol from Activated Carbon Prepared from Sugarcane Bagasse", Energy Procedia, Vol. 34, pp. 83-90, 2013.
2. S.M. Anisuzzaman, Awang Bono, Duduku Krishnaiah, Yit Zen Tan, "A study on dynamic simulation of phenol adsorption in activated carbon packed bed column", Journal of King Saud University – Engineering Sciences, Vol. 30, p. 30, 2014.
3. Girish C.R. and Ramachandra Murty V. ―Removal of Phenol from Wastewater in Packed Bed and Fluidized Bed Columns: A
Review‖, International Research Journal of Environment Sciences, Vol. 2, No.10, pp.96-100, 2013.
4. S. Laxmi Gayatri, Md. Ahmaruzzaman, ―Adsorption technique for the removal of phenolic compounds from wastewater using
low-cost natural adsorbents‖, Assam University Journal of Science & Technology : Physical Sciences and Technology,Vol. 5 ,
No.2,pp.156-166, 2010.
5. Ekpete, O.A, M. Horsfall Jnr and A.I. Spiff, ―Bed Adsorption Of Chlorophenol On To Fluted Pumpkin And Commercial
Activated Carbon‖, Australian Journal of Basic and Applied Sciences, Vol.5, No.11,pp. 1149-1155, 2011.
6. Ping Li,Guohua Xiu, Lei Jiang, ―Competitive Adsorption of Phenolic Compounds onto Activated Carbon Fibers in Fixed Bed‖,
Journal of Environmental Engineering, Vol. 127, No. 8, pp. 730-734, 2001.
7. E-S.Z. El-Ashtoukhy, Y.A.El-Taweel, O. Abdelwahab , E.M.Nassef, ―Treatment of Petrochemical Wastewater Containing
Phenolic Compounds by Electrocoagulation Using a Fixed Bed Electrochemical Reactor‖, Int. J. Electrochem. Sci.,
Vol.8,pp.1534 – 1550,2013.
8. M. T. Sorour, F. Abdelrasoul and W.A. Ibrahim, ―Application Of Adsorption Packed-Bed Reactor Model For Phenol Removal‖,
Tenth International Water Technology Conference, IWTC10 Alexandria, Egypt ,pp131-144,2006.
9. Sulaymon, Abbas Hamid; Abbood, Dheyaa Wajid; Ali, Ahmed Hassoon, ―Removal of phenol and lead from synthetic wastewater
by adsorption onto granular activated carbon in fixed bed adsorbers: prediction of breakthrough curves‖, Desalination & Water
Treatment, Vol. 40,No.1-3, pp.244, 2012.
10. Garg Sangeeta, Kohli Deepak and Jana A. K., ―Fixed Bed Column Studies For The Sorption Of Para-Nitrophenol From Aqueous
Solutions Using Cross-Linked Starch Based Polymer‖, Journal of Environmental Research And Development, Vol. 7
,No.2A,pp.843-850,2012.
11. Einab Bakhshi, Ghasem Najafpour, Bahram Navayi Neya, Esmaeel Kariminezhad, Roya Pishgar, Nafise Moosav, "Recovery of Upflow Anaerobic Packed Bed Reactor from High Organic Load during Startup for Phenolic Wastewater Treatment", Chemical Industry & Chemical Engineering Quarterly, Vol. 17, No. 4, pp. 517-524, 2011.
12. Zvezdelina Yaneva, Mirko Marinkovski, Liljana Markovska, Vera Meshko, Bogdana Koumanova, "Dynamic studies of nitrophenols sorption on perfil in a fixed-bed column", Vol. 27, No. 2, pp. 123-132, 2008.
13. Asrar Al-Obaidy, ―Removal of Phenol Compounds from Aqueous Solution Using Coated Sand Filter Media‖, Iraqi Journal of
Chemical and Petroleum Engineering, Vol.14 No.3 pp. 23- 31,2013.
14. Shahlaa E. Ebrahim, ―Modeling the Removal of Phenol by Natural Zeolitein Batch and Continuous Adsorption Systems‖,
Journal of Babylon University/ Engineering Sciences, Vol.21, No.1,2013.
15. Mckay, Gordon Bino, M.J., ―Fixed Bed Adsorption for the Removal of Pollutants from Water‖, Environ. Pollut.. Vol. 66, pp. 33-
53,1990.
16. Zhe XU, Jian-guo CAI, Bing-cai PAN, ―Mathematically modeling fixed-bed adsorption in aqueous systems‖, Journal of Zhejiang
University-SCIENCE A (Applied Physics & Engineering),Vol.14,No.3,pp.155-176, 2013.
17. Hamid Zilouei, Benoit Guieysse, Bo Mattiasson, "Biological degradation of chlorophenols in packed-bed bioreactors using mixed bacterial consortia", Process Biochemistry, Vol. 41, pp. 1083-1089, 2006.
18. Seyyed Ali Akbar Nakhli, Kimia Ahmadizadeh, Mahmood Fereshtehnejad, Mohammad Hossein Rostami, Mojtaba Safari and Seyyed Mehdi Borghei, "Biological Removal of Phenol from Saline Wastewater Using a Moving Bed Biofilm Reactor Containing Acclimated Mixed Consortia", Journal of Environmental Engineering, Vol. 127, No. 8, pp. 730-734, 2001.





Derating Analysis for Reliability of Components
K. Bindu Madhavi¹, BH. Sowmya², B. Sandeep², M. Lavanya²
¹Associate Professor, Department of ECE, HITAM, Hyderabad
²Research Scholar, Department of ECE, HITAM, Hyderabad

ABSTRACT: Ensuring reliable operation over an extended period of time is one of the biggest challenges facing present-day electronic systems. The increased vulnerability of components to various electrical, thermal, mechanical, chemical and electromagnetic stresses poses a big threat to attaining the reliability required for various mission-critical applications. Derating can be defined as the practice of limiting electrical, thermal and mechanical stresses on devices to levels below their specified or proven capabilities in order to enhance reliability. If a system is expected to be reliable, one of the major contributing factors must be a conservative design approach incorporating part derating. Realizing the need for derating of electronic and electromechanical parts, many manufacturers have established internal guidelines for derating practices.
In this project, a notch filter circuit used in an aerospace application is selected. Circuit simulation is carried out using E-CAD tools. Derating analysis is then done following the methodology given in MIL-STD-975A, and design margins against this standard are provided as well.
The key to the success of any product lies in its producibility, quality and reliability. A lot of effort is needed to develop a new product, make a prototype and prove its performance. Still more effort is required if it is to be produced in large quantities with a minimum number of rejections. A minimum number of rejections, or an increase in first-time yield, saves production costs, testing time and resources, and hence helps to reduce the cost of the item. The product delivered to the customer should also perform satisfactorily, without failure, under its expected life-cycle operational stresses, and should continue this performance over its expected operational lifetime, or whenever it is required to operate; this property is called reliability. Reliable product performance increases customer satisfaction and builds the manufacturer's brand name.
The increased vulnerability of components to electrical, thermal, mechanical, chemical and electromagnetic stresses poses a big threat to attaining the reliability required for mission-critical applications. Derating is the practice of operating at a lower stress condition than a part's rating.


INTRODUCTION:
Derating is the reduction of electrical, thermal, and mechanical stresses applied to a part in
order to decrease the degradation rate and prolong the expected life of the part. Derating increases the margin of safety between the
operating stress level and the actual failure level for the part, providing added protection from system anomalies unforeseen by the
designer.

DERATING CRITERIA
The derating criteria contained herein indicate the maximum recommended stress values and do not preclude further derating. When derating, the designer must first take into account the specified environmental and operating condition rating factors, consider the actual environmental and operating conditions of the application, and then apply the recommended derating criteria herein. Parts not appearing in these guidelines lack empirical data and failure history. The derating instructions are listed for each commodity in the following paragraphs.
To assure that these derating criteria are observed, an EEE parts list (item by item) shall be generated for each hardware assembly. This list shall, as a minimum, contain the maximum rated capability (such as voltage, current, power, temperature, etc.) of each part in comparison with the design requirements of the application, indicating conformance to the derating criteria specified herein.
In the following derating sections, the term "ambient temperature", as applied to low-pressure or space-vacuum operation, is defined as follows: under conditions of very low atmospheric pressure or space vacuum, heat loss by convection is essentially zero, so the ambient temperature is the maximum temperature of the heat sink or other mounting surface in contact with the part, or the temperature of the surface of the part itself (case temperature).

DERATING LEVELS
The usable range of derating generally lies between the minimum derating point and the point of over-derating. The optimum derating, therefore, should occur at or below the stress point (i.e., voltage, temperature) where a small increase in stress produces a rapid increase in failure rate.


PART QUALITY LEVELS
Derating cannot be used to compensate for using parts of a lower quality than necessary to meet usage reliability
requirements. The quality level of a part has a direct effect on the predicted failure rate.
These derating criteria for hybrid devices such as Integrated circuits, Transistors, Capacitors, Resistors these devices may use
thick film or thin films as interconnections and resistive elements. The primary failure modes are failures of active components,
integrated circuits or transistor chips, and interconnection faults.
The derating criteria for other complex integrated circuits such as LSI, VHSIC, VLSI, Microprocessors), for the memory
devices such as Bipolar, MOS, which are broken up into RAM (Random access memories) and ROM (Read only memories), for
Microwave devices such as GaAs FET, Detectors and Mixers, Varactor diodes, Step recovery diodes, PIN diodes, Tunnel diodes,
IMPATT diodes, Gunn diodes, and Transistors. The derating criteria procedure is even carried out for Surface Acoustic Wave (SAW)
devices such as Delay lines, Oscillators, Resonators, and Filters.
In this project we are derating the hybrid devices which are Resistors, Capacitors and Operational Amplifiers by using an E-
CAD Tool MULTISIM which is a circuit simulator developed by SPICE and designed as per the MIL.STD 975M.

RESISTORS DERATING CRITERIA
The derated power level of a resistor is obtained by multiplying the resistor's nominal power rating by the appropriate power ratio
found on the (y) axis in the graphs below and on the next page. This ratio is also a function of the resistor's maximum ambient
temperature (x axis). The voltage applied to resistors must also be controlled. The maximum applied voltage should not exceed 80%
of the specification maximum voltage rating, or sqrt(P * R), whichever is less, where:
P = derated power (watts).
R = resistance of that portion of the element actually active in the circuit.

This voltage derating applies to DC and regular-waveform AC applications.
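To make the rule concrete, the following Python sketch applies it; the 0.5 power ratio and the 80% voltage factor are assumed illustrative values read off a derating chart of this kind, not figures quoted from the standard.

# Minimal sketch of the resistor derating check described above.
# The power_ratio and voltage_factor defaults are assumed chart values.
import math

def derate_resistor(p_rated_w, v_rated, resistance_ohm,
                    power_ratio=0.5, voltage_factor=0.8):
    """Return (derated power, maximum allowed voltage) for a resistor."""
    p_derated = p_rated_w * power_ratio                   # y-axis ratio from the chart
    v_from_power = math.sqrt(p_derated * resistance_ohm)  # V = sqrt(P * R)
    v_max = min(voltage_factor * v_rated, v_from_power)   # whichever is less
    return p_derated, v_max

# Example: a 0.25 W, 10 kOhm resistor rated at 200 V
print(derate_resistor(0.25, 200.0, 10e3))  # -> (0.125, ~35.4 V)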







Fig: Resistor derating chart

Power and ambient temperature are the principal stress parameters. By applying these parameters per MIL-STD-975M and the
datasheet rules, the resistors present in the design have been derated.

Many military specifications deal with different types of resistors; see the listing of resistor MIL specifications.

CAPACITOR DERATING CRITERIA

The principal stress parameters for capacitors are temperature and DC and/or AC voltage. Voltage derating is accomplished by
multiplying the maximum operating voltage by the appropriate derating factor.


OP - AMPS DERATING CRITERIA
The principal stress parameters for linear microcircuits are the supply voltage, input voltage, output current, total device
power, and junction temperature.
Even though a component is rated to a particular maximum temperature, derating ensures a worst-case design, so that an
unpredictable event, operating condition, or design uncertainty does not cause a component to overheat. However, even without
derating, an integrated circuit is normally specified below its maximum temperature because of part-to-part variations.


So there is always some headroom, but the concern is reliability and good design practice. Derating is a sound design practice
because it lowers the junction temperature of the device, increasing component life and reliability.

The design phase is the first stage at which producibility and reliability factors can be addressed. Things can be improved later, but
only at higher cost. One important step during the design stage is simulation with an E-CAD tool, i.e., MULTISIM. Multisim is
a circuit simulator powered by SPICE, the industry-standard circuit simulation engine developed at Berkeley.

Fig : Outlook of the simulation tool

Many designers perform simulation and basic nominal functional performance analysis of an electronic circuit during the design
stage. This involves applying appropriate inputs to the circuit, simulating, and examining the outputs for the expected/designed
behavior. All component parameters are set to their nominal values, so this approach proves circuit behavior at nominal component
values. A NOTCH FILTER circuit used in an aerospace application is taken up for analysis. The schematic simulated for the analysis
procedure is the notch filter: a band-stop (or band-rejection) filter with a narrow stop band and a high Q-factor.
The performance parameter for the schematic is the notch frequency value, for which a tolerance is specified. To carry out this
derating analysis procedure, we estimate the minimum and maximum currents, voltages, temperature, and the other parameters
considered in the component specifications; the components concerned include resistors, capacitors, operational amplifiers, etc. The
first simulation is run at nominal values for all components in the schematic. Finally, optimum component tolerances, which give a
low rejection rate during production, are obtained.
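As a rough stand-in for the MULTISIM frequency sweep, the Python sketch below evaluates a generic second-order notch transfer function and locates its minimum-gain frequency; the 90 Hz centre frequency and Q = 10 are assumed values standing in for the actual component-level schematic.

# Minimal sketch of a nominal frequency-response analysis of a notch filter.
# H(s) = (s^2 + w0^2) / (s^2 + (w0/Q) s + w0^2) is a generic band-stop form;
# f0 = 90 Hz and Q = 10 are illustrative assumptions.
import numpy as np

f0, Q = 90.0, 10.0
w0 = 2 * np.pi * f0
f = np.linspace(10, 300, 10000)          # sweep 10-300 Hz
s = 1j * 2 * np.pi * f
H = (s**2 + w0**2) / (s**2 + (w0 / Q) * s + w0**2)

f_notch = f[np.argmin(np.abs(H))]        # frequency of minimum gain
print(f"Notch frequency: {f_notch:.2f} Hz")  # expect ~90 Hz, within 90 +/- 3 Hz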

DESIGN ANALYSIS OF THE NOTCH FILTER

Fig : Notch filter schematic design
In the above schematic, FD and FB are analog inputs with a dynamic range of +/-10 Volts. OUT is an analog output with a dynamic
range of +/-10 Volts. V1 and V2 are voltage sources used to simulate the circuit. The circuit exhibits a notch frequency of 90 Hz with
respect to the input on FD with FB grounded.

NOMINAL FUNCTIONAL ANALYSIS:
A nominal functional simulation is run using an EDA tool, with all component values set to nominal. The V2 voltage source is set
to 0 V to ground the FB input. The V1 voltage source is set to a 0.1 V rms sine wave to perform frequency response analysis with
respect to the FD input. The expected nominal value of the notch frequency is 90 +/- 3 Hz. The frequency response on OUT for the
nominal simulation is shown below in the figure. It gives a notch frequency of 91.20 Hz, which is as per expectation.

Fig: Nominal functional analysis of notch frequency
The simulation process consists of AC ANALYSIS, TRANSIENT ANALYSIS, and DC ANALYSIS. The simulation is carried out
with and without the presence of load and at different temperatures to check the design margin and longevity of the components.

Conclusion:
The outputs that are observed are shown below:
AC ANALYSIS:
Frequency response is observed from the magnitude and phase plots.

Fig: Magnitude and phase plots of the frequency response


TRANSIENT RESPONSE:




Many more steps are required to make a reliable product. The product should have a reliability program per the applicable US military standard.

REFERENCES:
[1] Dillard, R.B., Reliability for the Engineer, Book Two, Martin Marietta Corporation, 1973.
[2] "Electronic Reliability Design Handbook," MIL-HDBK-338-1A, October 1988.
[3] Klion, Jerome, A Redundancy Notebook, Rome Air Development Center, RADC-TR-77-287, December 1987.
[4] Lalli, Vincent R. and Speck, Carlton E., "Traveling-Wave Tube Reliability Estimates, Life Tests, and Space Flight Experience," NASA TM X-73541, January 1977.
[5] "Reliability Modeling and Prediction," MIL-STD-756B, November 1981.
[6] Reliability of components, MIL-STD-975M.
[7] National Instruments tutorials and general information: http://search.ni.com/nisearch/app/main/p/bot/no/ap/global/lang/en/pg/1/ps/10/q/multisim %20tutorial/
[8] Transitioning from PSPICE to NI Multisim: A Tutorial http://zone.ni.com/devzone/cda/tut/p/id/5964
[9] Adding components to your database: http://zone.ni.com/devzone/cda/tut/p/id/5607#toc1
[10] Ultiboard PCB layout system: http://digital.ni.com/manuals.nsf/websearch/D97873AF18C4EA84862571F5006D0EF3
http://www.opsalacarte.com/Pages/reliability/reliability_des_comp.html















A Survey on Feature Selection Techniques
Jesna Jose¹
¹P.G. Scholar, Department of Computer Science and Engg, Sree Buddha College of Engg, Alappuzha
E-mail: jesnaakshaya@gmail.com
Abstract— Feature selection is a term commonly used in data mining to describe the tools and techniques available for reducing
inputs to a manageable size for processing and analysis. Feature selection implies not only cardinality reduction, which means
imposing an arbitrary or predefined cutoff on the number of attributes that can be considered when building a model, but also the
choice of attributes, meaning that either the analyst or the modeling tool actively selects or discards attributes based on their
usefulness for analysis. Feature selection is an effective technique for dimension reduction and an essential step in successful data
mining applications. It is a research area of great practical significance and has been developed and evolved to answer the challenges
due to data of increasingly high dimensionality. The objective of feature selection is threefold: improving the prediction performance
of the predictors, providing faster and more cost-effective prediction, and providing a better understanding of the underlying process
that generates the data. This paper is a survey of various feature selection techniques and their advantages and disadvantages.
Keywords— Feature selection, Graph based clustering, Redundancy, Relevance, Minimum spanning tree, Symmetric uncertainty, correlation
INTRODUCTION
Data mining is a form of knowledge discovery essential for solving problems in a specific domain. As the world grows in
complexity, overwhelming us with the data it generates, data mining becomes the only hope for elucidating the patterns that underlie it
[1]. The manual process of data analysis becomes tedious as the size of the data grows and the number of dimensions increases, so
the process of data analysis needs to be computerized. Feature selection plays an important role in the data mining process. It is
essential for dealing with an excessive number of features, which can become a computational burden on learning algorithms as well
as on various feature extraction techniques. It is also necessary even when computational resources are not scarce, since it improves
the accuracy of machine learning tasks. This paper presents a survey of various existing feature selection techniques.
SURVEY
1. Efficient Feature Selection via Analysis of Relevance and Redundancy
This paper [4] proposes a new framework of feature selection that avoids implicit handling of feature redundancy and turns to
efficient elimination of redundant features via explicit redundancy analysis. Relevance definitions divide features into strongly
relevant, weakly relevant, and irrelevant ones; the redundancy definition further divides weakly relevant features into redundant and
non-redundant ones. The goal of this paper is to efficiently find the optimal subset. This goal is achieved through a new framework of
feature selection (figure 1) composed of two steps: first, relevance analysis determines the subset of relevant features by removing
irrelevant ones, and second, redundancy analysis determines and eliminates redundant features from the relevant ones and thus
produces the final subset. Its advantage over the traditional framework of subset evaluation is that, by decoupling relevance and
redundancy analysis, it circumvents subset search and allows an efficient and effective way of finding a subset that approximates the
optimal subset. The disadvantage of this technique is that it does not process image data.

Figure 1: A new framework of feature selection


2. Graph based clustering

The general methodology of graph-based clustering comprises the following five parts [2]:
(1) Hypothesis. The hypothesis can be made so that a graph can be partitioned into densely connected subgraphs that are
sparsely connected to each other.
(2) Modeling. It deals with the problem of transforming data into a graph or modeling the real application as a graph by
specially designating the meaning of each and every vertex, edge as well as the edge weights.
(3) Measure. A quality measure is an objective function that rates the quality of a clustering. The quality measure identifies
the clusterings that satisfy the desirable properties.
(4) Algorithm. An algorithm exactly or approximately optimizes the quality measure. The algorithm can be either top-down
or bottom-up.
(5) Evaluation. Various metrics can be used to evaluate the performance of clustering by comparing with a "ground truth"
clustering.

Graph-based Clustering Methodology
We start with the basic clustering problem. Let X = {x_1, ..., x_n} be a set of data points and S = (s_ij), i, j = 1, ..., n, be the
similarity matrix in which each element indicates the similarity s_ij >= 0 between two data points x_i and x_j. A nice way to
represent the data is to construct a graph on which each vertex represents a data point and the edge weight carries the
similarity of two vertices. The clustering problem in the graph perspective is then formulated as partitioning the graph into
subgraphs such that the edges in the same subgraph have high weights and the edges between different subgraphs have low
weights.
A graph can be represented as a triple G = (V, E, W), where V = {v_1, ..., v_n} is a set of vertices, E ⊆ V × V is a set of edges,
and W = (W_ij), i, j = 1, ..., n, is the adjacency matrix in which each element indicates a non-negative weight (W_ij >= 0)
between two vertices v_i and v_j. The hypothesis behind graph-based clustering can be
stated in the following ways [2]. First, the graph consists of dense subgraphs, such that a dense subgraph contains more well-
connected internal edges connecting the vertices in the subgraph than cutting edges connecting vertices across subgraphs.
Second, a random walk that visits a subgraph will likely stay in the subgraph until many of its vertices have been visited
(Dongen, 2000). Third, among all shortest paths between all pairs of vertices, links between different dense subgraphs are
likely to lie on many shortest paths (Dongen, 2000).
While considering the modeling step, Luxburg (2006) stated the three most common methods to construct a graph: the
ε-neighborhood graph, the k-nearest neighbor graph, and the fully connected graph. Regarding measuring the quality of a cluster, it
is worth noting that the quality measure should not be confused with the vertex similarity measure, which is used to compute edge
weights. The main difference is that a cluster quality measure directly identifies a clustering that fulfills a desirable property, while an
evaluation measure rates the quality of a clustering by comparing it with a ground-truth clustering.
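As an illustration of the modeling step, the following Python sketch builds a k-nearest-neighbor similarity graph with heat-kernel edge weights, one of the constructions cited above; the parameters k and t and the random data are assumptions for illustration.

# Minimal sketch: build a k-nearest-neighbour similarity graph whose
# edge weights carry the similarity of two vertices (heat kernel).
import numpy as np

def knn_graph(X, k=5, t=1.0):
    """X: (n, d) data matrix. Returns a symmetric (n, n) weight matrix W."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]        # k nearest, skipping self
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)    # heat-kernel weight
    return np.maximum(W, W.T)                    # symmetrize: i~j or j~i

W = knn_graph(np.random.rand(100, 3))            # toy data, 100 points in R^3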
Graph based clustering algorithms can be divided into two major classes: divisive and agglomerative. In the divisive
clustering class, we categorize algorithms into several subclasses like cut-based, spectral clustering, multilevel, random walks,
shortest path. Divisive clustering follows top-down style and recursively splits a graph into various subgraphs. The agglomerative
clustering works bottom-up and iteratively merges singleton sets of vertices into subgraphs. The divisive and agglomerative
algorithms are also called hierarchical since they produce multi-level clusterings, i.e., one clustering follows the other by refining
(divisive) or coarsening (agglomerative). Most graph clustering algorithms ever proposed are divisive.

3. Feature Selection for High-Dimensional Data: A Fast Correlation-Based Filter Solution

Symmetric uncertainty is in fact a measure of how much one feature is related to another. This correlation-based filter
approach makes use of the symmetric uncertainty method. This involves two aspects: (1) how to decide whether a feature is
relevant to the class or not; and (2) how to decide whether such a relevant feature is redundant or not when considered with
other relevant features. The solution to the first question can be a user-defined threshold SU value, as in many other feature
weighting algorithms (e.g., Relief). The answer to the second question is more complicated because it may involve analysis of
pairwise correlations between all features (named F-correlation), which results in a time complexity of O(N^2) in the number
of features N for most existing algorithms. To solve this problem, the FCBF (Fast Correlation-Based Filter) algorithm is
proposed [3]. This algorithm involves two steps. The first step selects relevant features and ranks them in descending order of
correlation value. The second step removes redundant features and keeps only the predominant ones.
For predominant feature selection, the following procedure is used (see the sketch after this list):
a) Take the first element Fp as the predominant feature.
b) Then take the next element Fq.
- If Fp happens to be a redundant peer of Fq, remove Fq.
c) After one round of filtering based on Fp, take the remaining feature next to Fp as the new reference and repeat.
d) The algorithm stops when there are no more features to be removed.
The disadvantage of this algorithm is that it does not work well with high-dimensional data.
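The following Python sketch outlines the two-step FCBF procedure described above for discrete-valued features; the threshold delta and the entropy-based SU estimate are illustrative assumptions, not the authors' reference implementation.

# Minimal sketch of FCBF: rank features by symmetric uncertainty with the
# class, then keep only predominant (non-redundant) features.
import numpy as np
from collections import Counter

def entropy(x):
    p = np.array(list(Counter(x).values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

def su(x, y):
    """Symmetric uncertainty: 2 * IG(X|Y) / (H(X) + H(Y))."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))          # joint entropy H(X, Y)
    ig = hx + hy - hxy                      # information gain (mutual information)
    return 2 * ig / (hx + hy) if hx + hy else 0.0

def fcbf(features, target, delta=0.1):
    # Step 1: keep features with SU above the threshold, in descending order.
    ranked = sorted((i for i in range(len(features))
                     if su(features[i], target) > delta),
                    key=lambda i: su(features[i], target), reverse=True)
    # Step 2: remove redundant peers, keeping only predominant features.
    selected = []
    while ranked:
        fp = ranked.pop(0)                  # current predominant feature
        selected.append(fp)
        ranked = [fq for fq in ranked
                  if su(features[fp], features[fq]) < su(features[fq], target)]
    return selected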


CONCLUSION
Feature selection is a term commonly used in data mining to describe the tools and techniques available for reducing inputs to a
manageable size for processing and analysis. It implies not only cardinality reduction, imposing an arbitrary or predefined cutoff on
the number of attributes that can be considered when building a model, but also the choice of attributes, meaning that either the
analyst or the modeling tool actively selects or discards attributes based on their usefulness for analysis. Feature selection techniques
have a wide variety of applications in data mining, digital image processing, etc. Various feature selection techniques and their
advantages as well as disadvantages are depicted in this paper.

REFERENCES:
[1] I.H. Witten, E. Frank and M.A. Hall, Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann, Burlington, 2011.
[2] Zheng Chen, Heng Ji, "Graph-based Clustering for Computational Linguistics: A Survey," Proceedings of the 2010 Workshop on Graph-based Methods for Natural Language Processing, ACL 2010, pages 1-9, Uppsala, Sweden, 16 July 2010.
[3] Lei Yu, Huan Liu, "Feature Selection for High-Dimensional Data: A Fast Correlation-Based Filter Solution," Department of Computer Science & Engineering, Arizona State University, Tempe, AZ 85287-5406, USA.
[4] Lei Yu, Huan Liu, "Efficient Feature Selection via Analysis of Relevance and Redundancy," Journal of Machine Learning Research 5 (2004) 1205-1224.


Face Recognition using Laplace Beltrami Operator by Optimal Linear Approximations
Tapasya Sinsinwar¹, P.K. Dwivedi²
¹Research Scholar (M.Tech, IT), Institute of Engineering and Technology
²Professor and Director Academics, Institute of Engineering and Technology, Alwar, Rajasthan Technical University, Kota (Raj.)

Abstract—We propose an appearance-based face recognition technique called the Laplacian face method. With Locality Preserving
Projections (LPP), the face images are mapped into a face subspace for analysis. Unlike Principal Component Analysis (PCA)
and Linear Discriminant Analysis (LDA), which effectively see only the Euclidean structure of face space, LPP discovers a
subspace that preserves local information and finds a face subspace that best detects the essential face manifold structure. The
Laplacian faces are the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the face manifold. In
this way, the undesirable variations resulting from changes in lighting, facial expression, and pose may be removed or reduced.
Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed
Laplacian face approach with the Eigen face and Fisher face methods on three different face data sets. Experimental results suggest
that the proposed Laplacian face method provides a better representation and attains lower error rates in face recognition.

Keywords—Face recognition, principal component analysis, linear discriminant analysis, locality preserving projections, face
manifold, subspace learning.


1 Introduction
Many face recognition methods have been developed over the past few years. One of the most popular and well-studied approaches
to face recognition is the appearance-based method [28], [16]. With appearance-based methods, we generally represent an
image of size n × m pixels by a vector in an n × m-dimensional space. In practice, these n × m-dimensional spaces are too large to
permit robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction methods
[7], [9], [6], [10], [16], [15], [21], [28], [29], [37]. Two of the most popular methods for this purpose are Principal Component
Analysis (PCA) [26] and Linear Discriminant Analysis (LDA) [3]. PCA is an eigenvector technique intended to model linear
variation in high-dimensional data. PCA performs dimensionality reduction by projecting the original n-dimensional data onto the k
(<< n)-dimensional linear subspace spanned by the leading eigenvectors of the data's covariance matrix. Its aim is to discover a set
of mutually orthogonal basis functions that capture the directions of maximum variance in the data and for which the coefficients
are pairwise decorrelated. For linearly embedded manifolds, PCA is guaranteed to discover the dimensionality of the manifold and
produces a compact representation. Turk and Pentland [29] use Principal Component Analysis to describe face images in terms of a
set of basis functions, or "Eigen faces."
LDA is a supervised learning algorithm. LDA searches for the projection axes on which the data points of different classes are far
from each other while data points of the same class are close to each other. Unlike PCA, which encodes
data in an orthogonal linear space, LDA encodes discriminating information in a linearly separable space using bases that are not
necessarily orthogonal. It is generally believed that algorithms based on LDA are superior to those based on PCA. However, some
recent work [14] demonstrates that, when the training data set is small, PCA can outperform LDA, and also that PCA is less sensitive
to different training data sets. Recently, many research efforts have shown that face images possibly reside on a nonlinear
submanifold [7], [10], [18], [19], [21], [23], [27]. However, both PCA and LDA effectively see only the Euclidean structure. They
fail to discover the underlying structure if the face images lie on a nonlinear submanifold hidden in the image space.
In this paper, we propose a new approach to face analysis (representation and recognition) which explicitly considers the manifold
structure. To be specific, the manifold structure is modelled by a nearest-neighbour graph which preserves the local structure of the
image space. A face subspace is obtained by Locality Preserving Projections (LPP) [9]. Each face image in the image space is mapped
to a low-dimensional face subspace, which is characterized by a set of feature images, called Laplacian faces. The face subspace
preserves local structure and seems to have more discriminating power than the PCA approach for classification purposes. We also
provide theoretical analysis to show that PCA, LDA, and LPP can be obtained from different graph models. Central to this is a graph
structure that depends on the data points. LPP discovers a projection that respects this graph structure. In our theoretical analysis, we
show how PCA, LDA, and LPP arise from the same principle applied to different choices of this graph structure.
It is worthwhile to highlight some aspects of the proposed approach here:


1. While the Eigen faces technique aims to preserve the global structure of the image space, and the Fisher faces technique aims
to preserve the discriminating information, our Laplacian faces method aims to preserve the local structure of the image space. In
various real-world classification problems, the local manifold structure is more important than the global Euclidean structure,
especially when nearest-neighbour-like classifiers are used for classification. LPP seems to have discriminating power even though it
is unsupervised.

2. An effective subspace learning algorithm for face recognition should be able to discover the nonlinear manifold structure of the
face space. Our proposed Laplacian faces technique explicitly models the manifold structure by an adjacency graph. Furthermore,
the Laplacian faces are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator
on the face manifold. They reflect the intrinsic face manifold structure.

3. LPP shares some similar properties with LLE [18], such as a locality-preserving character; however, their objective functions are
completely different. LPP is obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami
operator on the manifold. LPP is linear, whereas LLE is nonlinear. Furthermore, LPP is defined everywhere, while LLE is defined
only on the training data points, and it is unclear how to evaluate the map for new test points. In contrast, LPP may be simply applied
to any new data point to locate it in the reduced representation space.

2 PCA and LDA
One approach to dealing with the problem of the extreme dimensionality of the image space is to reduce the dimensionality by
combining features. Linear combinations are particularly attractive because they are simple to compute and analytically tractable. In
effect, linear methods project the high-dimensional data onto a lower-dimensional subspace.
Consider the problem of representing all of the vectors in a set of n d-dimensional samples x_1, x_2, ..., x_n, with zero mean, by a
single vector y = {y_1, y_2, ..., y_n} such that y_i represents x_i. Specifically, we find a linear mapping from the d-dimensional
space to a line. Without loss of generality, we denote the transformation vector by w. That is, w^T x_i = y_i. In reality, the magnitude
of w is of no real significance because it merely scales y_i. In face recognition, each vector x_i denotes a face image.

Different objective functions will yield different algorithms with different properties. PCA aims to extract a subspace in which the
variance is maximized. Its objective function is as follows:

max_w \sum_{i=1}^{n} (y_i - \bar{y})^2,  (1)

\bar{y} = (1/n) \sum_{i=1}^{n} y_i.  (2)

The output set of principal vectors w_1, w_2, ..., w_k is an orthonormal set of vectors representing the eigenvectors of the sample
covariance matrix associated with the k < d largest eigenvalues.
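A minimal numpy sketch of this construction follows; the array shapes and random data are illustrative assumptions.

# Minimal sketch of the PCA projection just described: the k leading
# eigenvectors of the sample covariance matrix of zero-mean data.
import numpy as np

def pca(X, k):
    """X: (n, d) samples, one per row. Returns (n, k) projections and (d, k) basis."""
    Xc = X - X.mean(axis=0)                     # enforce zero mean
    cov = Xc.T @ Xc / len(X)                    # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]                    # k largest -> orthonormal w_1..w_k
    return Xc @ W, W

Y, W = pca(np.random.rand(200, 50), k=10)       # toy data: 200 samples in R^50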


3 Learning a locality preserving subspace
PCA and LDA aim to preserve the global structure. However, in many real-world applications, the local structure is more important.
In this section, we describe Locality Preserving Projection (LPP) [9], a new algorithm for learning a locality preserving subspace.
The comprehensive derivation and theoretical justifications of LPP can be found in [9]. LPP seeks to preserve the intrinsic geometry
of the data and the local structure. The objective function of LPP is as follows:

min \sum_{ij} (y_i - y_j)^2 W_ij,

where y_i is the one-dimensional representation of x_i and the matrix W is a similarity matrix. A possible way of defining W is as
follows:

W_ij = exp(-||x_i - x_j||^2 / t), if x_i and x_j are close;
W_ij = 0 otherwise.


The objective function with our choice of symmetric weights W_ij (W_ij = W_ji) incurs a heavy penalty if neighbouring points x_i
and x_j are mapped far apart, i.e., if (y_i - y_j)^2 is large. Therefore, minimizing it is an attempt to ensure that, if x_i and x_j are
"close," then y_i and y_j are close as well. Following some simple algebraic steps, we see that:

(1/2) \sum_{ij} (y_i - y_j)^2 W_ij
= (1/2) \sum_{ij} (w^T x_i - w^T x_j)^2 W_ij
= w^T X D X^T w - w^T X W X^T w
= w^T X (D - W) X^T w
= w^T X L X^T w,

where X = [x_1, x_2, ..., x_n], and D is a diagonal matrix; its entries are column (or row, since W is symmetric) sums of W,
D_ii = \sum_j W_ji. L = D - W is the Laplacian matrix [6].



4 Locality Preserving Projections
4.1. The linear dimensionality reduction problem
The basic problem of linear dimensionality reduction is the following: given a set x_1, x_2, ..., x_m in R^n, find a transformation
matrix A that maps these m points to a set of points y_1, y_2, ..., y_m in R^l (l << n), such that y_i "represents" x_i, where
y_i = A^T x_i.
Our method is of particular interest in the special case where x_1, x_2, ..., x_m ∈ M and M is a nonlinear manifold embedded
in R^n.


4.2. The algorithm
Locality Preserving Projection (LPP) is a linear approximation of the nonlinear Laplacian Eigenmap [2]. The algorithmic procedure is
formally stated below (a code sketch follows at the end of this section):
1. Constructing the adjacency graph: Let G denote a graph with m nodes. We put an edge between nodes i and j if x_i and x_j
are "close". There are two variations:
(a) ε-neighbourhoods: [parameter ε ∈ R] Nodes i and j are connected by an edge if ||x_i - x_j||^2 < ε, where the norm is the usual
Euclidean norm in R^n.
(b) k nearest neighbours: [parameter k ∈ N] Nodes i and j are connected by an edge if i is among the k nearest neighbours of j or j is
among the k nearest neighbours of i.
Note: The process of constructing an adjacency graph outlined above is correct if the data actually lie on a low-dimensional manifold.
In general, though, one might take a more practical viewpoint and construct an adjacency graph based on any principle (for example,
perceptual similarity for natural signals, hyperlink structures for web documents, etc.). Once such an adjacency graph is obtained, LPP
will try to optimally preserve it in choosing projections.

2. Choosing the weights: Here, as well, we have two variations for weighting the edges. W is a sparse symmetric m × m matrix with
W_ij holding the weight of the edge joining vertices i and j, and 0 if there is no such edge.
(a) Heat kernel: [parameter t ∈ R]. If nodes i and j are connected, put
W_ij = exp(-||x_i - x_j||^2 / t).
The justification for this choice of weights can be traced back to [2].
(b) Simple-minded: [No parameter]. W_ij = 1 if and only if vertices i and j are connected by an edge.

3. Eigenmaps: Compute the eigenvectors and eigenvalues for the generalized eigenvector problem:
X L X^T a = λ X D X^T a

where D is a diagonal matrix whose entries are column (or row, since W is symmetric) sums of W, D_ii = \sum_j W_ji. L = D - W is
the Laplacian matrix. The i-th column of the matrix X is x_i.
Let the column vectors a_0, ..., a_{l-1} be the solutions of the generalized eigenvector problem above, ordered according to their
eigenvalues, λ_0 < ... < λ_{l-1}. Thus, the embedding is as follows:
x_i → y_i = A^T x_i, A = (a_0, a_1, ..., a_{l-1}),
where y_i is an l-dimensional vector, and A is an n × l matrix.
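A minimal Python sketch of the three steps follows, assuming a k-nearest-neighbour graph with heat-kernel weights and dense matrices; in practice a PCA pre-projection is commonly used to keep X D X^T nonsingular. The parameters k, t, and l are assumptions.

# Minimal sketch of LPP: adjacency graph, heat-kernel weights, then the
# generalized eigenproblem X L X^T a = lambda X D X^T a.
import numpy as np
from scipy.linalg import eigh

def lpp(X, l=2, k=5, t=1.0):
    """X: (n_dim, m) data matrix, one sample per column. Returns (n_dim, l) A."""
    m = X.shape[1]
    d2 = ((X.T[:, None, :] - X.T[None, :, :]) ** 2).sum(-1)
    W = np.zeros((m, m))
    for i in range(m):                          # step 1: adjacency graph (k-NN)
        nbrs = np.argsort(d2[i])[1:k + 1]
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)   # step 2: heat-kernel weights
    W = np.maximum(W, W.T)                      # symmetrize
    D = np.diag(W.sum(axis=0))                  # column sums of W
    L = D - W                                   # graph Laplacian
    # Step 3: generalized eigenproblem, smallest eigenvalues first.
    vals, vecs = eigh(X @ L @ X.T, X @ D @ X.T)
    return vecs[:, :l]                          # A = (a_0, ..., a_{l-1})

X = np.random.rand(20, 100)                     # toy data: 100 samples in R^20
A = lpp(X)
Y = A.T @ X                                     # embedding: y_i = A^T x_i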

5 Geometrical Justification
The Laplacian matrix L = D - W for a finite graph [4] is analogous to the Laplace Beltrami operator L on compact Riemannian
manifolds. While the Laplace Beltrami operator for a manifold is generated by the Riemannian metric, for a graph it comes from the
adjacency relation.
Let M be a smooth, compact, d-dimensional Riemannian manifold. If the manifold is embedded in R^n, the Riemannian structure on
the manifold is induced by the standard Riemannian structure on R^n. We are looking here for a map from the manifold to the real
line such that points close together on the manifold get mapped close together on the line. Let f be such a map. Assume that
f : M → R is twice differentiable.
Belkin and Niyogi [2] showed that the optimal map preserving locality can be found by solving the following optimization problem
on the manifold:

arg min_{||f||_{L^2(M)} = 1} \int_M ||∇f||^2,

which is equivalent to

arg min_{||f||_{L^2(M)} = 1} \int_M L(f) f,

where the integral is taken with respect to the standard measure on the Riemannian manifold. L is the Laplace Beltrami operator on
the manifold, i.e., L f = -div ∇(f). Thus, the optimal f has to be an eigenfunction of L. The integral \int_M L(f) f can be discretely
approximated by <f(X), L f(X)> = f^T(X) L f(X) on a graph, where

f(X) = [f(x_1), f(x_2), ..., f(x_m)]^T.

If we restrict the map to be linear, i.e., f(x) = a^T x, then we have

f(X) = X^T a,  <f(X), L f(X)> = f^T(X) L f(X) = a^T X L X^T a.

The constraint can be computed as follows:

||f||^2_{L^2(M)} = \int_M (a^T x)^2 dx = \int_M (a^T x x^T a) dx = a^T (\int_M x x^T dx) a,

where dx is the standard measure on the Riemannian manifold. By spectral graph theory [4], the measure dx directly corresponds to
the measure for the graph, which is the degree of the vertex, i.e., D_ii. Thus, ||f||^2_{L^2(M)} can be discretely approximated as
follows:

||f||^2_{L^2(M)} = a^T (\int_M x x^T dx) a ≈ a^T (\sum_i x_i x_i^T D_ii) a = a^T X D X^T a.

Finally, we conclude that the optimal linear projective map, i.e., f(x) = a^T x, can be obtained by solving the following objective
function:

arg min_{a^T X D X^T a = 1} a^T X L X^T a.


These projective maps are the optimal linear approximations to the Eigen functions of the Laplace Beltrami operator on the manifold.
Therefore, they are capable of discovering the nonlinear manifold structure.

6 Experimental Results
Some simple synthetic examples given in [9] show that LPP can have more discriminating power than PCA and be less sensitive to
outliers. In this section, several experiments are carried out to demonstrate the effectiveness of our proposed Laplacian faces
technique for face representation and recognition.

6.1 Face Representation Using Laplacian faces
As defined earlier, a face image can be represented as a point in image space. A typical image of size m × n describes a point in the
m × n-dimensional image space. However, due to the undesirable variations resulting from changes in lighting, facial expression, and
pose, the image space might not be an ideal space for visual representation. The images of faces in the training set are used to learn
such a locality preserving subspace. The subspace is spanned by a set of eigenvectors, i.e., w_0, w_1, ..., w_{k-1}. We can show the
eigenvectors as images; these images may be called Laplacian faces. Using the face database as the training set, we present the first
10 Laplacian faces in the figure, together with Eigen faces and Fisher faces. A face image can be mapped into the locality preserving
subspace by using the Laplacian faces. It is interesting to note that the Laplacian faces are in some ways similar to Fisher faces.


Figure 1: Distribution of the 10 testing samples in the reduced representation subspace. As can be seen, these testing
samples optimally find their coordinates which reflect their intrinsic properties, i.e., pose and expression.

Once the Laplacian faces are created, face recognition [2], [14], [28], [29] becomes a pattern classification task. In this section, we
examine the performance of our proposed Laplacian faces method for face recognition. The system performance is compared with
the Eigen faces method [28] and the Fisher faces method [2], two of the most popular linear methods in face recognition. In this
study, the PIE (pose, illumination, and expression) face database was tested. In all the experiments, pre-processing to locate the faces
was applied. Original images were normalized (in scale and orientation) such that the two eyes were aligned at the same position.
Then, the facial areas were cropped into the final images for matching. Figure 2 shows the original image and the cropped image.
The size of each cropped image in all the experiments is 32 × 32 pixels, with 256 grey levels per pixel. Thus, each image is
represented by a 1,024-dimensional vector in image space.



Figure 2: The original face image and the cropped image

The details of our methods for face detection and alignment can be found in [30], [32]. No further pre-processing is done. Different
pattern classifiers have been applied to face recognition, including nearest-neighbour [2], Bayesian [15], and Support Vector
Machine [17] classifiers. In this paper, we apply the nearest-neighbour classifier for its simplicity, with the Euclidean metric as our
distance measure.
In short, the recognition process has three steps. First, we calculate the Laplacian faces from the training set of face images; then the
new face image to be recognized is projected into the face subspace spanned by the Laplacian faces. Finally, the new face image is
identified by a nearest-neighbour classifier.
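A minimal sketch of these three steps follows, assuming the basis A from the lpp() routine sketched earlier and training images flattened into 1,024-dimensional column vectors; the helper name recognize is hypothetical.

# Minimal sketch of the recognition pipeline: project onto the Laplacian
# face subspace, then classify by nearest neighbour (Euclidean metric).
import numpy as np

def recognize(A, X_train, labels, x_new):
    """A: (n_dim, l) LPP basis; X_train: (n_dim, m); x_new: (n_dim,)."""
    Y_train = A.T @ X_train                 # steps 1-2: project the training set
    y_new = A.T @ x_new                     # project the probe image
    dists = np.linalg.norm(Y_train - y_new[:, None], axis=0)
    return labels[int(np.argmin(dists))]    # step 3: nearest-neighbour label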


Figure 3: The Eigenvalues of LPP and Laplacian Eigenmap


6.1.1 PIE Database
The PIE face database contains 68 subjects with 41,368 face images in total. The face images were captured by 13 synchronized
cameras and 21 flashes, under varying pose, illumination, and expression. We used 170 face images for each individual in our
experiment, 85 for training and the other 85 for testing. Figure 4 shows some of the faces with pose, illumination, and expression
variations from the PIE database.


Figure 4: The sample cropped face images of one individual from PIE database. The original face images are taken under
varying pose, illumination, and expression

Table 1 shows the recognition results. As can be seen, Fisher faces perform comparably to our algorithm on this database, while
Eigen faces perform poorly. The error rates for Laplacian faces, Fisher faces, and Eigen faces are 4.6 percent, 5.7 percent, and 20.6
percent, respectively. Figure 5 shows a plot of error rate versus dimensionality reduction. As can be seen, the error rate of our
Laplacian faces method decreases quickly as the dimensionality of the face subspace increases, and achieves the best result with 110
dimensions. There is no significant improvement if more dimensions are used. Eigen faces achieve the best result with 150
dimensions. For Fisher faces, the dimension of the face subspace is bounded by c - 1, and it achieves the best result with c - 1
dimensions. The dashed horizontal line in the figure shows the best result obtained by Fisher faces.



TABLE 1: Performance Comparison on the PIE Database



Figure 5: Recognition accuracy versus dimensionality reduction on PIE database


6.2 Discussion
These experiments on the database have been systematically performed and disclose a number of noteworthy points:

1. All three approaches performed better in the optimal face subspace than in the original image space.

2. In all the experiments, Laplacian faces consistently perform better than Eigen faces and Fisher faces.
These experiments also demonstrate that our algorithm is especially suitable for frontal face images. Likewise, our algorithm
benefits from more training samples, which is important for real-world face recognition systems.

3. Compared to the Eigen faces method, the Laplacian faces method encodes more discriminating information in the low-dimensional
face subspace by preserving local structure, which is more important than the global structure for classification, especially when
nearest-neighbour-like classifiers are used. In effect, if there is reason to believe that Euclidean distances (||x_i - x_j||) are significant
only if they are small (local), then our algorithm finds a projection that respects such a belief.

7 Conclusion and future work
The manifold ways of face analysis (representation and recognition) are introduced in this paper in order to identify the underlying
nonlinear manifold structure by means of linear subspace learning. To the best of our knowledge, this is the first devoted work on
face representation and recognition which explicitly considers the manifold structure. The manifold structure is approximated by the
adjacency graph computed from the data points. Using the notion of the graph Laplacian, we then compute a transformation matrix
which maps the face images into a face subspace. We call this the Laplacian faces approach. The Laplacian faces are obtained by
finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the face manifold. This linear
transformation optimally preserves local manifold structure.
One of the vital problems in face manifold learning is to estimate the intrinsic dimensionality of the nonlinear face manifold, or,
equivalently, its degrees of freedom. We know that the dimensionality of the manifold is equal to the dimensionality of the local
tangent space. Some previous works [35], [36] show that the local tangent space can be estimated using points in a neighbour set.
Hence, one possibility is to estimate the dimensionality of the tangent space.
An additional possible extension of our work is to study the use of unlabelled samples. It is important to note that the work
presented here is a general method for face analysis (face representation and recognition) by discovering the underlying face
manifold structure. Learning the face manifold (or learning the Laplacian faces) is essentially an unsupervised learning process.
Since the face images are assumed to reside on a submanifold embedded in a high-dimensional ambient space, we believe that the
unlabelled samples are of great value.

REFERENCES:
[1] A. Levin and Shashua, "Principal Component Analysis over Continuous Subspaces and Intersection of Half-Spaces," Proc. European Conf. Computer Vision, May 2002.
[2] A. Levin, Shashua and S. Avidan, "Manifold Pursuit: A New Approach to Appearance Based Recognition," Proc. Int'l Conf. Pattern Recognition, Aug. 2002.
[3] A.M. Martinez and A.C. Kak, "PCA versus LDA," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 228-233, Feb. 2001.
[4] A.U. Batur and M.H. Hayes, "Linear Subspace for Illumination Robust Face Recognition," Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, Dec. 2001.
[5] A. Pentland and Moghaddam, "Probabilistic Visual Learning for Object Representation," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, pp. 696-710, 1997.
[6] F.R.K. Chung, "Spectral Graph Theory," Proc. Regional Conf. Series in Math., no. 92, 1997.
[7] H. Murase and S.K. Nayar, "Visual Learning and Recognition of 3-D Objects from Appearance," Int'l J. Computer Vision, vol. 14, pp. 5-24, 1995.
[8] H. Zha and Z. Zhang, "Isometric Embedding and Continuum ISOMAP," Proc. 20th Int'l Conf. Machine Learning, pp. 864-871, 2003.
[9] H.S. Seung and D.D. Lee, "The Manifold Ways of Perception," Science, vol. 290, Dec. 2000.
[10] J. Shi and J. Malik, "Normalized Cuts and Image Segmentation," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, pp. 888-905, 2000.
[11] J. Yang, Y. Yu, and W. Kunz, "An Efficient LDA Algorithm for Face Recognition," Proc. Sixth Int'l Conf. Control, Automation, Robotics and Vision, 2000.
[12] J.B. Tenenbaum, V. de Silva, and J.C. Langford, "A Global Geometric Framework for Nonlinear Dimensionality Reduction," Science, vol. 290, Dec. 2000.
[13] K.-C. Lee, J. Ho, M.-H. Yang, and D. Kriegman, "Video-Based Face Recognition Using Probabilistic Appearance Manifolds," Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 313-320, 2003.
[14] L. Sirovich and M. Kirby, "Low-Dimensional Procedure for the Characterization of Human Faces," J. Optical Soc. Am. A, vol. 4, pp. 519-524, 1987.
[15] L. Wiskott, J.M. Fellous, N. Kruger, and C.v.d. Malsburg, "Face Recognition by Elastic Bunch Graph Matching," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, pp. 775-779, 1997.
[16] L.K. Saul and S.T. Roweis, "Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds," J. Machine Learning Research, vol. 4, pp. 119-155, 2003.
[17] M. Belkin and P. Niyogi, "Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering," Proc. Conf. Advances in Neural Information Processing Systems 15, 2001.
[18] M. Belkin and P. Niyogi, "Using Manifold Structure for Partially Labeled Classification," Proc. Conf. Advances in Neural Information Processing Systems 15, 2002.
[19] M. Brand, "Charting a Manifold," Proc. Conf. Advances in Neural Information Processing Systems, 2002.
[20] M. Turk and A.P. Pentland, "Face Recognition Using Eigen faces," IEEE Conf. Computer Vision and Pattern Recognition, 1991.
[21] M.-H. Yang, "Kernel Eigen faces vs. Kernel Fisher faces: Face Recognition Using Kernel Methods," Proc. Fifth Int'l Conf. Automatic Face and Gesture Recognition, May 2002.
[22] P.J. Phillips, "Support Vector Machines Applied to Face Recognition," Proc. Conf. Advances in Neural Information Processing Systems 11, pp. 803-809, 1998.
[23] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, "Eigen faces vs. Fisher faces: Recognition Using Class Specific Linear Projection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
[24] Q. Liu, R. Huang, H. Lu, and S. Ma, "Face Recognition Using Kernel Based Fisher Discriminant Analysis," Proc. Fifth Int'l Conf. Automatic Face and Gesture Recognition, May 2002.
[25] R. Gross, J. Shi, and J. Cohn, "Where to Go with Face Recognition," Proc. Third Workshop Empirical Evaluation Methods in Computer Vision, Dec. 2001.
[26] R. Xiao, L. Zhu, and H.-J. Zhang, "Boosting Chain Learning for Object Detection," Proc. IEEE Int'l Conf. Computer Vision, 2003.
[27] S. Roweis, L. Saul, and G. Hinton, "Global Coordination of Local Linear Models," Proc. Conf. Advances in Neural Information Processing Systems 14, 2001.
[28] S. Yan, M. Li, H.-J. Zhang, and Q. Cheng, "Ranking Prior Likelihood Distributions for Bayesian Shape Localization Framework," Proc. IEEE Int'l Conf. Computer Vision, 2003.
[29] S.T. Roweis and L.K. Saul, "Nonlinear Dimensionality Reduction by Locally Linear Embedding," Science, vol. 290, Dec. 2000.
[30] S.Z. Li, X.W. Hou, H.J. Zhang, and Q.S. Cheng, "Learning Spatially Localized, Parts-Based Representation," Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, Dec. 2001.
[31] T. Shakunaga and K. Shigenari, "Decomposed Eigenface for Face Recognition under Various Lighting Conditions," IEEE Int'l Conf. Computer Vision and Pattern Recognition, Dec. 2001.
[32] T. Sim, S. Baker, and M. Bsat, "The CMU Pose, Illumination, and Expression (PIE) Database," Proc. IEEE Int'l Conf. Automatic Face and Gesture Recognition, May 2002.
[33] W. Zhao, R. Chellappa, and P.J. Phillips, "Subspace Linear Discriminant Analysis for Face Recognition," Technical Report CAR-TR-914, Center for Automation Research, Univ. of Maryland, 1999.
[34] X. He and P. Niyogi, "Locality Preserving Projections," Proc. Conf. Advances in Neural Information Processing Systems, 2003.
[35] Y. Chang, C. Hu, and M. Turk, "Manifold of Facial Expression," Proc. IEEE Int'l Workshop Analysis and Modeling of Faces and Gestures, Oct. 2003.
[36] Z. Zhang and H. Zha, "Principal Manifolds and Nonlinear Dimension Reduction via Local Tangent Space Alignment," Technical Report CSE-02-019, CSE, Penn State Univ., 2002.














Operation and Control Techniques of SMES Unit for Fault Ride through
Improvement of a DFIG Based WECS
Sneha Patil¹
¹Research Scholar (M.Tech), Bharti Vidyapeeth University College of Engineering, Pune
Abstract— An SMES stores energy in the magnetic field of a superconducting coil, created by the flow of DC current through the
coil. To ensure proper operation, the temperature of the SMES must be maintained below the critical temperature, at which the
resistance of the coil is zero and hence there is no loss of stored energy. The ability of an SMES to store energy is influenced by the
current density. The energy is fed back to the grid by converting the magnetic field into electrical energy. An SMES system
comprises a superconductor, refrigerant, power conditioning unit, and control unit. The storage of energy is achieved by the
continuous circulation of current inside the coil. Since the energy is not converted into any form other than electrical, the losses in an
SMES configuration are lower than in any other storage mechanism, so the efficiency is very high. It exhibits a very low cycling
time, and the number of charge-discharge cycles is very high. The major drawbacks of this technology are its very high initial cost
and the losses associated with the auxiliaries. This paper covers various aspects of the SMES configuration and its connection in the
power system.
Keywords— Energy Storage, Superconducting Magnetic Energy Storage (SMES), Voltage Source Converter (VSC), Current Source
Converter (CSC),Wind Energy Conversion System (WECS), Doubly Fed Induction Generators (DFIG), Voltage Sag, Voltage Swell
INTRODUCTION



Fig. 1. Block diagram of an SMES unit

I. CONTROL METHODS FOR SMES

Various controlling methods for an SMES unit are discussed below:


THYRISTOR BASED SMES
A thyristor based SMES technology uses a Star- Delta transformer along with a thyristorised AC to DC bridge converter and an SMES
coil. A layout of a thyristorized SMES controller is shown in Fig. 2. Converter assigns polarity to the superconductor. Charging and
discharging operation is performed by varying the sequence of firing thyristors by modifying the delay angle. The converter performs
rectification operation for a delay angle is set lesser than 90º. This enables charging of the SMES coil. For a converter angle set more
than 90º the converter allows discharging of SMES by operating as an inverter. Thus energy transfer can be achieved as desired. When
the power system is operating in steady state the SMES coil should not supply or absorb any active or reactive power.



Fig. 2. SMES unit controlled by an AC-DC six-pulse thyristorized bridge converter

If V_sm0 is the no-load maximum DC voltage of the bridge, the voltage across the DC terminals of the converter is

V_sm = V_sm0 cos α  (1)

If I_sm0 is the initial coil current and P_sm is the active power transferred between the SMES and the grid, then the relation between
the current and voltage of the SMES coil (with coil inductance L_sm) is given as

I_sm = I_sm0 + (1/L_sm) \int_0^t V_sm dτ  (2)

P_sm = V_sm I_sm  (3)

The polarity of the bridge current I_sm cannot be changed; therefore, the value of the active power P_sm is a function of α and takes
the polarity of V_sm. If V_sm is positive, the SMES unit is charged by absorbing power from the grid, whereas if V_sm is negative,
the SMES coil is discharged by feeding power from the SMES to the grid. The amount of energy stored in the SMES coil is given by

W_sm = W_sm0 + \int_0^t P_sm dτ  (4)

where W_sm0 = (1/2) L_sm I_sm0^2 defines the initial energy in the SMES.
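A minimal Python sketch of equations (1)-(4) follows, assuming illustrative values for V_sm0, the coil inductance, and the initial current; it simulates charging with the delay angle held below 90°.

# Minimal sketch: charging an SMES coil through the thyristor bridge.
# V_sm0, L_sm, alpha, dt and the initial current are assumed values.
import numpy as np

V_sm0, L_sm = 500.0, 10.0                  # no-load DC voltage (V), coil (H)
alpha = np.radians(30.0)                   # alpha < 90 deg -> rectifier/charging
dt, steps = 1e-3, 5000                     # 5 s of simulated time

I_sm = 100.0                               # initial coil current I_sm0 (A)
W_sm = 0.5 * L_sm * I_sm**2                # initial energy, eq. (4)
V_sm = V_sm0 * np.cos(alpha)               # eq. (1)
for _ in range(steps):
    I_sm += V_sm / L_sm * dt               # eq. (2): the coil integrates voltage
    W_sm += V_sm * I_sm * dt               # eqs. (3)-(4): P_sm = V_sm * I_sm

print(f"I_sm = {I_sm:.1f} A, stored energy = {W_sm/1e3:.1f} kJ")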



VOLTAGE SOURCE CONVERTER BASED SMES
The components of a voltage source converter based SMES are a star-delta transformer, an IGBT-based six-pulse PWM converter,
an IGBT-controlled two-quadrant chopper, and an SMES coil. The two converters are connected through a DC-link capacitor. A
schematic diagram of this arrangement is shown in Fig. 3, and the control technique of the voltage source converter is depicted in
Fig. 4. The voltage source converter serves as the interfacing device linking the SMES coil to the grid. Proportional-integral
controllers generate the values of the direct- and quadrature-axis currents by comparing the actual values of the DC-link voltage and
terminal voltage with their reference values. These quantities are used as input signals to the voltage source converter. The PWM
converter maintains a constant voltage across the DC-link capacitor.


Fig. 3. Schematic of the voltage source converter based SMES



Fig. 4. Control technique of a voltage source converter

The chopper controls the energy transfer through the SMES coil. The chopper switches the appropriate IGBTs to control the polarity
of V_sm, which can be adjusted by varying the duty cycle of the chopper. If the duty cycle is greater than 0.5, energy is stored in the
SMES coil, whereas if the duty cycle is less than 0.5, the SMES coil is discharged. The gate signals for the chopper circuit are
generated by comparing the PWM signals with a triangular signal.
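A minimal sketch of this chopper action follows, assuming the commonly used two-quadrant relation V_sm = (2D - 1) * V_dc, which matches the behaviour described above (D > 0.5 charges the coil, D < 0.5 discharges it); the DC-link voltage is an assumed value.

# Minimal sketch: average coil voltage as a function of chopper duty cycle.
# The relation V_sm = (2D - 1) * V_dc is an assumption for a two-quadrant
# chopper, consistent with the charge/discharge behaviour in the text.
def coil_voltage(duty_cycle, v_dc):
    """Average voltage applied to the SMES coil by the chopper."""
    return (2.0 * duty_cycle - 1.0) * v_dc

for d in (0.3, 0.5, 0.7):
    v = coil_voltage(d, v_dc=600.0)
    mode = "charging" if v > 0 else ("idle" if v == 0 else "discharging")
    print(f"D = {d:.1f}: V_sm = {v:+.0f} V ({mode})")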


CURRENT SOURCE CONVERTER BASED SMES
The block diagram of a current source converter controlled SMES is shown in Fig. 5.


Fig. 5. Controlling technique for a current source converter

The SMES coil is directly linked to the DC terminals of the current source converter, whereas the AC terminals of the converter are
connected to the grid. The shunt-connected capacitor bank absorbs the energy stored in the line inductance during commutation of
the AC current and also filters out the higher-order harmonics. The input signal to the IGBTs is regulated to control the current
flowing through the SMES. The SMES stores energy in the form of current; therefore, real as well as reactive power is transferred at
a very high speed. A pulse width modulation technique is implemented to ensure that the higher-order harmonics of a 12-pulse
current source converter are minimized. If the value of the modulation index is maintained in the range between 0.2 and 1, the
higher-order harmonics are totally eliminated. The ripple content on the DC side is higher when a 6-pulse current source converter is
employed, whereas it is reduced in the case of a 12-pulse converter. This reduces the losses on the AC side of the system. As
depicted in Fig. 5, a proportional-integral controller compares the actual and reference values of the coil current I_d. L stands for the
inductance of the superconducting coil, whereas R_d and V_d are the resistance and voltage of the DC circuit, respectively. The rate
of charging of the superconducting coil is influenced by the value of V_d, which is a function of the modulation index.
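A minimal sketch of such a PI loop follows; the gains, time step, and plant details are assumed values, and the output is clamped to the 0.2-1 modulation-index range mentioned above.

# Minimal sketch: PI regulation of the coil current I_d via the
# modulation index. kp, ki, dt and the limits are illustrative assumptions.
def pi_step(i_ref, i_meas, integ, kp=0.05, ki=2.0, dt=1e-4):
    """One PI update; returns (modulation index, updated integrator state)."""
    err = i_ref - i_meas                     # current error
    integ += err * dt                        # integrator state
    m = kp * err + ki * integ
    return min(max(m, 0.2), 1.0), integ      # clamp to the 0.2-1 range

m, integ = 0.2, 0.0
# Inside a converter simulation loop: m, integ = pi_step(1200.0, i_d, integ)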


II. COMPARISON OF VARIOUS CONTROL TECHNIQUES
A comparison of control techniques for the SMES coil is presented in Table 1. The topologies are compared on the basis of their
ability to control active and reactive power, the layout and operational features of the control unit, the total harmonic distortion
generated by the control technique, the installation and operational costs, and their self-commutation capabilities.

ABILITY TO CONTROL ACTIVE AND REACTIVE POWER
- Thyristorized control: Effective control over real power but inefficient in controlling reactive power, since the controller presents a lagging power factor to the network. Significant lower-order harmonics are generated by the firing of the thyristors. Real and reactive power cannot be controlled independently.
- Voltage source converter control: Independent real and reactive power control is possible. Continuous reactive power support at rated capacity, even with negligible current in the superconductor.
- Current source converter control: Independent control of real as well as reactive power exchange through the SMES. Reactive power support depends on the coil current.

OPERATION OF CONTROL UNIT
- Thyristorized control: Highly controllable due to the presence of a single AC-DC converter unit.
- Voltage source converter control: The control technique is convoluted compared to the other two techniques due to the presence of an AC-DC converter and a DC-DC chopper unit.
- Current source converter control: Has a single AC-DC unit and hence can be controlled easily. For higher rated power applications, units can be operated in parallel.

TOTAL HARMONIC DISTORTION (THD)
- Thyristorized control: Generates more total harmonic distortion than the other two techniques.
- Voltage source converter control: The total harmonic distortion is reduced with this control technique.
- Current source converter control: The total harmonic distortion is reduced with this control technique.

COST OF INSTALLATION AND OPERATION
- Thyristorized control: Very economical installation and operational costs.
- Voltage source converter control: Lower than a CSC of equivalent rating.
- Current source converter control: The total cost of the switching devices is over 170 percent of the cost of the switching devices and diodes used in a voltage source converter of equivalent rating.

SELF-COMMUTATION
- Thyristorized control: Poorer self-commutating capability than the VSC.
- Voltage source converter control: Better than the CSC.
- Current source converter control: Poorer self-commutating capability than the VSC.
Table 1. Comparison of various SMES control techniques

V. APPLICATION OF SMES
The SMES unit's capability to respond instantaneously proves beneficial for several applications in the power system.

STORAGE DEVICE:
An SMES can store as much as 5000 MWh of energy at an efficiency as high as 95 percent; the efficiency is higher for larger units.
It can respond within a few milliseconds, which makes it suitable during dynamic changes in the power system. It can serve as a
spinning reserve or as a supplementary reserve and hence provide supply during outages.

IMPROVEMENT OF PERFORMANCE OF FACTS DEVICES:
An SMES unit is capable of storing energy for operation with FACTS devices. The inverter used for FACTS applications and the
power conditioning system of an SMES unit are similar in configuration. The only dissimilarity is that FACTS devices operate using
energy provided by the power system and employ a capacitor unit on the DC side of the converters. SMES provides real power along
with reactive power through the DC bus and hence improves the operation of FACTS devices.



Fig. 6. SMES unit applied to FACTS devices


LOAD FOLLOWING:
SMES can support the generators in maintaining a constant output by following the variations in the load pattern.

STABILITY ENHANCEMENT:
An SMES unit can effectively damp low frequency oscillations and maintain system stability after the occurrence of any transient. It
absorbs excess energy from the power system and releases energy in case of any deficiency; thus it increases the stability of the
system through energy transfer.

AUTOMATIC GENERATION CONTROL:
SMES can be implemented to minimize the area control error in automatic generation control [4].

SPINNING RESERVES:
When major generation units are out of service due to faults or maintenance, the unallocated spinning reserves are used to feed the
load. When the superconductor coil is completely charged, the SMES can serve as a large share of the spinning reserve. This is a more
economical alternative than other spinning reserves [4,5].

REACTIVE POWER COMPENSATION AND IMPROVEMENT OF POWER FACTOR:
SMES is capable of independent active and reactive power control and can therefore provide reactive power support and enhance the
power factor of the system [4].

SYSTEM BLACK START:
SMES units can make provision for starting a generation unit by drawing power from the SMES unit instead of from the power
system. This can help the system restore from faulty conditions on the grid side [4].

ECONOMIC ENERGY TRANSFER:
By storing energy when it is available in excess and discharging it during deficiency or congestion, SMES can reduce the price of
electrical energy and hence be an economical alternative for supplying energy.

SAG RIDE THROUGH IMPROVEMENT:
A voltage sag can be defined as a drop in the rms voltage to between 0.1 and 0.9 per unit at the power frequency for a duration
ranging from 0.5 cycle to 1 minute. Causes of voltage sag include the starting of large motors and the switching of large loads. An
SMES unit can efficiently provide voltage support during such conditions [6].


Fig. 7. Active and reactive power supplied by SMES connected to PCC




DYNAMIC STABILITY:
When a large load is suddenly added or a large generating unit is lost, the power system becomes dynamically unstable, as the
reactive power available within the system is not sufficient to maintain stability. An SMES unit can be used to provide the requisite
active as well as reactive power support to the grid [6,7].

REGULATION OF TIE LINE POWER:
While transferring electricity from one control area to another, the amount of power transferred must match its predefined value. If
the generating units in one control area are ramped up to send power while the loading of that system changes, errors may arise in the
amount of power delivered, and consequently the generating units are utilized inefficiently. An SMES unit can be used to eliminate
such errors and to ensure efficient utilization of the generators [6].

LOAD SHEDDING DURING LOW FREQUENCY:
When a large generating unit or a transmission line is lost, the system frequency drops and keeps reducing as long as the available
generation and the load remain unbalanced. Owing to its ability to supply active power to the system quickly, the SMES unit serves as
an effective means to bring the system frequency back to its rated value by eliminating the imbalance between generation and load [6].

RECLOSING OF CIRCUIT BREAKERS:
In order to clear a system fault and bring the line back into operation, the circuit breakers are reclosed. Circuit breaker reclosing is
performed when the power angle difference across the circuit breaker lies within the limits; when the power angle difference is very
high, the protective equipment prohibits the reclosing operation of the circuit breaker. The SMES unit can feed some part of the load
and hence decrease the power angle difference so as to permit reclosing. Thus power flow can be restored to normal conditions after
the outage of transmission lines [6].

ENHANCEMENT OF POWER QUALITY:
The SMES unit has the ability to improve power quality by increasing the LVRT and HVRT capabilities of the power system. It
eliminates the variations in power which interrupt the supply to critical consumers. In case of momentary disturbances such as a
flashover or a lightning stroke, the transmission system trips the power supply, which leads to a voltage sag; by providing a quick
response, the SMES unit can avoid disconnection of critical loads [6].

BACKUP SOURCE:
The energy storage capability of SMES allows it to serve as a backup source for sensitive loads, and it has the ability to supply heavy
industries if there is an outage of generating units. The size of the SMES unit can be designed to provide the required storage and
prove economical at the same time [6,7].

DAMPING SSR:
Sub-synchronous resonance is observed in generating units connected to transmission lines containing large series capacitive
compensation. This can be damaging for generators. Such sub-synchronous resonance can be avoided by using SMES.






ELECTROMAGNETIC LAUNCHERS:
Electromagnetic launchers are an application requiring a large pulsed power source. They are utilized as rail guns in the defense field.
Rail guns are capable of releasing a projectile with a velocity of more than 2000 meters per second. Since the SMES configuration has
a very large energy density, it proves an attractive alternative for this application.

STABILITY OF WIND TURBINES:
Wind turbine generators have issues related to power system stability during transients. A voltage source converter based SMES unit
controls real as well as reactive power independently. This characteristic makes the SMES configuration an efficient device for
stabilizing a wind energy conversion system [8, 9].

STABILIZATION OF VOLTAGE AND POWER FLUCTUATION IN WECS:
Because of the variation of wind velocity, the voltage and power generated by wind turbine generators are always varying. Such
variations give rise to flickering of incandescent bulbs and inaccurate operation of timing devices. As the SMES device has the ability
to control real as well as reactive power independently, it serves as an attractive means for reducing the fluctuations present in voltage
and power.


VI. CURRENT STATUS AND FUTURE SCOPE
In 1982-83 an SMES system rated at 30 MJ was installed at the Bonneville Power Administration, Tacoma. The installed configuration
functioned for 1,200 hours, and from the various results obtained it can be concluded that the SMES configuration successfully met
the design requirements [11]. A 20 MWh SMES unit was proposed by the University of Wisconsin in 1988-89. An array of D-SMES
units was later developed for the stabilization of a transmission system. The transmission system in that area was subjected to large,
suddenly changing loads from the operation of paper mills, which gave rise to uncontrollable load fluctuations and voltage collapses.
The SMES units were effective in stabilizing the grid and improving the power quality [12]. The largest installation comprises six or
seven units installed in upper Wisconsin by American Superconductor in 2000. These 3 MW/0.83 kWh units are currently operated by
the American Transmission Company and are used for power quality applications and reactive power support, where each can provide
8 MVA [4]. In the USA, Superconductivity Inc. supplies SMES devices rated at 1 and 3 MJ.

Currently an SMES with an energy rating of 100 MJ/50 MW is being designed; it is said to be the largest SMES configuration to date.
This SMES unit is intended for damping the low frequency oscillations generated within the transmission network. The
superconducting magnet to be used for this configuration was realized in 2003, and the tests on this magnet were carried out at the
Center for Advanced Power Systems [13]. In Japan, an institute named the Superconductive Energy Storage Research Association was
set up in 1986 to promote practical applications of the SMES configuration. The Kyushu Electric corporation manufactured a 30 kJ
SMES device in 1991 for the stabilization of a 60 kW hydro-electric generation plant, and several tests were performed to prove the
suitability of the SMES unit to yield the desired performance [14]. To simplify the choice of the capacity of an SMES unit with the
most suitable cost and quality, a 1 kWh/11 MW and a 100 kWh/120 MW SMES configuration were manufactured. The 1 kWh/11 MW
unit is being validated by connecting it to 6 kW and 66 kW grids; these units were tested for compensation of the load variations
present in the network [15]. In Japan, a 100 MW wind farm was connected with a 15 MWh SMES unit in 2004 for stabilization of the
output generated by the wind farm [16]. In 1988, Russia manufactured the T-15 superconducting magnet, an SMES unit with a
capacity as high as 370 to 760 MJ [17], and since 1990 Russian scientists have been designing a 100 MJ/120 MW SMES unit [18].
Korea has developed a 1 MJ/300 kVA SMES unit for UPS applications; this unit can compensate a 3 second interruption of power and
is 96 percent efficient [19]. The Korea Electrotechnology Research Institute fabricated a 3 MJ/750 kVA superconducting magnetic
energy storage unit with an operational current of 1000 A for the enhancement of power quality in 2005 [20]. The Délégation Générale
pour l'Armement (DGA) supports research on applied superconductivity in France. DGA has built a 100 kJ SMES made from Bi-2212
tapes with liquefied helium as the coolant. Later it was decided to realize an SMES unit that could work at higher temperatures, around
20 K. DGA targeted an 800 kJ SMES unit working on the high temperature storage principle; the proposed unit was expected to
operate at temperatures as high as 20 K with a current density of more than 300 MA/m^2 [21]. Some organizations in Germany are
working together to design an SMES unit rated at 150 kJ and 20 kVA, intended for operation as an uninterruptible power supply [22].

The first high temperature superconductor based SMES unit was fabricated by American Superconductor in 1997. This unit was
applied to a power system located in Germany. Several tests were conducted which revealed that high temperature superconductor
based SMES units are a viable and attractive alternative for commercialized production [23]. Distributed SMES units of small size,
called micro-SMES, with ratings between 1 and 10 MW, are available in the commercial market.

Currently, the United States Department of Energy's Advanced Research Projects Agency-Energy has sponsored projects to validate
the application of SMES units in power systems. The project is undertaken by ABB and has received a grant of 4.2 million US dollars.
According to the outline of the plan, a 3.3 kWh SMES configuration is proposed. The project is carried out in collaboration with the
superconducting wire manufacturer SuperPower, Brookhaven National Laboratory and the University of Houston. The unit is to be
scaled up to 1 to 2 MWh and must be economical compared to lead acid batteries [25]. In Japan, the High Energy Accelerator Research
Organization has promoted research on SMES. Scientists there are working on combining a liquid hydrogen refrigerated SMES unit
with a hydrogen fuel cell. The concept behind this combination is that when there is an interruption of power the SMES unit can supply
energy instantaneously and the fuel cell can subsequently feed the loads. The device has not been realized yet, though the simulations
as well as the designs are under study [10].
VII. RATING OF THE SMES CONFIGURATION
VII. RATING OF SMES CONFIGURATION
The capacity of an SMES unit depends upon the application and the cycling times available. An SMES unit with a very high capacity
can damp oscillations quickly, but such a unit is not very economical since it must carry very large currents in the coil. Conversely, an
SMES configuration with a very small capacity is ineffective for damping the system oscillations immediately, because the output
power of the SMES unit is limited.


VIII. SYSTEM UNDER STUDY

Fig. 8. Block diagram of the system under consideration

The proposed system has doubly fed induction generators with a rating of 9 MW. The SMES unit chosen has an energy rating of 1 MJ
and an inductance of 0.5 H. The rated current through the superconductor is calculated to be 2 kA. Operation of the SMES unit during
swell conditions is feasible only if the rated current of the inductor is chosen to be greater than the nominal current in the coil. The
system under consideration has a nominal current of 2 kA flowing through the coil; therefore the maximum amount of energy that can
be stored within the SMES coil during the occurrence of a voltage swell is 1 MJ.
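As a quick consistency check, the quoted ratings satisfy the standard stored-energy relation for an inductor, written here in LaTeX:

E = \tfrac{1}{2} L I^{2}
  = \tfrac{1}{2} \times 0.5\,\mathrm{H} \times \left(2\times10^{3}\,\mathrm{A}\right)^{2}
  = 1\,\mathrm{MJ}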

RESPONSE OF SMES UNIT IN THE EVENT OF VOLTAGE SAG AND SWELL
The current flowing through the SMES coil is unidirectional, but the duty cycle of the chopper circuit obtained from the fuzzy logic
controller yields both positive and negative values of the SMES voltage. This provides a reversible and continuous flow of power for
all operating conditions. The proposed SMES unit works in three different operating modes:
(1) stand-by mode
(2) discharging mode
(3) charging mode

(1) Stand-by mode:
The stand-by mode of operation occurs when the wind energy conversion system is working under healthy operating conditions. The
stand-by mode corresponds to a duty cycle of 0.5, and the SMES coil current is maintained at its rated value, in this case 2 kA. There
is no transfer of energy to or from the SMES coil, while the coil holds its maximum energy, i.e., 1 MJ in this case. The DC-link
capacitor has a constant voltage of 10 kV across its terminals.


Fig. 9: SMES transient responses during voltage sag and swell, including (a) current, (b) voltage, (c) duty cycle, (d) energy stored in
SMES and (e) DC voltage in SMES


(2) Discharging mode:
During a voltage sag on the grid, the SMES unit enters the discharging mode. In the discharging mode d has a value less than 0.5, and
the energy stored inside the SMES unit is supplied to the power system. At time t = 2 s a voltage sag is simulated, and the current
flowing through the SMES coil reduces with a negative slope. The rate of discharging of the SMES coil is predetermined and is a
function of d. The voltage across the SMES depends upon the value of d and the voltage present across the DC-link capacitor. When
the fault is cleared, the coil is recharged. The discharging mode of operation of the SMES is compared with the charging mode in Fig. 9.

(3) Charging mode:
During a voltage swell event the SMES unit undergoes the charging operation. The value of d in this mode lies above 0.5. At time
t = 2 s a voltage swell is simulated, therefore the current flowing through the SMES coil rises and the charge stored inside the SMES
unit increases. The transfer of energy occurs from the power system to the SMES unit until it reaches a maximum capacity determined
by the value of the duty cycle. In the system under consideration the maximum capacity of the unit is 1.03 MJ. Power modulation is
permissible up to this capacity; beyond it, Vsmes drops and becomes 0 when the maximum SMES current is reached. Fig. 9
represents the charging mode of the SMES coil.
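The mode boundaries above can be summarized in a few lines. The sketch below assumes the common two-quadrant chopper model in which the average coil voltage is Vsmes = (2d - 1)·Vdc; the paper states only the mode boundaries, so that relation is an assumption made for illustration.

# Operating mode and (assumed) average coil voltage as functions of the
# chopper duty cycle d.
V_DC = 10e3   # DC-link voltage, 10 kV as in the system under study

def smes_mode(d):
    if d > 0.5:
        return "charging"      # energy flows from the grid into the coil
    if d < 0.5:
        return "discharging"   # stored energy is released to the grid
    return "stand-by"          # zero average voltage, current circulates

def avg_coil_voltage(d):
    return (2 * d - 1) * V_DC  # assumed two-quadrant chopper relation

for d in (0.3, 0.5, 0.7):
    print(d, smes_mode(d), avg_coil_voltage(d), "V")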

The following observations are drawn:
(i) The current flowing through the SMES unit during dip and swell occurrences is analogous to the energy stored inside the coil. The
energy level at any instant is calculated as E = (1/2)L*I^2.
(ii) During both sag and swell occurrences, the voltage across the SMES unit is kept at 0 after the maximum current starts flowing
through the SMES. In order to reduce the SMES operating expenses it is advisable to bypass the SMES unit when the power system
becomes stable. This can be done by using a bypass switch in parallel with the SMES unit.
(iii) During the occurrence of both voltage dip and swell, the voltage across the DC-link capacitor of the SMES unit is observed to
oscillate in the reverse manner to the voltage across the SMES coil. The level of this voltage at any instant depends upon the SMES
voltage and the duty cycle d.
(iv) The maximum overshoot of the DC-link voltage lies inside the safety limit of 1.25 per unit of the system voltage.
CONCLUSION
The paper gives a brief account of the various control techniques used for SMES, which include thyristorized control, control using a
voltage source converter and control using a current source converter, and presents a comparative account of these methods. A brief
summary of the various applications of SMES and of the installations of SMES technology throughout the world so far is also given,
along with a note on selecting the rating of an SMES unit for a given application. The behavior of SMES during charging and
discharging events on the occurrence of a sag and a swell at the distribution end of the system is also analysed.

REFERENCES:
[1] Mahmoud Y. Khamaira, A. M. Shiddiq Yunus, A. Abu-Siada, "Improvement of DFIG-based WECS Performance Using SMES Unit", The Australasian Universities Power Engineering Conference, 2013.
[2] R. H. Lasseter, S. G. Jalali, "Dynamic Response of Power Conditioning Systems for Superconductive Magnetic Energy Storage", IEEE Transactions on Energy Conversion, vol. 6.
[3] Knut Erik Nielsen, "Superconducting magnetic energy storage in power systems with renewable energy sources", Master of Science in Energy and Environment thesis, Norwegian University of Science and Technology.
[4] P. D. Baumann, "Energy conservation and environmental benefits realized from SMES", IEEE Transactions on Energy Conversion, vol. 7.
[5] C.-H. Hsu, W.-J. Lee, "SMES storage for power system application", IEEE Transactions on Industry Applications, vol. 29.
[6] W. V. Torre, S. Eckroad, "Improving power delivery through application of SMES", IEEE Power Engineering Society Winter Meeting, 2001.
[7] X. D. Xue, K. W. E. Cheng, D. Sutanto, "Power system applications of SMES", in IEEE Industry Applications Conference 2005, vol. 2.
[8] O. Wasynczuk, "Damping SSR using energy storage", IEEE Transactions on Power Apparatus and Systems, vol. PAS-101.
[9] C.-J. Wu, C.-F. Lu, "Damping torsional oscillations by SMES unit", Electric Machines and Power Systems, vol. 22.
[10] Y. Makida, H. Hirabayashi, T. Shintomi, S. Nomura, "Design of SMES with liquid hydrogen for emergency purpose", IEEE Transactions on Applied Superconductivity, vol. 17.
[11] D. Rogers, H. J. Boenig, "Operation of the 30 MJ SMES in the BPA Electrical Grid", IEEE Transactions on Magnetics, vol. 21.
[12] R. W. Boom, "SMES for electric utilities - A review of the 20 year Wisconsin program", Proceedings of the International Power Sources Symposium, vol. 2.
[13] Michael Steurer, Wolfgang Hribernik, "Frequency Response Characteristics of a 100 MJ SMES: Measurements and Model Refinement", IEEE Transactions on Applied Superconductivity, vol. 15.
[14] F. Irie, M. Takeo, "A Field Experiment on Power Line Stabilization by an SMES System", IEEE Transactions on Magnetics.
[15] Tsuneo Sannomiya, Hidemi Hayashi, "Test Results of Compensation for Load Fluctuation under a Fuzzy Control by a 1 kWh/1 MW SMES", IEEE Transactions on Applied Superconductivity, vol. 11.
[16] S. Nomura, Y. Ohata, "Wind Farms Linked by SMES Systems", IEEE Transactions on Applied Superconductivity, vol. 15.
[17] N. A. Chernoplekov, N. A. Monoszon, "T-15 Facility and Test", IEEE Transactions on Magnetics, vol. 23.
[18] V. V. Andrianov, V. M. Batenin, "Conceptual Design of a 100 MJ SMES", IEEE Transactions on Magnetics, vol. 27.
[19] K. C. Seong, H. J. Kim, "Design and Testing of a 1 MJ SMES", IEEE Transactions on Applied Superconductivity.
[20] H. J. Kim, K. C. Seong, "3 MJ/750 kVA SMES System for Improving Power Quality", IEEE Transactions on Applied Superconductivity.
[21] P. Tixador, B. Bellin, "Design of an 800 kJ HTS SMES", IEEE Transactions on Applied Superconductivity, vol. 15.
[22] M. Ono, S. Hanai, "Development of a 1 MJ Cryocooler-Cooled Split Magnet with Silver-Sheathed Bi-2223 Tapes for Silicon Single-Crystal Growth Applications", IEEE Transactions on Applied Superconductivity, vol. 10.
[23] Weijia Yuan, "Second-Generation HTS and Their Applications for Energy Storage", Springer Theses, doctoral thesis accepted by the University of Cambridge, Cambridge.
[24] Phil McKenna, "Superconducting Magnets for Grid-Scale Storage", Technology Review, Energy, March 2011.
[25] H. Chen, "Progress in electrical energy storage system: A critical review", Progress in Natural Science 19.
















Controlling Packet Loss at the Network Edges by Using Tokens
B. Suryanarayana¹, K. Bhargav Kiran²
¹Research Scholar (PG), Dept of Computer Science and Engineering, Vishnu Institute of Engineering, Bhimavaram, India
²Assistant Professor, Dept of Computer Science and Engineering, Vishnu Institute of Engineering, Bhimavaram, India
E-mail: Surya0530@gmail.com
Abstract— The Internet accommodates simultaneous audio, video and data traffic. This requires the Internet to guarantee the packet
loss rate, which in turn depends very much on congestion control. A series of protocols have been introduced to supplement the
insufficient TCP mechanism for controlling network congestion. CSFQ was designed as an open-loop controller to provide fair
best-effort service by supervising per-flow bandwidth consumption, but it became helpless when P2P flows started to dominate
Internet traffic. Token-Based Congestion Control (TBCC) is based on a closed-loop congestion control principle; it restricts the token
resources consumed by an end-user and provides fair best-effort service with O(1) complexity. Like Self-Verifying Re-feedback and
CSFQ, it experiences a heavy load when policing inter-domain traffic for lack of trust. In this paper, Stable Token-Limited Congestion
Control (STLCC) is introduced as a new protocol which appends inter-domain congestion control to TBCC and makes the congestion
control system stable. STLCC is able to shape input and output traffic at the inter-domain link with O(1) complexity. STLCC
produces a congestion index, pushes the packet loss to the network edge and improves the network performance. Finally, a simple
version of STLCC is introduced; this version is deployable in the Internet without any IP protocol modifications and also preserves
the packet datagram.

Keywords— TCP, Tokens, Network, Congestion Control Algorithm, Addressing, Formatting, Buffering, Sequencing, Flow Control,
Error Control, QoS, Random Early Detection (RED).
INTRODUCTION
Modern IP network services provide for the simultaneous digital transmission of video, voice and data. These services
require congestion control protocols and algorithms by which the packet loss parameter can be kept under control. Congestion
control is one of the cornerstones of packet switching networks: it should prevent congestion collapse, provide fairness to competing
flows and optimize transport performance indexes such as throughput, loss and delay. The literature abounds in papers on this subject;
there are papers on high-level models of the flow of packets through the network, and on specific network architectures.
Despite this vast literature, congestion control in telecommunication networks struggles with two major problems that are not
completely solved. The first one is the time-varying delay between the control point and the traffic sources. The second one is related
to the possibility that the traffic sources do not follow the feedback signal; this may happen because some sources are silent, as
they have nothing to transmit. The Internet was originally designed for a cooperative environment and is still mainly dependent on the
TCP congestion control algorithm at the terminals, supplemented with load shedding [1] at congested links. This model is called the
Terminal Dependent Congestion Control case.
Core-Stateless Fair Queuing (CSFQ) [3] sets up an open-loop control system at the network layer: edge routers insert a label
carrying the flow arrival rate into the packet header, and core routers drop packets based on the rate label if congestion
happens. CSFQ was the first scheme to achieve approximate fair bandwidth allocation among flows with O(1) complexity at core routers.
According to a Cache Logic report, P2P traffic made up 60% of all Internet traffic in 2004, of which BitTorrent [4] was
responsible for about 30%, although the report generated quite a lot of discussion around the real numbers. In networks
with P2P traffic, CSFQ can provide fairness to competing flows, but unfortunately this is not what end-users and operators really want.
Token-Based Congestion Control (TBCC) [5] restricts the total token resource consumed by an end-user, so no matter how many
connections the end-user has set up, it cannot obtain extra bandwidth resources when TBCC is used.


In this paper a new and improved mechanism for congestion control, with application to packet loss in networks with P2P traffic,
is proposed. In this new method the edge and core routers write a measure of the quality of service guaranteed by the router
as a digital number in the Option Field of the packet datagram; this number is called a token. The token is read by the routers along
the path and interpreted, since its value gives a measure of the congestion [2], especially at the edge router. Based on the token number,
the edge router at the source reduces the congestion on the path. In Token-Limited Congestion Control (TLCC) [9], the inter-domain
router restricts the total output token rate to the peer domain. When the output token rate exceeds the threshold, TLCC decreases the
Token-Level of the output packets, and the output token rate then decreases.
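The TLCC feedback just described can be pictured as a measure-and-adjust loop at the inter-domain router. In the sketch below, the measurement window, threshold, level bounds and the recovery rule are all assumptions made for illustration; only the direction of the adjustment (output token rate too high leads to a lower token level) comes from the text.

THRESHOLD = 1_000_000   # tokens/s allowed toward the peer domain (assumed)
WINDOW = 1.0            # measurement interval in seconds (assumed)
token_level = 8         # level currently stamped on outgoing packets
sent_tokens = 0

def on_packet_out(tokens_in_packet):
    global sent_tokens
    sent_tokens += tokens_in_packet        # account for tokens leaving the domain

def on_window_end():
    global sent_tokens, token_level
    rate = sent_tokens / WINDOW
    if rate > THRESHOLD and token_level > 0:
        token_level -= 1                   # rate too high -> lower the token level
    elif rate < 0.8 * THRESHOLD and token_level < 15:
        token_level += 1                   # assumed recovery rule when there is headroom
    sent_tokens = 0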

Fig 1. Architecture


2. RELATED WORK

The basic idea of a peer-to-peer network is to have peers participate in an application level overlay network and operate as
both clients and servers. A number of approaches for queue management at Internet gateways have been studied previously. Droptail
gateways are used almost universally in the current Internet because of their simplicity. A droptail gateway drops an incoming packet
only when the buffer becomes full, thus providing congestion notification to protocols like TCP. While simple to implement, it
distributes losses among the flows arbitrarily [5]. This often results in bursts of losses from a single TCP connection, reducing its
window sharply; thus the flow rate, and consequently the throughput of that flow, drops. Tail dropping also results in multiple
connections simultaneously suffering losses, leading to global synchronization [6]. Random early detection (RED) addresses some of
the drawbacks of droptail gateways [11][12]. A RED gateway drops incoming packets with a dynamically computed probability when
the exponentially weighted moving average queue size avg_q exceeds a threshold. In [6], the author does per-flow accounting while
maintaining only a single queue, and suggests changes to the RED algorithm to ensure fairness and to penalize misbehaving flows. It
puts a maximum limit on the number of packets a flow can have in the queue.

Besides, it also maintains the per-flow queue use. The drop-or-accept decision for an incoming packet is then based on the
average queue length and the state of that flow. It also keeps track of the flows which consistently violate the limit by
maintaining a per-flow variable called strike, and penalizes those flows which have a high value of strike. The intention is that this
variable becomes high for non-adaptive flows, so that they are penalized aggressively. It has been shown through simulations
[7] that FRED fails to ensure fairness in many cases. CHOKe [8] is an extension of RED. It does not maintain any per-flow state and
works on the good heuristic that a flow sending at a high rate is likely to have more packets in the queue during times of
congestion: it decides to drop a packet during congestion if, in a random toss, it finds another packet of the same flow. In [9], the
authors establish how rate guarantees can be provided simply by using buffer management. They show that the buffer management
approach is indeed capable of providing reasonably accurate rate guarantees and fair distribution of excess resources.
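The CHOKe heuristic lends itself to a short sketch. The version below keeps only the matching step described above; real CHOKe layers this on top of RED's drop probabilities, and the queue bounds used here are illustrative assumptions.

import random
from collections import deque

queue, LIMIT, MIN_TH = deque(), 100, 50   # packet queue of flow ids (assumed sizes)

def enqueue(flow_id):
    if len(queue) > MIN_TH:                    # congestion indication
        victim = random.randrange(len(queue))  # random toss into the queue
        if queue[victim] == flow_id:           # same flow found ->
            del queue[victim]                  # drop the matched packet...
            return False                       # ...and the arriving one
    if len(queue) >= LIMIT:
        return False                           # tail drop when the buffer is full
    queue.append(flow_id)
    return True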

3. Core Stateless Fair Queuing

In the proposed work, the model considered is the Terminal Dependent Congestion Control case: best-effort service in the
Internet was originally designed for a cooperative environment, and congestion control is still mainly dependent on the TCP
congestion control algorithm at the terminal, supplemented with load shedding [13][14] at congested links, as shown in Figure 2.

In high speed networks, Core-Stateless Fair Queuing (CSFQ) sets up an open-loop control system at the network layer to improve
fairness: edge routers insert the label of the flow arrival rate into the packet header, and core routers drop packets based on the rate
label if congestion happens. CSFQ is the first scheme to achieve approximate fair bandwidth allocation among flows with O(1)
complexity at core routers.

CSFQ can provide fairness to competing flows in networks with P2P traffic, but unfortunately this is not what end-users
really want. Token-Based Congestion Control (TBCC) restricts the total token resource consumed by an end-user, so no matter how
many connections the end-user has set up, it cannot obtain extra bandwidth resources. Self-Verifying CSFQ tries to extend CSFQ
across the domain border: it randomly selects a flow [15], re-estimates the flow's rate, and then checks whether the re-estimated rate
is consistent with the label on the flow's packets. Consequently, Self-Verifying CSFQ puts a heavy load on the border router and
renders weighted CSFQ null and void.

The congestion control architecture re-feedback aims to provide a fixed cost to end-users and bulk inter-domain
congestion charging to network operators. Re-feedback not only demands very high complexity to identify malignant end-users,
but also finds it difficult to provide fixed congestion charging for inter-domain interconnection with low complexity. There are
three types of inter-domain interconnection policies: Internet Exchange Points [16], private peering and transit. In
private peering, Sender Keep All (SKA) peering arrangements are those in which traffic is exchanged between two
domains without mutual charges. As re-feedback is based on congestion charges to the peer domain, it is difficult for
re-feedback to support the requirements of SKA.
The modules of the proposed work are:

- NETWORK CONGESTION
- STABLE TOKEN LIMIT CONGESTION CONTROL (STLCC) TOKEN
- CORE ROUTER
- EDGE ROUTER

Network Congestion: Congestion occurs when the number of packets being transmitted through the network exceeds the packet
handling capacity of the network. Congestion control aims to keep the number of packets below the level at which performance falls
off dramatically.

Stable Token-Limited Congestion Control (STLCC): STLCC is able to shape output and input traffic at the inter-domain link with
O(1) complexity. STLCC produces a congestion index, pushes the packet loss to the network edge and improves the overall network
performance. To solve the oscillation problem, STLCC integrates the algorithms of TLCC and XCP [10]. In STLCC, the output rate
of the sender is controlled using the XCP algorithm, so there is almost no packet loss at the congested link. At the same time, the edge
router allocates all the access token resources equally to the incoming flows. When congestion happens, the incoming token rate
increases at the core router, and the congestion level of the congested link increases as well. Thus STLCC can measure the congestion
level analytically and then allocate network resources according to the congestion index.

Token: A new and improved mechanism for congestion control, with application to packet loss in networks with P2P traffic, is
proposed. In this method the edge and core routers write a measure of the quality of service guaranteed by the router as a digital
number in the Option Field of the packet datagram; this is called a token. The token is read by the routers along the path and
interpreted, since its value gives a measure of the congestion, especially at the edge routers. Based on the token number, the edge
router at the source reduces the congestion on the path.

Core Router: A core router is a router designed to operate in the Internet backbone (or core). To fulfill this role, a router must be able
to support multiple telecommunication interfaces of the highest speeds in use in the core Internet and must be able to forward IP
packets at full speed on all of them. It must also support the routing protocols being used in the backbone. A core router is distinct
from an edge router.

Edge Router: Edge routers sit at the edge of a backbone network and connect to the core routers. The token is read by the path routers
and interpreted, since its value gives a measure of the congestion, especially at the edge routers. Based on the token number, the edge
router at the source reduces the congestion on the path.


4. RESULTS
Packets of Edge Router: (simulation screenshot)

Edge Router3: (simulation screenshot)



CONCLUSION:
The architecture of Token-Based Congestion Control (TBCC), which provides fair bandwidth allocation to end-users in the same
domain, was introduced, and two congestion control algorithms, CSFQ and TBCC, were evaluated. STLCC was presented and a
simulation was designed to demonstrate its validity. The Unified Congestion Control Model, the abstract model of STLCC, CSFQ
and re-feedback, was also presented. To inter-connect two TBCC domains, an inter-domain router is added to the TBCC system. To
support SKA arrangements, the inter-domain router should limit its output token rate to the rate of the other domains and police the
incoming token rate from the peer domains.

REFERENCES:
[1] Andrew S. Tanenbaum, Computer Networks, Prentice-Hall International, Inc.
[2] S. Floyd, V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance", IEEE/ACM Transactions on Networking, August 1993.
[3] Ion Stoica, Scott Shenker, Hui Zhang, "Core-Stateless Fair Queueing: A Scalable Architecture to Approximate Fair Bandwidth Allocations in High Speed Networks", in Proc. of SIGCOMM, 1998.
[4] D. Qiu, R. Srikant, "Modeling and performance analysis of BitTorrent-like peer-to-peer networks", in Proc. of SIGCOMM, 2004.
[5] Zhiqiang Shi, "Token-based congestion control: Achieving fair resource allocations in P2P networks", Innovations in NGN: Future Network and Services, First ITU-T Kaleidoscope Academic Conference (K-INGN), 2008.
[6] I. Stoica, H. Zhang, S. Shenker, "Self-Verifying CSFQ", in Proceedings of INFOCOM, 2002.
[7] Bob Briscoe, "Policing Congestion Response in an Internetwork using Re-feedback", in Proc. ACM SIGCOMM 2005.
[8] Bob Briscoe, "Re-feedback: Freedom with Accountability for Causing Congestion in a Connectionless Internetwork", http://www.cs.ucl.ac.uk/staff/B.Briscoe/projects/e2ephd/e2ephd_y9_cutdown_appxs.pdf
[9] Zhiqiang Shi, Yuansong Qiao, Zhimei Wu, "Congestion Control with the Fixed Cost at the Domain Border", Future Computer and Communication (ICFCC), 2010.
[10] Dina Katabi, Mark Handley, Charles Rohrs, "Internet Congestion Control for Future High Bandwidth-Delay Product Environments", ACM SIGCOMM 2002, August 2002.
[11] Abhay K. Parekh, "A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case", IEEE/ACM Transactions on Networking, vol. 1, no. 3, June 1993.
[12] Sally Floyd, Van Jacobson, "Link-sharing and Resource Management Models for Packet Networks", IEEE/ACM Transactions on Networking, vol. 3, no. 4, 1995.
[13] John Nagle, "RFC 896: Congestion Control in IP/TCP Internetworks", January 1984.
[14] Sally Floyd, Kevin Fall, "Promoting the Use of End-to-End Congestion Control in the Internet", IEEE/ACM Transactions on Networking, August 1999.
[15] V. Jacobson, "Congestion Avoidance and Control", SIGCOMM Symposium on Communications Architectures and Protocols, pages 314-329, 1988.
[16] http://www.isi.edu/nsnam/ns/


A Design for Secure Data Sharing in Cloud
Devi D¹, Arun P S²
¹Research Scholar (M.Tech), Dept of Computer Science and Engineering, Sree Buddha College of Engg, Alappuzha, Kerala, India
²Assistant Professor, Dept of Computer Science and Engineering, Sree Buddha College of Engg, Alappuzha, Kerala, India
E-mail: devidharman@gmail.com

Abstract— Cloud computing, which enables on-demand network access to a shared pool of resources, is the latest trend in today's IT
industry. Among the different services provided by the cloud, the cloud storage service allows data owners to store and share their data
through the cloud and thus become free from the burden of storage management. But, since the owners lose physical control over their
outsourced data, many privacy and security concerns arise. A number of attribute-based encryption schemes have been proposed for
providing confidentiality and access control for cloud data storage, where standard encryption schemes face difficulties. Among
them, Hierarchical Attribute Set Based Encryption (HASBE) provides scalable, flexible and fine-grained access control as well as easy
user revocation. It is an extended form of Attribute Set Based Encryption (ASBE) with a hierarchical structure of users. Regarding
integrity and availability, HASBE does not provide the data owner with the ability to check against missing or corrupted outsourced
data. So, this paper extends HASBE with a privacy-preserving public auditing concept which additionally allows owners to securely
ensure the integrity of their data in the cloud. We use the homomorphic linear authenticator technique for this purpose.
Keywords— Cloud Computing, Access Control, Personal Health Record, HASBE, Integrity, TPA, Homomorphic Linear
Authenticator.
INTRODUCTION
Cloud computing is a general term for anything that involves delivering hosted services over the Internet. Three distinct
characteristics differentiate a cloud service from traditional hosting: it is sold on demand, giving the cloud consumer the
freedom to self-provision IT resources; it is elastic, meaning that at any given time a user can have as much or as
little of a service as they want; and the service is fully managed by the provider, so the consumer needs nothing but a personal
computer and Internet access. Other important characteristics of the cloud are measured usage and resilient computing. With
measured usage, the cloud keeps track of the usage of its IT resources, and consumers need to pay only for what they actually
use. For resilient computing, the cloud distributes redundant implementations of IT resources across physical locations. IT
resources can be pre-configured so that if one becomes faulty, processing is automatically handed over to another
redundant implementation.
Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) are the major service-oriented
cloud computing models. Cloud storage is an important service of cloud computing which allows data owners to
move data from their local computing systems to the cloud. The physical storage spans multiple servers and
locations. People and organizations buy or lease storage capacity from the providers to store end-user, organization or
application data. Cloud storage has several advantages over traditional data storage: relief from the burden of storage
management, universal data access with location independence, and avoidance of capital expenditure on hardware,
software and personnel maintenance. It also allows sharing of data with others in a flexible manner. However, moving the data to
an off-site storage system, maintained by a third party (the cloud service provider), over which the data owner does not have any
control poses many data security challenges: privacy - the risk of unauthorized disclosure of the users' sensitive data
by the service providers; data integrity - the validity of outsourced data given its internet-based storage and management;
etc. In a cloud environment data confidentiality is not the only data security requirement. Since the cloud allows data sharing,
great attention has to be given to fine-grained access control of the stored data.
The traditional method of providing confidentiality for such sensitive data is to encrypt it before uploading to the cloud.
In a traditional public key infrastructure, each user encrypts his file and stores it in the server, and the decryption key is
disclosed only to the particular authorized user. Regarding confidentiality this scheme is secure, but it requires
efficient key management and distribution, which has proven to be difficult. Also, as the number of users in the system
becomes large, this method is no longer efficient. These limitations, and the need for fine-grained access control for data
sharing, led to the introduction of new access control schemes based on attribute-based encryption (ABE) [3]. Unlike in
traditional cryptography, where the intended recipient's identity is clearly known, in an attribute-based system one only
needs to specify the attributes or credentials of the recipient(s). Here ciphertexts are not encrypted to one particular user as
in traditional public key cryptography, which enables handling unknown users as well. Different types of ABE schemes have been
proposed to provide fine-grained access control to data stored in the cloud, but they could not satisfy requirements such as
scalability (the ability to handle an increasing number of system users without degrading efficiency), flexibility (support for
complex access control policies with great ease) and easy user revocation (avoiding re-encryption of data and re-distribution
of new access keys on the revocation of each user). These limitations of ABE schemes are addressed by
Hierarchical Attribute Set Based Encryption (HASBE) [1], an extension of Attribute Set Based
Encryption (ASBE). HASBE achieves scalability due to its hierarchical structure and also inherits fine-grained access
control and flexibility in supporting compound attributes from ASBE [7]. Another highlighting feature of HASBE is its
easy user revocation method. In addition to these access control needs, the data owners want to know the integrity of the
data which they have uploaded to the cloud. HASBE does not include an integrity checking facility, which is the major drawback of
the scheme. This paper integrates an integrity checking module based on privacy-preserving public auditing with the HASBE
scheme and thus provides more security to the system.
RELATED WORKS

This section reviews the concept of attribute-based encryption and provides a brief overview of Attribute Set Based
Encryption (ASBE) and Hierarchical Attribute Set Based Encryption (HASBE). All these schemes were proposed as access control
mechanisms for cloud storage.
Sahai and Waters proposed attribute-based encryption to provide a better solution for access control. It used user identities as
attributes, and these attributes play an important role in encryption and decryption. The primary ABE scheme used a threshold policy
for access control, but it lacked expressibility. ABE schemes are further classified into key-policy attribute-based encryption (KP-ABE)
and ciphertext-policy attribute-based encryption (CP-ABE), in which the concept of access policies is introduced. In KP-ABE [4]
access policies are associated with the user's private key, while in CP-ABE [5] they are in the ciphertext. In an ABE scheme,
ciphertexts are not encrypted to one particular user as in traditional public key cryptography. Rather, both ciphertexts and users'
decryption keys are associated with a set of attributes or a policy over attributes. A user is able to decrypt a ciphertext only if there is a
match between the attributes in the decryption key and the ciphertext.
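The match requirement can be mimicked with a toy check. Real ABE enforces the policy cryptographically; the set comparison below only illustrates the semantics for a simple AND-policy, and the attribute names are invented for illustration.

def can_decrypt(key_attributes, policy_attributes):
    # AND-policy over attributes: decryption requires every policy
    # attribute to be present in the key's attribute set.
    return policy_attributes <= key_attributes

print(can_decrypt({"doctor", "cardiology"}, {"doctor"}))   # True
print(can_decrypt({"nurse"}, {"doctor", "cardiology"}))    # False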
In KP-ABE, since the access policy is built into the user's private key, the data owner who encrypts the data cannot choose who
can decrypt it; he has to trust the key issuer. In CP-ABE, since users' decryption keys are associated with a set of attributes,
it is more natural to apply. These schemes provided fine-grained access control to the sensitive data in the cloud, but they failed in the
case of complex access control policies: they lack scalability, and in case a previously legitimate user needs to be revoked, the related
data has to be re-encrypted. Data owners also need to be online all the time so as to encrypt or re-encrypt data.
In the CP-ABE scheme, decryption keys only support user attributes that are organized logically as a single set, so users can only
use all possible combinations of the attributes in the single set issued in their key to satisfy a policy. To solve this problem, Bobba [7]
introduced ciphertext-policy attribute-set-based encryption (CP-ASBE, or ASBE for short). ASBE is an extended form of CP-ABE
which organizes user attributes into a recursive set structure and allows users to impose dynamic constraints on how those attributes
may be combined to satisfy a policy. It groups user attributes into sets such that those belonging to a single set have no restrictions on
how they can be combined. Similarly, multiple numerical assignments for a given attribute can be supported by placing each
assignment in a separate set.
To achieve scalability, flexibility, fine-grained access control and efficient user revocation, Hierarchical Attribute Set Based
Encryption (HASBE), which extends the ciphertext-policy attribute set based encryption (CP-ASBE or ASBE) scheme, was
proposed [1]. HASBE extends the ASBE algorithm with a hierarchical structure to improve scalability and flexibility while at the
same time inheriting the fine-grained access control of ASBE. HASBE supports compound attributes due to flexible attribute set
combinations and achieves efficient user revocation without requiring re-encryption, because attributes are assigned multiple
values.
A HASBE system consists of five types of parties: a cloud service provider, data owners, data consumers, a number of domain
authorities, and a trusted authority. The trusted authority is the root authority and is responsible for managing the top-level domain
authorities. Each data owner/consumer is administered by a domain authority. A domain authority is managed by its parent domain
authority or by the trusted authority. Data owners encrypt their data files and store them in the cloud for sharing with data consumers.
Data consumers download and decrypt the files stored in the cloud. Data owners, data consumers, domain authorities, and the trusted
authority are organized in a hierarchical manner, and keys are delegated through this hierarchy.
PROBLEM STATEMENT
Even though the HASBE scheme achieves scalability, flexibility and fine-grained access control, it contains no integrity scheme to
ensure that the data remains correct in the cloud; this is its major drawback. The data owners face a serious risk of their data being
corrupted or lost because of the lack of physical control over the outsourced data. In order to overcome this security risk, a
privacy-preserving public auditing concept is proposed, which integrates a data integrity proof with the HASBE scheme.
OBJECTIVES
The data owners want to prevent the server and unauthorized users from learning the contents of their sensitive files. Each of them
owns a privacy policy. In particular, the proposed scheme has the following objectives:
- Fine-grained access control: different users can be authorized to read different sets of files.
- User revocation: whenever necessary, a user's access privileges should be revoked from future access in an efficient and
easy way.
- Flexible policy specification: complex data access policies can be specified in a flexible manner.
- Scalability: to support a large and unpredictable number of users, the system should be highly scalable in terms of
complexity of key management, user management, and computation and storage.
- Enable users to ensure the integrity of the data they have outsourced:
o Public auditability: to allow a Third Party Auditor (TPA) to verify the correctness of the cloud data on demand
without retrieving a copy of the whole data or introducing additional online burden to the cloud users.
o Storage correctness: to ensure that there exists no cheating cloud server that can pass the TPA's audit without
indeed storing the users' data intact.
o Privacy preservation: to ensure that the TPA cannot derive the users' data content from the information collected
during the auditing process.

METHODOLOGY
The entire system applies to the Personal Health Record (PHR), which is an electronic record of an individual's health information.
An online PHR service [8-9] allows an individual to create, store, manage and share his personal health data in a centralized way.
Since cloud computing provides infinite computing resources and elastic storage, PHR service providers shift their data and
applications to the cloud in order to lower their operational cost.
The overall methodology of this work can be divided into two parts: secure PHR sharing using HASBE and secure data
auditing. The architecture of secure PHR sharing is given in Figure 1 and that of secure data auditing in Figure 2.

A. Secure PHR Sharing

For secure PHR sharing, HASBE has a hierarchical structure of system users. The hierarchy enables the system to handle an
increasing number of users without degrading efficiency. PHR owners can upload their encrypted PHR files to cloud storage, and data
consumers can download and decrypt the required files from the cloud. In this system, the PHR owners need not be online all the time,
since they are not responsible for issuing decryption keys to data consumers; it is the responsibility of a domain authority to issue
decryption keys to the users under its domain. The system can be extended to any depth, and at the same level there can be more than
one domain authority, so that no authority becomes a bottleneck in handling a large number of system users. Here, the system under
consideration uses a depth-2 hierarchy, and there are five modules for secure PHR sharing.
1. Trusted Authority Module
2. Domain Authority Module
3. Data Owner Module
4. Data Consumer Module
5. PHR Cloud Service Module


Fig 1: HASBE Architecture


International Journal of Engineering Research and General ScienceVolume 2, Issue 5, August – September 2014
ISSN 2091-2730


76
www.ijergs.org

1. Trusted Authority Module
The trusted authority is the root or parent authority. It is responsible for generating and distributing the system parameters and root
master keys, as well as authorizing the top-level domain authorities. In our system the Ministry of Health is the trusted authority.
The major functions of the Ministry of Health are:
 The admin can log in from the home page and can perform domain authority registration.
 To set up the system by generating the master secret key MK0 and a public key PK based on the universal set of system attributes.
 To generate the master key for a domain authority using the public key PK, the master key MK0 and the set of attributes
corresponding to that domain authority.
2. Domain Authority Module
The Domain Authority (DA) is responsible for managing PHR owners and authorizing data consumers. In our system a single
domain authority called the National Medical Association (NMA) comes under the Ministry of Health.
 The NMA first registers with the trusted authority. During registration the attributes corresponding to the DA are specified, and
a request for the domain key is sent to the trusted authority through web services. Only after receiving the domain key - the public
key and the domain master key - can the DA authorize users in its domain.
 The major functions of the NMA are:
o To provide the public key for the patients to perform attribute-based encryption.
o To log in and view the details of the medical professionals.
o To provide attribute-based private keys for the medical professionals for decrypting the medical records.
o To perform user revocation.
3. Data Owner Module
In our system the patients are the data owners. A patient application allows the patient to interact with the PHR service
provider. The main functions of this module are:
 Patients first register to the system and then log in.
 Patients can set the access privilege, i.e., who can view the files, and upload encrypted files to the cloud.
 The patient application performs encryption in two stages: first the file is encrypted with AES, then the AES key is encrypted
under the patient-specified policy with the public key provided by the NMA. The second stage corresponds to attribute set
based encryption. (A sketch of this two-stage encryption follows this list.)
 The encrypted file, along with the encrypted AES key, is uploaded to the cloud.
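A minimal sketch of the two-stage encryption is given below. AES-GCM from the Python cryptography package is one concrete choice for the first stage, which the text leaves unspecified; abe_encrypt is a placeholder for the HASBE/CP-ASBE key-wrap step, and all the names here are illustrative assumptions rather than the paper's implementation.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def abe_encrypt(public_key, policy, data):
    # Stand-in for the HASBE/CP-ASBE key-wrap step; a real deployment
    # would call into an attribute-based encryption library here.
    raise NotImplementedError("HASBE key wrap not implemented in this sketch")

def encrypt_phr(file_bytes, policy, nma_public_key):
    aes_key = AESGCM.generate_key(bit_length=256)        # stage 1: AES key
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, file_bytes, None)
    wrapped_key = abe_encrypt(nma_public_key, policy, aes_key)  # stage 2: ABE
    # Both parts are uploaded together; only users whose attribute set
    # satisfies `policy` can unwrap aes_key and read the record.
    return {"nonce": nonce, "ciphertext": ciphertext, "wrapped_key": wrapped_key}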
4. Data Consumer Module
Medical professionals act as the data consumers. Through the medical professional application, doctors interact with the PHR service
provider.
 Each hospital administrator logs in and creates employees by entering their details. The registration details are also
given to the NMA through web services.
 Doctors can later log in to the application using their username and password.
 The application allows doctors to view the required patient details and download their files by interacting with the PHR
service provider in the cloud through web services.
 The medical professional application performs decryption of files for each employee by requesting the corresponding
private key, based on the attributes of the employee, from the NMA.
5. PHR Cloud Service Module
This module is responsible for storing encrypted files. It preprocesses each file to generate metadata for auditing purposes.
A. Secure Data Auditing

Data auditing is performed by a third-party auditor (TPA) on behalf of the PHR service provider. From the cloud's perspective, the
PHR service provider is the data owner; at the same time, the PHR service provider is the client of the TPA, with whom it first
registers. The initial

verification details about uploaded files are given to the TPA through proper communication channels. Upon receiving the data
auditing delegation from the PHR service provider, the TPA interacts with the cloud and performs privacy-preserving public
auditing. A Homomorphic Linear Authenticator (HLA) is used to allow the TPA to perform integrity checking without retrieving
the original data content. The TPA issues challenges to the cloud indicating random file blocks to be checked; the cloud generates
a proof of data correctness, and the TPA verifies it and reports the result.
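
As a rough illustration of how a linear authenticator lets the TPA check integrity without retrieving the data, the following toy Python sketch uses a simplified MAC-based analogue of an HLA (the actual scheme is cryptographically stronger; all names and parameters here are illustrative). Block tags are linear in the block values, so the cloud can aggregate the challenged blocks and tags, and the TPA can verify the aggregates alone:

import hashlib, random

P = (1 << 61) - 1            # public prime modulus (toy choice)

def f(key, i):
    # Pseudorandom per-index value derived from the TPA's secret key.
    return int.from_bytes(hashlib.sha256(key + i.to_bytes(8, "big")).digest(), "big") % P

def tag_blocks(key, alpha, blocks):
    # Owner side: one linear tag per block, stored in the cloud with the data.
    return [(alpha * m + f(key, i)) % P for i, m in enumerate(blocks)]

def prove(blocks, tags, challenge):
    # Cloud side: aggregate challenged blocks and tags; no raw data leaves.
    mu = sum(v * blocks[i] for i, v in challenge) % P
    sigma = sum(v * tags[i] for i, v in challenge) % P
    return mu, sigma

def verify(key, alpha, challenge, mu, sigma):
    # TPA side: check the aggregate tag against the aggregate block value.
    return sigma == (alpha * mu + sum(v * f(key, i) for i, v in challenge)) % P

blocks = [random.randrange(P) for _ in range(100)]      # file as field elements
key, alpha = b"tpa-secret", random.randrange(1, P)
tags = tag_blocks(key, alpha, blocks)
challenge = [(i, random.randrange(1, P)) for i in random.sample(range(100), 10)]
assert verify(key, alpha, challenge, *prove(blocks, tags, challenge))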

Fig 2: Auditing Architecture

CONCLUSION
In this paper, we proposed a privacy-preserving public auditing scheme for HASBE, to overcome HASBE's lack of an integrity
assurance mechanism. Even though the HASBE scheme achieves scalability, flexibility and fine-grained access control, it fails to
prove data integrity in the cloud. Since the data owner has no physical control over his outsourced data, such auditing is necessary
to prevent the cloud service provider from hiding data loss or corruption from the owner. Audit results from the TPA would also
help cloud service providers improve their cloud-based service platforms, and users can entrust their data to the cloud without
worrying about its integrity. The proposed system preserves all the advantages of HASBE and adds the additional quality of
integrity proof.



Design of Impact Load Testing Machine for COT
Sandesh G. Ughade¹, Dr. A. V. Vanalkar², Prof. P. G. Mehar²
¹Research Scholar (P.G.), Dept. of Mechanical Engg., KDK College of Engg., Nagpur, R.T.M. Nagpur University, Maharashtra, India
²Assistant Professor, Dept. of Mechanical Engg., KDK College of Engg., Nagpur, R.T.M. Nagpur University, Maharashtra, India
E-mail: Sandesh.ughade@gmail.com
Abstract: This paper describes the design of a new pneumatically actuated load-application machine specifically designed for studying
the dynamic mechanical behavior of a COT (wooden bed). Such equipment has been used to generate simple and measurable
fracture processes under moderate to fast loading rates, which otherwise produce complicated crack patterns that are difficult to
analyze. We are developing the machine as a facility to provide experimental data to validate numerical data on impact loads on a
COT that absorbs kinetic energy during collision. The machine consists of two main parts, the mechanical structure and the data
acquisition system. The development process included the design, development, fabrication, and function testing of the machine.
Keywords: component; load; impact; design;

I. INTRODUCTION

The starting point for determining many engineering timber properties is the standard short-duration test, in which failure is
expected within a few minutes. Over recent decades, much attention has been given to the behavior of timber and timber joints with
respect to the damaging effect of sustained loads, the so-called duration-of-load effect. This effect must be considered in the design
of wooden structures such as cots. To increase human safety, some structural parts made of wood are designed to absorb kinetic
energy during collision. These components are usually columns which undergo progressive plastic deformation during collision.
The impact force, i.e., the force needed to deform the cot, determines the deceleration of the load during collision and indicates the
capability of the cot to absorb kinetic energy. The value of the impact force is determined by the geometry and the material of the
cot. For this purpose an advanced impact testing machine is required for checking adult sleeping cots. Impacts are made at different
desired positions (depending on the size of the cot and locations specified by quality engineers) with a specified load. This assures
that the cot is safe and ready for customer use. The test also provides assurance of mechanical safety and prevents serious injury
through normal functional use as well as misuse that might reasonably be expected to occur. For this purpose, in this project such an
impact testing machine for testing adult sleeping cots is fabricated. Developing the interface for controlling the machine is one of
the most important parts of the control system, and includes software analysis, design, development, and testing. Here we develop
a program for controlling the fabricated wireless impact testing machine for testing sleeping cots.

II. DESIGN PROCEDURE

The aim of this section is to give complete design information about the impact testing machine, including the explanations and
other parameters related to the project. A literature review has been carried out, with references from various sources such as
journals, theses and design data books, to collect information related to this project.




A. Design consideration
- Considered element
- Standard size of COT
- Material of COT: plywood
- Maximum weight applied to the surface of the COT
- Height of impact
B. Design calculations
Determination of impact force

Impact force = (1/2 × m × v²) / d
Where,
m = mass
v = velocity
d = distance travelled by the material after impact
d = WL³ / 48EI (data book, Table I-7)
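
As a worked illustration of the relation above, with purely hypothetical numbers (the paper takes its actual values from the data book), a short Python check:

# Impact force = (1/2 * m * v^2) / d, illustrative values only.
m = 60.0                       # mass of impactor, kg (assumed)
h = 0.5                        # drop height, m (assumed)
g = 9.81                       # gravitational acceleration, m/s^2
v = (2 * g * h) ** 0.5         # impact velocity from free fall
d = 0.01                       # travel of the material after impact, m (assumed)

impact_force = (0.5 * m * v**2) / d
print(f"v = {v:.2f} m/s, impact force = {impact_force / 1000:.1f} kN")
# v = 3.13 m/s, impact force = 29.4 kN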

C. Cylinder specification
Force exerted by the cylinder:
F = DLP
Where,
D = bore diameter
Fig. 1: Modeling of machine

L = length of stroke
P = pressure
D. Design of Plate
T = thickness of plate
D = circular diameter of plate

Consider,
shear of the plate at the joint.

Shear stress produced = Fc / Aj

Material used for plate: mild steel
Yield point stress (Syt)
Factor of safety (fos)

Shear strength = (0.5 Syt) / fos

If the induced shear stress is less than the permissible stress,
then the plate is safe in compression and shear.

E. Design of Lead Screw

Type of thread used: square thread
Nominal diameter of square screw: d
Material for lead screw: hardened steel (paired with a cast iron nut).
Syt = 330 N/mm²
Coefficient of friction, µ = 0.15

Therefore, force on the lead screw = F(max) + self-weight of the screw and impactor assembly.

Lead angle, α = 15°
Nut material: FG200
Sut = 200 N/mm²

Torque required for the motion of the impactor (T):

T = P × dm / 2
Where,
dm = mean diameter of lead screw
dm = d − 0.5p
d = outside diameter of lead screw
p = pitch
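
A worked numeric check of the torque relation above, with hypothetical dimensions (the paper leaves the actual values to the data book):

# T = P * dm / 2, with dm = d - 0.5 * p  (illustrative values only)
P_load = 1500.0    # axial force on the lead screw, N (assumed)
d = 22.0           # outside diameter of the screw, mm (assumed)
p = 5.0            # pitch of the square thread, mm (assumed)

dm = d - 0.5 * p              # mean diameter, mm
T = P_load * dm / 2           # torque, N*mm, per the simplified relation above
print(f"dm = {dm:.1f} mm, T = {T / 1000:.2f} N*m")   # dm = 19.5 mm, T = 14.62 N*m
# The full square-thread relation also includes lead angle and friction,
# T = (P * dm / 2) * tan(alpha + phi); the short form above neglects them.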



F. Design of Screw

σc = P / ((π/4) × dc²)
Where,
σc = direct compressive stress
dc = root diameter of the square-thread screw

Permissible σc = Syt / FOS
Let FOS = 2

Therefore dc is taken from the data book for safety.

Also, torsional shear stress:

τs = 16T / (π dc³) ----- (1)

Permissible τs = 0.5 Syt / 2

The screw will also tend to shear off the threads at the root diameter.

Shear area of one thread = π dc × t × z
Where,
z = number of threads in engagement with the nut.

Transverse shear:
τs = P / (π × dc × t × z)
As t = p/2

Therefore a standard value of z is taken.
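
The three screw checks above (direct compression, torsional shear, and shear across the threads) can be collected into one short script; all input values are assumed for illustration only:

import math

P = 1500.0     # axial load on the screw, N (assumed)
T = 14625.0    # torque on the screw, N*mm (assumed)
dc = 17.0      # root (core) diameter, mm (assumed)
t = 2.5        # thread thickness = p/2, mm (assumed)
z = 4          # threads in engagement with the nut (assumed)
Syt = 330.0    # yield strength, N/mm^2
FOS = 2.0

sigma_c = P / (math.pi / 4 * dc**2)         # direct compressive stress
tau_torsion = 16 * T / (math.pi * dc**3)    # torsional shear stress, eq. (1)
tau_thread = P / (math.pi * dc * t * z)     # transverse shear across threads

print(f"sigma_c     = {sigma_c:6.2f} MPa (allowable {Syt / FOS:.0f} MPa)")
print(f"tau_torsion = {tau_torsion:6.2f} MPa (allowable {0.5 * Syt / 2:.1f} MPa)")
print(f"tau_thread  = {tau_thread:6.2f} MPa")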

G. Design of nut

The nut threads are subjected to shearing due to P.

Total shear area of nut = π × d × t × z
Also,
τn = 0.5 Sut / FOS

t = pitch / 2

z = number of threads;
Therefore,
a standard value of z is taken from the data book for safety.

Length of nut = 5 × pitch

H. Design of compression spring

P = Force on each spring


δ = deflection of the spring
Therefore,
Stiffness of spring = P / δ

Material of the spring = cold drawn steel.

Ultimate tensile strength, Sut =1050 N/mm2

Modulus of rigidity, G = 81370 N/mm2.

Therefore shear stress = 0.30 Sut

Assume spring index = C

Therefore, Wahl shear factor,

k = (4C − 1)/(4C − 4) + 0.615/C

We know,
Shear stress = k × (8PC) / (πd²)

Coil diameter of spring, D = Cd

Number of active coils (N):
We know,
δ = 8PD³N / (Gd⁴)

The spring used has squared and ground ends.

Therefore,
Nt = total number of coils = N + 2

Solid length of spring = Nt × d

Assume a gap of 2 mm between adjacent coils at maximum compression.
Therefore,
Total gap = (Nt − 1) × 2

Free length of the spring:

Free length = solid length + total gap + δ

Pitch of the coil (p):

p = free length / (Nt − 1)

We know that when:
Free length / mean coil diameter (D) ≤ 2.6, a guide is not necessary;
Free length / mean coil diameter (D) > 2.6, a guide is required.
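
Putting the spring relations above together, a short numeric sketch (all inputs are assumed for illustration; a real design takes them from the data book):

import math

P = 300.0        # force on each spring, N (assumed)
d = 5.0          # wire diameter, mm (assumed)
C = 6.0          # spring index (assumed)
N = 8            # number of active coils (assumed)
G = 81370.0      # modulus of rigidity, N/mm^2
Sut = 1050.0     # ultimate tensile strength, N/mm^2

k = (4*C - 1) / (4*C - 4) + 0.615 / C            # Wahl shear factor
tau = k * 8 * P * C / (math.pi * d**2)           # corrected shear stress
D = C * d                                        # mean coil diameter
deflection = 8 * P * D**3 * N / (G * d**4)       # axial deflection
Nt = N + 2                                       # total coils, squared & ground ends
solid_length = Nt * d
total_gap = (Nt - 1) * 2                         # assumed 2 mm gap per pair of coils
free_length = solid_length + total_gap + deflection

print(f"k = {k:.3f}, tau = {tau:.1f} MPa (allowable {0.30 * Sut:.0f} MPa)")
print(f"deflection = {deflection:.2f} mm, free length = {free_length:.1f} mm")
print("guide required" if free_length / D > 2.6 else "guide not necessary")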


III. FABRICATION



Mechanical components:
- C-channel (for columns and beam)
- Lead screws (3 nos.)
- Pneumatic cylinder (1 no.)
- Guide ways (3 nos.)
- Compressor (1 no.)
- Bearings
- Springs (3 nos.)
- Base plate of impactor (1 no.)


IV. CONCLUSION

The machine uses automatic technology for testing and requires very little human assistance, which further reduces the labor cost
for quality testing of sleeping beds. The objective of testing the quality of adult sleeping beds can thus be met with greatly reduced
human effort.










Heat Treating of Non-Ferrous Alloys
Jirapure S. C.¹, Borade A. B.²
¹Assistant Professor, Mechanical Engg. Dept., JD Institute of Engg. & Tech., Yavatmal (MS), India
²Professor and Head, Mechanical Engg. Dept., JD Institute of Engg. & Tech., Yavatmal (MS), India
E-mail: Sagarjirapure@rediffmail.com
Abstract—Non-ferrous alloys are among the most versatile engineering materials. The combination of physical properties such as strength,
ductility, conductivity, corrosion resistance and machinability makes them suitable for a wide range of applications. These properties
can be further enhanced by variations in composition and manufacturing processes. The present paper gives a clear idea of the various
strengthening processes for non-ferrous alloys and how an alloy is prepared to meet the needs of the user.
Keywords—hardening, heat treatment, properties, processes, grain structure, solid solution
INTRODUCTION
The hardenability of a steel is broadly defined as the property which determines the depth and distribution of hardness induced by
quenching, i.e., the depth and evenness of hardness of a steel upon quenching from austenite [1].
Heat treatment is an operation or combination of operations involving heating at a specific rate, soaking at a temperature for a
period of time and cooling at some specified rate. The aim is to obtain a desired microstructure to achieve certain predetermined
properties (physical, mechanical, magnetic or electrical) [3].
Heat treating is a group of industrial and metalworking processes used to alter the physical, and sometimes chemical, properties of a
material. The most common application is metallurgical. Heat treatments are also used in the manufacture of many other materials,
such as glass. Heat treatment involves the use of heating or chilling, normally to extreme temperatures, to achieve a desired result such
as hardening or softening of a material. It is noteworthy that while the term heat treatment applies only to processes where the heating
and cooling are done for the specific purpose of altering properties intentionally, heating and cooling often occur incidentally during
other manufacturing processes such as hot forming or welding [4].
OBJECTIVE
- To increase strength, hardness and wear resistance
- To increase ductility and softness
- To increase toughness
- To obtain fine grain size
- To remove internal stresses induced by differential deformation from cold working and by non-uniform cooling from high
temperature during casting and welding
- To improve machinability
- To improve cutting properties of tool steels
- To improve surface properties
- To improve electrical properties
- To improve magnetic properties
PHYSICAL PROCESS
Metallic materials consist of a microstructure of small crystals called 'grains'. The nature of the grains (i.e. grain size and
composition) is one of the most effective factors that can determine the overall mechanical behavior of the metal. Heat treatment
provides an efficient way to manipulate the properties of the metal by controlling the rate of diffusion and the rate of cooling within
the microstructure. Heat treating is often used to alter the mechanical properties of an alloy, manipulating properties such as
the hardness, strength, toughness, ductility, and elasticity [7].
There are two mechanisms that may change an alloy's properties during heat treatment: the martensite transformation, which causes
the crystals to deform intrinsically, and the diffusion mechanism, which causes changes in the homogeneity of the alloy.

Non-ferrous metals and alloys exhibit a martensite transformation when cooled quickly. When a metal is cooled very quickly, the
insoluble atoms may not be able to migrate out of the solution in time. This is called a 'diffusionless transformation'. When the crystal
matrix changes to its low temperature arrangement, the atoms of the solute become trapped within the lattice. The trapped atoms
prevent the crystal matrix from completely changing into its low temperature allotrope, creating shearing stresses within the lattice.
When some alloys are cooled quickly, such as steel, the martensite transformation hardens the metal, while in others, like aluminum,
the alloy becomes softer [15].

Effect of Composition:
The specific composition of an alloy system will usually have a great effect on the results of heat treating. If the percentage of
each constituent is just right, the alloy will form a single, continuous microstructure upon cooling. Such a mixture is said to
be eutectoid. However, if the percentage of the solutes varies from the eutectoid mixture, two or more different microstructures will
usually form simultaneously. A hypoeutectoid solution contains less of the solute than the eutectoid mix, while a hypereutectoid
solution contains more [20].

Effect of Time and Temperature:
Proper heat treating requires precise control over temperature, time held at a certain temperature and cooling rate.
Most heat treatments begin by heating an alloy beyond the upper transformation (A3) temperature. The alloy will usually be held at
this temperature long enough for the heat to completely penetrate the alloy, thereby bringing it into a complete solid solution. Since a
smaller grain size usually enhances mechanical properties, such as toughness, shear strength and tensile strength, these metals are
often heated to a temperature that is just above the upper critical temperature, in order to prevent the grains of solution from growing
too large. For instance, when steel is heated above the upper critical temperature, small grains of austenite form. These grow larger as
temperature is increased. When cooled very quickly, during a martensite transformation, the austenite grain size directly affects the
martensitic grain size. Larger grains have large grain-boundaries, which serve as weak spots in the structure. The grain size is usually
controlled to reduce the probability of breakage.
The diffusion transformation is very time dependent. Cooling a metal will usually suppress the precipitation to a much lower
temperature. Austenite, for example, usually only exists above the upper critical temperature. However, if the austenite is cooled
quickly enough, the transformation may be suppressed for hundreds of degrees below the lower critical temperature. Such austenite is
highly unstable and, if given enough time, will precipitate into various microstructures of ferrite and cementite. The cooling rate can
be used to control the rate of grain growth or can even be used to produce partially martensitic microstructures. However, the
martensite transformation is time-independent. If the alloy is cooled to the martensite transformation (Ms) temperature before other
microstructures can fully form, the transformation will usually occur at just under the speed of sound.
When austenite is cooled slow enough that a martensite transformation does not occur, the austenite grain size will have an
effect on the rate of nucleation, but it is generally temperature and the rate of cooling that controls the grain size and microstructure.
When austenite is cooled extremely slowly, it will form large ferrite crystals. This microstructure is referred to as "spheroidite." If
cooled a little faster, then coarse pearlite will form. Even faster, and fine pearlite will form. If cooled even faster, bainite will form.
Similarly, these microstructures will also form if cooled to a specific temperature and then held there for a certain time.
Most non-ferrous alloys are also heated in order to form a solution. Most often, these are then cooled very quickly to produce
a martensite transformation, putting the solution into a supersaturated state. The alloy, being in a much softer state, may then be cold
worked. This cold working increases the strength and hardness of the alloy, and the defects caused by plastic deformation tend to
speed up precipitation, increasing the hardness beyond what is normal for the alloy. Even if not cold worked, the solutes in these
alloys will usually precipitate, although the process may take much longer. Sometimes these metals are then heated to a temperature
that is below the lower critical (A1) temperature, preventing recrystallization, in order to speed-up the precipitation [14].

TECHNIQUES

Strain hardening
The phenomenon whereby ductile metals become strong and hard when they are deformed plastically is called strain hardening
(or work hardening). The application of cold work, usually by rolling, forging or drawing operations, strengthens copper and its alloys:
strength, hardness and elastic modulus increase while ductility decreases during this process. The effect of cold work can be
removed by annealing. Strain hardening is used for hardening/strengthening materials that are not responsive to heat treatment.

Solid solution hardening
Solid solution hardening of copper is a common strengthening method. In this method a small amount of alloying elements
such as zinc, aluminum, tin, nickel, silicon, beryllium, etc. is added to the molten copper so that they dissolve completely and form a
homogeneous microstructure (a single phase) upon solidification. Strengthening arises because the stress fields generated around the solute atoms

present in the substitutional sites interact with the stress fields of moving dislocations, thereby increasing the stress required for plastic
deformation. Traditional Brasses and Bronzes fall into this category. It is to be noted that these alloys are not heat treatable.



Grain boundary hardening
In a poly-crystalline metal, grain size has a tremendous influence on the mechanical properties. Because grains usually have
varying crystallographic orientations, grain boundaries arise. While undergoing deformation, slip motion will take place. Grain
boundaries act as an impediment to the dislocation motion for the following two reasons: (a) dislocation must change its direction of
motion due to the differing orientation of grains and (b) the discontinuity of slip planes from one grain to another. The stress required to move a
dislocation from one grain to another in order to plastically deform a material depends on the grain size. The average number of
dislocations per grain decreases with average grain size. A lower number of dislocations per grain results in a lower dislocation
'pressure' building up at the grain boundaries. This makes it more difficult for dislocations to move into adjacent grains. This
relationship is called the Hall-Petch equation.
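
For reference, the relation mentioned above takes the standard form below (the standard Hall-Petch expression; the symbols are supplied here for the reader's convenience, as the paper does not define them):

\sigma_y = \sigma_0 + \frac{k_y}{\sqrt{d}}

where \sigma_y is the yield stress, \sigma_0 the friction stress opposing dislocation motion, k_y the strengthening coefficient, and d the average grain diameter.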

Dual-phase hardening
Bronze is usually a single phase alloy. Aluminium Bronze is a type of Bronze in which aluminium is the main alloying
element added to copper, in contrast to standard Bronze (Cu and Sn) or Brass (Cu and Zn). A variety of aluminium Bronzes of
differing compositions have found industrial use, most ranging from 5 wt.% to 11 wt.% aluminium. Other alloying agents such as iron,
nickel, manganese, and silicon are also sometimes added to aluminium Bronzes. When more than 10% aluminium is added, another phase
forms. This second phase also contributes to the strengthening of the alloy.

Precipitation hardening
Precipitation hardening refers to a process where a supersaturated solid solution is heated at a low temperature for a period
(aging) so as to allow the excess solute to precipitate out in the form of a second phase. This process is often used for Cu alloys
containing Be. Precipitation hardening has several distinct advantages. Many combinations of ductility, impact resistance, hardness,
conductivity and strength can be obtained by varying the heat treatment time and temperature. The Cu-Be alloy possesses a
remarkable combination of properties such as tensile strength, electrical conductivity, corrosion resistance and wear resistance.
The alloy may be cast and hot or cold worked. Despite its excellent properties, it is costly because of the Be addition; moreover, Be is a
health-hazardous material.

Order hardening
When the atoms of a disordered solid solution arrange themselves in an orderly manner at a lower temperature, an ordered
structure forms. Lattice strain develops due to the ordered nature of the structure, and this strain contributes to the hardening and
strengthening of these alloys.

New approach of hardening
The various new approaches of hardening of copper and its alloys are (a) Dispersion hardening/Metal matrix composites (b)
Surface modification and (c) Spinodal decomposition [17].

Dispersion hardening
Conventional strengthening mechanisms, such as cold working and precipitation hardening, are ineffective at high
temperature, owing to the effects of recrystallization, and particle coarsening and dissolution respectively. Applications require
materials with a high thermal conductivity in combination with high elevated temperature strength in oxygen or hydrogen rich
environments, for which copper based alloys are natural choices. In addition to its high thermal conductivity, copper has the advantage
of a low elastic modulus, which minimizes thermal stresses in actively cooled structures. Copper also offers good machinability, good
formability and, for fusion applications, it is attractive for its excellent resistance to neutron
displacement damage. However, copper requires a considerable improvement in strength to meet the design requirements for
high temperature applications. A substantial amount of recent work has emphasized particle and fiber strengthening of copper
composites, with up to 40 vol. % of reinforcing phase. The dispersion hardening is also called Metal Matrix Composites in the recent
literatures. Copper based composites appear to be a promising material for engineering applications due to their excellent thermo-
physical properties coupled with better high temperature mechanical properties as compared to pure copper and its alloys. In the
copper based metal matrix composite, SiCp is widely used as reinforcing element to the matrix to enhance their various properties.

Further, the metal matrix composites, in which hard ceramic particles are dispersed in a relatively ductile matrix, exhibit a
superior combination of properties such as high elastic modulus, high specific strength, desirable co-efficient of thermal expansion,
high temperature resistance and wear resistance. Metal matrix composites are being increasingly used for structural, automobile and
aerospace industry, sporting goods and general engineering industries. Copper matrix composites have the potential for use as wear
resistance and heat resistant materials; brush and torch nozzle materials and for applications in electrical sliding contacts such as those
in homopolar machines and railway overhead current collector systems where high electrical/thermal conductivity and good wear
resistant properties are needed.
Dispersion particles such as oxides, carbides and borides, which are insoluble in the copper matrix and thermally stable at
high temperature, are being increasingly used as the reinforcement phase.

Surface modification
In the surface modification process, hard-facing is a commonly employed method to improve surface properties. An alloy is
homogeneously deposited onto the surface of a soft material usually by welding, with the purpose of increasing hardness and wear
resistance without significant loss in ductility and toughness of the substrate.
A wide variety of hard-facing alloys is commercially available for protection against wear.
Spray forming, or spray atomization and deposition, is a newly emerging science and technology in the field of materials
development and production. Spray forming technology, as an advanced process, combines the advantages of rapid
solidification, semi-solid processing and near-net-shape processing. Spray forming has attracted great attention lately because it brings
about a distinct improvement in the microstructure and properties of materials. It can be used for developing new types of materials
and for improving the microstructure and properties of commercial materials. The spray-formed Cu-15Ni-8Sn alloy is an example of
developing new types of materials, in which Ni and Sn are sprayed over the Cu substrate. This alloy is of particular interest because
high strength can be achieved together with fairly high conductivity and good corrosion resistance. The alloy may replace Cu-Be
alloys in highly demanding applications in electronic equipment, e.g. electrical switchgear, springs, contacts, connectors, etc.

Spinodal Decomposition
The theory of spinodal decomposition as developed by Cahn–Hilliard has been discussed in detail by several authors. The
principal concept of the theory is described below.
A pair of partially miscible solids, i.e. solids that do not mix in all proportions at all temperatures, shows a miscibility gap in
the temperature-composition diagram. Figure 1.1 (Favvas et al., 2008) shows a phase diagram with a miscibility gap (lower frame)
and a diagram of the free energy change (upper frame). Line (1) is the phase boundary: above this line the two solids are miscible and
the system is stable (region s). Below this line there is a metastable region (m); within that region (point a to b) the system is stable
against small fluctuations (where ∂²ΔG/∂xB² > 0; ΔG = free energy of mixing; xB = concentration of element B). Line (2) is the
spinodal: below this line the system is unstable (region u) (where ∂²ΔG/∂xB² < 0). Within the spinodal region (u), the unstable phase
decomposes into solute-rich and solute-lean regions. This process is called spinodal decomposition. Spinodal decomposition depends
on temperature; for example, above Tc (Figure 1.1) spinodal decomposition will not take place.
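
In the notation above, the three regimes of Figure 1.1 can be summarized compactly (a restatement of the conditions already given in the text, written out in LaTeX for clarity):

\frac{\partial^2 \Delta G}{\partial x_B^2} > 0 \;\text{(stable or metastable, outside the spinodal)}, \quad
\frac{\partial^2 \Delta G}{\partial x_B^2} < 0 \;\text{(unstable, inside the spinodal)}, \quad
\frac{\partial^2 \Delta G}{\partial x_B^2} = 0 \;\text{(the spinodal line itself)}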


Figure 1.1 Phase diagram with a miscibility gap (Favvas et al., 2008)

CONCLUSION
The Hall-Petch method, or grain boundary strengthening, relies on obtaining small grains. Smaller grains increase the likelihood of
dislocations running into grain boundaries after shorter distances, and grain boundaries are very strong dislocation barriers. In general,
a smaller grain size makes the material harder. When the grain size approaches sub-micron sizes, however, some materials may become
softer. This is simply an effect of another deformation mechanism, e.g. grain boundary sliding, becoming easier; at this point, all
dislocation-related hardening mechanisms become irrelevant.

REFERENCES:
[1] Archard, J.F. (1953), "Contact and rubbing of flat surfaces", Journal of Applied Physics, Vol. 24, No. 8, pp. 981-988.
[2] Arther (1991), "Heat treating of copper alloys", Copper Development Association, ASM Handbook, Vol. 4, pp. 2002-2007.
[3] Barrett, C.S. (1952), "Structure of Metals", Metallurgy and Metallurgical Engineering Series, Second Edition, McGraw-Hill Book Co., Inc.
[4] "Copper & Copper Alloy Castings: Properties & Applications", a handbook published by Copper Development Association, British Standards Institution, London W1A 2BS, TN42 (1991).
[5] "Copper – The Vital Metal", Copper Development Association, British Standards Institution, London W1A 2BS, CDA Publication No. 121 (1998).
[6] "Cost-Effective Manufacturing: Design for Production", a handbook published by Copper Development Association, British Standards Institution, London W1A 2BS, CDA Publication No. 97 (1993).
[7] "Copper and copper alloys: compositions, applications and properties", a handbook published by Copper Development Association, British Standards Institution, London W1A 2BS, Publication No. 120 (2004).
[8] "Copper-Nickel Welding and Fabrication", Copper Development Association, British Standards Institution, London W1A 2BS, CDA Publication No. 139, 2013, pp. 01-29.
[9] "Copper Nickel Sea Water Piping Systems", application datasheet by Copper Development Association, British Standards Institution, London W1A 2BS, CDA Publication.
[10] "Corrosion Resistance of Copper and Copper Alloys", Copper Development Association, British Standards Institution, London W1A 2BS, CDA Publication No. 106.
[11] Donald R. Askeland et al. (2011), "Materials Science and Engineering", Cengage Learning, Third Indian Reprint, pp. 429.
[12] "Equilibrium Diagrams: Selected copper alloy diagrams illustrating the major types of phase transformation", Copper Development Association, British Standards Institution, London W1A 2BS, CDA Publication No. 94 (1992).
[13] Jay L. Devore (2008), "Probability and Statistics for Engineers", Cengage Learning.
[14] John W. Cahn (1966), "Hardening by spinodal decomposition", Acta Metallurgica, Vol. 11, No. 12, pp. 1275-1282.
[15] Kodgire, V.D. and Kodgire, S.V. (2011), "Material Science and Metallurgy for Engineers", 30th Edition, Everest Publishing House, ISBN 8186314008.
[16] Mike Gedeon (2010), "Thermal Strengthening Mechanisms", Brush Wellman Inc., Issue No. 18.
[17] Ilangovan, S. and Sellamuthu, R. (2012), "An Investigation of the Effect of Ni Content and Hardness on the Wear Behaviour of Sand Cast Cu-Ni-Sn Alloys", International Journal of Microstructure and Materials Properties, Vol. 7, No. 4, pp. 316-328.
[18] Naeem, H.T. and Mohammed, K.S. (2013), "Microstructural Evaluation and Mechanical Properties of an Al-Zn-Mg-Cu Alloy after Addition of Nickel under RRA Conditions", Materials Sciences and Applications, 4, pp. 704-711.
[19] Peters, D.T., Michels, H.T. and Powell, C.A. (1999), "Metallic Coating for Corrosion Control of Marine Structures", Copper Development Association Inc., pp. 01-28.
[20] Zhang, J.G., Shi, H.S. and Sun, D.S. (2003), "Research in spray forming technology and its applications in metallurgy", Journal of Materials Processing Technology, Vol. 138, No. 1-3, pp. 357-360.





Hadoop: A Big Data Management Framework for Storage, Scalability,
Complexity, Distributed Files and Processing of Massive Datasets
Manoj Kumar Singh¹, Dr. Parveen Kumar²
¹Research Scholar, Computer Science and Engineering, Faculty of Engineering and Technology, Shri Venkateshwara University,
Gajraula, U.P., India
²Professor, Department of Computer Science and Engineering, Amity University, Haryana, India

Abstract: Every day people create 2.5 quintillion bytes of data. In the last two years alone, more than 90% of the data in the world
has been created, and there is no sign that this will change; in fact, data creation is accelerating. The reason for the enormous
explosion of data is the sheer variety of sources, for example sensors used to gather atmospheric data, posts on social networking
sites, digital picture and video data, daily transaction records, and cell-phone and GPS data, just to name a few. The greater part of
this data is called Big Data, and it is characterized by three dimensions: Volume, Velocity and Variety. To derive value from Big
Data, organizations need to restructure their thinking. With data growing so quickly, and with unstructured data accounting for 90%
of the data today, organizations need to look beyond legacy systems and restrictive schemas that place extreme limits on managing
Big Data efficiently and profitably. In this paper we give an in-depth conceptual review of the modules related to Hadoop, a Big
Data management framework.
Keywords: Hadoop, Big Data Management, Big Data, Large Datasets, MapReduce, HDFS
Introduction:
Organizations across the globe are confronting the same unwieldy problem: an ever-growing amount of data combined with a
limited IT infrastructure to manage it. Big Data is considerably more than simply a large volume of data gathering within the
organization; it is now the signature of most business ventures, and raw unstructured data is the standard input. Ignoring Big Data
is no longer an option: organizations that are unable to manage their data will be overwhelmed by it. Ironically, as organizations'
access to ever-increasing amounts of data has grown dramatically, the rate at which an organization can process this gold mine of
data has decreased. Extracting derived value from data is what enables an organization to improve profitability and competitive
advantage. Today the technology exists to efficiently store, manage and analyze virtually unlimited amounts of data, and that
technology is called Hadoop [1].

What is Hadoop?
Apache Hadoop is 100% open source, and pioneered a fundamentally new way of storing and processing
data [2]. Rather than relying on expensive, proprietary hardware and different systems to store and process data,
Hadoop enables distributed parallel processing of huge amounts of data across inexpensive, industry-standard
servers that both store and process the data, and it can scale without limits [1]. With Hadoop, no data is
too big. And in today's hyper-connected world where more data is being created every day, Hadoop's breakthrough
advantages mean that businesses and organizations can now find value in data that was recently considered useless.
But what exactly is Hadoop, and what makes it so special? In its basic form, Hadoop is a massively scalable storage
and data processing system which complements existing systems by handling data that is typically a problem
for them. Hadoop can simultaneously absorb and store any kind of data from a variety of sources [2]. It is a way
of storing huge data sets across distributed clusters of servers and then running "distributed" analysis
applications in each cluster. It is designed to be robust, in that Big Data applications will continue to run even when
failures occur in individual servers or clusters. It is also designed to be efficient, because it does not
require applications to shuttle huge volumes of data across the network. It has two main parts: a data
processing framework called MapReduce and a distributed file system called HDFS for data storage (fig 1).


Fig. 1

These are the parts at the heart of Hadoop, but there are several other components: HBase, Pig, Hive, Impala, Sqoop, Chukwa,
YARN, Flume, Oozie, ZooKeeper, Mahout, Ambari, Hue, Cassandra, and Jaql (fig 2). Every module serves its purpose in the larger
Hadoop ecosystem, from the administration of large clusters of datasets to query management. By studying each
module and gaining knowledge of it, we can effectively implement solutions for Big Data [1].





Fig. 2

Hadoop Distributed File System (HDFS)
The Hadoop Distributed File System (HDFS) [1] is a distributed file system designed to run on commodity hardware. Although it
shares many similarities with existing distributed file systems, it is quite different: HDFS has a high degree of fault tolerance and is
typically deployed on low-cost hardware. HDFS provides efficient access to data and is appropriate for applications with very large
data sets.
HDFS has a master-slave architecture, with a single master called the NameNode and many slaves called DataNodes. The
NameNode manages and stores the metadata of the file system [5]. The metadata is maintained in the main memory of the
NameNode to guarantee fast access for clients on read/write requests [5]. DataNodes store and service read/write requests on files
in HDFS, as directed by the NameNode (Fig 3i). Files stored in HDFS are replicated across a configurable number of DataNodes to
guarantee reliability and data availability, and these replicas are distributed across the cluster to enable fast computation. Files in
HDFS are split into smaller blocks, typically of 64 MB, and each block is replicated and stored in multiple DataNodes. The
NameNode maintains the metadata for each file stored in HDFS in its main memory. This includes a mapping between stored
filenames, the corresponding blocks of each file, and the DataNodes that hold those blocks. Hence, every client request to create,
write, read or delete a file passes through the NameNode (Fig 3ii). Using the stored metadata, the NameNode directs each client
request to the appropriate set of DataNodes; the client then communicates directly with the DataNodes to perform file operations [5].
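
The filename-to-blocks-to-DataNodes bookkeeping described above can be illustrated with a small, self-contained Python toy. This is only an illustration of the metadata mapping, not HDFS's real implementation; the block size and replication factor are assumptions mirroring the defaults mentioned in the text:

import itertools

BLOCK_SIZE = 64 * 1024 * 1024    # 64 MB blocks, as noted above
REPLICATION = 3                  # assumed replication factor

class ToyNameNode:
    # In-memory sketch of the filename -> blocks -> DataNodes mapping.
    def __init__(self, datanodes):
        self.datanodes = datanodes
        self.metadata = {}                 # filename -> [(block_id, replicas)]
        self._ids = itertools.count()

    def create(self, filename, size_bytes):
        blocks = []
        rotation = itertools.cycle(self.datanodes)
        for _ in range(-(-size_bytes // BLOCK_SIZE)):      # ceiling division
            block_id = next(self._ids)
            replicas = [next(rotation) for _ in range(REPLICATION)]
            blocks.append((block_id, replicas))
        self.metadata[filename] = blocks
        return blocks      # the client then talks to these DataNodes directly

nn = ToyNameNode(["dn1", "dn2", "dn3", "dn4"])
for block_id, replicas in nn.create("/data/records.csv", 200 * 1024 * 1024):
    print(block_id, replicas)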



Fig 3(i) Fig 3(ii)

MapReduce
MapReduce is a programming model and an associated implementation for processing and generating large data sets with a parallel,
distributed algorithm on a cluster. Computational processing can occur on data stored either in a file system (unstructured)
or in a database (structured) [16]. MapReduce can exploit locality of data, processing it on or near the storage
assets to reduce the distance over which it must be transmitted. The master node takes the input, divides it into smaller
sub-problems, and distributes them to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure.
The worker node processes the smaller problem and passes the answer back to its master node. The master node then collects the
answers to all the sub-problems and combines them in some way to form the output: the answer to the problem it was originally
trying to solve. The MapReduce engine consists of a JobTracker and TaskTrackers. MapReduce jobs are submitted to the JobTracker
by the client [6]. The JobTracker passes the job to TaskTracker nodes, trying to keep the work close to the data. Since HDFS is a
rack-aware file system, the JobTracker knows which node holds the data and which other machines are nearby.
If the work cannot be hosted on the actual node where the data resides, priority is given to nodes on the
same rack. This reduces network traffic on the main backbone network. If a TaskTracker fails or times out,
that part of the job is rescheduled.
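
The master/worker flow described above is easiest to see in the classic word-count example. The sketch below is a single-process Python imitation of the map, shuffle and reduce phases, not Hadoop's actual Java API:

from collections import defaultdict

def map_phase(document):
    # Mapper: emit (word, 1) for every word in the input split.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Framework: group intermediate values by key before reduction.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reducer: combine all values for one key into the final answer.
    return key, sum(values)

splits = ["big data big clusters", "data on clusters"]
intermediate = [pair for split in splits for pair in map_phase(split)]
results = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(results)   # {'big': 2, 'data': 2, 'clusters': 2, 'on': 1}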


HBase
HBase is the Hadoop application to use when you require real-time read/write random access to very large datasets. It is a non-
relational distributed database model [17]. HBase provides row-level queries and, unlike Hive, can be used for real-time application
processing. Although HBase is not an exact substitute for a traditional RDBMS, it offers both linear and modular
scalability and strictly maintains consistency of reads and writes, which in turn enables automatic failover support. HBase is
not relational and does not support SQL, but given the right problem space it can do what an RDBMS cannot: host very large,
sparsely populated tables on clusters made from commodity hardware [18]. The canonical HBase use case is the webtable, a table of
crawled web pages and their attributes (such as language and MIME type) keyed by the page URL. The webtable is huge, with row
counts that run into the billions.

Pig (Programming Tool)

Pig is a high-level platform for creating MapReduce programs used with Hadoop. The language for this platform is called Pig Latin [19].
Pig was initially developed at Yahoo! to allow people using Hadoop to focus more on analyzing large data
sets and spend less time writing mapper and reducer programs. The Pig programming language is designed to handle any
kind of data. Apache Pig, which includes the Pig Latin programming language for expressing data flows, is a
high-level dataflow language which reduces the complexity of MapReduce by converting its operators into
MapReduce code. It uses SQL-like operations to be performed on large distributed datasets. Pig Latin abstracts the programming from
the Java MapReduce idiom into a notation which makes MapReduce programming high level, similar to that of SQL for
RDBMS systems [20]. Pig Latin can be extended using UDFs (User Defined Functions), which the user can write in
Java, Python, JavaScript, Ruby or Groovy and then call directly from the language.
Hive
Hive is a data warehouse infrastructure built on top of Hadoop for providing data summarization, querying, and analysis [1].
Initially developed by Facebook [21], Hive was created to make it possible for analysts with strong SQL skills to run queries on the

huge volumes of data that Facebook stored in HDFS. When starting Hive for the first time, we can check that
it is working by listing its tables: there should be none. The command must be terminated with a semicolon to tell Hive to execute it:

hive> SHOW TABLES;
OK
Hive lacks a few things compared to an RDBMS, however; for instance, it is best suited for batch jobs rather than
real-time application processing (fig 4). Hive lacks full SQL support and does not provide row-level inserts, updates or deletes. This
is where HBase, another Hadoop module, is worth considering [22].



Fig. 4

Zookeeper

ZooKeeper is a high-performance coordination service for distributed applications, in which distributed processes coordinate with one
another through a shared hierarchical namespace of data registers. ZooKeeper addresses specific concerns that
arise while planning and developing coordination services [23]. The configuration service helps store configuration
data and share that data across all nodes in the distributed setup. The naming service allows one node to
find a specific machine in a cluster of thousands of servers. The synchronization service provides the building blocks for
locks, barriers and queues. The locking service allows serialized access to a shared resource in the distributed system.
The leader-election service helps the system recover automatically from failures. ZooKeeper is highly
performant, too: at Yahoo!, where it was created, ZooKeeper's throughput has been benchmarked at more than 10,000 operations
per second for write-dominant workloads.

Oozie
Oozie is a Java web application that runs in a Java servlet container (Tomcat) and uses a database to store workflow definitions
and currently running workflow instances, including instance states and variables. An Oozie workflow is a collection of
actions (e.g. Hadoop Map/Reduce jobs, Pig jobs) arranged in a control-dependency DAG (Direct Acyclic Graph),
specifying a sequence of actions to be executed [10]. With so many Hadoop jobs running on different clusters, there was
a need for a scheduler, which is where Oozie came into the picture. The highlight of Oozie is that it combines multiple sequential
jobs into one logical unit of work. There are two basic types of Oozie jobs: Oozie Workflow Jobs, which are
Directed Acyclic Graphs specifying a sequence of jobs to be executed, and Oozie Coordinator Jobs,
which are recurrent Workflow Jobs triggered by date/time and data availability.

Ambari
Ambari is a tool for provisioning, managing, and monitoring Hadoop clusters. Its large collection of administrator tools
and APIs hides the complexity of Hadoop, thereby simplifying the operation of and on clusters. Regardless of the
size of the cluster, Ambari simplifies the deployment and maintenance of hosts. It preconfigures alerts for watching Hadoop
services, and it visualizes and displays cluster operations in a simple web interface. The job diagnostic
tools help visualize job interdependencies and view timelines of historical job execution for
troubleshooting [9]. The most recent version supports HBase multi-master, host controls and enhanced local
repository configuration.


Sqoop
Sqoop is a tool which provides a platform for the exchange of data between Hadoop and relational databases, data
warehouses and NoSQL datastores. The transformation of the imported data is carried out using MapReduce or another
high-level language like Pig, Hive or Jaql [1]. Sqoop imports a table from a database by running a MapReduce job that

extracts rows from the table and writes the records to HDFS. How does MapReduce read the rows? This section explains
how Sqoop works under the hood.





At a high level, the figure shows how Sqoop interacts with both the database source and Hadoop. Like Hadoop itself, Sqoop is
written in Java. Java provides an API called Java Database Connectivity, or JDBC, that allows applications to access data stored
in an RDBMS and to inspect the nature of this data.

YARN

Yet Another Resource Negotiator (YARN). The initial release of Hadoop faced problems because the cluster was tightly coupled with
Hadoop, and there were several cascading failures. This prompted the development of a framework called YARN [8]. Unlike
the previous version, the addition of YARN gives better scalability, cluster utilization and user agility. The incorporation of MapReduce as a
YARN framework provides full backward compatibility with existing MapReduce tasks and applications. YARN promotes effective use of resources
while providing a distributed environment for the execution of an application. Its advent has opened up the possibility
of building new applications on top of Hadoop.

JAQL
JAQL is a JSON-based query language, which is high level much like Pig Latin and MapReduce. To exploit massive
parallelism, JAQL converts high-level queries into low-level queries. Like Pig, JAQL does not impose the
requirement of having a schema [15]. JAQL supports various built-in functions and core operators. Input and output
operations in JAQL are performed using I/O adapters, which are responsible for processing, storing and interpreting data and
for returning results in JSON format.


Impala
Impala is an open source query engine for massively parallel processing, developed by Cloudera, that runs natively on Hadoop. The key
benefits of using Impala are that it can perform interactive analysis in real time and reduce data movement and duplicate
storage, thereby lowering costs and providing integration with leading Business Intelligence tools.

Flume
One very common use of Hadoop is taking web server or other logs from a large number of machines and
periodically processing them to extract analytics data. The Flume project is designed to make the data-gathering
process easy and scalable, by running agents on the source machines that pass data updates to collectors, which then
aggregate them into large chunks that can be efficiently written as HDFS files. Flume is typically set up using a command-line
tool that supports common operations, such as tailing a file or listening on a network socket, and it has tunable reliability
guarantees that let you trade off performance against the potential for data loss.

Hue
Hue stands for Hadoop User Experience. It is an open source GUI for Hadoop, developed by Cloudera. Its goal is to free the user
from worries about the underlying and backend complexity of Hadoop. It has an HDFS file browser, YARN & MapReduce
job browsers, HBase and ZooKeeper browsers, Sqoop and Spark managers, a query editor for Hive and Pig, an application for
Oozie workflows, access to a shell, and an application for Solr searches [12].

Chukwa
Chukwa is a data collection system for monitoring large distributed systems. It is built on top of HDFS and the
MapReduce framework and inherits Hadoop's scalability and robustness. It transfers data to collectors and saves the data to
HDFS [13]. It maintains data sinks which preserve raw unsorted data. A facility called Demux is used to add structure,
producing Chukwa records which eventually go to a database for analysis. It includes a flexible toolkit for
displaying, monitoring and analyzing results, to make the best use of the collected data.

Mahout
Mahout is an open source framework that can run common machine learning algorithms on massive datasets. To achieve that
scalability, most of the code is written as parallelizable jobs on top of Hadoop. Mahout is a scalable machine
learning library built on top of Hadoop, focusing on collaborative filtering, clustering and classification [11]. With data
growing at a faster rate every year, Mahout answers the need to move beyond yesterday's techniques to process
tomorrow's data. It comes with algorithms to perform many common tasks, such as clustering and classifying objects
into groups, recommending items based on other users' behavior, and spotting attributes that frequently occur together.
It is a heavily used project with an active community of developers and users, and it is well worth trying if
you have any large amount of transaction or similar data that you would like to get more value out of.

Cassandra
Cassandra was developed to address the limitations of traditional databases. It follows the NoSQL model and consequently provides linear scalability and fault tolerance by automatically replicating data to multiple nodes on commodity hardware or any available cloud infrastructure services. It offers low latency and tolerates local outages [14]. It is decentralized, elastic, and provides highly available asynchronous operations enhanced with various features.

Conclusion

Nowadays, although Hadoop may be well suited for large amounts of data, it is not the intended solution or a replacement for all problems. Only for data sets exceeding exabytes, demanding large-scale storage, scalability, complexity, and distributed files, is Hadoop a suitable option. Apart from elaborating on the capabilities of each framework, this paper gives insight into the functionalities of the different modules in the Hadoop ecosystem. With data growing continuously, it is apparent that Big Data and its applications are the technology of the future. Soon, almost all industries and organizations around the globe will adopt Big Data architecture for data management.

REFERENCES:

[1] Tom White, "Hadoop: The Definitive Guide", O'Reilly Media, 2012 Edition.
[2] Intel IT Center, "Planning Guide: Getting Started with Big Data".
[3] Academia.edu, "Processing Big Data using Hadoop Framework".
[4] Robert D. Schneider, "Hadoop for Dummies".
[5] Hadoop Distributed File System Architecture Guide, Online: http://hadoop.apache.org/docs/stable1/hdfs_design.html
[6] Donald Miner, Adam Shook, "MapReduce Design Patterns", O'Reilly Media, 2012 Edition.
[7] Jason Venner, "Pro Hadoop", Apress, 2009 Edition.
[8] Hadoop YARN (Yet Another Resource Negotiator) – Hortonworks, Online: http://hortonworks.com/hadoop/yarn/
[9] Apache Ambari – Hortonworks, Online: http://hortonworks.com/hadoop/ambari/
[10] Apache Oozie – Hortonworks, Online: http://hortonworks.com/hadoop/oozie/


[11] Sean Owen, Robin Anil, Ted Dunning, Ellen Friedman, "Mahout in Action", Manning, 2011 Edition.
[12] Apache Hue, Online: http://gethue.tumblr.com/
[13] Chukwa Processes and Data Flow, Online: http://wiki.apache.org/hadoop/Chukwa_Processes_and_Data_Flow/
[14] Eben Hewitt, "Cassandra: The Definitive Guide", O'Reilly Media, 2010 Edition.
[15] http://en.wikipedia.org/wiki/Jaql
[16] http://en.wikipedia.org/wiki/MapReduce
[17] http://en.wikipedia.org/wiki/Apache_HBase
[18] http://hbase.apache.org/
[19] http://pig.apache.org/
[20] http://en.wikipedia.org/wiki/Pig_(programming_tool)
[21] https://hive.apache.org/
[22] http://www-01.ibm.com/software/data/infosphere/hadoop/hive/
[23] Aaron Ritchie, Henry Quach, "Developing Distributed Applications Using ZooKeeper", Big Data University, Online: http://bigdatauniversity.com/bduwp/bdu-course/developin-distributed-applications-using-zookeeper














VLSI Based Design of Low Power and Linear CMOS Temperature Sensor
Poorvi Jain 1, Pramod Kumar Jain 2
1 Research Scholar (M.Tech), Department of Electronics and Instrumentation, SGSIS, Indore
2 Associate Professor, Department of Electronics and Instrumentation, SGSIS, Indore
E-mail: pjpoorvijain1@gmail.com
Abstract— A Complementary Metal Oxide Semiconductor (CMOS) temperature sensor is introduced in this paper, which aims at developing the MOSFET as a temperature sensing element operating in the sub-threshold region by using dimensional analysis and numerical optimization techniques. A linear CMOS temperature-to-voltage converter is proposed, which focuses on temperature measurement using the difference between the gate-source voltages of transistors that is proportional to absolute temperature, with low power. The proposed CMOS temperature sensor is able to measure temperatures from 0 °C to 120 °C. A comparative study is made between temperature sensors based on their aspect ratio, implemented in a UMC 180 nm CMOS process with a single-rail power supply of 600 mV.
Keywords— Aspect ratio, CMOS (Complementary Metal Oxide Semiconductor), MOSFET (Metal Oxide Semiconductor Field Effect Transistor), Sub-threshold, Temperature sensor, Low power, Linearity.
INTRODUCTION
An important issue for powerful, high-speed computing systems (containing microprocessor cores and high speed DRAM) is thermal
management. This is of special concern with laptops and other portable computing devices where the heat sinks and/or fans can only
help dissipate the heat to a limited degree. This makes variations in clock frequency and/or variation in modes of device operation for
DRAM, Flash, and other systems necessary. On-chip smart CMOS temperature sensors have been commonly used for thermal
management in these applications. The main factors to consider for a temperature sensor are as follows:
Power: In VLSI implementations many small devices are incorporated, resulting in higher and higher levels of integration and causing excessive heat dissipation. There is therefore a need to reduce power, which also reduces the production cost; for this purpose the power consumption must be in the nanowatt range.
Area: Series-connected MOSFETs used as a current sink increase the die area of the design, and the sizing of the transistors also plays an important role in deciding the chip area. The area should be small, approximately 0.002 mm².
Start-up circuit: A start-up circuit is required in the design if the transient response of the sensor takes a significant amount of time to reach steady state. If the steady-state time is less than 200 ms in the worst case, the start-up circuit becomes unnecessary.
As CMOS technology scales down, the supply voltage also scales down from one generation to the next, and it becomes difficult to guarantee that all transistors work in saturation as the supply voltage drops. Therefore, the traditional temperature sensor configuration is not suitable for ultra-low-voltage applications, and the sensor must incorporate some modifications; this modification can be brought about by making the MOS transistors work in the sub-threshold region.
This paper presents a nanowatt integrated temperature sensor for ultra-low-power applications, such as battery-powered portable devices, designed and simulated using Cadence analog and digital system design tools in UMC 180 nm CMOS technology. Ultra-low power consumption is achieved through the use of sub-threshold (also known as weak inversion) MOS operation. Transistors are used in this domain because the current is exponentially dependent on the control voltages of the MOSFET and they draw small currents, reducing power consumption. The sensor sinks current in nano-amperes from a single power supply of 0.6 V, and its power consumption is in nanowatts. The performance of the sensor is highly linear in the range of 0–120 °C.


PROPOSED SCHEME
The proposed CMOS temperature sensor, as shown in Fig 1, consists of three main blocks:

Fig 1. Circuit diagram of CMOS temperature sensor
(i) Current source sub-circuit: Analog circuits incorporate current references, which are self-biasing circuits. Such references are DC quantities that exhibit little dependence on supply and process parameters and a well-defined dependence on temperature.
(ii) Temperature variable sub-circuit: The temperature variable sub-circuit consists of three pairs of serially connected transistors operating in the sub-threshold region. It accepts current through PMOS current mirrors, which give a replica (attenuated or amplified, if necessary) of a bias or signal current, and produces an output voltage proportional to temperature.
(iii) One-point calibration sub-circuit: Calibration consists of determining the indication or output of a temperature sensor with respect to that of a standard at a sufficient number of known temperatures so that, with acceptable means of interpolation, the indication or output of the sensor will be known over the entire temperature range of use. After packaging, the sensor is calibrated by measuring its die temperature at a reference point using on-chip calibration transistors.
METHODOLOGY
The sub-threshold drain current I_D of a MOSFET is an exponential function of the gate-source voltage V_GS and the drain-source voltage V_DS, and is given by [8]:

I_D = K I_0 exp[(V_GS − V_TH)/(η V_T)] × [1 − exp(−V_DS/V_T)]   (1)

where

I_0 = μ C_OX (η − 1) V_T²   (2)

and K is the aspect ratio (= W/L) of the transistor, μ is the carrier mobility, C_OX is the gate-oxide capacitance, V_T is the thermal voltage, V_TH is the threshold voltage of the MOSFET, and η is the sub-threshold slope factor.
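To make the temperature dependence of Eqs. (1)-(2) concrete, the short sketch below evaluates the sub-threshold current numerically. All device values used here (V_TH, η, μC_OX, and the bias voltages) are illustrative assumptions, not parameters extracted in this paper.

import numpy as np

# Numerical illustration of Eqs. (1)-(2); parameter values are assumptions.
q, k_B = 1.602e-19, 1.381e-23               # electron charge (C), Boltzmann constant (J/K)

def drain_current(V_GS, V_DS, T, K=1.0, V_TH=0.45, eta=1.3, mu_Cox=3.0e-4):
    """Sub-threshold I_D = K*I0*exp((V_GS-V_TH)/(eta*V_T))*(1-exp(-V_DS/V_T))."""
    V_T = k_B * T / q                       # thermal voltage (V)
    I0 = mu_Cox * (eta - 1.0) * V_T ** 2    # Eq. (2)
    return K * I0 * np.exp((V_GS - V_TH) / (eta * V_T)) * (1 - np.exp(-V_DS / V_T))

# The current rises steeply with temperature at fixed bias:
for T in (273.15, 323.15, 393.15):          # 0, 50 and 120 degC
    print(f"T = {T - 273.15:5.1f} degC   I_D = {drain_current(0.3, 0.6, T):.3e} A")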


In the current-source sub-circuit, the gate-source voltage V_GS9 is equal to the sum of the gate-source voltage V_GS8 and the drain-source voltage V_DS10:

V_GS9 = V_DS10 + V_GS8   (3)

V_DS10 = V_GS9 − V_GS8 = η V_T ln(K_8/K_9)   (4)

M10 is operated in the sub-threshold region, so its conductance G_DS10 is obtained by using Eqs. (1) and (4):

G_DS10 = ∂I_D10/∂V_DS10 = (K_10 I_0/V_T) exp[(V_GS10 − V_TH10)/(η V_T)] exp(−V_DS10/V_T)   (5)

I = G_DS10 × V_DS10 = η K_10 I_0 ln(K_8/K_9) exp[(V_GS10 − V_TH10)/(η V_T)] exp(−V_DS10/V_T)   (6)
As M10 operates in the sub-threshold region (V_GS10 − V_TH10 < 0), I increases with temperature, so the highest power consumption occurs at the upper temperature limit. Choosing the maximum current is a tradeoff between power consumption and linearity that can be resolved by simulation. In the temperature variable sub-circuit, since M5, M6, M15, M12, M16 and M17 are in the sub-threshold region, the relation between gate-source voltage and MOS current follows Eq. (4). According to Fig. 1, the currents of M5, M12, M16 and M17 are I, and the currents of M6 and M15 are 3I and 2I. The transistor sizes in our design are simple, all having the same aspect ratio.

V_out = (V_GS6 − V_GS5) + (V_GS15 − V_GS12) + (V_GS16 − V_GS17)   (7)
By using Eq. (4) with regard to the currents of the MOSFETs, the output voltage is given by:

V_out = η V_T ln[(I_6 I_15 I_16 K_5 K_12 K_17)/(I_5 I_12 I_17 K_6 K_15 K_16)] + ΔV_TH   (8)

By replacing the currents of the transistors, the output voltage is obtained as:

V_out = η V_T ln[(6 K_5 K_12 K_17)/(K_6 K_15 K_16)] + ΔV_TH   (9)

By combining Eqs. (7) and (9), the output voltage can be written as:

V_out = η (k_B/q) ln[(6 K_5 K_12 K_17)/(K_6 K_15 K_16)] T + ΔV_TH0 = A × T + B   (10)
where T is absolute temperature and A and B are temperature-independent constants. Eq. (10) shows a linear relationship between absolute temperature and output voltage, as depicted in Fig 3. Based on the aspect ratio (W/L), the temperature sensor is designed in two ways:
(i) Temperature sensor based on designed W/L ratio: In this design all the MOS transistors used in the circuit are of different widths and lengths. By using large-length transistors (L_M6-11 >> L_min), the sensitivity to geometric variations can be minimized and an accurate temperature coefficient is expected. Large transistors also help to reduce the impact on threshold voltage due to random doping fluctuations. The W/L values of the corresponding MOS transistors are given in Table I.
(ii) Temperature sensor based on minimum W/L ratio: In this design all the MOS transistors used in the circuit have the same width and length, set by the minimum technology parameters: a width (W) of 240 nm and a length (L) of 180 nm. Since all aspect ratios are then equal, Eq. (10) becomes

V_out = η (k_B/q) T ln 6 + ΔV_TH0 = A × T + B.
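As a quick numeric check of Eq. (10), the sketch below evaluates the proportional-to-absolute-temperature output for the minimum-W/L case, where the logarithmic term reduces to ln 6. The slope factor η and the mismatch term ΔV_TH0 are illustrative assumptions, not values reported in this paper.

import numpy as np

# Numerical check of Eq. (10), minimum-W/L case: V_out = A*T + B with A = eta*(k_B/q)*ln(6).
q, k_B = 1.602e-19, 1.381e-23
eta, dV_TH0 = 1.3, 0.0                      # assumed slope factor and mismatch term

A = eta * (k_B / q) * np.log(6.0)           # slope A in V/K

def v_out(T_celsius):
    return A * (T_celsius + 273.15) + dV_TH0

print(f"slope A = {A * 1e3:.3f} mV/K")
for Tc in (0.0, 60.0, 120.0):
    print(f"T = {Tc:5.1f} degC   V_out = {v_out(Tc) * 1e3:.1f} mV")

With these assumed values the slope works out to about 0.2 mV/K, the same order of magnitude as the 0.354 mV/°C sensitivity reported for the minimum-W/L sensor in Table III.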

Table I. Size of transistors

Transistor          W/L (µm/µm)
M1                  1 (1.5/20)
M2                  10 (3/3)
M3                  4 (3/3)
M6, M8, M10         1 (1/3)
M7                  3 (3/3)
M9                  4 (3/3)
M11                 28 (3/3)
M4, M5, M12-M14     1 (3/10)
MC1, MC2            1 (1/20)



Fig 2. The linear relationship of output voltage and temperature of a temperature sensor based on designed W/L. Fig 3. The linear relationship of output voltage and temperature of a temperature sensor based on minimum W/L.


Fig 4. Sink current versus temperature graph for minimum W/L sensor. Fig 5. Sink current versus temperature graph for designed W/L sensor.
Fig 6. Power versus temperature graph for temperature sensor based on designed W/L. Fig 7. Power versus temperature graph for temperature sensor based on minimum W/L.



Fig 8. Transient response of designed W/L sensor at temperature 17 °C. Fig 9. Transient response of minimum W/L sensor at temperature 17 °C.
Table II. Comparison of temperature sensor with previous works

Sensor                      Power supply   Power cons.   Temp. range   Inaccuracy        Process
[1]                         0.5, 1 V       119 nW        -10–30 °C     -0.8 to +1 °C     180 nm CMOS
[2]                         –              38.5 µW       –             ±2 °C             65 nm CMOS
[3]                         2.7–5.5 V      429 µW        -50–125 °C    ±0.5 °C           0.5 µm CMOS
[4]                         1 V            220 nW        0–100 °C      -1.6 to +3 °C     180 nm CMOS
[5]                         3.0–3.8 V      10 µW         0–100 °C      -0.7 to +0.9 °C   0.35 µm CMOS
[6]                         1 V            25 µW         +50–125 °C    -1 to +0.8 °C     90 nm CMOS
[7]                         –              8.6 µW        -55–125 °C    ±0.4 °C           160 nm CMOS
[8]                         0.6–2.5 V      7 nW          +10–120 °C    ±2 °C             180 nm CMOS
[This work, designed W/L]   0.6–2.5 V      12.5 nW       0–120 °C      ±3 °C             180 nm CMOS
[This work, minimum W/L]    0.6–2.5 V      1.05 nW       0–120 °C      ±6–7 °C           180 nm CMOS

SIMULATION RESULTS AND DISCUSSION
The proposed linear temperature sensor incorporates semiconductor devices and is capable of high accuracy over a very wide temperature range; its output voltage varies approximately linearly, with negative or positive dependence on temperature. The linear relationship of output voltage and temperature at a supply voltage of 600 mV is shown in Fig 2 and Fig 3. The temperature sensor based on designed W/L sinks up to 28 nA over a wide range of temperature with a VDD of 600 mV, as given in Fig 5. As the sink current increases exponentially with temperature, its power consumption also increases; the overall power consumption of this design is higher than that of the temperature sensor based on the minimum W/L ratio, shown in Fig 4. As the temperature increases, the power consumed by the sensor also increases: the power consumption is merely 12.5 nW at 120 °C, as shown in Fig 7, which is still more than that of the temperature sensor based on minimum W/L, given in Fig 6. At a temperature of 15 °C, sustained oscillations are obtained in the case of the designed-aspect-ratio temperature sensor; at 17 °C a smooth response is obtained spontaneously, shown in Fig 8, and on further increasing the temperature the oscillations again become dominant. This indicates that 17 °C gives the best transient response. In the case of the minimum aspect ratio, 17 °C is likewise the suitable temperature for the transient response, given in Fig 9. The transient response of the temperature sensor based on designed W/L is closer to a practically realizable (unit-step-like) response than that of the temperature sensor based on minimum W/L.
CONCLUSION

This research investigates an ultra-low-power temperature sensor. Tables II and III show the comparison of the designed sensor with previous works and its performance summary. As oscillations were pronounced in the transient response of the temperature sensor, they can be eliminated by using a Proportional-Integral-Derivative controller based on the IMC approach, so as to obtain a smooth steady-state response at a particular temperature. The transient response is helpful in determining the need for a start-up circuit: if the steady-state time is less than 200 ms, there is no need for one. As a result, the temperature sensors based on the two aspect-ratio approaches are each significant according to their performance on the relevant desired characteristics. The layout area of the sensor is shown in Fig 10 and Fig 11. From the area and power point of view, the temperature sensor based on minimum aspect ratio is preferred, whereas considering linearity, temperature inaccuracy, and transient response, the temperature sensor based on designed aspect ratio is superior.
Table III. Performance Summary

Parameter                       Designed W/L sensor               Minimum W/L sensor
Power supply                    0.6–2.5 V                         0.6–2.5 V
Power consumption               12.5 nW @ 120 °C, VDD = 0.6 V     1.05 nW @ 120 °C, VDD = 0.6 V
Circuit area                    0.0076 mm²                        0.00013 mm²
Inaccuracy versus temperature   3 °C                              6–7 °C
Inaccuracy versus VDD           0.52 °C/V                         0.47 °C/V
Sensitivity                     1.41 mV/°C                        0.354 mV/°C
Transient response              Stable at 17 °C                   Sustained oscillations
Sink current                    28 nA @ 120 °C, VDD = 0.6 V       5 nA @ 120 °C, VDD = 0.6 V
Transconductance                5.005 nA/V @ 25 °C, VDD = 0.6 V   0.815 nA/V @ 25 °C, VDD = 0.6 V
Transresistance                 22.7 MΩ @ 25 °C, VDD = 0.6 V      416.6 MΩ @ 25 °C, VDD = 0.6 V



Fig 10. The layout of the CMOS temperature sensor based on minimum aspect ratio. Fig 11. The layout of the CMOS temperature sensor based on designed aspect ratio.

REFERENCES:
[1] Law, M. K., Bermak, A., & Luong, H. C. A sub-µW embedded CMOS temperature sensor for RFID food monitoring application. IEEE Journal of Solid-State Circuits, 45(6), 1246–1255, (2010).
[2] Intel Pentium D Processor 900 Sequence and Intel Pentium Processor Extreme Edition 955 Datasheet. On 65 nm process in the 775-land LGA package and supporting Intel Extended Memory 64 Technology, and supporting Intel Virtualization Technology, Intel Corp., Document 310306-002, (2006).
[3] Pertijs, M. A. P., Niederkorn, A., Xu, M., McKillop, B., Bakker, A., & Huijsing, J. H. A CMOS smart temperature sensor with a 3σ inaccuracy of ±0.5 °C from -50 to 120 °C. IEEE Journal of Solid-State Circuits, 40(2), 454–461, (2005).
[4] Lin, Y. S., Sylvester, D., & Blaauw, D. An ultra low power 1 V, 220 nW temperature sensor for passive wireless applications. In Custom Integrated Circuits Conference (CICC), IEEE, pp. 507–510, (2008).
[5] Chen, P., Chen, C. C., Tsai, C. C., & Lu, W. F. A time-to-digital-converter-based CMOS smart temperature sensor. IEEE Journal of Solid-State Circuits, 40(8), 1642–1648, (2005).
[6] M. Sasaki, M. Ikeda, and K. Asada, "A temperature sensor with an inaccuracy of -1/+0.8 °C using 90-nm 1-V CMOS for online thermal monitoring of VLSI circuits," IEEE Trans. Semiconductor Manufacturing, vol. 21, no. 2, pp. 201–208, May (2005).
[7] Souri, K., Chae, Y., Ponomarev, Y., & Makinwa, K. A. A precision DTMOST-based temperature sensor. In Proceedings of the ESSCIRC, pp. 279–282, (2011).
[8] Sahafi, A., Sobhi, J., & Daie Koozehkanani, Z., "Nanowatt CMOS temperature sensor," Analog Integrated Circuits and Signal Processing (Springer), 75: 343–348, (2013).
[9] Ueno, K., Asai, T., & Amemiya, Y. Low-power temperature-to-frequency converter consisting of sub-threshold CMOS circuits for integrated smart temperature sensors. Sensors and Actuators A: Physical, 165, 132–137, (2011).
[10] Balachandran, G. K., & Barnett, R. E. A 440-nA true random number generator for passive RFID tags. IEEE Transactions on Circuits and Systems I: Regular Papers, 55(11), 3723–3732, (2008).
[11] Bruce W. Ohme, Bill J. Johnson, and Mark R. Larson, "SOI CMOS for extreme temperature applications," Honeywell Aerospace, Defense and Space, Honeywell International, Plymouth, Minnesota, USA, (2012).
[12] Q. Chen, M. Meterelliyoz, and K. Roy, "A CMOS thermal sensor and its applications in temperature adaptive design," Proc. of the 7th Int'l Symposium on Quality Electronic Design, (2006).
[13] Man Kay Law and A. Bermak, "A 405-nW CMOS temperature sensor based on linear MOS operation," IEEE Transactions on Circuits and Systems, vol. 56, no. 12, December (2009).















Analysis of Advanced Techniques to Eliminate Harmonics in AC Drives
Amit P. Wankhade 1, Prof. C. Veeresh 2
2 Assistant Professor, MIT Mandsaur
E-mail: amitwankhade03@gmail.com

Abstract— Variable speed AC drives are finding their place in all types of industrial and commercial loads. This work covers current source converter technologies, including pulse-width-modulated current-source inverters (CSIs); in addition, it addresses the present status of direct converters and gives an overview of the commonly used modulation schemes for VFD systems. The proposed workflow is to simulate a three-phase PWM current source inverter fed induction motor (CSI-IM) drive system using Matlab/Simulink. This work primarily presents a unified approach for generating pulse-width-modulated patterns for three-phase current-source rectifiers and inverters (CSRs/CSIs) that provides unconstrained selective harmonic elimination and fundamental current control. The conversion process generates harmonics in the motor current waveform. This project deals with the analysis of the motor current harmonics using FFT analysis and the use of a filter for mitigating them, for smooth operation of the motor. The filter used for the reduction of harmonics is a passive filter, designed to reduce only the 5th and 7th order harmonics. The analysis of the motor current harmonics is done first without the filter and then compared with the results after the addition of the filter. It is found that the 5th and 7th order harmonics are reduced considerably.
Keywords— Harmonics, Total harmonic distortion (THD), variable frequency drive (VFD), power factor, current source inverter (CSI), Fast Fourier Transform (FFT).
INTRODUCTION
The proposed work is based on a current source inverter fed induction motor scheme. At the front end a current source rectifier is connected, which converts the 6.6 kV AC voltage into DC by rectifying it. The inverter converts the DC voltage back into AC and supplies it to the induction motor. The switches used in the rectifier and inverter are GTOs and SCRs, which require triggering pulses. The triggering pulses are provided by a discrete six-pulse generator connected to the gates of both the rectifier and the inverter, each having six switching devices. Due to the switching processes, harmonics are produced in the system. The output of the inverter is AC but not sinusoidal, owing to the switching time taken by the switches; it has a quasi-square form, which is the main cause of harmonics. As six switches are used, the harmonics dangerous to the system are the 5th and 7th, so the main focus is on reducing these harmonic orders. For this purpose a low-pass filter is used: an LC filter designed by selecting the values of inductor and capacitor, i.e., a passive filter. The output of the induction motor is given to the bus bar, which shows the stator, rotor, and mechanical quantities. As the main focus is on the stator-side current, the stator quantities are taken from the bus bar, and a scope is connected to observe the waveforms.
METHODOLOGY
Adding a variable frequency drive (VFD) to a motor-driven system can offer potential energy savings in a system in which the loads vary with time. The operating speed of a motor connected to a VFD is varied by changing the frequency of the motor supply voltage. This allows continuous process speed control. Motor-driven systems are often designed to handle peak loads with a safety factor, which often leads to energy inefficiency in systems that operate for extended periods at reduced load. The ability to adjust motor speed enables closer matching of motor output to load and often results in energy savings. The VFD basically consists of a rectifier section, which converts the AC supply into DC; a DC choke, which smooths the DC output current; and an inverter section, which converts DC back into AC to feed the induction motor. The VFD uses switching devices such as diodes, IGBTs, GTOs, SCRs, etc. [1]


Fig.1: Generalized Variable Frequency Drive
A VFD can be divided into two main sections:
A. Rectifier stage: A full-wave, solid-state rectifier converts three-phase 50 Hz power from a standard 208, 460, 575 or higher-voltage utility supply to either a fixed or an adjustable DC voltage.
B. Inverter stage: Electronic switches (power transistors or thyristors) switch the rectified DC voltage on and off, producing a current or voltage waveform at the desired new frequency. The amount of distortion depends on the design of the inverter and filter.

III. SYSTEM SIMULATION
The proposed work is based on a current source inverter fed induction motor scheme. At the front end a current source rectifier is connected, which converts the 6.6 kV AC voltage into DC by rectifying it. To smooth this voltage before applying it to the inverter, a DC choke coil is used, which removes the ripples. The inverter converts the DC voltage back into AC and supplies it to the induction motor. The switches used in the rectifier and inverter are GTOs and SCRs, which require triggering pulses; these are provided by a discrete six-pulse generator connected to the gates of both the rectifier and the inverter, with six switching devices in each section. Due to the switching processes, harmonics are produced in the system. The output of the inverter is AC but not sinusoidal, owing to the switching time taken by the switches; it has a quasi-square form, which is the main cause of harmonics. As six switches are used, the harmonics dangerous to the system are the 5th and 7th, so the main focus is on reducing these harmonic orders. For this purpose a low-pass LC filter is used, designed by selecting the values of inductor and capacitor (a sizing sketch follows below); it is thus a passive filter. The output of the induction motor is given to the bus bar, which shows the stator, rotor, and mechanical quantities. As the main focus is on the stator-side current, the stator quantities are taken from the bus bar, and a scope is connected to observe the waveforms. An FFT block is connected to the motor current of any one phase whose harmonic orders are to be found; an FFT spectrum window connected to this block displays the harmonic orders from 0 to the 19th, and a bar graph shows the magnitudes reported by the FFT spectrum. The work is thus divided into two parts, before and after the use of the filter. After running the simulation, it is observed that the 5th and 7th harmonic components are reduced compared with the case without the filter, as shown by the FFT spectrum block.
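Since the filter is specified only as an LC network acting against the 5th and 7th harmonics, the sketch below shows one conventional way to size such a branch from the series-resonance condition; the capacitance value is an illustrative assumption, as the paper does not list its component values.

import math

# Sizing a series-tuned LC branch for one harmonic from f_r = 1/(2*pi*sqrt(L*C)).
f1 = 50.0                                   # fundamental frequency (Hz)

def tuned_inductance(n, C):
    """Inductance (H) tuning a series LC branch to the n-th harmonic."""
    return 1.0 / ((2 * math.pi * n * f1) ** 2 * C)

C = 50e-6                                   # assumed branch capacitance (F)
for n in (5, 7):
    L = tuned_inductance(n, C)
    print(f"{n}th harmonic branch: C = {C * 1e6:.0f} uF, L = {L * 1e3:.2f} mH")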



Fig.2: Simulation Diagram of CSI Fed Induction Motor.


Table: Induction Motor Specifications

Motor supply voltage                              6600 V
Horse power rating of motor                       200 HP
Supply frequency                                  50 Hz
Stator resistance [Rs], stator inductance [Ls]    1.485 Ω, 0.03027 H
Pole pairs                                        2
IV. HARMONICS
Harmonics are a major problem in any industrial drive. They cause serious problems in the motor connected as a load fed from the VFD. Here the VFD is a current source inverter (CSI) fed drive: at the front end a current source rectifier converts the 6.6 kV AC voltage into DC, a DC choke coil smooths this voltage and removes the ripples before it is applied to the inverter, and the inverter converts the DC voltage back into AC to supply the induction motor. The GTO and SCR switches used in the rectifier and inverter require triggering pulses, which are supplied by the discrete six-pulse generator connected to the gates of both converters, each section containing six switching devices. The switching process produces harmonics in the system: the inverter output is AC but not sinusoidal, having a quasi-square form because of the switching times, and with six switches the harmonics dangerous to the system are the 5th and 7th. The main focus is therefore on reducing these orders with a low-pass filter. Total harmonic distortion (THD) expresses the contribution of all harmonic currents relative to the fundamental, THD = [Σ_{n≥2} I_n²]^(1/2) / I_1.
Table 3.2.1: Harmonics & Multiples of Fundamental Frequencies

Nonlinear loads such as AC-to-DC rectifiers produce distorted waveforms; harmonics are present in waveforms that are not perfect sine waves due to distortion from nonlinear loads. In the early nineteenth century the French mathematician Fourier showed that a distorted waveform can be represented as a series of sine waves, each an integer multiple of the fundamental frequency and each with a specific magnitude. For example, the 5th harmonic on a system with a 50 Hz fundamental has a frequency of 5 × 50 Hz, or 250 Hz. These higher-order waveforms are called "harmonics". The collective sum of the fundamental and each harmonic is called a Fourier series. This series can be viewed as a spectrum analysis in which the fundamental frequency and each harmonic component are displayed [8]. The magnitudes in per unit are shown in the bar chart of Figure 3.

Figure 3: Harmonics order in per unit with respect to fundamental
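The Fourier-series claim above can be checked numerically: the sketch below decomposes an idealized square wave (amplitude ±1) with NumPy's FFT and compares the measured odd-harmonic amplitudes with the analytic Fourier coefficients 4/(nπ). The waveform is a stand-in illustration, not the simulated motor current.

import numpy as np

# Fourier decomposition of an ideal square wave: odd harmonics with amplitude 4/(n*pi).
f1, fs = 50.0, 10000.0
t = np.arange(0, 10 / f1, 1 / fs)           # ten fundamental cycles
square = np.sign(np.sin(2 * np.pi * f1 * t))

spec = np.abs(np.fft.rfft(square)) * 2 / len(t)   # peak amplitude per bin
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for n in (1, 3, 5, 7):
    a = spec[np.argmin(np.abs(freqs - n * f1))]
    print(f"harmonic {n}: measured {a:.3f}, theory 4/(n*pi) = {4 / (n * np.pi):.3f}")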
Given that harmonic currents flow in an AC drive with a 6-pulse front end, consider what problems, if any, this may cause. Power is only transferred through a distribution line when current is in phase with voltage; this is the reason for concern about input power factor. Displacement power factor for a motor running across the line is the cosine of the phase angle between current and voltage. Since a motor is an inductive load, current lags voltage by about 30 to 40 degrees when loaded, making the power factor about 0.75 to 0.8, as opposed to about 0.95 for many PWM AC drives. For a resistive load the power factor would be 1, or unity; in that case all of the current flowing results in power being transferred. Poor power factor (less than unity) means reactive current that does not contribute power is flowing.

CONTROL STRATEGIES

Induction motor control can be achieved with the help of variable frequency drives. High-power drives can be divided into sub-categories depending on the areas of application; the following chart shows the high-power drive schemes.

Figure 4. Chart showing types of VFD schemes
Adding a variable frequency drive (VFD) to a motor-driven system can offer potential energy savings in a system in which the loads vary with time. VFDs belong to a group of equipment called adjustable speed drives or variable speed drives (variable speed drives can be electrical or mechanical, whereas VFDs are electrical). The operating speed of a motor connected to a VFD is varied by changing the frequency of the motor supply voltage, allowing continuous process speed control. Motor-driven systems are often designed to handle peak loads with a safety factor, which often leads to energy inefficiency in systems that operate for extended periods at reduced load. The ability to adjust motor speed enables closer matching of motor output to load and often results in energy savings.
OVERALL WORKING OF MODEL
As in the system described above, a current source rectifier at the front end converts the 6.6 kV AC supply into DC, a DC choke coil removes the ripples, and the current source inverter converts the DC back into AC to feed the induction motor, with the discrete six-pulse generator triggering the GTO/SCR switches (six per section) of both converters. The switching gives the inverter output its quasi-square form and thereby produces harmonics, of which the 5th and 7th are the most harmful; a passive low-pass LC filter, with suitably selected inductor and capacitor values, is used to reduce them. The stator quantities are taken from the bus bar, and a scope is connected to observe the waveforms. An FFT block on one phase of the motor current feeds an FFT spectrum window that displays the harmonic orders from 0 to the 19th, together with a bar graph of their magnitudes, so the work is split into the cases before and after the filter. After running the simulation, it is observed that the 5th and 7th harmonic components are reduced relative to the unfiltered case, as shown by the FFT spectrum block. The total harmonic distortion is also found by connecting a THD block available in the Simulink library, and it too is reduced after the filter is added. A single-tuned filter can likewise be used, targeting only the one harmonic frequency component to be reduced. The FFT analysis can also be done using powergui, which contains an FFT tool; the motor current signal is imported into the workspace by connecting a simout block to the scope and selecting the structure-with-time option. The proposed work further applies wavelet analysis to the motor current signal, performed in two ways, the first by programming in an M-file. The program compares the motor current without and with the filter. Wavelet analysis uses the time-scaling technique, so the low- and high-frequency components of the motor current are compared; it is observed that the higher-frequency components become zero after the filter is used.
FFT ANALYSIS OF MOTOR CURRENT
The FFT analysis of the motor current is done in two steps, i.e., without the filter and after the addition of the filter circuit. The stator quantities are taken from the bus bar and a scope is connected to observe the waveforms. An FFT block is connected to the motor current of one phase whose harmonic orders are to be found; the attached FFT spectrum window displays the harmonic orders from 0 to the 19th, along with a bar graph of their magnitudes. After running the simulation, the 5th and 7th harmonic components are seen to be reduced with the filter compared with the unfiltered case, as shown by the FFT spectrum block. The total harmonic distortion, found with the THD block from the Simulink library, is also reduced after the use of the filter. A single-tuned filter could alternatively be used to suppress a single chosen harmonic component, and the same analysis can be carried out with the FFT tool contained in powergui. A script equivalent of this analysis is sketched below.
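As a rough script equivalent of the FFT-block workflow described above, the following sketch measures the 5th and 7th harmonic magnitudes and the THD of a phase current with NumPy. The synthetic waveform is an assumption standing in for the signal exported from the Simulink model.

import numpy as np

# FFT-based harmonic analysis of one phase current (synthetic stand-in signal).
f1, fs = 50.0, 10000.0                      # fundamental (Hz), sampling rate (Hz)
t = np.arange(0, 10 / f1, 1 / fs)           # ten fundamental cycles
i_a = (100 * np.sin(2 * np.pi * f1 * t)
       + 20 * np.sin(2 * np.pi * 5 * f1 * t)    # 5th harmonic, 250 Hz
       + 14 * np.sin(2 * np.pi * 7 * f1 * t))   # 7th harmonic, 350 Hz

spectrum = np.abs(np.fft.rfft(i_a)) * 2 / len(i_a)   # peak amplitude per bin
freqs = np.fft.rfftfreq(len(i_a), 1 / fs)

def mag(n):
    """Amplitude of the n-th harmonic (nearest FFT bin)."""
    return spectrum[np.argmin(np.abs(freqs - n * f1))]

thd = np.sqrt(sum(mag(n) ** 2 for n in range(2, 20))) / mag(1)
print(f"5th: {mag(5):.2f} A   7th: {mag(7):.2f} A   THD: {100 * thd:.1f} %")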

Figure 5: Bar-graph showing magnitude of harmonics without filter
FFT analysis of motor current with LC filter
The FFT analysis of the motor current harmonics is repeated after adding the filter. An FFT block is connected to the motor current of one phase whose harmonic orders are to be found, as seen in the diagram; once the simulation is run, the connected FFT spectrum window displays the harmonic orders from 0 to the 19th. As six switches are used in both the current source rectifier and the current source inverter, the 5th and 7th order harmonics are of most concern. A bar graph displays the orders shown by the FFT spectrum. The analysis gives magnitudes of 6.19 A and 6.18 A for the 5th and 7th harmonic components, respectively.


Figure 6. Bar-graph showing magnitude of harmonics with filter
VII. Conclusion
The simulation of the CSI fed induction motor drive showed harmonics in the motor current. These harmonics are a byproduct of the switching devices used in the rectifier and inverter sections. Of all the harmonic orders, the 5th and 7th cause problems because 6-pulse rectifier and inverter sections are used. For the reduction of harmonics an LC filter with typical values of inductor and capacitor has been used; thus the reduction of the 5th and 7th harmonic components is achieved by a passive filter.
REFERENCES:
1. K. H. J. Chong and R. D. Klug, "High power medium voltage drives," in Proc. PowerCon, vol. 1, pp. 658–664, Nov. 21–24, 2004.
2. Bin Wu, S. B. Dewan and G. R. Slemon, "PWM-CSI inverter induction motor drives," IEEE Trans. Industry Applications, vol. 28, no. 1, pp. 64–71, Jan. 1992.
3. P. M. Espelage and J. M. Nowak, "Symmetrical GTO current source inverter for wide speed range control of 2300 to 4160 volt, 350 to 7000 hp induction motors," IEEE IAS Annual Meeting, pp. 302–307, 1988.
4. M. Salo and H. Tuusa, "A vector-controlled PWM current-source-inverter fed induction motor drive with a new stator current control method," IEEE Trans. Ind. Electron., vol. 52, no. 2, pp. 523–531, Apr. 2005.
5. H. Karshenas, H. Kojori, and S. Dewan, "Generalized techniques of selective harmonic elimination and current control in current source inverters/converters," IEEE Trans. Power Electron., vol. 10, pp. 566–573, Sept. 1995.
6. B. Wu, G. R. Slemon, and S. B. Dewan, "Stability analysis of GTO-CSI induction machine drive using constant rotor frequency control," in Proc. 6th Int. Conf. Elect. Machines and Drives, pp. 576–581, 1993.
7. J. Espinoza and G. Joos, "On-line generation of gating signals for current source converter topologies," ISIE, pp. 674–678, 1993.





















Fitting Performance of Empirical and Theoretical Soil Water Retention
Functions and Estimation of Statistical Pore-Size Distribution-Based
Unsaturated Hydraulic Conductivity Models for Flood Plain Soils
Alka Ravesh 1, R.K. Malik 2
1 Assistant Professor, Department of Applied Sciences, Savera Group of Institutions, Farrukhnagar, Gurgaon, Haryana, India
2 Professor of Hydrology and Water Resources Engineering and Head, Department of Civil Engineering, Amity School of Engineering and Technology, Amity University, Gurgaon, Haryana, India
E-mail: rkmalik@ggn.amity.edu

Abstract— For identifying the soil water retention function with the best fitting performance, the empirical retention functions of Brooks-Corey and van Genuchten and the theoretical function of Kosugi were parameterized for the clay loam and silt loam flood plain soils. The parameters were optimized using the non-linear least-squares optimization technique as implemented in the RETC code, and these were used in Mualem's statistical pore-size distribution-based unsaturated hydraulic conductivity models. It was observed that the log-normal function of Kosugi gave an excellent fitting performance, having the highest coefficient of determination and the lowest residual sum of squares. The physically-based Kosugi function was followed by the empirical functions of van Genuchten and Brooks-Corey in their fitting performances, respectively.
Keywords— Soil water retention functions, Brooks-Corey, van Genuchten, Kosugi, RETC computer code, parameterization, fitting performance, Mualem-based hydraulic conductivity models, model estimation.
INTRODUCTION
Modeling of water dynamics within the partially-saturated soil profile of a specific textural class requires knowledge of the
related soil hydraulic characteristics viz: soil water retention functions and soil hydraulic conductivity models and has applications in
analyzing the hydrological, environmental and solute transport processes within the soil profile. Different functions have been
proposed by various investigators and were reviewed [1]. For estimation of these functions, direct and indirect methods have been
employed and in 2005 these have been discussed by Durner and Lipsius [2]. They reported that the direct measurement of unsaturated
hydraulic conductivity is considerably more difficult and less accurate and they further suggested the use of indirect method using
easily measured soil water retention data from which soil water retention functions can be developed. These retention functions, either
empirical or theoretical expressions, fitting the observed soil water retention data to different extents having the specific number of
parameters are further embedded into the statistical pore-size distribution-based relative hydraulic conductivity models to develop
corresponding predictive theoretical unsaturated hydraulic conductivity models having the same parameters as in the corresponding
soil water retention functions given the saturated hydraulic conductivity and the related tortuosity factor. The estimation of the
parameters of the retention functions is, therefore, important. In 2012 Solone, et al. [3] reported that the parameterization of the soil
water retention functions can be obtained by fitting the function to the observed soil water retention data using least-squares non-linear fitting algorithms, or by employing inverse methods in which the function parameters are iteratively changed so that a given selected function approximates the observed response, or by using pedotransfer functions, which are regression equations.
Scarce information is available about the parameterization of these functions and the extent of the fitting performance of the various empirical and theoretical soil water retention functions; the hydraulic conductivity models based on these parameters for the flood plain soils, which consist mainly of clay loam and silt loam, therefore need to be estimated. So, in this study, the empirical and theoretical soil water retention functions were parameterized by fitting the observed data for these soils, in order to identify suitable functions and further to estimate the unsaturated hydraulic conductivity models based on the estimated parameters, for identifying the appropriate models of unsaturated hydraulic conductivity for further use in modeling soil water dynamics.
Materials and Methods
Soil water retention data
The average soil water retention data [4] for soil water suction heads of 100, 300, 1000, 2000, 3000, 5000, 10000 and 15000 cm, from different soil samples taken from the soil profiles (depth 150 cm) of silt loam (percentages of sand, silt and clay: 58.6, 21.9 and 14.6, respectively) and clay loam (percentages of sand, silt and clay ranging from 38.3 to 37.4, 20.5 to 24.3 and 34.2 to 37.6, respectively) soils of the flood plains of the seasonal river Ghaggar, flowing through a part of Rajasthan, were utilized for estimating the parameters of the soil water retention functions described below.
Soil water retention functions
The empirical soil water retention functions proposed by van Genuchten in 1980 [5], with shape parameters m and n either independent of each other or fixed (m = 1 − 1/n), and by Brooks-Corey in 1964 [6], together with the statistical pore-size distribution-based soil water retention function of Kosugi in 1996 [7], were used for parameterization. Van Genuchten proposed the sigmoidal-shaped, continuous (smooth), five-parameter power-law function:

θ(h) = θ_r + (θ_s − θ_r) / [1 + (α_VG h)^n]^m   (1)
where θ is the soil water content at the soil water suction head h, and θ_s and θ_r are the saturated and residual soil water contents, respectively. The parameter α_VG is an empirical constant (dimension L⁻¹). In this function the five unknown parameters are θ_r, θ_s, α_VG, n and m when the shape parameters n and m are independent of each other; when m is fixed, the unknown parameters reduce to four. The dimensionless parameters n and m are related to the pore-size distribution and affect the shape of the function. However, Durner reported that the fixed-m constraint eliminates some of the flexibility of the function [8].
Brooks-Corey proposed the following empirical four-parameter power-law soil water retention function:

θ(h) = θ_r + (θ_s − θ_r)(α_BC h)^(−λ_BC)   (2)
where α_BC is an empirical parameter (dimension L⁻¹) which represents the desaturation rate of the soil water and is related to the pore-size distribution; its inverse is regarded as the height of the capillary fringe. The parameter λ_BC is the pore-size distribution index, affecting the slope of this function and characterizing the width of the pore-size distribution. In this function, the four unknown parameters are θ_r, θ_s, α_BC and λ_BC.
In 1994, Kosugi [9] assumed that the soil pore radius is a log-normal random variable and, based on this hypothesis, derived a physically-based three-parameter model for the soil water retention function, the three parameters being the mean and variance of the pore-size distribution and the maximum pore radius. In the limiting case where the maximum pore radius becomes infinite, the three-parameter model simplifies to a two-parameter model; based on this simplification, Kosugi in 1996 improved the function by developing a physically-based (theoretical) two-parameter log-normal analytical model, built on the log-normal distribution density function of the pore radius, for soil water retention:

θ(h) = θ_r + (θ_s − θ_r) (1/2) erfc[(ln h − ln h_m)/(√2 σ)]   (3)

where the parameters ln(h_m) and σ denote the mean and standard deviation of ln(h), respectively, and erfc denotes the complementary error function [10].
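For reference, Eqs. (1)-(3) translate directly into code; the sketch below is a minimal Python rendering of the three retention functions (an illustration only, not part of the RETC code itself).

import numpy as np
from scipy.special import erfc

# Minimal renderings of the retention functions; h is suction head (cm).
def van_genuchten(h, theta_r, theta_s, alpha, n, m=None):
    if m is None:
        m = 1.0 - 1.0 / n                   # fixed-shape case, m = 1 - 1/n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m   # Eq. (1)

def brooks_corey(h, theta_r, theta_s, alpha, lam):
    se = np.where(alpha * h > 1.0, (alpha * h) ** -lam, 1.0)   # saturated below air entry
    return theta_r + (theta_s - theta_r) * se                  # Eq. (2)

def kosugi(h, theta_r, theta_s, h_m, sigma):
    return theta_r + (theta_s - theta_r) * 0.5 * erfc(
        np.log(h / h_m) / (np.sqrt(2.0) * sigma))              # Eq. (3)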
Parameter estimation of soil water retention functions
For estimation of the unknown parameters of these functions, the RETC (RETention Curve) computer code [11] was used, utilizing the soil water retention data only. The unknown parameters were represented by a vector b consisting of θ_r, θ_s, α_VG, n, m for independent shape parameters and θ_r, θ_s, α_VG, n for fixed shape parameters of the van Genuchten function; for the Brooks-Corey function, the vector b represented the unknown parameters θ_r, θ_s, α_BC, λ_BC, and for the Kosugi function, θ_r, θ_s, h_m, σ. These parameters were optimized iteratively by minimizing the residual sum of squares (RSS) between the observed and fitted soil water retention data θ(h); the RSS was taken as the objective function O(b), which was minimized by means of a weighted non-linear least-squares optimization approach based on the Marquardt-Levenberg maximum neighborhood method [12]:

O(b) = Σ_{i=1}^{N} w_i (θ_i − θ̂_i)²   (4)
where θ_i and θ̂_i are the observed and fitted soil water contents, respectively, and N is the number of soil water retention points, equal to 8 in this analysis. The weighting factors w_i, which reflect the reliability of the individual measured data, were set equal to unity in this analysis, as the reliability of all the measured soil water retention data was considered equal. A set of appropriate initial estimates of the unknown parameters was used so that the minimization process converges quickly, after a certain number of iterations, to the optimized values of these parameters.
The goodness of fit between the observed and fitted data was characterized by the coefficient of determination (r²), which measures the relative magnitude of the total sum of squares associated with the fitted function:

r² = 1 − Σ_i (θ_i − θ̂_i)² / Σ_i (θ_i − θ̄)²   (5)

where θ̄ is the mean of the observed soil water content data.

The soil water retention functions for these soils were ranked in order of fitting performance, a superior fit having a comparatively higher coefficient of determination (r²) and a lower residual sum of squares (RSS) between the observed and predicted soil water retention data.
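A minimal analogue of this fitting procedure, reusing the kosugi function from the sketch above, is shown below with scipy's Levenberg-Marquardt least squares; the retention data values are illustrative assumptions, not the measured data of [4].

from scipy.optimize import curve_fit

# Levenberg-Marquardt fit of the Kosugi function to (assumed) retention data.
h_obs = np.array([100., 300., 1000., 2000., 3000., 5000., 10000., 15000.])
theta_obs = np.array([0.38, 0.34, 0.29, 0.27, 0.25, 0.23, 0.21, 0.20])

p0 = [0.05, 0.45, 500.0, 2.0]               # initial theta_r, theta_s, h_m, sigma
popt, _ = curve_fit(kosugi, h_obs, theta_obs, p0=p0, method="lm")

theta_fit = kosugi(h_obs, *popt)
rss = np.sum((theta_obs - theta_fit) ** 2)                       # Eq. (4), w_i = 1
r2 = 1.0 - rss / np.sum((theta_obs - theta_obs.mean()) ** 2)     # Eq. (5)
print(f"theta_r = {popt[0]:.3f}, theta_s = {popt[1]:.3f}, "
      f"h_m = {popt[2]:.1f} cm, sigma = {popt[3]:.2f}")
print(f"RSS = {rss:.5f}, r^2 = {r2:.4f}")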
Estimation of hydraulic conductivity models
For predicting the unsaturated hydraulic conductivity from the measured soil water retention data, approaches were developed based
on the capillary-bundle theory by Childs and Collis-George in 1950 [13], Burdine in 1953 [14] and Mualem in 1976 [15]. In this
analysis the widely used Mualem approach was used.
Mualem developed a relative hydraulic conductivity model based on the capillary theory, which assumes that the pore radius is inversely proportional to the suction head (h) at which the pore drains, and conceptualized the pores as pairs of capillary tubes whose lengths are proportional to their radii, the conductance of each capillary-tube pair being determined according to Poiseuille's law (which states that the flow rate per unit cross-sectional area of a capillary tube is proportional to the square of its radius). He derived the model for predicting the relative unsaturated hydraulic conductivity from the soil water retention function, incorporating a statistical model based on the assumptions that pores of a particular radius are randomly distributed in the porous medium and that the average flow velocity is given by the Hagen-Poiseuille formulation. He developed the relative hydraulic conductivity model as:

K_r(h) = S_e^l [ ∫_{θ_r}^{θ} h(θ)⁻¹ dθ / ∫_{θ_r}^{θ_s} h(θ)⁻¹ dθ ]²   (6)
where S_e [= (θ − θ_r)/(θ_s − θ_r)] is the dimensionless effective saturation and l is the tortuosity factor. K_r (= K/K_s) is the relative unsaturated hydraulic conductivity, and K_s is the saturated hydraulic conductivity, measured independently. Black reported that the Mualem model for predicting the relative hydraulic conductivity from the behavior of the measured soil water retention data is the one most commonly employed to obtain closed-form analytical expressions of unsaturated hydraulic conductivity [16].
Coupling the Brooks-Corey soil water retention function with the Mualem model of relative hydraulic conductivity, the corresponding h-based relative hydraulic conductivity function is expressed as:

K_r(h) = (α_BC h)^(−[λ_BC(l+2) + 2])   (7)
For developing the closed-form model of the hydraulic conductivity, the van Genuchten soil water retention function was coupled with the relative hydraulic conductivity model of Mualem. The condition of the fixed shape parameter m = 1 − 1/n must be satisfied to obtain the closed form. Embedding the van Genuchten soil water retention function into the Mualem model results in the following corresponding h-based relative hydraulic conductivity model in closed form for m = 1 − 1/n:

K_r(h) = {1 − (α_VG h)^(n−1) [1 + (α_VG h)^n]^(−m)}² / [1 + (α_VG h)^n]^(lm)   (8)
Kosugi developed a two-parameter hydraulic conductivity model by using the corresponding soil water retention function in the Mualem model:

K_r(h) = S_e^l {(1/2) erfc[ln(h/h_m)/(√2 σ) + σ/√2]}²   (9)
A tortuosity factor l = 0.5, as reported by Mualem, was used in this analysis. The optimized parameters were used in these hydraulic conductivity models to estimate the unsaturated hydraulic conductivity of these soils for further use in modeling soil water dynamics.
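Using the optimized parameters, Eqs. (7)-(9) can be evaluated directly; the sketch below (reusing kosugi and erfc from the earlier sketch) plugs in the clay loam values from Table 1, with θ_r and θ_s assumed, since the table does not report them.

# Mualem-based relative conductivity models, Eqs. (7)-(9), with l = 0.5.
l = 0.5

def kr_brooks_corey(h, alpha, lam):
    return (alpha * h) ** -(lam * (l + 2.0) + 2.0)              # Eq. (7)

def kr_van_genuchten(h, alpha, n):
    m = 1.0 - 1.0 / n
    num = (1.0 - (alpha * h) ** (n - 1) * (1.0 + (alpha * h) ** n) ** -m) ** 2
    return num / (1.0 + (alpha * h) ** n) ** (l * m)            # Eq. (8)

def kr_kosugi(h, theta_r, theta_s, h_m, sigma):
    se = (kosugi(h, theta_r, theta_s, h_m, sigma) - theta_r) / (theta_s - theta_r)
    term = 0.5 * erfc(np.log(h / h_m) / (np.sqrt(2.0) * sigma) + sigma / np.sqrt(2.0))
    return se ** l * term ** 2                                  # Eq. (9)

h = 1000.0                                  # suction head (cm)
print("Brooks-Corey :", kr_brooks_corey(h, 0.00730, 0.21325))   # Table 1, clay loam
print("van Genuchten:", kr_van_genuchten(h, 0.00746, 1.2378))   # Table 1, fixed m
print("Kosugi       :", kr_kosugi(h, 0.05, 0.45, 507.29, 4.4236))  # theta_r/theta_s assumed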


Results and Discussion
It is observed from Table 1 that the clay loam flood plain soil, having comparatively more clay content, has a lower value of α_BC than the silt loam flood plain soil, indicating a greater height of the capillary fringe in the clay loam soil, since the inverse of α_BC represents the height of the capillary fringe. Kalane et al. also observed a greater height of the capillary fringe as the clay content in the soil increases [17]. The values of λ_BC for the clay loam and silt loam flood plain soils were observed to be more or less the same, i.e., 0.21325 and 0.2025 respectively, indicating that the slope of the soil water retention curve is more or less the same for these soils.
In 2002, Kosugi et al. reported that theoretically the λ_BC value approaches infinity for a porous medium with a uniform pore-size distribution, whereas it approaches a lower limit of zero for soils with a wide range of pore sizes [18]. They reported λ_BC values in the range 0.3 to 10.0, while in 2013 Szymkiewicz reported that these values generally range from 0.2 to 5.0 [19]. Zhu and Mohanty [20] also reported that the soil water retention function of Brooks and Corey was successfully used to describe the soil water retention data of relatively homogeneous soils, which have a narrow pore-size distribution, with a value of λ_BC = 2. Nimmo [21] reported that a medium with many large pores will have a retention curve that drops rapidly to low water content even at low suction head; conversely, a fine-pored medium will retain water even at high suction and so will have a flatter retention curve.
Table 1. Optimized parameters of the soil water retention functions for clay loam and silt loam soils of flood plain.

Flood plain soil (clay loam)
Soil water retention function       α_BC or α_VG (1/cm)   λ_BC or n (−)   m (−)
Brooks-Corey                        0.00730               0.21325         −
Van Genuchten (independent m, n)    0.00799               1.005           0.2427
Van Genuchten (fixed m = 1 − 1/n)   0.00746               1.2378          −
Kosugi                              h_m = 507.29 cm, σ = 4.4236

Flood plain soil (silt loam)
Brooks-Corey                        0.0219                0.2025          −
Van Genuchten (independent m, n)    0.00497               1.005           0.2690
Van Genuchten (fixed m = 1 − 1/n)   0.00521               1.2575          −
Kosugi                              h_m = 1232.78 cm, σ = 3.6815

In the van Genuchten function, when the factor one is disregarded ((α_VG h)^n ≫ 1), it becomes a limiting case that approximates the Brooks-Corey function, and the product of m and n in the van Genuchten function becomes equal to λ_BC of the Brooks-Corey function. The product of m and n remains constant, so if n is increased then m must simultaneously be decreased. For the fixed case, i.e. m = 1 - 1/n, the parameter λ_C should be equal to n - 1. The properties of the soil media that are described by two parameters (α_BC, λ_BC) in the Brooks-Corey model are described by three parameters (α_VG, n, m) in the van Genuchten model. From Table 1 it is observed that, for the van Genuchten function with both independent shape parameters (m, n) and fixed shape parameters m = 1 - 1/n, the value of α_VG was higher for the clay loam soil (fine-textured) than for the comparatively medium-textured silt loam soil. The same observation was reported by Jauhiainen [22].
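A quick numerical check of this limiting case can be made with the Table 1 parameters. The sketch below is illustrative only; it compares the van Genuchten curve with a Brooks-Corey curve of index λ_BC = mn at suctions where (α_VG h)^n ≫ 1.

import numpy as np

def se_vg(h, alpha, n, m):
    # van Genuchten effective saturation
    return (1.0 + (alpha * h)**n) ** (-m)

def se_bc(h, alpha, lam):
    # Brooks-Corey effective saturation (valid for alpha*h > 1)
    return (alpha * h) ** (-lam)

alpha, n, m = 0.00799, 1.005, 0.2427   # Table 1, independent m, n (clay loam)
h = np.array([1e4, 1e5, 1e6])          # suctions where (alpha*h)^n >> 1, cm
print(se_vg(h, alpha, n, m))           # the two curves converge...
print(se_bc(h, alpha, lam=m * n))      # ...with lambda_BC = m*n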
It is observed from Table 2 that the log-normal function of Kosugi gave an excellent description of the observed soil water retention data, with the highest r² = 0.9969 and the lowest RSS = 0.00016 for the clay loam soil and r² = 0.9932 and RSS = 0.00033 for the silt loam soil, followed by the van Genuchten function with independent shape parameters, which yielded r² = 0.9929 and RSS = 0.00038 for the clay loam soil and r² = 0.9864 and RSS = 0.00066 for the silt loam soil. Among the van Genuchten functions, the function with fixed shape parameters yielded a 13.16 to 15.15 percent higher RSS for these soils. The non-linear least-squares fitting of the Brooks-Corey function resulted in the lowest r² = 0.9881 and the highest RSS = 0.00063 for the clay loam and r² = 0.9724 and RSS = 0.00135 for the silt loam soils of the flood plain, showing that the Brooks-Corey function followed the van Genuchten function in fitting performance and, for these soils, performed comparatively better in the clay loam. All the soil water retention functions fitted the clay loam flood plain soil comparatively better than the silt loam flood plain soil.
Table 2. Statistics of the fitting performance of the soil water retention functions.

                                     Flood plain soil (clay loam)   Flood plain soil (silt loam)
Soil water retention function        RSS (×10⁻⁵)    r²              RSS (×10⁻⁵)    r²
Brooks-Corey                         63             0.9881          135            0.9724
van Genuchten (independent m, n)     38             0.9929          66             0.9864
van Genuchten (fixed m = 1-1/n)      43             0.9919          76             0.9843
Kosugi                               16             0.9969          33             0.9932

The physically-based log-normal function of Kosugi gave the best fitting performance, followed by the empirical van Genuchten and Brooks-Corey functions in order of fitting performance, for embedding in Mualem's statistical pore-size distribution-based relative hydraulic conductivity model to develop the unsaturated hydraulic conductivity function for modeling the soil water dynamics in these flood plain soils. The log-normal function of Kosugi has the merit of being a theoretically derived function, so the physical meaning of each parameter is clearly defined.
However, when optimizing the parameters of the soil water retention functions, the number of fitted parameters should be reduced in order to minimize the non-uniqueness of the optimized parameters, and efforts should be made to measure parameters such as the saturated soil water content θ_s independently. An assumed value of the residual soil water content θ_r can also be used, as its measurement is extremely difficult in the laboratory; this further reduces the number of parameters to be optimized. It is also observed that the soil water retention functions under study predict an infinite value of the soil water suction head (h) as the effective saturation (S_e) approaches zero, which is not consistent with the fact that even under oven-dry conditions the soil water suction has a finite value. Therefore, these functions should be used in the range of effective saturation significantly larger than zero.
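As an illustration of the least-squares step, the sketch below fits the fixed-m van Genuchten function to a retention data set and reports RSS and r². Here scipy.optimize.curve_fit stands in for the Marquardt algorithm of the RETC code, and the data values are invented for the example.

import numpy as np
from scipy.optimize import curve_fit

def se_vg_fixed(h, alpha, n):
    # van Genuchten retention function with fixed m = 1 - 1/n
    return (1.0 + (alpha * h)**n) ** (-(1.0 - 1.0 / n))

# Hypothetical retention data (suction in cm, effective saturation)
h_obs = np.array([10.0, 30.0, 100.0, 300.0, 1000.0, 3000.0])
se_obs = np.array([0.95, 0.88, 0.76, 0.62, 0.47, 0.35])

popt, _ = curve_fit(se_vg_fixed, h_obs, se_obs, p0=[0.01, 1.2])
rss = np.sum((se_obs - se_vg_fixed(h_obs, *popt))**2)
r2 = 1.0 - rss / np.sum((se_obs - se_obs.mean())**2)
print(popt, rss, r2)   # optimized (alpha, n), residual sum of squares, r^2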
Conclusion
The parameters of the empirical soil water retention functions of Brooks-Corey and van Genuchten and of the theoretical soil water retention function of Kosugi were optimized using the non-linear least-squares optimization algorithm of the RETC computer code for the clay loam and silt loam flood plain soils. These parameters were used in Mualem's statistical pore-size distribution-based model to derive the corresponding unsaturated hydraulic conductivity models. The log-normal function of Kosugi gave an excellent fitting performance, with the highest coefficient of determination and the lowest residual sum of squares for these soils. The physically-based Kosugi function was followed by the empirical functions of van Genuchten and Brooks-Corey in fitting performance. It is proposed that the theoretical Kosugi model of unsaturated hydraulic conductivity can be used for mathematical simulation studies of soil water dynamics.

REFERENCES:
[1] Leij, F.J., Russell, W.B., and Lesch, S.M. "Closed-form expressions for water retention and conductivity data". Ground Water. 35(5): 848-858. 1997.
[2] Durner, W. and Lipsius, K. "Encyclopedia of Hydrological Sciences (Ed. M.G. Anderson)". John Wiley & Sons Ltd. 2005.
[3] Solone, R., Bittelli, M., Tomei, F. and Morari, F. "Errors in water retention curves determined with pressure plates: Effects on the soil water balance". J. Hydrology. 470-471: 65-74. 2012.
[4] Yadav, B.S., Verma, B.L. and Deo, R. "Water Retention and Transmission Characteristics of Soils in Command Area of North-Western Rajasthan". J. Ind. Soc. Soil Sci. 43(1): 1-5. 1995.
[5] Van Genuchten, M.Th. "A closed-form equation for predicting the hydraulic conductivity of unsaturated soils". Soil Sci. Soc. Am. J. 44: 892-898. 1980.
[6] Brooks, R.H. and Corey, A.T. "Hydraulic properties of porous media". Hydrology Paper No. 3, Colorado State University, Fort Collins, Colorado. 1964.
[7] Kosugi, K. "Lognormal distribution model for unsaturated soil hydraulic properties". Water Resour. Res. 32(9): 2697-2703. 1996.
[8] Durner, W. "Hydraulic conductivity estimation for soils with heterogeneous pore structure". Water Resour. Res. 30(2): 211-223. 1994.
[9] Kosugi, K. "Three-parameter lognormal distribution model for soil water retention". Water Resour. Res. 30(4): 891-901. 1994.
[10] Abramowitz, M. and Stegun, I.A. "Handbook of Mathematical Functions". Dover, New York. 1972.
[11] Van Genuchten, M.Th., Leij, F.J. and Yates, S.R. "The RETC code for quantifying the hydraulic functions of unsaturated soils". Res. Rep. 600/2-91/065, USEPA, Ada, OK. 1991.
[12] Marquardt, D.W. "An algorithm for least-squares estimation of non-linear parameters". J. Soc. Ind. Appl. Math. 11: 431-441. 1963.
[13] Childs, E.C. and Collis-George, N. "The permeability of porous materials". Soil Sci. 50: 239-252. 1950.
[14] Burdine, N.T. "Relative permeability calculations from pore size distribution data". Trans. Amer. Inst. Mining, Metallurgical, and Petroleum Engrs. 198: 71-78. 1953.
[15] Mualem, Y. "A new model for predicting the hydraulic conductivity of unsaturated porous media". Water Resour. Res. 12(3): 513-522. 1976.
[16] Black, P.B. "Three functions that model empirically measured unfrozen water content data and predict relative hydraulic conductivity". CRREL Report 90-5. U.S. Army Corps of Engineers, Cold Regions Research and Engineering Laboratory. 1990.
[17] Kalane, R.L., Oswal, M.C. and Jagannath. "Comparison of theoretically estimated flux and observed values under shallow water table". J. Ind. Soc. Soil Sci. 42(2): 169-172. 1994.
[18] Kosugi, K., Hopmans, J.W. and Dane, J.H. "Water Retention and Storage - Parametric Models. In Methods of Soil Analysis, Part 4: Physical Methods". (Eds. J.H. Dane and G.C. Topp) pp. 739-758. Book Series No. 5. Soil Sci. Soc. Amer., Madison, USA. 2002.
[19] Szymkiewicz, A. "Chapter 2: Mathematical Models of Flow in Porous Media. In Modeling Water Flow in Unsaturated Porous Media Accounting for Nonlinear Permeability and Material Heterogeneity". Springer. 2013.
[20] Zhu, J. and Mohanty, B.P. "Effective hydraulic parameters for steady state vertical flow in heterogeneous soils". Water Resour. Res. 39(8): 1-12. 2003.
[21] Nimmo, J.R. "Unsaturated zone flow processes" in Anderson, M.G. and Bear, J. (eds.) Encyclopedia of Hydrological Sciences, Part 13 - Groundwater. Chichester, UK, Wiley, v. 4, pp. 2299-2322. 2005.
[22] Jauhiainen, M. "Relationships of particle size distribution curve, soil water retention curve and unsaturated hydraulic conductivity and their implications on water balance of forested and agricultural hill slopes". Ph.D. Thesis, Helsinki University of Technology. pp. 167. 2004.

Information Security in Cloud
Divisha Manral¹, Jasmine Dalal¹, Kavya Goel¹
¹Department of Information Technology, Guru Gobind Singh Indraprastha University
E-Mail- divishamanral@gmail.com

ABSTRACT - With the advent of the internet, security became a major concern, as every piece of information became vulnerable to a number of threats. The cloud is a kind of centralized database where many clients store, retrieve and possibly modify data. Cloud computing is an environment that enables convenient and efficient access to a shared pool of configurable computing resources. However, data stored and retrieved in such a way may not be fully trustworthy. The range of this study encompasses research on various information security technologies and the proposal of an efficient system for ensuring information security on cloud computing platforms. Information security has become critical not only to personal computers but also to corporate organizations and government agencies, given that organizations these days rely extensively on the cloud for collaboration. The aim is to develop a secure system using encryption mechanisms that allow a client's data to be transformed into unintelligible data for transmission.

Keywords – Cloud, Symmetric Key, Data storage, Data retrieval, Decryption, Encryption, security

I. INTRODUCTION

Information security means protecting the database from destructive forces and the actions of unauthorized users, and guarding the information from malicious modification, leakage, loss or disruption. The world is becoming more interconnected with the advent of the Internet and new networking technology. Information security [1] is becoming of great importance because intellectual property can be easily acquired. There have been numerous cases of security breaches resulting in the leakage or unauthorized access of information worth a fortune. In order to keep information systems free from threats, analysts employ both network and data security technologies.

Cloud computing is a model that provides a wide range of applications under different topologies, and every topology derives some new specialized protocols; securing data on this promising technology is referred to as cloud data security. Cloud computing is a next-generation computing platform that provides dynamic resource pools, virtualization and high availability.

II. INFORMATION SECURITY TECHNOLOGY

A. ENCRYPTION

In cryptography [2], encryption is the process of encoding messages in such a way that eavesdroppers or hackers cannot read them, but authorized parties can. In an encryption scheme, information is encrypted using an encryption algorithm, turning it into unreadable cipher text. This is usually done with the use of an encryption key, which specifies how the message is to be encoded. Any adversary who can see the cipher text should not be able to determine anything about the original message. An authorized party, however, is able to decode the cipher text using a decryption algorithm, which usually requires a secret decryption key that adversaries do not have access to. For technical reasons, an encryption scheme usually needs a key-generation algorithm to randomly produce keys. There are two basic types of encryption schemes: symmetric-key and public-key encryption [3].


B. SYMMETRIC-KEY CRYPTOGRAPHY

An encryption system in which the sender and receiver of a message share a single, common key that is used to encrypt and decrypt the message. Contrast this with public-key cryptography, which uses two keys: a public key to encrypt messages and a private key to decrypt them. Symmetric-key systems are simpler and faster, but their main drawback is that the two parties must somehow exchange the key in a secure way. Public-key encryption avoids this problem because the public key can be distributed in a non-secure way, and the private key is never transmitted.


Figure 1: Cryptography Model using Symmetric Key
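As a concrete illustration of the shared-key idea in Figure 1 (not part of this paper's proposal), the sketch below uses the Fernet recipe from the Python "cryptography" package: one key both encrypts and decrypts.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the single shared key; must be exchanged securely
cipher = Fernet(key)

token = cipher.encrypt(b"message for the cloud")  # unreadable cipher text
plain = cipher.decrypt(token)                     # requires the same key
assert plain == b"message for the cloud"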

C. HARDWARE BASED MECHANISM

Hardware-based or hardware-assisted computer security offers an alternative to software-only computer security. Devices such as dongles may be considered more secure because physical access is required in order to compromise them. In a typical hardware-based security scheme, a hardware device allows a user to log in, log out and set different privilege levels through manual actions. The device uses biometric technology to prevent malicious users from logging in, logging out, and changing privilege levels. The current state of the user of the device is read both by the computer and by controllers in peripheral devices such as hard disks.

D. DATA ERASURE

Data erasure is the process of permanently erasing data from disk media. It is not the same as file deletion. File deletion and removal of the Volume Table of Contents (VTOC) simply erase the "pointers" to the data stored on the media, so the data is not viewable in directories; they do not physically erase the data from the media. Many firms physically destroy hard drives or use various software utilities to "erase" data using these methodologies. However, such solutions can be inadequate and can potentially lead to data breaches, public disclosure and, ultimately, unplanned expenses.



E. DATA MASKING

Data masking technology provides data security by replacing sensitive information with a non-sensitive proxy, but doing so in such a way that the copy looks, and acts, like the original. This means non-sensitive data can be used in business processes without changing the supporting applications or data storage facilities, removing the risk without breaking the business. In the most common use case, masking limits the propagation of sensitive data within IT systems by distributing surrogate data sets for testing and analysis. In other cases, masking will dynamically provide masked content if a user's request for sensitive information is deemed risky.

III. CLOUD

For some computer owners, finding enough storage space to hold all the data they've acquired is a real challenge. Some
people invest in hard drives. Others prefer external storage devices like pen drives or compact discs. Desperate computer
owners might delete entire folders worth of old files in order to make space for new information. But some are choosing
to rely on a growing trend: cloud storage. Cloud computing encompasses a large number of computers connected through
a real-time communication network such as the Internet. It is a type of computing that relies on sharing computing
resources rather than having local servers or personal devices to handle applications. Cloud computing allows consumers
and businesses to use applications without installation and access their personal files at any computer with internet access.

A. CLOUD STORAGE

A basic cloud storage system needs just one data server connected to the Internet. A client sends copies of files over the Internet to the
data server, which then records the information. When the client wishes to retrieve the information, he accesses the data server
through a Web-based interface. The server then either sends the files back to the client or allows the client to access and manipulate
the files on the server itself. Cloud storage systems generally rely on hundreds of data servers. Because computers occasionally require
maintenance or repair, it's important to store the same information on multiple machines. This is called redundancy. Without
redundancy, a cloud storage system couldn't ensure clients that they could access their information at any given time. Most systems
store the same data on servers that use different power supplies. That way, clients can access their data even if one power supply fails [4].



B. ADVANTAGES

Efficient storage and collaboration
Easy Information Sharing
Highly reliable and redundant
Widespread availability
Inexpensive

C. DISADVANTAGES
Possible downtime
Security issues [5]
Compatibility
Unpredicted costs
Internet Dependency


IV. PROPOSED SYSTEM

Information security is the most important criterion for any data owner, as the data stored on the cloud will be accessible not only to the owner but to many other cloud users. The proposed system provides a secure yet flexible information security mechanism that can be implemented easily at the time of data storage as well as data retrieval over the cloud. The concept of a symmetric key is used, where only the data owner, the data retriever and the third-party auditor have access to the keys. Double encryption is also used to make the system more secure.

A. DATA STORAGE

The proposed system for data storage is flexible, as the encryption algorithms used are of the user's choice. The model consists of two major stages.
The first stage starts when the data owner uploads the data to the center. The owner is asked to choose from a list of available algorithms (Encryption Algorithm 1) or to upload his own algorithm to encrypt the data. This leads to the creation of the cipher text along with the primary key (Key 1). The final step of the first stage is the transfer of the cipher text onto the cloud.
The second stage starts with the encryption of Key 1, where the data owner is again asked to choose from a list of available algorithms (Encryption Algorithm 2) or to upload his own algorithm to encrypt the key and create the secondary key (Key 2). The center then shares Key 2 with the third-party auditor for future verification. The auditor can verify the data, and keeps track of the shared keys only [6].
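A minimal sketch of this two-stage storage scheme is given below, with Fernet standing in for the user-chosen Encryption Algorithms 1 and 2; the store() function and its return values are hypothetical names introduced for illustration, not from the paper.

from cryptography.fernet import Fernet

def store(data: bytes):
    # Stage 1: encrypt the data with the primary key (Key 1)
    key1 = Fernet.generate_key()
    cipher_text = Fernet(key1).encrypt(data)
    # Stage 2: encrypt Key 1 itself with the secondary key (Key 2)
    key2 = Fernet.generate_key()
    wrapped_key1 = Fernet(key2).encrypt(key1)
    # cipher_text is transferred to the cloud; Key 2 is shared with the auditor
    return cipher_text, wrapped_key1, key2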


Figure 2: Proposed Data Storage Model


B. DATA RETRIEVAL

Data retrieval poses a bigger problem than data storage in cloud computing.
In the proposed model, the data retriever has to obtain data access permission from the data owner by sending a data access request. If the request is accepted, the data owner sends the secondary key (Key 2) and the information for further decryption, i.e. which decryption algorithms are to be used for decrypting and retrieving the final plain text.
The data retriever then sends a data request to the third-party auditor. The auditor verifies the key sent by the retriever against the database held with him; if the keys match, the retriever is allowed to take the cipher text from the cloud data storage.
The information given by the data owner to the retriever helps in decrypting Key 2 into Key 1 using Decryption Algorithm 2. With Key 1 in hand, the cipher text can be decrypted using Decryption Algorithm 1 into the final plain text, which can be used by the retriever.
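The matching retrieval path, continuing the hypothetical sketch above: Key 2 unwraps Key 1, and Key 1 decrypts the cipher text fetched from the cloud.

from cryptography.fernet import Fernet

def retrieve(cipher_text: bytes, wrapped_key1: bytes, key2: bytes) -> bytes:
    key1 = Fernet(key2).decrypt(wrapped_key1)   # Decryption Algorithm 2: Key 2 -> Key 1
    return Fernet(key1).decrypt(cipher_text)    # Decryption Algorithm 1: cipher text -> plain text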


Figure 3: Proposed Data Retrieval Model

C. BENEFITS

The model proposed in this paper is highly secure because of the use of the double encryption technique. The secondary key can be accessed by the data owner, the data retriever and the third-party auditor, but it only gives access to the cipher text; hence even the third-party auditor does not have direct access to the data. No one can use the data unless the data owner has given him the information for decrypting the secondary key into the primary key and using it, in turn, to recover the plain text.
The proposed model is flexible, since it places no constraints on the choice of cryptography algorithms; the data owner is allowed to choose from a list of algorithms or to use his own algorithm for the encryption process.
The proposed model uses symmetric-key cryptography, which is faster than asymmetric encryption; the model encrypts plain text easily and produces the cipher text in less time.
The data is stored as cipher text in the cloud, so even if an attacker hacks into the cloud system and gains access to the data stored there, he cannot decrypt it and use it. This makes data stored in the cloud more secure and less vulnerable to threats.

V. CONCLUSION

Cloud computing is one of the fastest-growing technologies in the world right now, but it faces many data security threats and challenges. With the help of the proposed system, which incorporates the double-key encryption technique and a symmetric cryptography algorithm, one can keep one's data securely in the cloud. It provides high speed and security in the cloud environment. The proposed system aims to achieve goals such as confidentiality, data integrity and authentication in a simple manner without compromising on security.

REFERENCES:

[1] Aceituno, V., "Information Security Paradigms", ISSA Journal, September 2005.
[2] Goldreich, Oded, "Foundations of Cryptography: Volume 2, Basic Applications", Cambridge University Press, 2004.
[3] Bellare, Mihir, "Public-Key Encryption in a Multi-user Setting: Security Proofs and Improvements", Springer Berlin Heidelberg, 2000, p. 1.
[4] Herminder Singh and Babul Bansal, "Analysis of Security Issues and Performance Enhancement in Cloud Computing", International Journal of Information Technology and Knowledge Management, Volume 2, No. 2, pp. 345-349, July-December 2010.
[5] B. Rajkumar, C. Yeo, S. Venugopal, S. Malpani, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility".
[6] Snsha Vijayaraghavan, K. Kiruthiga, B. Pattatharasi and S. Sathiskumar, "Map-Reduce Function for Cloud Data Storage and Data Integrity Auditing by Trusted TPA", International Journal of Communications and Engineering, Vol. 05, No. 5, Issue 03, pp. 26-32, March 2012.
Aerodynamic Characteristics of G16 Grid Fin Configuration at Subsonic and Supersonic Speeds
Prashanth H S¹, Prof. K S Ravi², Dr G B Krishnappa³
¹M.Tech Student, Department of Mechanical Engineering, Vidyavardaka College of Engineering, Mysore, Karnataka
²Associate Professor, Department of Mechanical Engineering, Vidyavardaka College of Engineering, Mysore, Karnataka
³Professor and HOD, Department of Mechanical Engineering, Vidyavardaka College of Engineering, Mysore, Karnataka
e-mail: hsprashanth63@gmail.com, Phone No: +91 9916886610

Abstract: Grid fins (lattice fins) are used as lifting and control surfaces for highly maneuverable missiles in place of more conventional control surfaces, such as planar fins. Grid fins also find application in air-launched sub-munitions. Their main advantages are a low hinge moment requirement and good high-angle-of-attack performance characteristics. In this paper, one such grid fin configuration, named the G16 grid fin, was taken up for CFD analysis. The G16 fin was studied in the standalone condition at Mach numbers of 0.7 and 2.5 for angles of attack (AOA) from 0° to 30°. The aerodynamic characteristics were plotted and discussed.

Keywords: Grid fins, Lift and Drag, Angle of Attack, ANSYS
I. INTRODUCTION

In a modern military, a missile is a self-propelled guided weapon system. Missiles have four system components: targeting and/or
guidance, flight system, engine, and warhead. Missiles come in types adapted for different purposes: surface-to-surface and air-to-
surface (ballistic, cruise, anti-ship, anti-tank), surface-to-air (anti-aircraft and antiballistic), air-to-air and anti-satellite missiles.

Grid fins (or lattice fins) [1] are a type of flight control surface used on missiles and bombs in place of more conventional control surfaces, such as planar fins. A grid fin looks much like a rectangular box filled with a lattice structure similar to a waffle iron or garden trellis. The grid is formed by small intersecting planar surfaces that create individual cells shaped like cubes or triangles. The box structure is inherently strong, allowing the lattice walls to be very thin, which reduces weight and the cost of materials.


Figure 1.1: Grid Fins and Planar Fins

The primary advantage of grid fins is that they are much shorter than conventional planar fins in the direction of the flow. As a result,
they generate much smaller hinge moments and require considerably smaller servos to deflect them in a high-speed flow [4]. The
small chord length of grid fins also makes them less likely to stall at high angles of attack. This resistance to stall increases the control
effectiveness of grid fins compared to conventional planar fins. Another important aerodynamic characteristic of grid fins concerns
drag, although it can be an advantage or a disadvantage depending on the speed of the airflow.

In general, the thin shape of the lattice walls creates very little disturbance in the flow of air passing through, so drag is often no higher
than a conventional fin. At low subsonic speeds, for example, grid fins perform comparably to a planar fin. Both the drag and control
effectiveness of the lattice fin are about the same as a conventional fin in this speed regime.


Figure 1.2: Flow over the grids at different flow regimes

The same behavior does not hold true at high subsonic speeds near Mach 1. Drag rises considerably and the fins become much less effective in this transonic region because of the formation of shock waves [2]. The flow behavior over the grid fins in the various flow regimes is illustrated in Fig. 1.2.

II. G16 GRID FIN GEOMETRICAL DETAILS, MESH GENERATION AND BOUNDARY CONDITIONS

The grid fin G16 geometry was taken from the previous existing experimental work [3]. Geometry was designed in CATIA V5 and
Mesh was generated using ANSYS ICEM CFD 14.5. The simulations were carried out in the ANSYS CFX 14.5 solver.



Figure 2.1: Geometric Details of G16 Fin (Dimensions are in mm) and Model created in CATIA V5 software

A wind-tunnel-like setup is provided by creating a fluid domain over the fin with suitable dimensions in the upstream and downstream directions. The whole body was imported into the ANSYS Workbench ICEM CFD 14.5 meshing tool and an unstructured tetrahedral mesh (Fig. 2.2) was created. After a grid independence study, a mesh with a total of 2621519 elements and 495961 nodes was selected for the analysis. After meshing, 4 inflation layers (Fig. 2.2) were applied over the surfaces of the fin with a growth rate of 1.15 from the initial length. The insertion of inflation layers into the existing mesh helps to accurately capture boundary effects near the proximities and curves of the body and also gives quicker results.



Figure 2.2: Cut plane showing unstructured mesh around Grid fin and Inflation layers over the surface of Grid fins
The imported G16 mesh data was analyzed in the ANSYS CFD code CFX 14.5 solver [5] at Mach numbers of 0.7 and 2.5 for AOA of 0, 5, 10, 15, 20, 25 and 30 degrees. The following boundary conditions were applied: velocity inlet at the inlet, static pressure condition at the outlet, opening conditions for the domain walls and no-slip velocity on the fin surface.
For the simulation, the k-ω based Shear Stress Transport (SST) turbulence model was selected. The SST model accounts for the transport of the turbulent shear stress and gives highly accurate prediction of the onset and the amount of flow separation under adverse pressure gradients. Since the thermal problem was not of importance in the present study, the Total Energy option was selected. Under the equation class settings the upwind advection scheme was selected for faster results output, and the convergence criterion was set to residual type (RMS). The problem was set up for standard atmospheric pressure conditions.
III. Results and Discussion
After the simulation achieved the desired convergence criteria, the output results were analyzed in the post-processor CFD-Post 14.5. The velocity and pressure contours were examined, and the body forces (axial and normal) were noted down for the calculation of the aerodynamic coefficients. The graphs for the same were plotted.
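As background on that step: the solver reports body-axis normal and axial force coefficients (C_N, C_A), from which lift and drag coefficients follow by a standard axis rotation through the angle of attack. The sketch below uses illustrative numbers, not the paper's data, and the paper does not state that exactly this relation was used, so treat it as an assumption.

import numpy as np

def wind_axes(cn, ca, aoa_deg):
    a = np.radians(aoa_deg)
    cl = cn * np.cos(a) - ca * np.sin(a)   # coefficient of lift
    cd = cn * np.sin(a) + ca * np.cos(a)   # coefficient of drag
    return cl, cd

print(wind_axes(cn=0.5, ca=0.1, aoa_deg=20.0))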

The following figures show the pressure and velocity distribution over the grid fin for AOA of 0° and 30° at Mach numbers of 0.7 and 2.5.




Figure 3.1: Velocity contour on a cut plane at Mach 0.7, AOA 0°
Figure 3.2: Pressure contour on a cut plane at Mach 0.7, AOA 0°



The following graphs show the comparison of different aerodynamic characteristics against AOA for Mach 0.7 and Mach 2.5.

Figure 3.3: Velocity contour on a cut plane at Mach 2.5, AOA 0°
Figure 3.4: Pressure contour on a cut plane at Mach 2.5, AOA 0°
Figure 3.5: Velocity contour on a cut plane at Mach 0.7, AOA 30°
Figure 3.6: Pressure contour on a cut plane at Mach 0.7, AOA 30°
Figure 3.7: Velocity contour on a cut plane at Mach 2.5, AOA 30°
Figure 3.8: Pressure contour on a cut plane at Mach 2.5, AOA 30°



Figure 3.9: C_N v/s AOA for Mach 0.7 and Mach 2.5
Figure 3.10: C_A v/s AOA for Mach 0.7 and Mach 2.5
Figure 3.11: C_L v/s AOA for Mach 0.7 and Mach 2.5
Figure 3.12: C_D v/s AOA for Mach 0.7 and Mach 2.5
Figure 3.13: L/D v/s AOA for Mach 0.7 and Mach 2.5
[Each plot shows the coefficient against AOA in degrees, with one curve for Mach 0.7 and one for Mach 2.5.]

Inference from the Graphs:
Figure 3.9 shows the graph of normal force coefficient versus AOA for Mach 0.7 and 2.5. It is seen that the C_N value for Mach 0.7 is greater than for Mach 2.5 at all AOAs, and that C_N for both Mach 0.7 and 2.5 increases as the AOA increases.
Figure 3.10 shows the graph of axial force coefficient versus AOA for Mach 0.7 and 2.5. It is seen that C_A for Mach 0.7 is slightly greater than for Mach 2.5. For both Mach 0.7 and 2.5, the C_A value decreases slightly as the AOA increases.
Figure 3.11 shows the graph of the coefficient of lift, C_L, versus AOA for Mach 0.7 and Mach 2.5. It is seen that the lift produced at Mach 0.7 is greater than at Mach 2.5 for all AOAs, and that C_L varies linearly with increasing AOA for both Mach 0.7 and 2.5, except for Mach 0.7 beyond AOA 20°.
Figure 3.12 shows the graph of the coefficient of drag, C_D, versus AOA for Mach 0.7 and 2.5. It is seen that the drag level at supersonic speed, i.e. Mach 2.5, is considerably reduced compared to subsonic speed. At higher (supersonic) speeds, the drag tends to decrease because of the smaller oblique shock angle, as the shock passes through the grid along the chord length without intersecting it. However, at low supersonic and subsonic speeds the oblique shocks reflect within the grids, producing more drag force, which in turn affects the speed of the moving object. This shows that the fin performs better at supersonic speeds; however, the lift force is considerably lower at supersonic speeds than at subsonic speeds.

Figure 3.13 shows the graph of the lift-to-drag ratio, L/D, versus AOA for Mach 0.7 and 2.5. It is seen that up to AOA 20° the L/D ratio is higher for Mach 0.7, and beyond 20° the L/D ratio is higher for Mach 2.5. It is also observed that for both Mach 0.7 and 2.5 the maximum L/D ratio appears at 15°, after which it decreases with increasing AOA in both the subsonic and supersonic flow regimes.
IV. CONCLUSION
The numerical simulations were successful in predicting the flow behavior in the different flow regimes at varying AOAs. The following inferences can be drawn from the analysis.
1. For all AOAs, the normal force coefficient C_N, axial force coefficient C_A and lift coefficient C_L were comparably greater in subsonic flow than in supersonic flow. It is also seen that C_N and C_L increase as the AOA increases, while for C_A it is the reverse.
2. At supersonic speeds, the drag levels decreased compared to subsonic flow. This is due to the smaller oblique shock angle at supersonic speeds, where the shock passes through the grid along the chord length without intersecting it.
3. The L/D ratio shows that the performance of the G16 fin is better at subsonic speeds up to AOA 20°. At AOAs beyond 20°, the fin shows improved performance at supersonic speeds. Also, the maximum L/D ratio occurs at AOA 15° for both flow regimes, i.e. at Mach 0.7 and 2.5.
4. Overall, it is concluded that the G16 fin shows better performance at higher AOA and at higher speeds, owing to the reduction in drag levels at Mach 2.5. However, the lift needs to be improved at supersonic speeds.

REFERENCES:

[1] Scott, Jeff, "Missile Grid Fins" and "Missile Control Systems", URL: http://www.aerospaceweb.org/questions/weapons/.
[2] Zaloga, Steve (2000). The Scud and Other Russian Ballistic Missile Vehicles. New Territories, Hong Kong: Concord Publications Co.
[3] Washington, W. D., and Miller, M. S., "Experimental Investigations of Grid Fin Aerodynamics: A Synopsis of Nine Wind Tunnel and Three Flight Tests," Proceedings of the NATO RTO Applied Vehicle Technology Panel Symposium on Missile Aerodynamics, RTO-MP-5, NATO Research and Technology Organization, Cedex, France, Nov. 1998.
[4] Salman Munawar, "Analysis of grid fins as efficient control surface in comparison to conventional planar fins," 27th International Congress of the Aeronautical Sciences, 2010.
[5] ANSYS 14.5 CFX, Help PDF: Solver Modeling Guide.

Design, Fabrication and Performance Evaluation of Polisher Machine of Mini Dal Mill
Sagar H. Bagade¹, Prof. S. R. Ikhar², Dr. A. V. Vanalkar³
¹P.G. Student, Department of Mechanical Engg, KDK College of Engineering, Nagpur, R.T.M. Nagpur University, Maharashtra, India. shbpro1@gmail.com, Tel.: +91 9673702322
²Asst. Professor, Department of Mechanical Engg, KDK College of Engineering, Nagpur, RTM Nagpur University, Maharashtra, India. sanjay_ikhar@rediffmail.com
³Professor, Department of Mechanical Engg, KDK College of Engineering, Nagpur, RTM Nagpur University, Maharashtra, India. avanalkar@yahoo.co.in

Abstract - This paper describes the design procedure of a polisher machine in detail. Pictorial views of the fabricated machine are given. The processed dal samples are tested for reflectivity, and a schematic of the test apparatus is given. The apparatus consists of an LDR, which registers incoming light as a change in resistance. Three dal samples were tested; the surfaces of the polished dal samples were found to be more reflective than that of the unpolished dal sample.
Keywords - design, polishing, pigeonpea, mini dal mill, fabrication, experimentation, test setup.
1.0 INTRODUCTION
The cotyledon of the dry seed, excluding the seed coat, is called dal. In India and many Asian countries, pigeonpea is mainly consumed as dhal, valued for its acceptable appearance, texture, palatability, digestibility and overall nutritional quality. Polishing is one of the important value-addition steps in dal processing. Polishing is done to improve the appearance of the dal, which helps in fetching a premium price for the processor. Whole pulses such as pea, black gram and green gram, as well as splits (dal), are polished for value addition. Some consumers prefer unpolished dal, whereas others want dal with an attractive colour (polished dal). Accordingly, dal is polished in different ways, such as nylon polish, oil-water polish, colour polish and so on. Polishing is a process of removal of the outer layer from a surface. Cylindrical rollers mounted with hard rubber, leather or emery cone polishers, and rollers mounted with brushes, are used for the purpose. The powder particles are removed by rubbing action. The speeds and sizes of these types of polisher are similar to those of the cylindrical dehusking roller. Another type of machinery provided for this purpose is a set of screw conveyors arranged in a battery for repeated rubbing. The flights and shaft are covered with nylon rope or velvet cloth, and the speed of each screw conveyor varies. The repeated rubbing adds to the luster of the dal, which makes it more attractive. These polishers are commonly known as nylon polishers or velvet polishers, depending on the material used, and are available in sets of 2, 3, 4 or 5 screw conveyors. Splitting and polishing are done to increase the shelf life of pigeonpea. Dal mills are used for splitting the pulse into two cotyledons, followed by polishing. Seed treatment to reduce storage losses is becoming increasingly important.

2.0 DESIGN OF POLISHER MACHINE
This section gives the design calculations for the major components of the machine, e.g. the shaft and the belt-and-pulley drive.
2.1 DESIGN CONSIDERATION
The objective is to clean the surface of the dal, i.e. to polish the dal grains.
The force required to break a dal grain is called the bio-yield force, and the machine is designed around the bio-yield force of dal grains. From the literature it is found that this force differs along the length, breadth and thickness; the minimum, 81.06 N, is along the length. A force below this,
F = 78.74 N,
is imparted on the grains against the inner periphery of the lower half.
Hence, a 1 hp motor is selected.
2.2 Drive selection
Motor speed, N1 = 1440 rpm
Velocity ratio, Vr = 8
Hence a V-belt drive is selected for power transmission.
2.3 Design of V-belt drive
D1 = 58.8 mm
Hence, 1440/180 = D2/58.8
D2 = 406 mm
Checking:
Vp = π D1 N1 / 60 = 3.83 m/sec
The recommended peripheral velocity (Vp) for de-hulling, i.e. splitting, is 10 m/s.
1] Power per belt = (F_w - F_c) × (e^(μθ/sin(α/2)) - 1) / e^(μθ/sin(α/2)) × V_p = 2.378 kW
2] Number of belts, N = Pd / (power per belt) = 0.345; hence N = 1
3] Length of belt, L = π/2 (D2 + D1) + 2C + (D2 - D1)² / 4C = 1988 mm
Standard length of belt selected, L = 77 inch
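As a cross-check of the belt length above, the following sketch evaluates the same formula. The centre distance C is not stated in this section, so the value used here (about 604 mm) is back-calculated purely to reproduce L = 1988 mm and is an assumption.

import math

def belt_length(d1, d2, c):
    # Open V-belt length: L = pi/2 (D2 + D1) + 2C + (D2 - D1)^2 / 4C
    return math.pi / 2 * (d2 + d1) + 2 * c + (d2 - d1) ** 2 / (4 * c)

print(belt_length(d1=58.8, d2=406.0, c=604.0))  # ~1988 mm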
2.4 Design of bigger pulley
1) Width of pulley, W = (n - 1)e + 2f; for section A, W = 21 mm
2) Pitch diameter, Dp: from the V-groove details, α = 38°; recommended Dp = 200 mm
3) Type of construction: according to the pulley diameter D2 = 406 mm, arm-type construction is selected; no. of arms = 1, no. of sets = 1
4) Rim thickness, T = 0.375 √D + 3 = 11 mm
2.5 DESIGN OF MAIN SHAFT
1) Design torque
T_d = 60 × P × K_l / (2πN)   [K_l = 1.75 for electric motor and line shaft]
T_d = 69.259 N-m
2) Forces on the belt drive
T_d = (T_1 - T_2) D_2 / 2
(T_1 - T_2) = 341.177 N -------------------------------(1)
T_1 / T_2 = e^(μθ)
where the coefficient of friction μ = 0.3 and the angle of lap on the smaller pulley θ = 2.364 rad.
T_1 / T_2 = e^(0.3 × 2.364)
T_1 = 2.032 T_2 -------------------------------(2)
From equations (1) and (2):
2.032 T_2 - T_2 = 341.177
T_1 = 671.77 N, T_2 = 330.6 N
2.6 FORCE CALCULATION ON MAIN SHAFT

Fig 2.1 Vertical shear force diagram
Weight of pulley, W_pa = 5.5 kg = 54 N
Weight of main shaft with rotor, W_sh = 15 kg = 147.15 N
At static equilibrium, ΣF = 0:
R_vb + R_vd = 1203.62 N -----------------------------(1)
Taking moments about point B, ΣM_b = 0:
R_vd = 144.1 N
Substituting this value in equation (1):
R_vb = 1059.52 N

Fig 2.2 Vertical bending moment diagram
M_a = 0, M_b = -160.58 N-m, M_c = -159.45 N-m, M_d = 0
Selecting the maximum moment on the shaft:
M = 160.58 N-m
Selecting shaft material SAE 1030:
S_yt = 296 MPa; S_yt/2 = 148 MPa; τ_max = 0.3 × 148 = 44.4 MPa
S_ut = 527 MPa; S_ut/2 = 263.5 MPa; τ_max = 0.18 × 263.5 = 47.43 MPa
Selecting τ_max = 44.4 MPa.
For a rotating shaft with gradually applied load, K_b = 1.5, K_t = 1.
For the diameter of the shaft:
τ_max = 16 × 10³ × √[(K_b M)² + (K_t T_d)²] / (π D_sh³)
Hence, D_sh = 30.63 mm
Selecting a standard shaft diameter, D_sh = 32 mm
Hub diameter, D_h = 1.5 D_sh + 25 = 73 mm
Hub length, L_h = 1.5 × D_sh = 42 mm
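A quick numeric check of this shaft sizing, using only the values stated in this section, can be sketched as follows.

import math

M  = 160.58e3   # maximum bending moment, N-mm
Td = 69.259e3   # design torque, N-mm
Kb, Kt = 1.5, 1.0
tau = 44.4      # allowable shear stress, N/mm^2

# ASME-code form: tau = 16/(pi d^3) * sqrt((Kb*M)^2 + (Kt*Td)^2)
d = (16.0 * math.sqrt((Kb * M)**2 + (Kt * Td)**2) / (math.pi * tau)) ** (1 / 3)
print(d)  # ~30.6 mm -> select the standard 32 mm shaft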
3.0 Fabrication
3.1 Mechanical Components
1) Roller with shaft  2) Upper half casing with hopper  3) Lower half casing with frame  4) Polisher machine

Fig. 3.1 Top view of roller with pulley    Fig. 3.2 Upper half casing with hopper
Fig. 3.3 Lower half with velvet bed        Fig. 3.4 Assembled machine


4.0 EXPERIMENTATION
4.1 MATERIALS AND METHODS
Two samples were selected for testing. The first sample is the output product of a mini dal mill: dehusked and split tur dal, containing split unhusked grains, split husked grains, broken grains, husk and dust. It was processed through oil mixing, sun drying and dehusking.

Fig. Prepared dal samples
4.2 TESTING
4.2.1 TEST APPARATUS
A photo-conductive cell with a potentiometer is used to compare the shine of the surfaces; the reflection of light from the grain surface is measured indirectly.

Fig. no. 4.1 Schematic of test apparatus    Fig. no. 4.2 Test apparatus
Principle of working: when light strikes a semiconductor material, the cell resistance decreases.
4.2.3 TESTING RESULT
Three samples were tested for light reflectivity. Ten tests were done on each sample and the mean values are tabulated below.
Sr. no.   Dal sample              LDR 1 mean value (KΩ)   LDR 2 mean value (KΩ)
1         Unpolished              0.2                     0.92
2         Oil polished            0.12                    0.74
3         Polished without oil    0.12                    0.6
Table no. 4.3 Dal sample testing

Fig. 4.4 Dal sample vs LDR readings

A decrease in resistance indicates an increase in the intensity of the light striking the LDR (light dependent resistor, i.e. the photo-conductive cell). The table clearly indicates that grain samples of tur dal processed through the polisher machine have better shine.
5.0 CONCLUSION
The polished samples showed lower LDR resistance, i.e. higher surface reflectivity, than the unpolished sample. From the above study, the results show that there is an improvement in the texture of tur dal processed through the polisher machine.

REFERENCES:
[1] Mutalubi Aremu Akintunde, "Development of a Rice Polishing Machine", AU J.T. 11(2): 105-112 (Oct. 2007).
[2] Gbabo Agidi, Liberty J.T., Eyumah A.G., "Design, Construction and Performance Evaluation of a Combined Coffee Dehulling and Polishing Machine", International Journal of Emerging Technology and Advanced Engineering, Volume 3, Issue 11, November 2013.
[3] Oduma O., Femi P.O. and Igboke M.E., "Assessment of mechanical properties of pigeon pea (Cajanus cajan (L) Millsp) under compressive loading", International Journal of Agricultural Science and Bioresource Engineering Research, Vol. 2(2), pp. 35-46, October 2013.
[4] Shirmohammadi Maryam, Yarlagadda P.K.D.V., Gudimetla P., Kosse V., "Mechanical Behaviours of Pumpkin Peel under Compression Test", Advanced Materials Research, 337(2011), pp. 3-9.
[5] N.V. Shende, "Technology adoption and their impact on farmers: A Case study of PKV Mini Dal Mill in Vidarbha Region", Asian Resonance, Vol. II, Issue IV, October 2013.
[6] Singh Faujdar and Diwakar B., "Nutritive Value and Uses of Pigeonpea and Groundnut", ICRISAT, 1993.
[7] Mangaraj S., Kapoor T., "Development and Evaluation of a Pulse Polisher", Agricultural Engineering Today, 2007, Volume 31.
[8] Mangaraj S. and Singh K.P., "Milling Study of Multiple Pulses Using CIAE Dhal Mill for Optimal Responses", J Food Process Technol, Volume 2, Issue 2, 10.4172/2157-7110.1000110.
[9] Kurien P.P., "Advances in Milling Technology of Pigeonpea", Proceedings of the International Workshop on Pigeonpeas, Volume 1, 15-19 December 1980.
[10] Nwosu J.N., Ojukwu M., Ogueke C.C., Ahaotu I. and Owuamanam C.I., "The Antinutritional Properties and Ease of Dehulling on the Proximate Composition of Pigeon pea (Cajanus cajan) as Affected by Malting", International Journal of Life Sciences, Vol. 2, No. 2, 2013, pp. 60-67.
[11] Opoku A., Tabil L., Sundaram J., Crerar W.J. and Park S.J., "Conditioning and Dehulling of Pigeon Peas and Mung Beans", CSAE/SCGR 2003, Paper No. 03-347.
[12] Ghadge P.N., Shewalkar S.V., Wankhede D.B., "Effect of processing methods on qualities of instant whole legume: Pigeon pea (Cajanus cajan L.)", Agricultural Engineering International: the CIGR Ejournal, Manuscript FP 08 004, Vol. X, May 2008.
[13] Shiwalkar B.D., "Design Data for Machine Elements", 2010, Denett & Company.
[14] Rattan S.S., "Theory of Machines", edition 2012, S. Chand Publication.
[15] Bhandari V.B., "Design of Machine Elements", 3rd edition, 2010, Tata McGraw Hill Education Private Limited.
[16] Chakraverty A., Mujumdar A.S., Ramaswamy H.S., "Handbook of Post-harvest Technology", 2013, Marcel Dekker Inc.
[17] Kumar D.S., "Mechanical Measurement", 5th edition, 2013, Metropolitan Book Co. Pvt. Ltd.

Design of 16-bit Data Processor Using Finite State Machine in Verilog
Shashank Kaithwas¹, Pramod Kumar Jain²
¹Research Scholar (M.Tech), SGSITS
²Associate Professor, SGSITS
E-mail- shashankkaithwas09@gmail.com
Abstract - This paper presents the design concept of a 16-bit data processor. Design methodology has been changing from schematic-based to Hardware Descriptive Language (HDL) based design. The data processor is designed using a Finite State Machine (FSM). The state machine designed for the data processor can be started from any state and can jump to any state in between. The key architectural elements of the data processor, such as the Arithmetic Logic Unit (ALU), control unit and data path, are described. The functionality is validated through synthesis and simulation. Besides verifying the outputs, the timing diagrams and interfacing signals are also tracked to ensure that they adhere to the design specification. The Verilog Hardware Descriptive Language gives access to every internal signal, and designing the data processor in this language fulfils the needs of different high-performance applications.
Keywords— HDL-Hardware Descriptive Language, FSM-Finite State Machine, ALU-Arithmetic Logic Unit,
Control Unit, Data-path, Data Processor, Verilog Hardware Descriptive Language.
INTRODUCTION
Processors are the heart of all "smart" devices, whether they are electronic devices or otherwise. Their smartness comes as a direct result of the decisions and controls that processors make. There are generally two types of processor: general-purpose processors and dedicated processors. General-purpose processors such as the Pentium CPU can perform different tasks under the control of software instructions, and are used in all personal computers. Dedicated processors, also known as application-specific integrated circuits (ASICs), are designed to perform just one specific task. For example, inside a cell phone there is a dedicated processor that controls its entire operation; the embedded processor inside the cell phone does nothing but control the operation of the phone. Dedicated processors are therefore usually much smaller and not as complex as general-purpose processors.
The different parts and components fit together to form the processor. From transistors, the basic logic gates are built. Logic gates are combined to form either combinational circuits or sequential circuits; the difference between these two types of circuits lies only in the way the logic gates are connected together. Latches and flip-flops are the simplest forms of sequential circuits, and they provide the basic building blocks for more complex sequential circuits. Certain combinational and sequential circuits are used as standard building blocks for larger circuits, such as the processor. These standard combinational and sequential components are usually found in standard libraries and serve as larger building blocks for processors. Different combinational and sequential components are connected together to form either the data path or the control unit of a processor. Finally, combining the data path and the control unit produces the circuit for either a dedicated or a general-purpose processor.
Dedicated processors are used in every smart electronic device, such as musical greeting cards, electronic toys, TVs, cell phones, microwave ovens and anti-lock brake systems in cars. Although small dedicated processors are not as powerful as general-purpose processors, they are sold and used in many more places than the powerful general-purpose processors found in personal computers.
DESIGN OF MODULE
This section covers the design of the important processor modules: the ALU, the data path and the control circuit.


ARITHMETIC AND LOGICAL UNIT (ALU)


The arithmetic-logic unit (ALU) performs the basic arithmetic and logic operations, which are controlled by the opcode. The result of the execution of an instruction is written to the output. The ALU is designed for arithmetic operations such as addition, subtraction, multiplication, increment, decrement, etc. The inputs are 16 bits wide, of unsigned type. Figure 1 shows the ALU block diagram:

Fig 1. Block Diagram ALU
CONTROL CIRCUIT
The control unit is a sequential circuit whose outputs depend on both its current and past inputs. This history of past inputs is stored in the state memory and is said to represent the state of the circuit. Thus, the circuit changes from one state to the next when the content of the memory changes. Depending on the current state of the circuit and the input signals, the next-state logic determines what the next state ought to be by changing the content of the state memory. Hence, a sequential circuit executes by going through a sequence of states. Since the state memory is finite, the total number of different states the circuit can go to is also finite; this is not to be confused with the fact that the sequence length can be infinitely long. Because it has only a finite number of states, a sequential circuit is also referred to as a Finite State Machine (FSM). Figure 2 shows the block diagram of the control unit.

Fig 2. Block Diagram of Control Circuit
DATAPATH
Functional units for performing single, simple data operations, such as an adder for adding two numbers or a comparator for comparing two values, were described above. However, for adding a million numbers there is no need to connect a million minus one adders together; instead, a circuit with just one adder can be used a million times. A data path circuit allows just that, i.e. performing operations involving multiple steps. Figure 3 shows a simple data path using one adder to add as many numbers as desired. For this to be possible, a register is needed to store the temporary result after each addition. The temporary result from the register is fed back to the input of the adder so that the next number can be added to the current sum.


Fig 3. Block Diagram of Datapath
PROCESSING UNIT
The data path and the control unit together make up the processing unit. The control circuit provides the essential control signals to the data path unit for the required operations, and the data path is the part concerned with the flow of the data to be manipulated, transmitted or received. The functional diagram of the processing unit is shown in figure 4.

Fig 4. Functional Diagram of Processing Unit
STATE MACHINE DIAGRAM
Figures 5 and 6 show the state diagrams for processors having four states and eight states respectively. This paper presents a 16-state data processor, which can be designed easily by extending these two state diagrams. The two state machines start from State0 if RESET is set to logic 1; if RESET is forced to logic 0 then, depending on the value of START, the state machine changes to the next state, and it is also possible to switch to any state from the present state. If the value of START does not change then the machine remains in the present state. A particular operation is assigned to each state, as the sketch below illustrates.
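The following is a highly simplified behavioural sketch (in Python rather than Verilog) of the RESET/START behaviour just described; the jump-to-any-state inputs are not detailed in the paper, so the advance-on-START rule used here is an assumption, and a 4-state version is shown for brevity.

def next_state(state, reset, start, n_states=4):
    if reset:                        # RESET = 1 -> return to State0
        return 0
    if start:                        # START = 1 -> advance to the next state
        return (state + 1) % n_states
    return state                     # START unchanged -> hold the present state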


Fig 5. State Machine Diagram having 4 states

Fig 6. State Machine Diagram having 8 states
DATA PROCESSORS SPECIFICATIONS
Table I shows the various specifications according to the opcodes.

OPCODE   OPERATION   SPECIFICATION
0000     a + b       zout is assigned the value of a + b
0001     a - b       zout is assigned the value of a - b
0010     a + 1       incremented value of a assigned to zout
0011     a - 1       decremented value of a assigned to zout
0100     a OR b      zout is assigned the value of a or b
0101     a AND b     zout is assigned the value of a and b
0110     NOT a       zout is assigned the value of not a
0111     NOT b       zout is assigned the value of not b
1000     a NAND b    zout is assigned the value of a nand b
1001     a NOR b     zout is assigned the value of a nor b
1010     a XOR b     zout is assigned the value of a xor b
1011     a XNOR b    zout is assigned the value of a xnor b
1100     a << 1      left-shifted value of a assigned to zout
1101     a >> 1      right-shifted value of a assigned to zout
1110     b << 1      left-shifted value of b assigned to zout
1111     b >> 1      right-shifted value of b assigned to zout
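Since the expected testbench results are computed by hand (see the simulation section below), a software reference model of Table I is often convenient. The Python sketch below is such a hypothetical "golden" model, not the paper's Verilog; all results are truncated to 16 bits.

MASK = 0xFFFF   # 16-bit output width

def alu(opcode: int, a: int, b: int) -> int:
    # One entry per Table I opcode; Python's ~ is two's-complement NOT,
    # so masking to 16 bits yields the expected bitwise result.
    ops = {
        0b0000: a + b,      0b0001: a - b,
        0b0010: a + 1,      0b0011: a - 1,
        0b0100: a | b,      0b0101: a & b,
        0b0110: ~a,         0b0111: ~b,
        0b1000: ~(a & b),   0b1001: ~(a | b),
        0b1010: a ^ b,      0b1011: ~(a ^ b),
        0b1100: a << 1,     0b1101: a >> 1,
        0b1110: b << 1,     0b1111: b >> 1,
    }
    return ops[opcode] & MASK   # zout

print(alu(0b0000, 4, 2))  # 6, matching the a = 4, b = 2 simulation below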

RTL (REGISTER-TRANSFER LEVEL) GENERATION
The RTL (register-transfer level) view of the Verilog code is shown in figure 7:

Fig 7 RTL view of the Data Processor
SIMULATION RESULTS
The design is verified through simulation, which is done in a bottom-up fashion. Small modules are simulated in separate testbenches before they are integrated and tested as a whole. The results of the operations on the test vectors are computed manually and are referred to as the expected results.
Simulating with a = 4 and b = 2, zout gives the results shown in figure 8; a minimal testbench sketch that applies these inputs is given below.
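This testbench is a sketch under assumptions: it drives the hypothetical alu16 module shown earlier and sweeps all sixteen opcodes; the paper's own testbenches are not reproduced.

module tb;
    reg  [3:0]  opcode;
    reg  [15:0] a, b;
    wire [15:0] zout;
    integer i;
    alu16 dut (.opcode(opcode), .a(a), .b(b), .zout(zout));
    initial begin
        a = 16'd4; b = 16'd2;            // test vector used for figure 8
        for (i = 0; i < 16; i = i + 1) begin
            opcode = i[3:0]; #10;        // let the result settle, then log it
            $display("opcode=%b zout=%d", opcode, zout);
        end
        $finish;
    end
endmodule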
Fig 8. Simulation Results
ACKNOWLEDGMENT
We gratefully acknowledge the Almighty GOD who gave us the strength and health to successfully complete this venture. We wish to thank the lecturers of our college for their helpful discussions. We also thank the other members of the Verilog synthesis group for their support.
CONCLUSION
In this paper, we have proposed an efficient Verilog coding and verification method, along with several algorithms at different design levels. Our proposal has been implemented in Verilog using Xilinx ISE 9.2a and the ModelSim simulator: the RTL is generated in Xilinx ISE 9.2a and the functionality has been checked in ModelSim. The data processor design in Verilog has been successfully designed, implemented and tested. Currently, we are conducting further research into further reductions in hardware complexity in terms of synthesis. Finally, the code has been downloaded into a SPARTAN-3E FPGA chip in an LC84 package for hardware realization. Figure 9 shows the FPGA implementation of the design:
Figure 9. FPGA Implementation of the Design




Therapeutic Properties of Ficus Religiosa
Shailja Singh1, Shalini Jaiswal1
1Amity Group of Institutions, Greater Noida, U.P. 201308
E-mail: shailjadu@gmail.com
Abstract
Medicinal plants have played a vital role in maintaining and improving human health for thousands of years. The history of human civilization and the discovery of herbal medicines have run in parallel from ancient times till date. Among hundreds of medicinal plants, the Ficus tree has a significant role in promoting health and alleviating illness. Ficus religiosa, commonly known as the Peepal tree, is regarded as sacred by both Hindus and Buddhists. It exhibits an enormous range of pharmacological activities, such as antidiabetic, antimicrobial, analgesic and wound-healing activity. The present review describes the morphological, phytochemical and pharmacological aspects of F. religiosa.

Key words
Medicinal plants, Ficus religiosa, antimicrobial, morphological, phytochemical, pharmacological, Peepal.

I. INTRODUCTION
Plants have been used in treating human diseases for thousands of years.[1] Since prehistoric times, the peoples of Eurasia and the Americas have acquired a tremendous knowledge of medicinal plants.[2] All of the native plant species discussed in detail in this work were used by native people in traditional medicine. Medicinal plants have curative properties due to the presence of various complex chemical substances of different composition, which are found as secondary plant metabolites in one or more parts of these plants. Herbal medicine is based on the principle that plants contain natural substances that can promote health and alleviate illness. In recent times, the focus on plant research has increased all over the world, and a large body of evidence has been collected to show the immense potential of medicinal plants used in various traditional systems. Today, we are witnessing a great deal of public interest in the use of herbal remedies.

This review emphasizes the traditional uses and clinical potential of F. religiosa. F. religiosa Linn, commonly known as Peepal, belongs to the family Moraceae.[3-5] Six parts of the tree (i.e., seeds, bark, leaves, fruit, latex and roots) are valued for their medicinal qualities; the only part not used for therapeutic purposes is the wood, because it is highly porous. In India, since ancient times, the tree has had great mythological, religious and medical importance, and it is considered the oldest tree in Indian art and literature.[6-8]

It is known by several vernacular names, the most commonly used being Asvatthah (Sanskrit), Sacred fig (Bengali), Peepal (Hindi), Arayal (Malayalam), Ravi (Telugu) and Arasu (Tamil).[9] Moreover, the bark of F. religiosa is an important ingredient in many Ayurvedic formulations, such as Nalpamaradi tailam, Chandanasavam, Nyagrodhadi churna and Saribadyasavam.[10,11] In the medicinal field, F. religiosa is gaining great attention because it contains many compounds which are beneficial in the treatment of diseases like diabetes, skin diseases, respiratory disorders, central nervous system disorders and gastric problems.[12,13]
1. Classification
Domain: Eukaryota
Kingdom: Plantae
Subkingdom: Viridaeplantae
Phylum: Tracheophyta
Subphylum: Euphyllophytina
Infraphylum: Radiatopses
Class: Magnoliopsida
Subclass: Dilleniidae
Superorder: Urticanae
Order: Urticales
Family: Moraceae
Tribe: Ficeae
Genus: Ficus
Specific epithet: Religiosa Linnaeus
Botanical name : Ficus religiosa
2. Vernacular names
Sanskrit: Pippala
Assamese: Ahant
Bengali: Asvattha, Ashud, Ashvattha
English: Pipal tree
Gujrati: Piplo, Jari, Piparo, Pipalo
Hindi: Pipala, Pipal
Kannada: Arlo, Ranji, Basri, Ashvatthanara, Ashwatha, Aralimara, Aralegida, Ashvathamara, Basari, Ashvattha
Kashmiri: Bad
Malayalam: Arayal
Marathi: Pipal, Pimpal, Pippal
Oriya: Aswatha
Punjabi: Pipal, Pippal
Tamil: Ashwarthan, Arasamaram, Arasan, Arasu, Arara
Telugu: Ravichettu
Morphological characters
F. religiosa (L.) is a large perennial tree, glabrous when young, found throughout the plains of India and up to 170 m altitude in the Himalayas. The stem bark and leaves of F. religiosa are reported to contain phenols, tannins, steroids, lanosterol, stigmasterol and lupen-3-one. The active constituent of the root bark of F. religiosa was found to be β-sitosteryl-D-glucoside. The seeds contain phytosterolin, β-sitosterol and its glycoside, and albuminoids. The fruit of F. religiosa contains appreciable amounts of total phenolics and total flavonoids.[14]

3. Botanic description
F. religiosa is a large deciduous tree with few or no aerial roots, commonly found in India. It is native from India to Southeast Asia, grows at elevations up to 5000 ft, and has a trunk that reaches up to 1 metre. The bark is grey with brownish specks, smooth, exfoliating in irregular rounded flakes.
Leaves are alternate, spirally arranged and broadly ovate, glossy, coriaceous (leathery), dark green, 10-18 by 7.5-10 cm, with unusual tail-like tips, pink when young, stipulate, with a cordate base. Petioles are slender and 7.5-10 cm long. Galls occur on the leaves.
Flowers are axillary, sessile and unisexual.
Fruits are circular figs enclosed in inflorescences; raw fruits are green in summer and turn black on ripening through the rainy season.[15] The specific epithet 'religiosa' alludes to the religious significance attached to this tree: the prince Siddhartha is said to have sat and meditated under this tree and there found enlightenment, from which time he became the Buddha. The tree is therefore sacred to Buddhists and is planted beside temples.

4. Phytochemical analysis
Phytochemistry can be defined as the chemistry of those natural products, or plant parts, which can be used as drugs, with an emphasis on biochemistry. Preliminary phytochemical screening of F. religiosa bark showed the presence of tannins, saponins, flavonoids, steroids, terpenoids and cardiac glycosides.[16,17] The bark of F. religiosa also showed the presence of bergapten, bergaptol, lanosterol, β-sitosterol, stigmasterol, lupen-3-one, β-sitosterol-d-glucoside (phytosterolin) and vitamin K1.[18-21] Apart from these, tannin, wax, saponin, β-sitosterol, leucocyanidin-3-O-β-D-glucopyranoside, leucopelargonidin-3-O-β-D-glucopyranoside, leucopelargonidin-3-O-α-L-rhamnopyranoside, lupeol, ceryl behenate, lupeol acetate, α-amyrin acetate, leucoanthocyanidin and leucoanthocyanin are also found in the bark.[22]

Figure 1: Active components of F. religiosa (structures of lanosterol, bergapten, bergaptol, sitosterol, cadinene, stigmasterol and hentriacontane)
The fruit of F. religiosa contains asparagine, tyrosine, undecane, tridecane, tetradecane, (e)-β-ocimene, α-thujene, α-pinene, β-pinene, α-terpinene, limonene, dendrolasine, α-ylangene, α-copaene, β-bourbonene, β-caryophyllene, α-trans-bergamotene, aromadendrene, α-humulene, alloaromadendrene, germacrene, bicyclogermacrene, γ-cadinene and δ-cadinene.[23] Leaves contain campestrol, stigmasterol, isofucosterol, α-amyrin, lupeol, tannic acid, arginine, serine, aspartic acid, glycine, threonine, alanine, proline, tryptophan, tyrosine, methionine, valine, isoleucine, leucine, n-nonacosane, n-hentriacontane, hexacosanol and n-octacosanol.[20-22] Alanine, threonine and tyrosine have been reported in the seeds of F. religiosa.[24] The crude latex of F. religiosa shows the presence of a serine protease named religiosin. The structures of the active components are shown in figure 1. All six parts of the tree, i.e. seeds, bark, leaves, fruit, latex and roots, are highly useful for their medicinal properties, except the wood, because of its highly porous nature (Table 1).
Plant part    Traditional uses (as/in)
Bark          Diarrhoea, dysentery, anti-inflammatory, antibacterial, cooling, astringent, gonorrhoea, burns
Leaves        Hiccups, vomiting, cooling, gonorrhoea
Shoots        Purgative, wounds, skin disease
Leaf juice    Asthma, cough, diarrhoea, gastric problems
Dried fruit   Fever, tuberculosis, paralysis
Fruit         Asthma, digestive
Seeds         Refrigerant, laxative

Table 1: Medicinal uses of different parts of F. religiosa
5. Pharmacological activities present in F. religiosa

All parts of the plant exhibit a wide spectrum of activities, such as anticancer, antioxidant, antidiabetic, antimicrobial, anticonvulsant, anthelmintic, antiulcer, antiasthmatic and anti-amnesic activity, as shown in figure 2.
Antimicrobial activity: The antimicrobial activity of ethanolic extracts of F. religiosa leaves was studied using the agar well diffusion method. The test was performed against four bacteria, Bacillus subtilis (ATCC 6633), Staphylococcus aureus (ATCC 6538), Escherichia coli (ATCC 11229) and Pseudomonas aeruginosa (ATCC 9027), and against two fungi, Candida albicans (IMI 349010) and Aspergillus niger (IMI 076837). The results showed that 25 mg/ml of the extract was active against all bacterial strains, while the effect against the two fungi was comparatively much weaker.[25]

Iqbal et al. found that a methanolic extract of F. religiosa bark was 100% lethal to Haemonchus contortus worms in in vitro testing.[26] The acetone extracts of seven plant species, Tamarindus indica, F. indica, F. religiosa, Tabernaemontana divaricata, Murraya koenigii, Chenopodium album and Syzygium cuminii, were evaluated for their ovicidal activity. Murraya, Tabernaemontana and Chenopodium showed 70%, 75% and 66.6% ovicidal action at the 100% dose level, whereas at the same dose level T. indica, F. indica, F. religiosa and S. cuminii showed 48.3%, 41.6%, 13.3% and 53.3% ovicidal action, respectively.[27] According to Uma et al., different extracts (methanol, aqueous, chloroform) of the bark of F. religiosa have an inhibitory effect on the growth of three enterotoxigenic E. coli strains isolated from patients suffering from diarrhoea.[28]

6. Wound healing activity: This activity was explored in incision and excision wound models using F. religiosa leaf extracts, prepared as lotions (5 and 10%) and applied to Wistar albino rats, with povidone iodine 5% as the standard drug. A high rate of wound contraction, a decrease in the period of epithelialisation and high skin breaking strength were observed in animals treated with the 10% leaf extract ointment compared with the control group. It has been reported that tannins possess the ability to increase the collagen content, which is one of the factors promoting wound healing.[29,30]

Figure 2: Pharmacological activities of Ficus religiosa
7. Anti-amnesic activity: The anti-amnesic activity was investigated using a methanol extract of F. religiosa figs. Figs are known to have a high serotonergic content, and modulation of serotonergic neurotransmission plays a crucial role in the pathogenesis of amnesia.[31] Scopolamine (1 mg/kg, i.p.) was administered before training to induce anterograde amnesia, and before retrieval to induce retrograde amnesia, in both models. Transfer latency (TL) in the elevated plus maze (EPM), step-down latency (SDL), the number of trials, and the number of mistakes in the MPA were determined in vehicle control, fig-extract-treated (10, 50 and 100 mg/kg, i.p.) and standard (piracetam 200 mg/kg, i.p.) groups.[32]


8. Analgesic activity: Sreelekshmi et al. studied the analgesic activity of the stem bark of F. religiosa using the acetic acid-induced writhing (extension of hind paw) model in mice, with aspirin as the standard drug.[33] The extract reduced the number of writhings by 71.56% and 65.93% at doses of 250 mg/kg and 500 mg/kg, respectively. Thus, it can be concluded that the extract exerts its analgesic effect, probably by inhibiting the synthesis or action of prostaglandins.

9. Antidiabetic activity: An aqueous extract of F. religiosa at doses of 50 and 100 mg/kg exhibited a pronounced reduction in blood glucose levels, comparable to the effect of the hypoglycaemic drug glibenclamide. It has also been shown that F. religiosa significantly increases serum insulin, body weight and liver glycogen content. The bark of F. religiosa shows similar effects and exhibits the maximum fall in blood sugar level.[34]

10. Anticonvulsant activity
Figs of F. religiosa have been reported to contain the highest amount of serotonin, which is responsible for the anticonvulsant effect.[35] Further, Singh and Goel investigated the anticonvulsant effect of a methanolic extract of F. religiosa figs on maximal electroshock-induced convulsions (MES), picrotoxin-induced convulsions and pentylenetetrazole-induced convulsions (PTZ).[7] In Ayurveda it is claimed that the leaves of F. religiosa also possess
anticonvulsant activity.[36] The anticonvulsant effect of the extract obtained from the leaves of F. religiosa was evaluated against PTZ-induced (60 mg/kg, i.p.) convulsions in albino rats. The study revealed 80 to 100% protection against PTZ-induced convulsions when the extract was given 30-60 minutes prior to the induced convulsion. Patil et al. demonstrated that the anticonvulsant effect of the aqueous aerial root extract of F. religiosa is effective in the management of chemically-induced seizures in rats.[37] The extract was evaluated against strychnine-induced and pentylenetetrazole-induced convulsion animal models.
11. Antiulcer activity
F. religiosa is one of the plants traditionally used in Indian and Malay folk medicine to treat gastric ulcer.[38] The ethanol extract of the stem bark showed potential antiulcer activity, which was evaluated in vivo against indomethacin- and cold-restraint-stress-induced gastric ulcers and in the pylorus ligation assay. The extract (100, 200 and 400 mg/kg) significantly reduced the ulcer index in all assays used.[39] Administration of F. religiosa significantly reduced the ulcer index.[40] The hydroalcoholic extract of the leaves also showed antiulcer activity, evaluated against pylorus ligation-induced, ethanol-induced and aspirin-induced ulcers; determination of the antiulcer effect was based on the ulcer index and oxidative stress.

12. Anti-inflammatory activity
F. religiosa has been found to possess anti-inflammatory and analgesic properties. The mechanism underlying the effect is the inhibition of prostaglandin (PG) synthesis. The leaf extract of F. religiosa showed potential anti-inflammatory activity against carrageenan-induced paw oedema; the inhibitory activity was attributed to inhibition of the release of histamine, serotonin (5-HT), kinins and PGs.[41]

The methanol extract of the stem bark of F. religiosa has an inhibitory effect on carrageenan-induced inflammation in rats due to inhibition of the enzyme cyclooxygenase (COX), leading to inhibition of PG synthesis. Further, various studies revealed that the tannin present in the bark possesses an anti-inflammatory effect.[33] Moreover, it has been shown that the methanolic extract of the stem bark of F. religiosa suppresses inflammation by reducing both 5-HT and bradykinin (BK). Mangiferin isolated from the drug has anti-inflammatory activity against carrageenan-induced paw oedema.[42] Figure 3 indicates the activity of various extracts of F. religiosa on inflammation. Viswanathan et al. investigated the anti-inflammatory and mast cell protective effect of an aqueous extract of the bark of F. religiosa.[43] The anti-inflammatory effect was evaluated against acute (carrageenan-induced hind paw oedema) and chronic (cotton pellet implantation) models of inflammation.
13. Conclusion
Presently, numerous research groups are showing interest in the medicinal properties of F. religiosa. Although scientific studies have been carried out on a large number of Indian botanicals, a considerably smaller number of
marketable drugs or phytochemical entities have entered the evidence-based therapeutics. Efforts are therefore needed to
establish and validate evidence regarding safety and practices of Ayurvedic medicines.
II. Acknowledgement
SS and SJ are thankful to the Amity Group of Institutions, Greater Noida campus, for help and support.

REFERENCES:
1. http://www.agr.gc.ca/eng/science-and-innovation/science-publications-and-resources/resources/canadian-medicinal-crops/general-references/?id=1300823047797
2. M.Shankar,T. Lakshmi Teja, B.Ramesh, D. Roop kumar, D.N.V. Ramanarao, M.Niranjan Babu, ―Phytochemical
investigation and antibacterial activity of Hydroalcoholic extract of terminalia bellirica leaf‖, Asian Journal of
Phytomedicine and Clinical Research, 2(1): 33-39, 2014.
3. E. J. H. Corner, "Check List of Ficus in Asia and Australasia with keys to identification", Gard. Bull. Singapore, 21: 1-186, 1965.
4. C.C. Berg,―Classification and Distribution of Ficus, Experientia‖, 45(7):605-611, 1989.
5. C.C. Berg,EJH Corner, ―Moraceae-Ficus,Flora Malesiana Series I (Seed Plants)‖, 17: 1-730, 2005.
6. A.Ghani, ―Medicinal plants of Bangladesh with chemical constituents and uses‖, Asiatic Society of Bangladesh,
Dhaka, 236, 1998.
7. Damanpreet Singh, Rajesh Kumar Goel, "Anticonvulsant effect of Ficus religiosa: role of serotonergic pathways", J. Ethnopharmacol., 123: 330-334, 2009.
8. P.V.Prasad, P.K.Subhaktha, A.Narayana, M.M. Rao,―Medico-historical study of ―Asvattha‖ (sacred fig
tree)‖,Bull. Indian Inst. Hist. Med.Hyderabad, 36: 1-20, 2006.
9. P.K.Warrier, ―Indian medicinal plants-A compendium of 500 species‖, Orient Longman Ltd., Chennai, Vol. III,
38-39, 1996.
10. V.V.Sivarajan, I.Balachandran, ―Ayurvedic drugs and their sources‖, Oxford & IBH Publishing Co. Pvt. Ltd.,
New Delhi, 374-376, 1994.
11. KRG Simha, V.Laxminarayana, ―Standardization of Ayurvedic polyherbal formulation‖, Indian J. Trad. Know.,6:
648-652, 2007.
12. N.Sirisha, M.Sreenivasulu, K.Sangeeta, C.M.Chetty,―Antioxidant Properties of Ficus Species-A review‖,
International Journal of PharmTech Research, 3:2174-2182, 2010.
13. B.Vinutha, D.Prashanth, K.Salma, S.L.Sreeja, D.Pratiti, R.Padmaja, S.Radhika, A.Amit, K.Venkateshwarlu,
M.Deepak, ―Screening activity of selected Indian medicinal plant for acetylcholinesterase inhibitory activity‖,
Journal of Ethnopharmacology, 109: 359-363, 2007.
14. Ayurvedic pharmacopoeia of India, Ministry of health and family welfare, department of Ayush,New Delhi, 17-
20, (2001).
15. C.Orwa, A.Mutua, R.Kindt,R.Jamnadass, S.Anthony, Agroforestry Database 4.0, Ficus religiosa, 1-5, 2009.
16. K. Babu, S. G. Shankar, S. Rai, "Comparative pharmacognostic studies on the barks of four Ficus species", Turk. J. Bot., 34: 215-224, 2010.
17. S.A.Jiwala, M.S.Bagul, M.Parabia, M.Rajani,―Evaluation of free radical scavenging activity of an ayurvedic
formulation‖, Indian J. Pharm. Sci., 70, 31-35, 2008.
18. K.D.Swami, N.P.S.Bisht, ―Constituents of Ficus religiosa and Ficus infectoria and their biological activity‖, J.
Indian Chem. Soc.,73: 631, 1996.
19. K.D.Swami, G.S.Malik, N.P.S.Bisht,―Chemical investigation of stem bark of Ficus religiosa and Prosopis
spicigera", J. Indian Chem. Soc., 66: 288-289, 1989.
20. B. Joseph, S. R. Justin, "Phytopharmacological and Phytochemical Properties of three Ficus Species: an overview", International Journal of Pharma and Bio Sciences, 1(4), 2010.
21. B.C.G.Margareth, J.S.Miranda, ―Biological Activity of Lupeol, International journal of biomedical and
pharmaceutical sciences‖, 46-66, 2009.
22. A.Husain, O.P.Virmani, S.P.Popli, L.N.Misra, M.M.Gupta, G.N.Srivastava,Z.Abraham, A.K.Singh, Dictionary of
Indian Medicinal Plants, CIMAP, Lucknow, India, , 546 (1992).
23. L.Grison, M.Hossaert, J.M.Greeff,J.M.Bessiere, ―Fig volatile compounds: basis for the specific Ficus-wasps
interactions‖, Phytochemistry, 61: 61-71, 2002.
24. M.Ali, J.S.Qadry, ―Amino acid composition of fruits and seeds of medicinal plants‖, J. Indian Chem. Soc., 64:
230-231, 1987.
25. G. P. Choudhary, "Evaluation of ethanolic extract of Ficus religiosa bark on incision and excision wounds in rats", Planta Indica, 2(3): 17-19, 2006.
26. Z.Iqbal, Q.K.Nadeem, M.N.Khan, M.S.Akhtar, F.N.Waraich, Int. J. Agr. Biol., 3: 454-457, 2001.
27. S.C.Dwivedi, Venugopalan, ―Evaluation of leaf extracts for their ovicidal action against Callosobruchus chinensis
(L.)‖,Asian J. Exp. Sci., 16: 29-34, 2001.
28. B.Uma, K.Prabhakar, S.Rajendran,―In vitro antimicrobial activity and phytochemical analysis of Ficus religiosa
L. and Ficus bengalensis L. against diarrhoeal enterotoxigenic E. Coli, Ethnobotanical Leaflets‖, 13:472-474,
2009.
29. R.M.Charde, H.J.Dhongade, M.S.Charde, A.V.Kasture, ―Evaluation of antioxidant, wound healing and anti-
inflammatory activity of ethanolic extract of leaves of F. religiosa”,Int. J. Pharm. Sci. Res., 1: 72- 82, 2010.
30. K. Roy, H. Shivakumar, S. Sarkar, "Wound Healing Potential of Leaf Extracts of F. religiosa on Wistar albino strain rats", Int. J. Pharm. Tech. Res., 1: 506-508, 2009.
31. D. C. Williams, "Proteolytic activity in the genus Ficus", Plant Physiology, 43: 1083-1088, 1968.
32. H. Kaur, D. Singh, B. Singh, R. K. Goel, "Anti-amnesic effect of Ficus religiosa in scopolamine-induced anterograde and retrograde amnesia", Pharmaceutical Biology, 48: 234-240, 2010.
33. R. Sreelekshmi, P. G. Latha, M. M. Arafat, S. Shyamal, V. J. Shine, G. I. Anuja, S. R. Suja, S. Rajasekharan, "Anti-inflammatory, analgesic and anti-lipid peroxidation studies on stem bark of Ficus religiosa Linn.", Natural Product Radiance, 6(5): 377-381, 2007.
34. R. Pandit, A. Phadke, A. Jagtap, "Antidiabetic effect of Ficus religiosa extract in streptozotocin-induced diabetic rats", Journal of Ethnopharmacology, 128: 462-466, 2010.
35. J.N.Bliebtrau,―The Parable of the Beast‖, Macmillan Company, New York, 74, 1968.
36. N.S.Vyawahare, A. R.Khandelwal, V.R.Batra,A.P.Nikam,―Herbal anticonvulsants‖, Journal of Herbal Medicine
and Toxicology, 1(1): 9-14, 2007.
37. M. S. Patil, C. R. Patil, S. W. Patil, R. B. Jadhav, "Anticonvulsant activity of aqueous root extract of Ficus religiosa", J. Ethnopharmacol., 133: 92-96, 2011.
38. B.Ravishankar, V.Shukla, J. Indian Systems of Medicine: a brief profile, African Journal of Traditional,
Complementary and Alternative Medicines, 4: 319-337, 2007.
39. M. S. A. Khan, S. A. Hussain, A. M. M. Jais, Z. A. Zakaria, M. Khan, "Anti-ulcer activity of Ficus religiosa stem bark ethanolic extract in rats", J. Med. Plants Res., 5(3): 354-359, 2011.
40. S. Saha, G. Goswami, "Study of anti-ulcer activity of Ficus religiosa L. on experimentally induced gastric ulcers in rats", Asian Pacific Journal of Tropical Medicine, 791-793, 2010.
41. R.M.Charde, H.J.Dhongade, M.S.Charde,A.V.Kasture,―Evaluation of antioxidant, wound healing and anti-
inflammatory activity of ethanolic extract of leaves of Ficus religiosa”, International Journal of Pharma Sciences
and Research, 1: 73-82, 2010.
42. N.Verma, S.Chaudhary, V.K.Garg, S.Tyagi,―Antiinflammatory and analgesic activity of methanolic extract of
stem bark of Ficus religiosa”, International Journal of Pharma Professional's Research, 1: 145-147, 2010.
43. S. Viswanathan, P. Thirugnanasambantham, M. K. Reddy, S. Narasimhan, G. A. Subramaniam, "Anti-inflammatory and mast cell protective effect of Ficus religiosa", Ancient Sci. Life, 10: 122-125, 1990.





Remote Power Generating Systems Using Low Frequency
Transmission
Mohammad Ali Adelian,Narjes Nakhostin Maher, Farzaneh Soorani
Ma_adelian@yahoo.com (00917507638844), Maher_narges@yahoo.com, ferisoorani@gmail.com

Abstract— The goal of this research is to evaluate alternative transmission systems from remote wind farms to the main grid using
low-frequency AC technology. Low frequency means a frequency lower than nominal frequency (60/50Hz). The low-frequency AC
network can be connected to the power grid at major substations via cyclo-converters that provide a low-cost interconnection and
synchronization with the main grid. Cyclo-converter technology is utilized to minimize costs which result in systems of 20/16.66 Hz
(for 60/50Hz systems respectively). Low frequency transmission has the potential to provide an attractive solution in terms of
economics and technical merits. The optimal voltage level selection for transmission within the wind farm and up to the
interconnection with the power grid is investigated. The proposed system is expected to have costs substantially lower than HVDC
and conventional HVAC systems. The cost savings will come from the fact that cyclo-converters are used which are much lower in
cost than HVDC. Other savings can come from optimizing the topology of the wind farms. Another advantage of the proposed
topologies is that existing transformers designed for 60 Hz can be used for the proposed topologies (for example a 345kV/69 kV,
60Hz transformer can be used for a 115 kV/23kV, 20 Hz system). The results from this research indicate that the use of LFAC
technology for transmission reduces the transmission power losses and the cost of the transmission system.

Keywords— Low frequency, cyclo-converter, wind farm connections, wind farm topology, wind system configuration, series and parallel wind farms, voltage level selection.
INTRODUCTION
Renewable sources of energy are widely available and proper utilization of these resources leads to decreased dependence on the fossil
fuels. Wind is one such renewable source available in nature and could supply at least a part of the electric power. In many remote
locations the potential for wind energy is high. Making use of the available wind resources greatly reduces the dependence on the
conventional fuels and lowers emission rates. A few problems associated with wind make wind energy more expensive than other forms of electric power generation. The two main issues are: (a) large wind farms are located in remote locations, which makes the transmission of wind power costly, and (b) the intermittent supply of power due to the unpredictability of the wind results in lower capacity credits for the operation of the integrated power system. These issues are
addressed by designing alternative topologies and transmission systems operating at low frequency for the purpose of decreasing the
cost of transmission and making the wind farm a more reliable power source. The use of DC transmission within the wind farm
enables the output of wind generators to be rectified via a standard transformer/rectifier arrangement to DC of appropriate kV level.

Research Objectives

 Literature study of previous research on low frequency AC transmission and wind farm topologies.
 Design of alternate topologies.
 Calculation of optimal transmission voltage levels for different topologies.
 Modeling the system using WinIGS-F software.

Technologies for Wind Farm Power Transmission
The possible solutions for transmitting power from wind farms are HVAC, Line commutated HVDC and voltage source based HVDC
(VSC-HVDC). Low frequency AC transmission (LFAC) is particularly beneficial in terms of cost savings and reduction of line losses
[4] in cases where the distance from the power generating stations to the main power grid is large. The use of fractional frequency
transmission system (FFTS) for offshore wind power is discussed in [6]. The author proposes LFAC as an alternative to HVAC and HVDC technologies for short and intermediate transmission distances. HVAC is more economical for short transmission distances.
For longer distances, HVAC has disadvantages such as increases in cable cost, terminal cost and charging current. HVDC transmission systems and wind farm topologies are discussed in [12]. HVDC, being a mature technology, is used for longer distances. Compared to HVDC, the LFAC system needs fewer electronic converter terminals, which reduces the investment cost. HVDC technology is used only for point-to-point transmission [11], whereas LFAC can be used for networks similar to AC transmission. Further, VSC-HVDC replaces the thyristors with IGBTs and is considered the most feasible solution for long-distance transmission; however, the addition of converter stations on both sides of the transmission line increases the investment cost of the VSC-HVDC system [7] compared to LFAC. Hence, owing to the limitations of HVAC and HVDC, the proposed LFAC is used in the design of the transmission systems, and its use can be extended to long transmission distances. Cyclo-converter technology is used to convert AC at the nominal frequency to AC at one third of that frequency, i.e. 16.67 Hz/20 Hz for a 50 Hz/60 Hz transmission system. Several advantages of LFAC have been identified: the transmission infrastructure used for a conventional AC system can be used for LFAC without any modifications, and the LFAC system increases the transmission capacity.

Wind system configuration 1: AC wind farm, nominal frequency, network connection: The two different types of AC wind farms referred to in this paper are radial and network connections. Radial wind farms are suitable for small wind farms with a short transmission distance: in a small AC wind farm, the local wind farm grid is used both for connecting all wind turbines in a radial fashion and for transmitting the generated power to the wind farm grid interface. Network-connected wind farms are usually large AC wind farms where the local wind farm grid has a lower voltage level than the transmission system. Wind system configuration 1, shown in figure 3.2.1, has a network connection of wind turbines and an AC power collection system.
Wind System Configuration 2: AC Wind Farm, AC/DC Transmission, And Network Connection: The wind system
configuration 2 shown in figure 3.2.2 is similar to the wind system configuration 1 except for the transmission part from the collector
substation to the main power grid. AC transmission is replaced by DC transmission in this configuration. Nominal frequency
transmission is adopted within the wind farm. This wind farm is referred to as an AC/DC wind farm. This type of system does not exist today, but it is frequently proposed when the distance to the main grid is long.


Figure 3.2.1: Wind system configuration1 Figure 3.2.2: Wind system configuration 2

Wind system configuration 3: Series DC Wind farm, Nominal frequency, Network connection: The wind system
configuration 3 has a DC power collection system. Wind turbines are connected in series and each set of series connected array is
connected to the collection point. Using DC/AC converters, AC of suitable voltage level and nominal frequency is generated. Voltage
is stepped up and the power is transmitted to the interconnection point at the power grid by a high voltage transmission line.
Wind System Configuration 4: Parallel DC Wind Farm, Nominal Frequency, Network Connection: Wind system configuration 4 differs from wind system configuration 3 in the local wind farm design. Here a number of wind turbine systems are connected in parallel, and each set of parallel-connected wind turbines is connected to a collection point. Using DC/AC converters, AC of a suitable voltage level and nominal frequency is generated. At the collection point the voltage is stepped up by means of a transformer, and the power is transmitted to the interconnection point at the power grid by a high-voltage transmission line. Two small wind farms are interconnected via a transmission line to ensure a reliable supply of power to the main grid: in the event of a fault or a maintenance shutdown in one of the wind farms, the power generated by the other wind farm can be transferred.



Wind system configuration 3: Series DC Wind farm Wind system configuration 4: Parallel DC Wind farm

Wind System Configuration 5: Series DC Wind Farm, Low Frequency Radial AC Transmission: The wind system
configuration shown in figure 3.2.5 has a DC wind farm. Here a number of wind turbine systems are connected in series and each
series string is connected to a collection point. An inverter is used to convert DC to AC of low-frequency preferably one third the
nominal power frequency at the collection point. The voltage is raised to higher kV levels by means of a transformer (standard
transformers are used with appropriately reduced ratings for the low frequency). The power is transmitted to the main power grid via
lines operating at low frequency. Using cyclo-converters the low frequency is converted to power frequency before connecting to the
main power grid.
Wind System Configuration 6: Parallel DC Wind Farm, Low Frequency, Radial Transmission: Wind system
configuration 6 is similar to wind system configuration 5; the difference is that the wind turbines are connected in parallel with each other and to the collection point. Parallel connection of the wind turbines results in the same voltage across the terminals of all the wind turbine systems. The generated power is converted to low-frequency AC using an inverter and transmitted over long distances to the
power grid. Cyclo-converter technology is used to convert the low frequency to nominal frequency before connecting the system to
the main power grid.

Wind system configuration 5: Series DC wind farm Wind system configuration 6: Parallel DC Wind Farm

Wind system configuration 7: Series DC wind Farm Wind system configuration 8: Parallel DC wind Farm
Wind System Configuration 7: Series DC Wind Farm, Low Frequency AC Transmission Network: Here a number of
wind turbine systems are connected in series and each set of series connected array is connected to a collection point. At the collection
point DC is converted to low frequency AC by means of inverters. The transmission of power up to the main power grid is by means
of a network of transmission lines operated at low frequency. The low frequency AC system is connected to the power grid by means
of cyclo-converters.
Wind System Configuration 8: Parallel DC Wind Farm, Low Frequency AC Transmission Network: Wind system
configuration 8 has a number of wind turbine systems connected in parallel, and each set of parallel-connected wind turbine systems is connected to a collection point. From the collection point to the power grid, the system is identical to wind system configuration 7.
VOLTAGE LEVEL SELECTION: This section provides the analysis and results that determine the optimal transmission voltage used in the alternative wind transmission systems up to the main DC bus. The optimal kV level for transmission within the wind farm is selected on the basis of the minimal total cost, consisting of the operating costs (mainly transmission losses) and the annualized investment cost. The cost of the auxiliary equipment is not considered.
Voltage calculation - Wind system configuration 5: Series DC wind farm, low frequency radial AC transmission:
Wind system configuration 5 has a series DC wind farm, as shown in figure 4.1, where mi wind turbines are connected in series to obtain a suitable transmission voltage. The wind turbine systems are assumed to be identical, thus resulting in the same voltage across, and current through, each of them. A wind farm rated 30 MW consisting of 10 wind turbines, each rated 3 MW, is considered. The transmission voltage for calculation purposes is selected as 35 kV; thus, the nominal high-side transformer voltage for each wind turbine is 3.5 kV. The optimal transmission voltage is obtained by plotting the calculated losses and the annual investment cost of the cable and converters for different values of the transmission voltage. The resistance of the chosen cable is approximately 0.0153 ohm per 1000 ft.
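Under these assumptions, the series string directly fixes the DC-side quantities; as a quick back-of-the-envelope check (not taken verbatim from the paper):

\[ V_{dc} = m_i \times V_t = 10 \times 3.5\ \mathrm{kV} = 35\ \mathrm{kV}, \qquad I = \frac{P}{V_{dc}} = \frac{30\ \mathrm{MW}}{35\ \mathrm{kV}} \approx 857\ \mathrm{A}. \]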

Figure 4.1: Wind farm configuration 1: Series DC wind farm, radial connection

Calculation of transmission loss (up to the main DC bus) ($/yr): The following equations are used to determine the transmission loss within the wind farm.
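The equations themselves did not survive text extraction; the following is a plausible reconstruction (an assumption, not the paper's exact formulas), treating the loss as the I²R loss of the DC cable valued at an assumed energy price c_E in $/kWh:

\[ I = \frac{P}{V_{dc}}, \qquad P_{loss} = I^{2} R, \qquad \mathrm{Loss}\ (\$/\mathrm{yr}) = P_{loss}\,[\mathrm{kW}] \times 8760\ \mathrm{h/yr} \times c_E. \]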
This formula assumes that the wind farm operates continuously at maximum power which is unrealistic. The capacity factor of a wind
turbine is approximately 30% [1]. Hence the resultant Loss in $/yr is multiplied by 0.3. Therefore, Loss = $ 30,110 /yr.
Calculation of the cost of cable and converter equipment ($/yr): The acquisition cost of the cable is $18.5/ft. To calculate the cost of the cable required for the entire wind farm, the total cable length is calculated; multiplying the acquisition cost per foot by the total length gives the acquisition cost of the cable for the entire wind farm. Therefore, the acquisition cost of the cable is CCost = $188,700.
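These two figures imply (an inference, not stated explicitly in the paper) a total cable length of

\[ \ell = \frac{\$188{,}700}{\$18.5/\mathrm{ft}} \approx 10{,}200\ \mathrm{ft}. \]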
The cost of the converters is calculated similarly; this calculation gives the acquisition cost of the converters as $238,907. Assuming an interest rate of 6% and a lifetime of 20 years for the cable and the converters, the annual amortization is calculated. The total acquisition cost of the cable and converters is $427,607; thus, the annual investment for cable and converters is $37,244/yr. To determine the optimal operating voltage, Vdc vs. loss ($/yr) and Vdc vs. annual investment for cable and converters ($/yr) are plotted. The optimal voltage level is determined by the lowest point of the curve obtained by adding the loss ($/yr) and the annual investment cost ($/yr), as shown in figure 4.2.
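The annual figure is consistent with the standard capital-recovery (annuity) formula; a quick check under the stated assumptions (6% interest, 20 years):

\[ A = C\,\frac{i(1+i)^{n}}{(1+i)^{n}-1} = 427{,}607 \times \frac{0.06\,(1.06)^{20}}{(1.06)^{20}-1} \approx \$37{,}281/\mathrm{yr}, \]

which is close to the paper's $37,244/yr figure, the small difference presumably being rounding.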
Figure 4.2: Plot of voltage at the main DC bus vs. total cost for mi = 10, Pt = 3 MW

From the plot in figure 4.2 it can be seen that for the 30 MW wind farm with 10 wind turbines the optimal voltage is around 35 kV. As the voltage level increases further, the transmission power loss decreases but the cost of the cable and converters increases. The plots of annual investment cost vs. Vdc and loss vs. Vdc intersect at 32 kV; beyond that point the annual investment cost keeps increasing. The optimal voltage is obtained by determining the lowest point of the curve obtained by adding the annual investment cost and the loss in $/yr; the x coordinate of that point is the optimal transmission voltage, which is 35 kV in this case. For different wind turbine ratings, cable sizes and wind farm sizes, the optimal voltage level is calculated in a similar fashion.
STEADY STATE ANALYSIS
Wind Farm Modeling
The performance of a multiphase system under steady state conditions is analyzed using the WinIGS-F program. Wind system configurations 1, 4 and 8 are modeled to analyze the system performance.
Wind system configuration 1 - model 1: In the system shown in figure 5.1, the wind farm is connected to a transmission line 54 miles long. The wind farm consists of 3 radial feeders with 4 wind turbines on each feeder. All wind turbines are identical and rated 2.7 MW each. A three-phase two-winding transformer rated 3 MVA is connected to each wind turbine to raise the voltage to 25 kV. The power generated at each wind turbine is collected at the collector substation, where a transformer rated 36 MVA with a primary voltage of 25 kV and a secondary voltage of 115 kV is installed. At the end of the transmission line, a three-phase constant electric load and a slack generator are connected.

Figure 5.1 Wind system configuration 1 (60 Hz transmission)

The total power loss during transmission is obtained by running the model; it is 1.1925 MW for this case.

Wind system configuration 4 - model 2: Wind system configuration 4 is modeled as shown in figure 5.3. It has two small wind farms located some distance from each other. This model corresponds to a scenario where there are two wind farms in different geographical areas, and power is collected at the collector substation and transferred over long distances to the main grid. Under any disturbance to the generation of power in one of the wind farms, the other wind farm supplies the power.

Figure 5.3 Wind system configuration 4 (60 Hz transmission)
Each wind turbine is rated 2.7 MW with 25 kV operating voltage within the wind farm. The operating voltage of the long distance
transmission line from the collector substation to the main grid considered here is 69 kV.
Wind system configuration 8 _ model 3: The wind system configuration 8 is modeled as shown in figure 5.4. A 20 Hz
transmission line, 54 miles long and operating at 69 kV is modeled in this case.

Figure 5.4 Wind system configuration 8 (20Hz transmission)
The transmission line parameters used for 60 Hz transmission can be used for 20 Hz transmission.

Table 5.3 Transmission power loss for wind system configuration 8 (20 Hz transmission)
ACKNOWLEDGMENT
I want to thank my family, especially my mother, for supporting me during my M.Tech studies, and all my friends who helped me during this work. I also thank my college, Bharati Vidyapeeth Deemed University College of Engineering, for supporting me during my M.Tech in Electrical Engineering.

CONCLUSIONS
Geographical locations that are suitable for wind farm development are often remote, far from the main transmission grid and major load centers. In these cases, the transmission of wind power to the main grid is a major expenditure. The potential benefit of the LFAC technology presented in this study is the reduction in the cost of the transmission system; this makes the economics of wind energy favorable and increases the penetration of wind power into the system. LFAC technology is used for transmission from the collector substation to the main power grid. This paper presents alternate topologies suitable for various geographical locations and configurations of the wind farm. The optimal operating voltage of the transmission lines within the wind farm is calculated for all the cases, considering the cost of the cable, the converter equipment and the power loss due to transmission. The preliminary study results show that the higher the operating voltage, the lower the transmission losses, and that the transmission losses on a line increase with transmission distance. The results obtained by modeling the wind system configurations point towards higher transmission losses in 60 Hz transmission compared to 20 Hz transmission.

REFERENCES:
[1] X. Wang, H. Dai, and R. J.Thomas, ―Reliability modeling of large wind farms and associated electric utility interface systems‖
IEEE Transactions on Power Apparatus and Systems, Vol. PAS-103, no. 3, March, 1984, pp. 569-575.
[2] R. J. Thomas, A. G. Phadke, C. Pottle, "Operational Characteristics of a Large Wind-Farm Utility System with a Controllable AC/DC/AC Interface", IEEE Transactions on Power Systems, Vol. 3, No. 1, February 1988.
[3] An-Jen Shi, Thorp J., Thomas R., ―An AC/DC/AC Interface Control Strategy to Improve Wind Energy Economics‖, IEEE
Transactions on Power Apparatus and Systems, Vol. PAS-104, No. 12. December 1985.
[4] T. Funaki, ―Feasibility of the low frequency AC transmission,‖ in Proc. IEEE PES Winter Meeting, Vol. 4, pp. 2693–2698, 2000.
[5] W. Xifan, C. Chengjun, and Z. Zhichao, ―Experiment on fractional frequency transmission system,‖ IEEE Trans. Power Syst., Vol.
21, No. 1, pp. 372–377, Feb. 2006.
[6] N. Qin, S. You, Z. Xu, and V. Akhmatov, ―Offshore wind farm connection with low frequency AC transmission technology,‖ in
Proc. IEEE PES General Meeting, Calgary, Alberta, Canada, 2009.
[7] S. Lundberg, "Evaluation of wind farm layouts," in Nordic Workshop on Power and Industrial Electronics (NORPIE 2004), Trondheim, Norway, 14-16 June, 2004.
[8] S. Lundberg, "Wind farm configuration and energy efficiency studies series DC versus AC layouts," Thesis, Chalmers University
of Technology 2006.
[9] N. Kirby, L. Xu, M. Luckett, and W. Siepmann, ―HVDC transmission for large offshore wind farms,‖ Power Engineering Journal,
vol. 16, no. 3, pp. 135 –141, June 2003.
[10] C. Skaug and C. Stranne, "HVDC wind park configuration study," Diploma thesis, Chalmers University of Technology, Department of Electric Power Engineering, Göteborg, Sweden, October 1999.
[11] Lazaros P. Lazaridis, ―Economic comparison of HVAC and HVDC solutions for large offshore wind farms under special
consideration of reliability,‖ Thesis, KTH.
[12] F. Santjer, L.-H. Sobeck, and G. Gerdes, ―Influence of the electrical design of offshore wind farms and of transmission lines on
efficency,‖ in Second International Workshop on Transmission Networks for Offshore Wind Farms, Stockholm, Sweden, 30-31
March, 2001.
[13] R. Barthelmie and S. Pryor, "A review of the economics of offshore wind farms," Wind Engineering, vol. 25, no. 3, pp. 203-213, 2001.
[14] J. Svenson and F. Olsen, "Cost optimising of large-scale offshore wind farms in the Danish waters," in 1999 European Wind Energy Conference, Nice, France, 1-5 March, 1999, pp. 294-299.




















Impact of Network Size & Link Bandwidth in Wired TCP & UDP
Network Topologies


Mrs. Meenakshi.
Assistant Professor, Computer Science & Engineering Department,
Nitte Meenakshi Institute of Technology, Bangalore 560064
kmeenarao@gmail.com

Abstract—The transmission of information in a network relies on the performance of the traffic scenario (application traffic agent
and data traffic) used in a network. The traffic scenario determines the reliability and capability of information transmission, which
necessitates its performance analysis.
The objective of this paper is to calculate and compare the performance of TCP/FTP and UDP/CBR traffic in wired networks. The study has been done using NS-2 and AWK scripts. Exhaustive simulations have been carried out, and the results are evaluated for performance metrics such as link throughput and packet delivery ratio. The effects of variations in link bandwidth and in the number of nodes on network performance are analyzed over a wide range of values. Results are shown as graphs and tables.

Keywords—protocol stack, TCP, UDP, NS-2, agent, performance metrics, throughput, packet delivery ratio, bandwidth.

I. INTRODUCTION
The introduction gives brief background on the TCP/IP protocol stack and on the features, applications, advantages and disadvantages of the TCP and UDP protocols.

1.1 TCP/IP Protocol Stack
The TCP/IP protocol stack, based on the two primary protocols TCP and IP, is used in the current Internet [1]. These protocols have proven very powerful and, as a result, have experienced widespread use and implementation in existing computer networks. Figure 1 shows the TCP/IP protocol stack.

Figure1. TCP/IP Protocol Stack (Application = OSI layers 5-7; TCP/UDP = OSI layer 4; IP = OSI layer 3; hardware interface = OSI layers 1-2)


Figure2. TCP and UDP headers

1.2. Transmission Control Protocol (TCP)
TCP is a connection-oriented protocol [2]: it manages how a message makes its way across the internet from one computer to another. TCP is suited to applications that require high reliability and for which transmission time is relatively less critical. TCP is used by application protocols such as HTTP, HTTPS, FTP, SMTP and Telnet. TCP rearranges data packets into the order specified. TCP is slower than UDP.

TCP is reliable in the sense that there is an absolute guarantee that the data transferred remains intact and arrives in the same order in which it was sent. The TCP header size is 20 bytes, as shown in Figure 2. Data is read as a byte stream; no distinguishing indications are transmitted to signal message (segment) boundaries. TCP is heavy-weight: it requires three packets to set up a socket connection before any user data can be sent (the SYN, SYN-ACK and ACK handshake messages). TCP handles reliability and congestion control, and it performs error checking.

1.3. User Datagram Protocol or Universal Datagram Protocol (UDP)
UDP is a connectionless protocol, also used for message transport. It is not connection based, which means that one program can send a load of packets to another and that is the end of the relationship.

UDP is suitable for applications that need fast, efficient transmission, such as games. UDP's stateless nature is also useful for servers that answer small queries from huge numbers of clients. DNS, DHCP, TFTP, SNMP, RIP and VoIP use UDP. UDP has no inherent ordering, as all packets are independent of each other; if ordering is required, it has to be managed by the application layer. UDP is faster because there is no error recovery for packets, and there is no guarantee that the messages or packets sent will arrive at all.

The UDP header size is 8 bytes, as shown in Figure 2 [3]. The source port, destination port and checksum fields are common to both TCP and UDP. Packets are sent individually and are checked for integrity only if they arrive. Packets have definite boundaries which are honored upon receipt, meaning a read operation at the receiver socket yields an entire message as it was originally sent.

UDP is lightweight: there is no ordering of messages and no tracking of connections. It is a small transport layer designed on top of IP. UDP has no option for flow control; it performs error checking but offers no recovery options, and there is no acknowledgment or handshake since it is a connectionless protocol.
1.4. TCP/IP Application Protocols
FTP (File Transfer Protocol), HTTP (Hyper Text Transfer Protocol), NNTP (Network News Transfer Protocol), Remote Login (rlogin), Telnet and the X Window System depend on TCP to guarantee the correct and orderly delivery of data across the network. SNMP sends traffic over UDP because of its relative simplicity and low overhead. When NFS (Network File System) runs over UDP, the RPC implementation must provide its own guarantees of correctness; when NFS runs over TCP, the RPC layer can depend on TCP to provide this kind of correctness.
DNS uses both UDP and TCP. It uses UDP to carry simple queries and responses, but it depends on TCP to guarantee the correct and orderly delivery of large amounts of bulk data (e.g. transfers of entire zone configurations) across the network.

II. MATERIAL AND METHODOLOGY
Network performance can be measured with many metrics. The following sections give a brief overview of a few of those metrics and of the simulation setup of the experiments done in this paper.

2.1 Performance metrics
The performance of any system needs to be evaluated against certain criteria, which then form the basis of its performance evaluation. Such parameters are known as performance metrics [4], [5], [6]. The different performance metrics used to evaluate the performance of networks are described below:

2.1.1 Throughput
Throughput is the measure of how fast data can actually be sent through the network: the number of packets that are transmitted through the network in a unit of time. It is desirable to have a network with high throughput.

Throughput = ∑P_R / (∑t_sp − ∑t_st)

P_R – received packet size; t_st – start time; t_sp – stop time. Unit – Kbps (kilobits per second).

2.1.2 Link Throughput
In computer technology, throughput is the amount of work that a computer can do in a given time period. In communication networks, such as Ethernet or packet radio, network throughput is the average rate of successful message delivery over a communication channel.
Transmission Time = File Size / Bandwidth (sec)
Throughput = File Size / Transmission Time (bps)
Link throughput, say from node S to node D, is given by the following formula:
Throughput_SD = N_b / t
N_b – number of bits transmitted from node S to D,
t – observation duration.

2.1.3 Packet Delivery Ratio (PDR)
It is the ratio of the number of packets received at the destination to the number of packets generated at the source. A network should attain a high PDR in order to perform well: PDR indicates the reliability offered by the network, and a greater packet delivery ratio means better protocol performance.
PDR = (∑N_R / ∑N_G) × 100
N_R – Number of Received Packets,
N_G – Number of Generated Packets.
Unit – percentage (%)

2.1.4 Average End – to – End Delay (AED)
This is the average time taken by data packets to propagate from source to destination. This delay includes the total time of transmission, i.e. propagation time, queuing time, route establishment time, etc. A network with minimum AED offers faster communication.
AED = ∑t_PR − ∑t_PS
t_PR – Packet Receive Time,
t_PS – Packet Send Time.
Unit – milliseconds (ms)
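To make these four metrics concrete, here is a minimal Python sketch that computes throughput, PDR and AED from a hypothetical list of packet records (the record layout and sample values are our own, chosen only for illustration; note that the AED here is averaged over the received packets, a division left implicit in the formula above):

```python
# Each record: (packet_id, send_time_s, recv_time_s or None, size_bits)
# Hypothetical sample data purely for illustration.
packets = [
    (1, 0.00, 0.05, 8000),
    (2, 0.10, 0.16, 8000),
    (3, 0.20, None, 8000),   # dropped packet
]

received = [p for p in packets if p[2] is not None]

# Throughput = sum of received packet sizes / (stop time - start time), in kbps
start = min(p[1] for p in packets)
stop = max(p[2] for p in received)
throughput_kbps = sum(p[3] for p in received) / (stop - start) / 1000.0

# PDR = (received / generated) * 100, in percent
pdr = 100.0 * len(received) / len(packets)

# Average end-to-end delay over the received packets, in milliseconds
aed_ms = 1000.0 * sum(p[2] - p[1] for p in received) / len(received)

print(f"Throughput = {throughput_kbps:.1f} kbps, PDR = {pdr:.1f} %, AED = {aed_ms:.1f} ms")
```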
2.2 Simulation
Simulation of wired as well as wireless network functions and protocols (e.g., routing algorithms, TCP, UDP) can be done using NS2 [7], [8], [9]. Network Simulator Version 2, widely known as NS2, is an event-driven simulation tool that has proved useful in studying the dynamic behaviour of communication networks. Figures 3 and 4 show the simple network topologies used for the experiments carried out in this paper.


Figure3. A sample network topology: TCP

Figure4. A sample network topology: UDP


In Figures 3 and 4, N1, N2, …, Nn are nodes. FTP and CBR are the applications running over TCP and UDP respectively. Nx and Ny are the nodes of the bottleneck link, and Ny is the final destination of the packets generated from all sources. Corresponding sender agents have to be attached to all sending nodes; a TCPSink agent is attached to the TCP destination, and a Null agent to the UDP receiver.



III. RESULTS AND CHARTS
A systematic study and analysis of wired networks was carried out by executing ns2 and AWK scripts, comparing link throughput and packet delivery ratio. The tables and graphs below were obtained by running AWK scripts [10] on the ns2 trace files. In the first scenario the number of nodes was varied and the corresponding changes in throughput were observed; in the second scenario the number of nodes was again varied and the corresponding PDR noted; in the last scenario the bandwidth was varied and the resulting throughputs tabulated.
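As an indication of what the AWK post-processing over the ns2 trace files does, the sketch below parses the classic ns-2 wired trace format (columns: event, time, from-node, to-node, packet type, size, flags, flow id, source, destination, sequence, unique packet id) and computes the PDR of the CBR traffic; the trace file name and the node ids are assumptions made for illustration:

```python
sent, recv = set(), set()

# "out.tr" is a placeholder name for an ns-2 trace file produced by the
# simulations described above.
with open("out.tr") as trace:
    for line in trace:
        f = line.split()
        if len(f) < 12 or f[4] != "cbr":
            continue
        event, uid = f[0], f[11]
        if event == "+" and f[2] == "0":    # enqueued at source node 0 (assumed id)
            sent.add(uid)
        elif event == "r" and f[3] == "4":  # received at sink node 4 (assumed id)
            recv.add(uid)

pdr = 100.0 * len(recv) / len(sent) if sent else 0.0
print(f"CBR packet delivery ratio: {pdr:.2f} %")
```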

Table1. Node Vs Bottleneck-link throughput

Nodes | tcp/tp (kbps) | udp/tp (kbps)
5 | 4640.95 | 2188.05
25 | 4613.26 | 4882.98
50 | 4733.14 | 4882.98
100 | 4745.9 | 4882.98
200 | 4750.14 | 4882.98
300 | 4744.03 | 4882.98







Table2. Nodes Vs packet delivery ratio

Nodes | tcp/PDR (%) | udp/PDR (%)
5 | 99.81 | 100
10 | 99.64 | 100
20 | 99.21 | 77.92
30 | 98.66 | 68.61
40 | 98.06 | 63.96
50 | 97.46 | 61.17
100 | 95.62 | 55.58



Table3. Bandwidth Vs Bottleneck-link throughput

Bandwidth (Mb) | tcp/tp (kbps) | udp/tp (kbps)
0.5 | 487.809 | 488.314
1 | 954.95 | 976.596
1.5 | 1440.2 | 1464.88
2 | 1926.27 | 1953.16
2.5 | 2411.64 | 2187.55
3 | 2897.37 | 2187.57
3.5 | 3396.37 | 2187.58
4 | 3880.92 | 2187.59



Figure5. Node Vs Bottleneck link-throughput [chart: Link Throughput (kbps) vs Number of Nodes; series: tcp/tp, udp/tp]

Figure6. Node Vs Packet Delivery Ratio [chart: Packet Delivery Ratio (%) vs Number of Nodes; series: pdr/tcp, pdr/udp]

Figure7. Bandwidth Vs Link Throughput [chart: Link Throughput (kbps) vs Bandwidth (Mb); series: tcp/tp, udp/tp]

IV. CONCLUSION
Bottleneck-link throughput and packet delivery ratio have been calculated using ns2 and AWK scripts, with the number of nodes and the link bandwidth as the varying factors. The packet delivery ratio is much better for TCP than for UDP, and when the link bandwidth is varied TCP also shows better link throughput than UDP.
Depending on the application requirements, one has to choose the suitable protocol. This study can be extended to other traffic generators, namely exponential On/Off, Pareto On/Off and Traffic Trace. Moreover, the experiment can be carried out for wireless networks as future work.
ACKNOWLEDGEMENT
I would like to express my gratitude and appreciation to the International Journal of Engineering Research and General Science team, who gave me the opportunity to publish this report. Special thanks to Nitte Meenakshi Institute of Technology Bangalore, its management, the Computer Science HOD and all staff, whose stimulating suggestions and encouragement helped me to write this report. I would also like to acknowledge with much appreciation my family members, especially my husband Mr. Ajith and my sons Aadithya and Abhirama, for their cooperation and support.

REFERENCES:
[1] Soni Samprati, "Next Generation of Internet Protocol for TCP/IP Protocol Suite", International Journal of Scientific and Research Publications, Volume 2, Issue 6, June 2012.
[2] Santosh Kumar and Sonam Rai, "Survey on Transport Layer Protocols TCP & UDP", International Journal of Computer Applications 46(7):20-25, May 2012.
[3] Fahim A. Ahmed Ghanem, Vilas M. Thakare, "Optimization of IPv4 Packet's Headers", IJCSI International Journal of Computer Science Issues, Vol. 10, Issue 1, No 2, January 2013.
[4] Yogesh Golhar, R.K. Krishna and Mahendra A. Gaikwad, "Implementation & Throughput Analysis of Perfect Difference Network (PDN) in Wired Environment", IJCSI International Journal of Computer Science Issues, Vol. 9, Issue 1, No 1, January 2012.
[5] Performance Measurements and Metrics, http://webstaff.itn.liu.se/~davgu/tnk087/Fo_8.pdf.
[6] Andrew S. Tanenbaum, Computer Networks (textbook).
[7] Teerawat Issariyakul and Ekram Hossain, Introduction to Network Simulator NS2, Second Edition.
[8] Kevin Fall and Kannan Varadhan, The ns Manual.
[9] NS simulator, Wikipedia.org.
[10] AWK scripts, http://wing.nitk.ac.in/resources/Awk.pdf












Design of Substrate Integrated Waveguide Antennas for Millimeter Wave
Applications
Y. Bharadwaja 1

1 Assistant Professor, Sree Vdyanikethan Engineerng College, E-mail: bharadwaja502@gmail.com
Abstract—The paper presents a new concept in antenna design, whereby a photo-imageable thick-film process is used to
integrate a waveguide antenna within a multilayer structure. This has yielded a very compact, high performance antenna working
at high millimeter-wave (mm-wave) frequencies, with a high degree of repeatability and reliability in antenna construction.
Theoretical and experimental results for 70 GHz mm-wave integrated antennas, fabricated using the new technique, are presented.
The antennas were formed from miniature slotted waveguide arrays using up to 18 layers of photo-imageable material. To enhance
the electrical performance a novel folded waveguide array was also investigated. The fabrication process is analyzed in detail and
the critical issues involved in the fabrication cycle are discussed. The losses in the substrate integrated waveguide have been
calculated. The performance of the new integrated antenna is compared to conventional metallic, air-filled waveguide antennas,
and also to conventional microstrip antenna arrays operating at the same frequencies.
Index Terms—Millimeter wave antenna arrays, substrate integrated waveguides (SIW), photo-imageable fabrication, slotted waveguide antenna arrays.
I. INTRODUCTION
Substrate integrated circuits (SICs) are a new concept for high-frequency electronics, which yields high performance from very compact planar circuits [1]. The basic idea behind the technique is that of integrating nonplanar 3-D structures within a
multilayer circuit. However, existing integration techniques using precision machining cannot economically achieve the required
precision for millimeter-wave (mm-wave) components, particularly for mass production. In the last few years a number of papers
based on substrate integrated circuits and waveguides (SICs, SIWs) on planar microstrip substrates have appeared in the literature,
but only for frequencies up to X-band. Most of the integrated waveguides that have been reported used VIA fenced sidewalls,
realized using relatively elementary fabrication techniques. With these techniques the diameter and spacing of the individual VIAs
will affect the loss and bandwidth of the waveguide [2], [3]. Such integrated structures cannot be regarded as homogeneous
waveguide, but will be similar in performance to an artificial periodic waveguide.
However, there have been a number of successful attempts to form substrate integrated waveguides using micro-machining techniques. McGrath et al. [4] formed an air-filled waveguide channel in silicon, and reported measured losses of around 0.02 dB/mm at 100 GHz. In [6], Digby et al. used a different micro-machining process to form a substrate integrated 100 GHz air-filled waveguide. Their measured loss, around 0.05 dB/mm at 100 GHz, was slightly higher than that of McGrath, but the authors suggested that the high attenuation might have been due to some of the waveguide walls being only one skin depth thick. A further variation of the air-filled SIW structure was reported by Collins et al. [5], who used a micro-machining approach to form the waveguide trough on one substrate, which was combined with a second substrate using a "snap-together" technique to form the final enclosed waveguide. This was a somewhat simpler fabrication approach than that used by McGrath and by Digby, and this was reflected in the higher measured attenuation of around 0.2 dB/mm at 100 GHz.
the present work, and that of authors using micro-machining, are that a very low cost technique was used to form dielectric-filled
waveguides, leading to structures that were inherently robust and cheap.
The primary objective of the present paper is to provide an in-depth analytical investigation of the fabrication techniques that could be employed to efficiently integrate novel 3-D waveguide structures within ceramic circuit modules. However, the
necessary inclusion of dielectric within the waveguide restricts the use of these circuits above 100 GHz, with this frequency limit
being mainly decided by the loss tangent of the integrated substrate material.
This paper describes the techniques for integrating mm-wave antennas within ceramic modules using a relatively new process,
namely photo-imageable thick-film [7], [8]. Since this type of process enables the circuit structure to be built up layer-by-layer, it
is ideal for forming 3-D structures. The work described in the paper demonstrates the viability and potential of photo-imageable fabrication technology through the measured, practical performance of novel mm-wave integrated antenna arrays working around
70 GHz.

II. FABRICATION METHODOLOGY
Photo-imageable thick-film conductors and dielectrics contain a photo vehicle within the pastes. This enables layers of conductor or dielectric to be printed and then directly imaged using UV radiation. The system enables fine lines and gaps to be fabricated with dimensions down to 10 µm. Moreover, because structures can be built up layer-by-layer, it is easy to provide interconnections between planar and non-planar circuits within a single ceramic circuit. This scheme can be used to design low-cost, high-performance passive circuits such as resonators, filters, power dividers, etc. [9], [10]. A further advantage is that the technology is compatible with many fabrication processes such as thin film, HTCC, and LTCC. A particular advantage of photo-imageable materials for the work being reported here is that the sidewalls of the integrated waveguides can be made from continuous metal, rather than using a VIA fence. The process of making such a sidewall is simply to develop channels in the dielectric layer, and then to subsequently fill them with metal.
a) Photo-Imageable Fabrication
The process consists of four main steps, as shown in Fig. 1.
Step 1) The thick-film paste is screen printed on an alumina substrate, leveled at room temperature and dried at 80 ºC for 45 min.
Step 2) The printed paste is exposed to UV through photo-patterned chrome masks; in the exposed region the paste polymerizes and hardens.
Step 3) The unexposed material is removed by spraying the circuit with developer, and finally dried with an air spray.
Step 4) The circuit is fired at 850 ºC for 60 min to burn off the binders in the paste and leave the final pattern of conductor or dielectric.

Unlike the conventional metal etching process, photo-imageable fabrication does not require the intermediate photoresist spinning and developing steps, as the photo vehicle required for UV exposure and hardening is contained in the material itself. The advantage of this fabrication method is its ability to achieve the fine geometries demanded by mm-wave circuits.
b) Waveguide Integration
The 3-D waveguide structures were built up, layer-by-layer, using the photo-imageable thick-film process. The layers were
printed onto an alumina base to give rigidity to the final structure. A layer of silver conductor (Fodel 6778) paste was first printed
onto the alumina to form the bottom broad wall of the waveguide [Fig. 2(a)]. Next, a layer of dielectric (Fodel QM44F) is screen
printed, photo-imaged and fired to form vertical trenches. Conductor paste is then screen printed and photo-imaged to fill the
trenches, so forming the sidewalls of the waveguide [Fig. 2(b)]. These last two steps were repeated a number of times to build up
the required height of the waveguide. Finally, the top layer of conductor is printed, and radiating slots are photo-imaged and fired
to form the top wall of the waveguide [Fig. 2(c)]. The schematic view of the cross-section of the integrated waveguide is shown in
Fig. 2(d). It was found to be necessary to have the registration of intermediate layers accurate to within ±1µm, which required a
sophisticated mask aligner for exposing each layer. The uniformity of the sidewalls is a critical factor in the integration process, as
nonuniform sidewalls will lead to significant loss in the structure.
c) Fabrication Analysis
1) Fabrication Quality: Clearly, with antennas operating at very high frequencies, and consequently very small wavelengths, the quality and accuracy of the fabrication process is a key issue. In our case it was important that the radiating slots were

Fig. 1.Steps in a photo-imageable process (a) printing (b)
exposure (c) devel- oping (d) firing.

Fig. 2.Steps in a waveguide integration process, printing (a)
bottom wall (b) side walls (c) top wall and radiating slots. (d)
Cross-sectional view of the inte- grated waveguide on
alumina substrate.


formed with precise dimensions and high-quality edges. To demonstrate the quality of the fabrication process, an enlarged view of one of the radiating slots is shown in Fig. 3(a). To further indicate the quality achievable with the photo-imageable process, 50-Ω GSG coplanar probe pads with 30 µm spacing between the signal line and ground pads are shown in Fig. 3(b), and a fabricated miniature branch-line coupler in Fig. 3(c).
2) Fabrication Issues:
2) Fabrication Issues:
a) Shrinkage: The main problem encountered with the photo-imageable fabrication process was shrinkage of the conductors and dielectrics during firing. In particular, the amount of shrinkage was different for conductors and dielectrics, and the degree of shrinkage was found to vary with the area of conductor or dielectric being fired. The differing rates of shrinkage for conductors, dielectrics, and circuits of different geometries and areas are given in Table I. The significance of the data in Table I is that the shrinkage is not uniform throughout the fabrication cycle and therefore cannot simply be taken into account at the design stage. The shrinkage was a serious issue when trying to fill VIAs and trenches. An SEM picture of a trench filled with conductor at an intermediate stage in the waveguide fabrication is shown in Fig. 3(d): after firing, the inner conductor shrinks, creating spaces on either side of the wall.

Fig. 3. (a)–(c) Photographs showing the quality and capability of the fabrication process under careful control of the processing parameters. (d) SEM picture showing the shrunk conductor strip inside the trench after firing.

TABLE I
RATE OF SHRINKAGE ON CONDUCTORS, DIELECTRICS AND CIRCUITS OF DIFFERENT GEOMETRIES


It was found that the only way to overcome fabrication issues related to shrinkage is to carefully control the process. The fabrication parameters (development and exposure times) need to be refined for different layers, and for different circuit geometries. In the integration process described in this paper, the shrinkage in VIAs and trenches was compensated in the Z-direction by printing extra conductor layers. Correct compensation in the X-Y plane was achieved by increasing the exposure time and decreasing the development time. To illustrate the effectiveness of this technique, Fig. 4(a) shows the trenches filled before compensation for shrinkage, and Fig. 4(b) shows the conductor-filled trenches after compensation.
In order to achieve the required degree of interlayer resolution, a Quintel Q7000 mask aligner was used. To achieve optimum resolution it was found that some care was needed in the choice of alignment marks, to ensure they were compatible with the mask aligner being used.


Fig. 4. Photographs of the conductor surface showing (a) shrinkage after firing, and (b) the result after optimizing the fabrication process.

Fig. 5. The dielectric layer of thickness 60 µm printed and dried without intermediate firing steps, showing cracks at the corners after firing; (a) track corner; (b) VIA corners.
b) Processing Time: In this study, 250-mesh stainless steel screens were used to print the dielectric, giving a post-firing thickness of around 15 µm. The conductor thickness, using 325-mesh screens for printing, was around 8 µm after firing. The total inner height of the integrated waveguide shown in Fig. 2 was 60 µm; this was formed from four layers of dielectric. In all, eight layers of conductor were needed, including trench filling and compensating for shrinkage. Using this technique, the time required to finish a layer was one day, the most time-consuming aspect being the firing and cooling in a single-chamber furnace, so integrating a waveguide section of 60 µm occupied around one and a half weeks. Hence, it was attractive to try to save processing time by printing and drying a number of layers and then co-firing in one step. Our experience was that such circuits would develop cracks at the corners after firing. The results of an unsuccessful attempt to build a 60 µm-thick dielectric prior to firing are shown in Fig. 5, where significant cracking is evident.

III. INTEGRATED ANTENNA DESIGNS
This section discusses the design, simulation and theoretical analysis of two different antenna topologies operating around 75 GHz and integrated into a single ceramic structure. Two structures were considered:
1) A simple substrate integrated waveguide antenna consisting of a 2×4 array of slots.
2) A novel folded waveguide antenna array.

Fig. 6. Schematic showing the antenna structure.
A. Simple Integrated Waveguide Antenna Arrays
1) Antenna Structure: Fig. 6 shows the structure of a simple integrated waveguide antenna, consisting of a 2×4 array of radiating slots. The feed consisted of a 50-Ω microstrip line with a tapered transition to provide impedance matching between the microstrip and the integrated waveguide section [11]. The input power is split equally into two linear arrays, each having four slots, using a conventional side-fed H-plane divider [12], where the separation between the two inductive walls can be adjusted for maximum coupling into both sections. This feeding technique introduces a phase difference of 180º between the two linear arrays; hence, the slots on either side of the dividing wall were positioned on opposite sides of their respective waveguides to give a further 180º phase difference. This ensured that all eight slots of the antenna radiated in phase.

The end slots were positioned a distance of λg/4 from the shorted ends of the waveguide, as shown in Fig. 6, with the remaining slots separated by λg/2, so that all the slots would be excited by maxima in the standing wave pattern. Thus the slot positions ensured maximum radiation from the antenna.
The slot lengths were λ/2 to ensure good radiation, without causing end-to-end mutual coupling between adjacent slots. The physical length ℓ of the slots can be calculated from

ℓ = λ/2 = λ0 / (2 √((εr + 1)/2))

where λ0 is the free-space wavelength and εr is the permittivity of the dielectric.
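As a worked example of the expression reconstructed above (taking a hypothetical relative permittivity εr = 7, chosen only to give round numbers; the paper does not state the value for the dielectric used):

```latex
\ell = \frac{\lambda_0}{2\sqrt{(\varepsilon_r + 1)/2}}
     = \frac{(3\times 10^{8}\ \mathrm{m/s})/(76\ \mathrm{GHz})}{2\sqrt{(7+1)/2}}
     = \frac{3.95\ \mathrm{mm}}{4}
     \approx 0.99\ \mathrm{mm}.
```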
2) Antenna Dimensions: The dielectric waveguide antenna array with radiating slots was designed using conventional dielectric waveguide theory [13], [14]. The design was then simulated and optimized using the 3-D electromagnetic simulation software HFSS to obtain maximum radiation. The optimized dimensions for a 2×3 slot array are shown in Fig. 7, where all dimensions are in millimeters. The simulation results are shown in Fig. 8 for a representative SIW antenna; it should be noted that the simulation was performed at 76 GHz using Hybridas HD1000 thick-film dielectric, whereas our SIW antennas were integrated using a similar but slightly different dielectric, namely DuPont QM44F, owing to the unavailability of the earlier paste on the market.
3) Experimental Results: The return loss and radiation pattern for the integrated waveguide 2×3 array are plotted in Fig. 9; the return loss shows a good match at the design frequency, and there is a well-defined radiation pattern, with the cross-polar level more than 20 dB below the copolar level.

Fig. 7. Schematic showing integrated slotted waveguide antenna dimensions; all dimensions given here are in millimeters.

Fig. 8. (a) HFSS model, (b) field pattern, (c) return loss, (d) radiation pattern in the E-plane and H-plane, for the SIW antenna optimized for 76 GHz, obtained from simulation.

B. Folded Waveguide Antenna Arrays
The concept of an antenna array using a folded waveguide was proposed to extend the substrate integration strategy to lower frequencies [15]. The TE10 mode in a folded waveguide resembles that of a conventional rectangular waveguide. As a result of the folding, as shown in Fig. 10, the width (a) of the guide is reduced by 50% and the height (b) is doubled; but the height has little effect on the propagation characteristics and can be set as small as required. So the overall effect is to reduce the substrate area occupied by the antenna.

Fig. 9. Experimental results for a 2×3 antenna array: (a) return loss, (b) radiation pattern at 73.5 GHz.

Fig. 10. Folded Waveguide antennas—Basic Concept.

Fig. 11. Design dimensions of a four slot folded waveguide antenna, all dimensions are in millimeters.


Fig. 12. Integrated folded waveguide antenna (a) top conductor layer, (b) in-termediate conductor layer.
1) Antenna Structure and Dimensions: The dimensions of a 74-GHz, 4-slot folded waveguide antenna were optimized using HFSS and the results are shown in Fig. 11.
The antenna was fabricated using the photo-imageable process described previously. The photographs in Fig. 12(a) and (b) show the top and intermediate layers during the fabrication of a folded waveguide antenna. As well as showing the structure of the antenna, these photographs are a further indication of the quality of the photo-imageable thick-film process. It can be seen from Fig. 13 that the measured return loss of the back-to-back transition is very good, greater than 20 dB in the vicinity of the working frequency, showing that the transition is behaving as expected.


2) Experimental Results: The return loss and radiation pattern for a 4-slot folded waveguide antenna are shown in Fig. 14. The antenna shows a good cross-polar level and a good match close to the resonant frequency.


Fig. 13. Measured return loss of a back-to-back folded waveguide transition.

Fig. 14. Folded Waveguide Antenna (a) return loss (b) radiation pattern.
IV. INTEGRATED WAVEGUIDE LOSS ANALYSIS
Since the antennas were fabricated using integrated waveguides, it was important to gain some insight into the practical losses of the waveguide. To achieve this, waveguide lines of different lengths, but with the same cross-section, were fabricated and the line loss measured using a vector network analyzer (HP 8510 XF), which had previously been calibrated using an on-wafer calibration kit. The ends of the waveguide sections were tapered to connect with the coplanar probing pads; each tapered section had an axial length of 2 mm. The return loss and insertion loss of integrated waveguides of length 1.9 mm and width 1.266 mm are plotted in Fig. 15.
The results show that the integrated waveguide structure, including the tapered sections, has relatively low insertion loss up to 100 GHz, with a value of ~2 dB at the antenna design frequency (74 GHz). The losses tend to increase with frequency due to increasing dielectric loss and conductor surface losses. The losses in the tapered feeds, and also the probe-circuit mismatch losses, were de-embedded by computing the difference in the insertion losses of two waveguide structures of different lengths. After de-embedding, the magnitude of the loss in the SIW was calculated; the loss is plotted as a function of frequency in Fig. 16, and Fig. 17 shows the wave number and guided wavelength deduced from the measured phase data. The loss was calculated to be ~1 at 74 GHz. It can be seen that the losses are relatively small, indicating that the integrated waveguide structure is a usable interconnection technology up to high millimeter-wave frequencies.
Similar loss measurements were carried out for folded waveguides, and it was found that the losses increased by around 20%. This relatively small increase in loss, compared with the simple unfolded structure, indicates that the folded waveguide concept is viable in practical situations where substrate area is at a premium. Fig. 18 shows images of the folded waveguide structures used for the insertion loss measurements.


Fig. 15. S-parameters plotted for a substrate integrated waveguide and a simple microstrip line.

Fig. 16. Loss plotted in dB/mm and dB/λg for a substrate-integrated waveguide of width 1.26 mm.

Fig. 17. The wave number and guided wavelength of a substrate integrated waveguide plotted against frequency.
V. REPEATABILITY AND TOLERANCE ANALYSIS OF THICK FILM PROCESS
This section details the repeatability and tolerances involved in the thick-film fabrication process. The frequency response of the substrate integrated waveguide antenna fabricated on three different supporting ceramic substrates is shown in Fig. 19. The plot shows almost identical results for the same structure fabricated through different printing and firing runs, illustrating the repeatability of the thick-film process in constructing substrate integrated waveguide structures. Table II gives the 3-D tolerances measured on the critical SIW dimensions. The percentage values shown in the table were calculated by measuring the dimensions of the fabricated geometry after the process modifications to account for shrinkage. The results indicate that the geometrical dimensions can be achieved to within 5% under a well-controlled process.


Fig. 18. Folded waveguide sections of different length for insertion loss measurement (dimensions in millimeters).

Fig. 19. The measured frequency response of three identical SIW antennas, illustrating the repeatability of thick-film processing.
TABLE II
THREE-DIMENSIONAL TOLERANCES MEASURED ON THE CRITICAL DIMENSIONS OF SIW AND SIW ANTENNAS AFTER PROCESS MODIFICATION

TABLE III
PERFORMANCE COMPARISON TABLE FOR SIW AND A CONVENTIONAL METALLIC ANTENNA ARRAY AT 74 GHZ

VI. ANALYSIS OF INTEGRATED WAVEGUIDE PERFORMANCE
The primary aim of the current study was to establish the potential of photo imageable thick-film technology for fabricating
miniature mm-wave components. An antenna, using novel techniques, was chosen for the investigation because it was relatively
demanding in terms of the required quality of fabrication and also because of the small dimensions that were needed. A further
benefit of the choice of an antenna was that there was performance data available in the literature [16] for antennas fabricated
using other technologies, against which the performance of the integrated substrate approach could be compared.
Obviously, it was important to obtain some indication of the efficiency of the SIW antenna in comparison with the more
conventional microstrip patch array at mm-wave frequencies. For this efficiency analysis, the total loss (dielectric and conductor)
for a section of waveguide is compared to that of an equivalent microstrip line. Direct comparisons are difficult, because
microstrip interconnections normally have an impedance of 50 Ω, whereas waveguide has a somewhat higher impedance. However, if we compare microstrip having the same overall dimensions as the integrated waveguide, i.e., occupying the same substrate area, then we find that the microstrip has a loss around 50% higher than that of the integrated waveguide. Moreover, for an array giving similar radiation performance, the total area of the substrate integrated waveguide antenna will be ~1/10 of that

occupied by a microstrip [16]. Therefore, the substrate integrated waveguide structure will offer an advantage in terms of reduced
surface area and efficiency that will be significant for highly integrated millimeter-wave circuits, where substrate area is at a
premium.
The three-slot substrate integrated waveguide antenna performance has been compared with that of a conventional metallic air-filled waveguide antenna, as shown by the data in Table III. In this table, the gain for a conventional metallic waveguide antenna was calculated from [17] and the total loss from [18]. The minimum physical area was calculated for both antennas; for the SIW, the physical area was reduced by ~85% compared to the metallic air-filled waveguide antenna.
VII. CONCLUSION
The results have demonstrated that photo imageable thick-film technology is a viable approach for the fabrication of circuits
working at high millimeter-wave frequencies, offering both low-loss interconnections and the potential to realize fine circuit
geometries. The techniques of using the technology to fabricate 3-D integrated waveguides within a planar circuit proved
successful, and led to the development of a high performance, miniature antenna working at 74 GHz. The technique could be
extended to LTCC, which would permit parallel processing of the layers and avoid the need for the time consuming sequential
processing of each layer.
REFERENCES:
[1] W. Menzal and J. Kassner, "Millimeter-wave 3-D integration techniques using LTCC and related multi-layer circuits," in Proc. 30th Eur. Microwave Conf., Paris, France, 2000, pp. 33–53.
[2] D. Deslandes and K. Wu, "Design consideration and performance analysis of substrate integrated waveguide components," in Eur. Microw. Conf., Milan, Italy, Sep. 2002, pp. 881–884.
[3] Y. Cassivi, L. Perregrini, P. Arcoini, M. Bressan, K. Wu, and G. Conciauro, "Dispersion characteristics of substrate integrated rectangular waveguide," IEEE Microw. Wireless Compon. Lett., vol. 12, no. 9, pp. 333–335, Sep. 2002.
[4] W. R. McGrath, C. Walker, M. Yap, and Y.-C. Tai, "Silicon micromachined waveguides for millimeter-wave and submillimeter-wave frequencies," IEEE Microw. Guided Wave Lett., vol. 3, no. 3, pp. 61–63, Mar. 1993.
[5] C. E. Collins et al., "A new micro-machined millimeter-wave and terahertz snap-together rectangular waveguide technology," IEEE Microw. Guided Wave Lett., vol. 9, no. 2, pp. 63–65, Feb. 1999.
[6] J. W. Digby et al., "Fabrication and characterization of micromachined rectangular waveguide components for use at millimeter-wave and terahertz frequencies," IEEE Trans. Microwave Theory Tech., vol. 48, no. 8, pp. 1293–1302, Aug. 2000.
[7] M. Henry, C. E. Free, B. S. Izquerido, J. Batchelor, and P. Young, "Photo-imageable thick-film circuits up to 100 GHz," in Proc. 39th Int. Symp. Microelectron. IMAPS, San Diego, CA, Nov. 2006, pp. 230–236.
[8] D. Stephens, P. R. Young, and I. D. Robertson, "Millimeter-wave substrate integrated waveguides and filters in photoimageable thick-film technology," IEEE Trans. Microwave Theory Tech., vol. 53, no. 12, pp. 3822–3838, Dec. 2005.
[9] C. Y. Chang and W. C. Hsu, "Photonic bandgap dielectric waveguide filter," IEEE Microw. Wireless Compon. Lett., vol. 12, no. 4, pp. 137–139, Apr. 2002.
[10] Y. Cassivi, D. Deslandes, and K. Wu, "Substrate integrated waveguide directional couplers," presented at the Asia-Pacific Conf., Kyoto, Japan, Nov. 2002.
[11] D. Deslandes et al., "Integrated microstrip and rectangular waveguide in planar form," IEEE Microw. Wireless Compon. Lett., vol. 11, no. 2, pp. 68–70, Feb. 2001.
[12] K. Song, Y. Fan, and Y. Zhang, "Design of low-profile millimeter-wave substrate integrated waveguide power divider/combiner," Int. J. Infrared Millimeter Waves, vol. 28, no. 6, pp. 473–478, 2007.
[13] R. M. Knox, "Dielectric waveguide microwave integrated circuits: an overview," IEEE Trans. Microwave Theory Tech., vol. 24, no. 11, pp. 806–814, Nov. 1976.
[14] H. Jacobs, G. Novick, G. M. Locascio, and M. M. Chrepta, "Measurement of guide wavelength in rectangular dielectric waveguides," IEEE Trans. Microwave Theory Tech., vol. 24, no. 11, pp. 815–820, Nov. 1976.
[15] N. Grigoropoulos and P. R. Young, "Compact folded waveguide," in Proc. 34th Eur. Microwave Conf., Amsterdam, The Netherlands, 2004, pp. 973–976.
[16] F. Kolak and C. Eswarappa, "A low profile 77 GHz three beam antenna for automotive radar," IEEE MTT-S Dig., vol. 2, pp. 1107–1110, 2001.
[17] C. A. Balanis, Antenna Theory: Analysis and Design, 2nd ed. New York: Wiley.
[18] E. V. D. Glazier and H. R. L. Lamont, The Services Textbook of Radio, vol. 5, Transmission and Propagation. London, U.K.: Her Majesty's Stationery Office (HMSO), 1958.



Risk Factor Analysis to Patient Based on Fuzzy Logic Control System
M. Mayilvaganan 1, K. Rajeswari 2

1 Associate Professor, Department of Computer Science, PSG College of Arts and Science, Coimbatore, Tamil Nadu, India
2 Assistant Professor, Department of Computer Science, Tiruppur Kumaran College for Women, Tiruppur, Tamil Nadu, India
E-mail- vkpani55@gmail.com

Abstract— In this paper a medical fuzzy system is introduced in order to help users obtain accurate information in the presence of inaccuracy. Inaccuracy in data means imprecise or vague values (like the words used in human conversation) or uncertainty in the available information required for decision making; fuzzy logic handles this uncertainty of critical risks to human health. This paper diagnoses the health risk related to blood pressure, pulse rate and kidney function. The confusing nature of the symptoms makes it difficult for physicians to determine the risk of the disease using psychometric assessment tools alone. This paper describes research results in the development of a fuzzy-driven system to determine the risk levels of health for patients. The system is implemented and simulated using the MATLAB fuzzy toolbox.
Keywords—Fuzzy logic control system, Risk analysis, Sugeno-type, Fuzzy Inference System, MATLAB Tool, ANFIS,
Defuzzification
INTRODUCTION
In the field of medicine, the use of computers in diagnosis, treatment of illnesses and patient follow-up has highly increased. Although these fields have very high complexity and uncertainty, intelligent systems such as fuzzy logic, artificial neural networks and genetic algorithms have been developed for them. In other words, there exists no strict boundary between what is healthy and what is diseased, so the distinction is uncertain and vague [2]. Having so many factors to analyze in diagnosing the heart disease of a patient makes the physician's job difficult, so experts require an accurate tool that considers these risk factors and shows a definite result from uncertain terms. Motivated by the need for such an important tool, this study designed an expert system, based on fuzzy logic, to diagnose the disease. This fuzzy control system, which deals with diagnosis, has been implemented in the MATLAB tool. This paper introduces a fuzzy control system with a designed fuzzy rule base to analyse the risk factor of patient health, with the rules viewed via a surface view.
FUZZY INFERENCE SYSTEM
In this study, we present a fuzzy control system for diagnosing the risk factor, in which blood pressure, pulse rate and kidney function are used as the parameters for determining risk through fuzzy rules. A typical architecture of an FLC, shown below, comprises four principal components: a fuzzifier, a fuzzy rule base, an inference engine, and a defuzzifier. In the fuzzy inference process, the blood pressure, pulse rate and kidney function values are the inputs transmitted for making decisions on the basis of the patterns discerned. It also involves all the pieces described under Membership Functions and If-Then Rules.
METHODOLOGY BACKGROUND
INPUT DATA
Medical diagnosis is a complicated task that requires operating accurately and efficiently. A complicated database supporting uncertain information is called a fuzzy database [7], [8]. Neuro-adaptive learning techniques provide a way to learn information about a data set in order to model the operation of a procedure. Using a given input/output data set, the toolbox function adaptive neuro-fuzzy inference system (ANFIS) constructs a fuzzy inference system (FIS) whose membership function parameters are adjusted using either a back-propagation algorithm alone or in combination with a least-squares method. The linguistic input variables are put into the measurement for the Sugeno membership function method, and the rule base is assigned (refer to Table I, Table II and Table III) using If-Then rules inserted into the tool to analyse the risk factor of the patient.
Kidney function was measured through several Glomerular Filtration Rate (GFR) classes, such as Normal, Problem Started GFR, Below GFR, Moderate GFR, Below Moderate GFR, Damage GFR and Kidney Failure. Blood pressure (BP) values were also classified into different ranges, such as Low Normal, Low BP, Very Low BP, Extreme Low BP, Danger Low BP, Very Danger Low BP, Danger Too Low BP, Normal BP, High Normal BP, Border Line BP, High BP, Very High BP, Extreme Very High BP and Very Danger High BP. Pulse values are derived from the systolic and diastolic blood pressure values. These blood pressure values are analyzed together with the kidney function to determine the risk factor.
TABLE I. Analysis the variable in Rule Base

Kidney Function [Glomerular Filtration Rate] | BP 60-40 (Very Danger Low BP) | BP 50-30 (Danger Too Low BP) | BP 120-80 (Normal BP) | BP 130-85 (High Normal BP) | BP 140-90 (Border Line BP)
Normal (>90) | Very Danger Low BP ++ | Low BP ++ | Normal BP | High Normal BP | Border Line BP
Below GFR (80-89) / Moderate GFR (45-59) | Very Danger Low BP + | Low BP + | | High Normal BP + |
Below Moderate GFR (30-44) / Damage GFR (15-29) | Very Danger Low BP | Low BP | | High Normal BP ++ |
Kidney Failure (GFR<15) | | | | |
TABLE II. Analysis the variable in Rule Base (Contd)

Kidney Function [Glomerular Filtration Rate] | BP 115-75 (Low Normal) | BP 100-65 (Low BP) | BP 90-60 (Very Low BP) | BP 80-55 (Extreme Low BP) | BP 70-45 (Danger Low BP)
Normal (>90) | Low Normal ++ | Low BP ++ | Very Low BP | Extreme Low BP ++ | Danger Low BP
Below GFR (80-89) / Moderate GFR (45-59) | Low Normal + | Low BP + | | Extreme Low BP + |
Below Moderate GFR (30-44) / Damage GFR (15-29) | Low Normal | Low BP | | Extreme Low BP |
Kidney Failure (GFR<15) | | | | |













SUGENO FIS METHOD
An adaptive neuro-fuzzy inference system (ANFIS) can represent Sugeno and Tsukamoto fuzzy models. A typical rule in a Sugeno fuzzy model has the form: If Input 1 = x and Input 2 = y, then Output z = ax + by + c.
For a zero-order Sugeno model, the output level z is a constant (a = b = 0). The output level z_i of each rule is weighted by the firing strength w_i of the rule.
A typical membership function is given by the formula
μ_A(x) = 1 / (1 + |(x − c_i)/a_i|^(2b_i))    (1)
where {a_i, b_i, c_i} are referred to as the premise parameters. Every node in this layer is a fixed node; its output is the product of all incoming signals for the given parameters. The i-th node calculates the ratio of the i-th rule's firing strength to the sum of all rules' firing strengths; these outputs are called normalized firing strengths. The overall output is computed as the weighted sum of all incoming signals:

f = (∑_i w_i f_i) / (∑_i w_i)    (2)
Through this, the Sugeno method gives a crisp output f(u) generated from the fuzzy inputs. In the fuzzification step, a proper choice of process state variables and control variables is essential to characterize the operation of a fuzzy logic control system. In the decision-making logic, the If-Then rule base is followed to measure the membership values obtained. Finally, defuzzification combines the fuzzy outputs of all the rules to give one crisp value [2].
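A minimal Python sketch of this inference chain, using the generalized bell membership function of (1) and the weighted-average defuzzification of (2), is given below; the two rules, all membership parameters and the crisp rule outputs are hypothetical stand-ins, not the paper's 250-rule base:

```python
def bell(x, a, b, c):
    """Generalized bell membership function, as in (1)."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

# Two hypothetical zero-order Sugeno rules. Each rule holds one bell MF
# (a, b, c) per input -- systolic BP, pulse rate, GFR -- and a constant
# output z_i (the risk level the rule asserts).
rules = [
    (((20, 2, 120), (15, 2, 72), (20, 2, 100)), 0.1),  # "all normal" -> low risk
    (((20, 2, 60), (15, 2, 40), (10, 2, 37)), 0.8),    # "low BP, low GFR" -> high risk
]

def risk(bp, pulse, gfr):
    num = den = 0.0
    for (bp_mf, pulse_mf, gfr_mf), z in rules:
        # Firing strength w_i: AND of the three antecedents (product operator).
        w = bell(bp, *bp_mf) * bell(pulse, *pulse_mf) * bell(gfr, *gfr_mf)
        num += w * z   # weighted rule output
        den += w       # normalization term, as in (2)
    return num / den if den else 0.0

# Crisp example inputs: systolic BP 120, pulse 40, GFR 35 (below moderate).
print(f"crisp risk level = {risk(120, 40, 35):.2f}")
```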
TABLE III. Analysis the variable in Rule Base (Contd)

Kidney Function [Glomerular Filtration Rate] | BP 160-100 (High BP) | BP 180-110 (Very High BP) | BP 210-125 (Extreme Very High BP) | BP 240-140 (Very Danger High BP)
Normal (>90) | High BP ++ | Very High BP ++ | Extreme Very High BP | Very Danger High BP
Below GFR (80-89) | | | Extreme Very High BP + | Very Danger High BP +
Moderate GFR (45-59) | High BP + | Very High BP + | Extreme Very High BP ++ | Very Danger High BP ++
Below Moderate GFR (30-44) | | | Extreme Very High BP +++ | Very Danger High BP +++
Damage GFR (15-29) | High BP | Very High BP | Extreme Very High BP ++++ | Very Danger High BP ++++
Kidney Failure (GFR<15) | | | Extreme Very High BP +++++ | Very Danger High BP +++++


Fig. 1. Membership functions of Blood Pressure

Fig. 2. Final plot of the membership functions of Blood Pressure
Fig. 1 and Fig. 2 show the membership functions of blood pressure, constructed for finding the risk factor based on the rule-base inputs [4], [5].
INFERENCE ENGINE
The domain knowledge is represented by a set of facts about the current state of a patient. The inference engine compares each rule stored in the knowledge base with the facts contained in the database. When the IF (condition) part of a rule matches a fact, the rule is fired and its THEN (action) part is executed. The conditions check blood pressure (represented by mf1), pulse value (mf2) and kidney function (mf3). The inference engine uses the system of rules to make decisions through fuzzy logical operators and generates a single truth value that determines the outcome of the rules. In this way it emulates the human cognitive process and decision-making ability, and represents knowledge in a structured, homogeneous and modular way.
















Fig. 3. Logic gates for finding the risk rate
As Figure 3 describes, X and Y are the pressure values, whose combination is represented as S; the pulse values derived from the given pressure values are represented as S1; C and C1 are the carry outputs of the XOR calculation, which carry the value needed for obtaining the result; and Z is the kidney function value. The pulse rate is analysed from the given systolic and diastolic values. Finally, the risk factor is analysed from the blood pressure, the pulse rate and the GFR rate of the kidney function.
DEFUZZIFICATION
Defuzzification is the process of converting the final output of a fuzzy system to a crisp value. For decision-making purposes, the output fuzzy sets must be defuzzified to crisp values in the real-life domain; the defuzzification process converts the conclusion of the inference mechanism into actual inputs for the process. The health-risk output determines the level of severity of the risk given the input variables, and the fuzzy system provides an objective process for obtaining the risk level. After determining the fuzzy membership functions, a standard rule base was developed for the purposes of the study to generate rules for the inference engine; a total of 250 rules were generated representing the three linguistically designed fuzzy inputs. The simulation of the fuzzy system was carried out with the MATLAB tool. The severity level is obtained as the output response for the input values (blood pressure = 120/80, pulse value = 40, kidney function = below moderate); new input values generate new risk output responses. The inputs can also be set explicitly using the edit field, and this again produces a corresponding output consistent with the fuzzy rule base. Finally, the health risk was observed through the relationship between these attributes in the determination of risk levels, as shown in Fig. 4.


Fig. 4. Plot of the surface view of health risk

The fuzzy system is used to obtain the severity level, which is the only output variable of the system; it determines the level of severity of the risk given the inputs.

RESULT AND DISCUSSION

The patient health risk was found from the given linguistic-variable inputs of blood pressure, pulse rate and kidney function. The Sugeno FIS method was used to construct the membership functions assigned to the linguistic variables for the fuzzification process. If-Then rules and inference strategies were chosen for processing the rule base to determine the risk factor among blood pressure, kidney function and pulse rate by logical decision-making analysis. Through defuzzification, the fuzzy system provides an objective measure of the risk factor, and the surface view of the risk determination can be inspected in the simulation.
CONCLUSION

It can be concluded that the fuzzy system in this paper accurately predicts risk severity levels based on expert knowledge embedded as fuzzy rules and on the patient's stage retrieved from the given parameters. This approach contributes to medical decision-making and to the development of computer-assisted diagnosis in the medical domain, identifying the major risks of the patient earlier.

REFERENCES:
[1] Abraham, A., "Rule-based Expert Systems", Handbook of Measuring System Design, John Wiley & Sons, 909-919, 2005.
[2] Mahfouf, M., Abbod, M.F. and Linkens, D.A., "A survey of fuzzy logic monitoring and control utilization in medicine", Artificial Intelligence in Medicine 21, pp. 27-42, 2001.
[3] Agbonifo, Oluwatoyin C. and Ajayi, Adedoyin O., "Design of a Fuzzy Expert Based System for Diagnosis of Cattle Diseases", International Journal of Computer Applications & Information Technology.
[4] Ahmed M. Ibraham, "Introduction to Applied Fuzzy Electronics", 1997.
[5] http://en.wikipedia.org/wiki/MATLAB.
[6] Adlassing, K.P., "Fuzzy set theory in medical diagnostics", IEEE Trans. on Systems, Man, and Cybernetics, Vol. SMC-16 (1986), 260-264.
[7] Seising, R., "A History of Medical Diagnosis Using Fuzzy Relations", Fuzziness in Finland'04, 1-5, 2004.
[8] Tomar, P.P. and Saxena, P.K., "Architecture for Medical Diagnosis Using Rule-Based Technique", First Int. Conf. on Interdisciplinary Research & Development, Thailand, 2011, 25.1-25.5.

A Smart Distribution Automation using Supervisory Control and Data
Acquisition with Advanced Metering Infrastructure and GPRS Technology
A. Merlin Sharmila 1, S. Savitha Raj 1

1 Assistant Professor, Department of ECE, Mahendra College of Engineering, Salem-636106, Tamil Nadu, India
E-mail: ece.coolrocks@gmail.com
Abstract— To realize some of the smart power grid goals for the distribution system of rural areas, a real-time, wireless, multi-object remote monitoring system for electrical equipment based on the GPRS network, with feeder automation based on Advanced Metering Infrastructure (AMI), is proposed. The grid uses Supervisory Control and Data Acquisition (SCADA) to monitor and control switches and protective devices. This will improve routine asset management, quality of service, operational efficiency, reliability, and security. The three parts of the system are integrated with advanced communication and measurement technology. As an added advantage over the existing system, the proposed methodology can monitor the operating situation and easily detect and locate faults on the feeders and the status of breakers. The information from the system helps realize advanced distribution operation, including improvement in power quality, loss detection and state estimation.

Keywords— Advanced Metering Infrastructure (AMI), Supervisory Control and Data Acquisition (SCADA), Smart Distribution Grid (SDG), Distribution Automation (DA), Geography Information System (GIS), General Packet Radio Service (GPRS), Access Point Name (APN), Virtual Private Network (VPN).
INTRODUCTION
Distribution systems face the customers directly; as the key to guaranteeing power supply quality and enhancing operating efficiency [1], they are the largest and most complex part of the entire electrical system. The productivity of the power system is rather low nowadays (about 55%, according to statistics from the USA), so massive fixed-asset investment is wasted [2]. More than 95% of the power outages suffered by consumers are due to the electrical power distribution system (excluding generation insufficiency) [3]. Therefore, advanced distribution automation should be developed to realize flexible load-demand characteristics and the optimal management and utilization of assets, through communication between the utility and terminal customers.
Substation automation is an integrated system which realizes real-time remote monitoring, coordination and control. Substation remote monitoring has become one of the key parts of the substation automation system because it takes advantage of wireless communication technology, which has several overwhelming advantages such as convenience, speed and low-cost transmission. Furthermore, the GPRS network already covers the whole country and has become an important sustainable resource (for utilization and development).
Recently, the smart grid has become the focus topic of the power industry, and its network model can change the future of the power system. The smart grid includes the smart transmission grid and the smart distribution grid (SDG). The emerging smart grid system necessitates high-speed sensing of data from all the sensors on the system within a few power cycles. Advanced Metering Infrastructure is a simple illustration of a structure where all the meters on a grid must be able to provide the necessary information to the controlling (master) head end within a very short duration [3]. With AMI, the distribution wide-area measurement and control system comprises the information exchange and integrated infrastructure [4]-[6].
The distribution system plays an important role in power systems. After many years of construction, most distribution systems are equipped with SCADA; however, some distribution lines in rural areas covering long distances cannot be monitored and controlled at all. Optical fiber communication also suits power systems because of its insensitivity to electromagnetic interference, but its high cost limits its usage across the whole power system [2]. Therefore, for these rural distribution lines and lines covering long distances, a solution with another kind of communication should be applied.
In this study, using wireless communication technology based on the AMI system, a measurement and control system for the distribution network is proposed to monitor and control long-distance feeders in rural areas and to realize the management of the whole distribution system. Furthermore, it will shorten the fault time, enhance the utilization rate of the power system assets, and match the requirements of the smart distribution grid.

SDG AND AMI

SDG gives us an integrated grid of all kinds of new technologies emerging in the distribution network, with a perfectly working distribution system. In the operation of the SDG, one-way communication is replaced by two-way communication: customers can know the real-time price and make a plan to use their own distributed generation to supply themselves or feed the spare electrical power back to the power system, charging during high-price periods, or they can decide to turn on electrical appliances during low-price periods.
The SDG requires high-speed communication to the customers on the system, so a two-way communication system is used to realize some functions of the SDG [3]. Since SCADA infrastructure is typically limited due to cost, AMI is implemented.
AMI is the deployment of a metering solution with two-way communications to enable time stamping of meter data, outage reporting, communication into the customer premises, service connect/disconnect, on-request reads, and other functions.
AMI systems provide a unique opportunity to gather more information on voltage, current, power and outages across the distribution grid, enabling new and exciting applications.
In the proposed system, the data gathered from AMI are used to monitor the operating status of the distribution feeder. If a fault occurs, the software will detect its location and send commands to switch off the relevant switches; furthermore, after the fault disappears, those switches can be switched on again remotely. That is a task required to realize a smart distribution grid.

PROPOSED ARCHITECTURE OF AMI SYSTEM

The measurement and control system consists of three parts: the measurement and controlling device (M&C device), the communication network, and the data processing center. Fig. 1 shows the block diagram of the proposed system. There are two 10 kV feeders, each with two section switches (or reclosers), and a loop switch connects the two feeders together. The four section switches (S1-S4) and the loop switch (LS) are equipped with GPRS-communication FTUs, called GFTUs.

Fig.1 The architecture of the AMI system


MEASUREMENT AND CONTROL DEVICE
The measurement and controlling device consists of a GPRS communication module and an FTU; the GFTU is connected to switches, reclosers or breakers and gathers data from meters or switches. The configuration of the GFTU is shown in Fig. 2.

Fig.2 The diagram of the GFTU
The microcontroller collects and packages the data of the switch, and then sends them to the control center over the GPRS network. The data collected include voltage, current, power factor and so on. In the other direction, the GFTU receives commands from the control center and switches the switchgear on or off.
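To make the data flow concrete, the following minimal Java sketch (ours, not from the paper; the class names and the text frame format are illustrative assumptions) shows how a GFTU-style unit might package one measurement sample for the GPRS uplink and receive a switching command from the control center:

import java.util.Locale;

/** Illustrative GFTU telemetry sample; the field set follows the paper
 *  (voltage, current, power factor, switch status). */
final class GftuSample {
    final String unitId;                       // e.g. "S1"
    final double voltageKv, currentA, powerFactor;
    final boolean switchClosed;

    GftuSample(String unitId, double voltageKv, double currentA,
               double powerFactor, boolean switchClosed) {
        this.unitId = unitId;
        this.voltageKv = voltageKv;
        this.currentA = currentA;
        this.powerFactor = powerFactor;
        this.switchClosed = switchClosed;
    }

    /** Package the sample as a simple text frame for the GPRS uplink. */
    String toFrame() {
        return String.format(Locale.US, "%s;U=%.2f;I=%.1f;PF=%.3f;SW=%d",
                unitId, voltageKv, currentA, powerFactor, switchClosed ? 1 : 0);
    }
}

public class GftuDemo {
    public static void main(String[] args) {
        GftuSample s = new GftuSample("S1", 10.2, 86.5, 0.94, true);
        System.out.println("uplink frame: " + s.toFrame());
        // Downlink direction: the control center sends e.g. "S1;CMD=OPEN";
        // a real GFTU would parse this and drive the switch actuator.
        System.out.println("received command: S1;CMD=OPEN");
    }
}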

COMMUNICATION NETWORK

In the proposed system with AMI, there are two levels of communication. The first level is from the switches or meters to the GFTUs via RS-232 or RS-485. The second level is the communication between the GFTUs and the center. The GPRS communication network includes the GPRS modules embedded in the GFTUs, the GPRS network and the server.
In this system, an APN (Access Point Name) private tunnel and VPN (virtual private network) encryption and authentication technology are adopted in the control center. Each GFTU has a static IP address; the GFTUs register and send data to the APN (provided by the mobile operator), and the data are then forwarded to the server. Through the tunnel-exchange technology, the user can be identified, and the user's data are transmitted through the public network with high security and speed. The scheme of the GPRS network is shown in Fig. 3.


Fig.3 Schemes of the GPRS network in the AMI system
PROCESSING SOFTWARE
The processing software includes a database and a GIS (Geographic Information System). The software gathers, processes and transmits data to realize the following functions:
- display of the current operating situation of the feeder on screen;
- fault detection, location, isolation and restoration in the distribution system.
In the system, data transmission between the switches and the GFTU, and between the GFTU and the center, is bidirectional. The operator in the center monitors the operating situation of the feeder and also controls the switches in the feeder when a fault occurs.

FAULT DETECTION AND LOCATION

The distribution feeder data enhance the outage management system and enable the "fault diagnosis" capability of the software, which not only improves outage restoration times but also supports more effective restoration procedures.
The GFTUs collect the operating information of the switches, such as voltage, current and switch status. If a fault occurs, the CB switches off immediately and the center receives the current operating information from all GFTUs. In the center, the fault can be located with the help of the data from every GFTU on the feeders, according to the fault current.
If it is a transient fault, the CB recloses successfully after several seconds; if not, it opens again. The actions of the switches are recorded in order to decide which switches to open to isolate the fault, and to reconfigure the distribution feeder so as to minimize the loss.
For example, as shown in Fig. 4, in the hand-in-hand loop between substations A and B, the loop switch is in the off state. When a permanent fault occurs between S2 and S3, the CB on line A trips, and at the same time the GFTUs on line A send the voltage and current parameters to the control center; the fault current is found in the data from S1 and S2, but not in the data from S3 and S4.
According to the topological structure, the fault is thus located between S2 and S3; these two switches should be switched off, and the CB should then be switched on remotely to restore the power supply of the feeder circuit.
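The location rule in this example can be stated compactly: the fault lies between the last switch that reported fault current and the first switch that did not. A hypothetical Java sketch of that rule (names and structure are ours, purely illustrative):

import java.util.LinkedHashMap;
import java.util.Map;

public class FaultLocator {
    /** Locate the faulted section from the fault-current flags reported by
     *  the GFTUs, ordered from the substation outward (e.g. S1..S4). */
    static String locate(Map<String, Boolean> faultCurrentSeen) {
        String lastSeen = "CB";   // circuit breaker at the feeder head
        for (Map.Entry<String, Boolean> e : faultCurrentSeen.entrySet()) {
            if (e.getValue()) {
                lastSeen = e.getKey();        // fault current flowed through here
            } else {
                return "fault between " + lastSeen + " and " + e.getKey();
            }
        }
        return "fault beyond " + lastSeen;
    }

    public static void main(String[] args) {
        Map<String, Boolean> report = new LinkedHashMap<>();
        report.put("S1", true);   // as in the example of Fig. 4:
        report.put("S2", true);   // S1 and S2 see fault current,
        report.put("S3", false);  // S3 and S4 do not
        report.put("S4", false);
        System.out.println(locate(report));   // prints: fault between S2 and S3
    }
}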

AID TO ADO
Working together with other systems, the monitoring and control system helps run the advanced distribution operation (ADO) of the whole distribution system; the connection of the proposed system with other software is shown in Fig. 6.

MAKE THE INVISIBLE VISIBLE

The geographic information system (GIS) receives the data and displays them on the map, so the operating situation of the feeders can be monitored clearly. When a fault occurs, it is displayed on the map along with its location, which helps workers find it easily.

Red: ON Green: OFF
Fig.5 Fault location and restoration.

IMPROVE POWER QUALITY

The GFTUs collect the voltage values at different points of the feeder, so power parameters such as harmonic content and reactive power can be analysed quickly, helping the utility to adopt the relevant technology, improve power quality, and avoid unnecessary investment.

LOSS DETECTION

It is very difficult to know the actual losses on the distribution network; generally, rules of thumb are used to estimate the power losses. It is possible to calculate the system losses by relating the information from nodes at the distribution feeder and the distribution transformers. This enables better tracking and efficiency on the distribution network.
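Assuming time-synchronized energy readings are available at the feeder head and at the distribution transformers, the loss calculation reduces to a simple energy balance; a minimal Java sketch with invented figures:

public class FeederLossDemo {
    public static void main(String[] args) {
        // Energy measured at the feeder head over one day, kWh (invented value)
        double feederHeadKwh = 33_700.0;
        // Energies metered at the distribution transformers over the same day
        double[] transformerKwh = {8_120.0, 7_950.0, 8_480.0, 8_210.0};

        double delivered = 0.0;
        for (double e : transformerKwh) delivered += e;

        double lossKwh = feederHeadKwh - delivered;
        double lossPct = 100.0 * lossKwh / feederHeadKwh;
        System.out.printf("feeder loss: %.0f kWh (%.2f%%)%n", lossKwh, lossPct);
    }
}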

STATE ESTIMATION

Measurements are usually only available at the distribution substations. The power flows on the distribution grid are unknown; they are typically allocated using generic models or transformer kVA ratings.
By utilizing the information from the beginning and end of the distribution feeders, accurate load models can be computed, allowing accurate load estimation on the distribution grid.
These data are critical to understanding the impact and benefits of connecting renewable energy sources to the distribution grid.




APPLICATION CASES

The proposed monitoring and control system has run successfully for more than one year at the Qingdao utility in Shandong Province, China. The distribution feeder lines, line Y and line Q, are located in the northern urban area of Qingdao, about 30 km from the central office. Before the installation, the lines were patrolled manually every day or week, the switches were operated manually, and it was difficult to find fault locations. Seven GFTUs were installed on the two feeders.
The application of the proposed system helped gather data almost immediately and shortened the time needed to locate faults. When the AMI-based measurement and controlling system came into use, a comprehensive test was carried out, as shown in Table 1.


Table 1. Test items and results of the system

Item | Result
Switch the loop switch on/off 3 times | Correct; success rate: 100%; response time: 1.2 seconds
Remote reading every 5 minutes | GPRS code loss rate: less than 1%

The proposed system is also cost-effective. According to information from the company, the power supplied by the two lines is 12.3026 MkWh and 6.70 MkWh; if two faults occur on each line, the benefits of the measurement and control system for the distribution feeder are as listed in Table 2.
Table 2. Benefits of the system

Item | Benefit
Time saved in locating the fault | More than 10 hours
Saved transportation fee | 500,000 Yuan RMB
Avoided electricity loss | 44,000 kWh
Saved device loss | 2,530,000 Yuan RMB
Total benefit | 3,610,000 Yuan RMB

CONCLUSION
The smart grid is a collection of concepts that includes new power delivery components, control and monitoring throughout the power grid, and more informed customer options for the next-generation power system, and the smart distribution grid is an important part of it. To realize the smart distribution grid, an AMI-based measurement and control system for the distribution system is proposed in this paper.
It enables utilities to run advanced distribution operations in a cost-effective manner. The adoption of advanced communication media, such as GPRS or 3G, enables the AMI system to collect meter data quickly, accurately and automatically.
The proposed system can work alongside existing feeder automation systems that use other communication types, and it can be integrated with the distribution automation system to reduce labour cost and to provide accurate loss detection and load forecasting. It will be a very important step toward the realization of the smart distribution grid.

REFERENCES:
[1] XIAO Shijie, "Consideration of technology for constructing Chinese smart grid," Automation of Electric Power Systems, vol. 33, pp. 124, 2009.
[2] MCGRANAGHAN M., GOODMAN F., "Technical and system requirements for advanced distribution automation," Proceedings of the 18th International Conference on Electricity Distribution, June 6-9, 2005, Turin, Italy: 5p.
[3] XU Bingyin, LI Tianyou, XUE Yongduan, "Smart Distribution Grid and Distribution Automation," Automation of Electric Power Systems, vol. 33, pp. 38-41, 2009.
[4] YU Yixing, LUAN Wenpeng, "Smart Grid," Power System Technology, vol. 25, pp. 7-11, 2009.
[5] YU Yi-xin, "Technical Composition of Smart Grid and its Implementation Sequence," Southern Power System Technology, vol. 3, pp. 1-5, 2009.
[6] Collier S. E., "Ten Steps to a Smarter Grid," Rural Electric Power Conference, REPC'09, IEEE, 2009.
[7] Sumanth S., Simha V., Bapat J., et al., "Data Communication over the Smart Grid," International Symposium on Power Line Communications and Its Applications, pp. 273-279, 2009.
[8] Hart, D. G., "Using AMI to realize the smart grid," Power and Energy Society General Meeting - Conversion and Delivery of Electrical Energy in the 21st Century, Pittsburgh, Pennsylvania, USA, 2008.
[9] SUI Hui-bin, WANG Hong-hong, WANG Hong-wei, et al., "Remote meter reading system based on GPRS used in substations," Telecommunications for Electric Power System, vol. 28, pp. 57-59, 65, Jan. 2007.
[10] ZHAO Bing, ZOU He-ping, LV Ying-jie, "Access Security Technology in Monitoring System of Distribution Transformer Based on GPRS," Advances of Power System & Hydroelectric Engineering, vol. 26(3), pp. 16-19, 2010.
[11] Hu Hongsheng, Qian Suxiang, Wang Juan, Shi Zhengjun, "Application of Information Fusion Technology in the Remote State On-line Monitoring and Fault Diagnosing System for Power Transformer," Electronic Measurement and Instruments, ICEMI '07, 8th International Conference on, pp. 3-550 - 3-555, Oct. 2007.
[12] Nie Huaiyun, "Research and design of μC/OS-II\GPRS-based remote ship supervision system," Nanjing University of Science and Technology: Academic, 2006, pp. 6-11, 14-23, 37-44. (in Chinese)
[13] B. Berry, "A Fast Introduction to SCADA Fundamentals and Implementation," DPS Telecom, retrieved on July 28, 2009, from http://www.dpstelecom.com/w_p.
[14] Electricity Company of Ghana (ECG), "Automation of ECG's Power Delivery Process (SCADA)," retrieved on July 28, 2009, http://www.ecgonline.info/Projects/CurrentProjects/Engineering Projects/SCADA, 2008.





















Performance Comparison of AODV, DSDV and ZRP Routing Protocols
Ajay Singh1, Anil Yadav2, Dr. Mukesh Sharma2
1Research Scholar (M.Tech), Department of Computer Science, T.I.T&S, Bhiwani
2Faculty, Department of Computer Science, T.I.T&S, Bhiwani
E-mail- ajays.cs@gmail.com
Abstract: A Mobile Ad Hoc Network (MANET) is a group of independent mobile network devices connected over various wireless links, typically operating on constrained bandwidth. The network topologies are dynamic and may vary from time to time. Each device must act as a router to forward traffic for the others. Such a network can operate by itself or be incorporated into a larger network. In this paper, we have analyzed various random-based mobility models - the Random Waypoint model, Random Walk model, Random Direction model and Probabilistic Random Walk model - using the AODV, DSDV and ZRP protocols in Network Simulator (NS 2.35). The performance of MANET mobility models has been compared by varying the number of nodes, the type of traffic (CBR, TCP) and the maximum speed of nodes. Comparative conclusions are drawn on the basis of various performance metrics: Routing Overhead (packets), Packet Delivery Fraction (%), Normalized Routing Load, Average End-to-End Delay (milliseconds) and Packet Loss (%).

Keywords: Mobile Ad hoc, AODV, DSDV, ZRP, TCP, CBR, routing overhead, packet delivery fraction, end-to-end delay, normalized routing load.
1 Introduction:
Wireless technology has been in existence since the 1970s and is advancing every day. With the unlimited use of the internet at present, wireless technology has reached new heights. Today we see two kinds of wireless networks. The first is a wireless network built on top of a wired network, creating a reliable infrastructure wireless network: the wireless nodes connect to base stations, which are in turn connected to the wired network. An example is the cellular phone network, where a phone connects to the base station with the best signal quality.
The second type of wireless network is one in which no infrastructure exists at all except the participating mobile nodes [1]. This is called an infrastructure-less wireless network or an ad hoc network. The phrase "ad hoc" means something that is not fixed or organized, i.e., dynamic. Recent technologies such as Bluetooth introduced a fresh type of wireless system frequently known as the mobile ad hoc network.
A MANET is an autonomous group of mobile users that communicate over reasonably slow wireless links. The network topology may vary rapidly and unpredictably over time because the nodes are mobile. The network is decentralized: all network activity, including discovering the topology and delivering messages, must be executed by the nodes themselves, and hence routing functionality has to be incorporated into the mobile nodes. A mobile ad hoc network is a collection of independent mobile nodes that can communicate with each other via radio waves. Mobile nodes that are in radio range of each other can communicate directly, whereas other nodes need the help of intermediate nodes to route their packets. These networks are fully distributed and can work at any place without the aid of any infrastructure. This property makes these networks highly robust.
In the late 1980s, a Mobile Ad hoc Networking (MANET) Working Group was formed within the Internet Engineering Task Force (IETF) [1] to standardize the protocols and functional specifications and to develop a routing framework for IP-based protocols in ad hoc networks. A number of protocols have been developed since then, basically classified as Proactive/Table-Driven and Reactive/On-Demand routing protocols, each with its respective advantages and disadvantages; however, there is currently no standard for ad hoc network routing, and the work is still in progress. Routing is therefore one of the most important issues for ad hoc networks. The area of ad hoc networking has been receiving increasing attention among researchers in recent years, and the work presented in this paper is expected to provide useful input to the routing mechanism in ad hoc networks.
2 Protocol Descriptions
2.1 Ad hoc On Demand Distance Vector (AODV)
The AODV routing algorithm is a source-initiated, on-demand routing protocol. Since the routing is "on demand", a route is only traced when a source node wants to establish communication with a specific destination, and the route remains established as long as it is needed for further communication. Another feature of AODV is its use of a "destination sequence number" for every route entry. This number is included in the RREQ (Route Request) of any node that desires to send data, and such numbers are used to ensure the "freshness" of routing information: a requesting node always chooses the route with the greatest sequence number to communicate with its destination node. Once a fresh path is found, an RREP (Route Reply) is sent back to the requesting node. AODV also has the necessary mechanisms to inform network nodes of any link break that might have occurred in the network.

2.2 Destination Sequenced Distance Vector (DSDV)
The Destination Sequenced Distance Vector routing protocol is a proactive routing protocol that is a modification of the conventional Bellman-Ford routing algorithm. It adds a new attribute, the sequence number, to each route table entry at each node. A routing table is maintained at each node, and using this table a node transmits packets to the other nodes in the network. The protocol was motivated by the need for data exchange along changing and arbitrary paths of interconnection which may not be close to any base station; the acceptance rule this adds to Bellman-Ford is sketched below.
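The rule can be summarized in a few lines: a received route replaces the current table entry if its destination sequence number is newer, or if it is equally new but has a smaller metric. A small illustrative Java sketch (the names are ours, not from the paper):

import java.util.HashMap;
import java.util.Map;

public class DsdvTable {
    static final class Route {
        final String nextHop; final int metric; final long seqNo;
        Route(String nextHop, int metric, long seqNo) {
            this.nextHop = nextHop; this.metric = metric; this.seqNo = seqNo;
        }
    }

    private final Map<String, Route> table = new HashMap<>();

    /** DSDV acceptance rule: a newer sequence number wins; on a tie,
     *  the route with the smaller metric wins. */
    void onAdvertisement(String dest, Route adv) {
        Route cur = table.get(dest);
        if (cur == null
                || adv.seqNo > cur.seqNo
                || (adv.seqNo == cur.seqNo && adv.metric < cur.metric)) {
            table.put(dest, adv);
        }
    }

    public static void main(String[] args) {
        DsdvTable t = new DsdvTable();
        t.onAdvertisement("D", new Route("B", 3, 100));
        t.onAdvertisement("D", new Route("C", 5, 102)); // newer seqNo: replaces entry
        t.onAdvertisement("D", new Route("B", 2, 102)); // same seqNo, smaller metric
        System.out.println("next hop to D: " + t.table.get("D").nextHop); // prints B
    }
}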

2.3 Zone Routing Protocol (ZRP)
ZRP is designed to address the problems associated with proactive and reactive routing: excess bandwidth consumption due to flooding of update packets, and long delays in route discovery, respectively. ZRP introduces the concept of zones. Within a limited zone, route maintenance is easier, and because of zones the number of routing updates is decreased. Nodes outside the zone communicate via reactive routing; for this purpose the route request is not flooded to the entire network - only the border nodes are responsible for performing this task. ZRP thus combines the features of both proactive and reactive routing algorithms. The architecture of ZRP consists of four elements: MAC-level functions, the Intra-Zone Routing Protocol (IARP), the Inter-Zone Routing Protocol (IERP) and the Bordercast Resolution Protocol (BRP). Proactive routing is used within limited, specified zones, and reactive routing is used beyond the zones. The MAC level performs neighbour discovery and maintenance functions: for instance, when a node comes into range, a new-neighbour notification is sent to IARP; similarly, when a node loses connectivity, a lost-connectivity notification is sent to IARP. Within a specified zone, the IARP protocol routes packets, and IARP keeps information about all nodes in the zone in its routing table. On the other hand, if a node wants to send a packet to a node outside the zone, the IERP protocol is used to find the best path.
International Journal of Engineering Research and General ScienceVolume 2, Issue 5, August – September 2014
ISSN 2091-2730


199
www.ijergs.org

That means IERP is responsible for maintaining correct routes outside the zone. If IERP does not have a route in its routing table, it sends a route query to BRP. BRP is responsible for contacting nodes across the ad hoc network and passing route queries. The important point about the bordercasting mechanism of BRP is that it avoids flooding packets into the network: BRP always passes route query requests to border nodes only, so only border nodes transmit and receive these query packets.
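A node's routing zone is simply the set of nodes within the zone radius (counted in hops), and the peripheral or border nodes are those at exactly that distance. A hypothetical Java sketch of that computation using breadth-first search (the graph and names are invented):

import java.util.*;

public class ZrpZone {
    /** Breadth-first search up to zoneRadius hops; returns hop distances.
     *  Nodes at distance == zoneRadius are the border (peripheral) nodes
     *  to which BRP bordercasts route queries. */
    static Map<String, Integer> zoneOf(Map<String, List<String>> adj,
                                       String root, int zoneRadius) {
        Map<String, Integer> dist = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        dist.put(root, 0);
        queue.add(root);
        while (!queue.isEmpty()) {
            String u = queue.poll();
            int d = dist.get(u);
            if (d == zoneRadius) continue;        // do not expand past the zone
            for (String v : adj.getOrDefault(u, List.of())) {
                if (!dist.containsKey(v)) {
                    dist.put(v, d + 1);
                    queue.add(v);
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        Map<String, List<String>> adj = Map.of(
                "A", List.of("B", "C"),
                "B", List.of("A", "D"),
                "C", List.of("A"),
                "D", List.of("B", "E"),
                "E", List.of("D"));
        // Zone of A with radius 2: A=0, B=1, C=1, D=2; D is a border node,
        // and E lies outside the zone (reachable only via IERP).
        System.out.println(zoneOf(adj, "A", 2));
    }
}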

3 Simulation
All three routing protocols were simulated in the same environment using Network Simulator (ns-2). AODV, DSDV and ZRP were tested with TCP traffic. The algorithms were tested using 50 nodes in a 1000 m by 1000 m simulation area in which the node locations change randomly. The number of connections used at a time is 30, and the speed of the nodes varies from 1 m/s to 10 m/s. Using TCP traffic, we evaluate the performance of these protocols for the following random-based mobility models (the performance metrics themselves are computed from trace-file packet counts, as sketched after the list):
(i) Random Waypoint (RWP)
(ii) Random Walk (RW)
(iii) Random Direction (RD)
(iv) Probabilistic Random Walk (PRW)
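For reference, the headline metrics named in this paper can be computed from simple packet counts that would normally be tallied from the ns-2 trace file; a hedged Java sketch with invented counter values:

public class ManetMetrics {
    public static void main(String[] args) {
        // Counts that would be tallied from the ns-2 trace file (invented values)
        long dataSent = 10_000, dataReceived = 9_900;
        long routingPackets = 1_500;
        double totalDelaySec = 120.0;   // summed over all delivered packets

        double pdf = 100.0 * dataReceived / dataSent;              // Packet Delivery Fraction (%)
        double nrl = (double) routingPackets / dataReceived;       // Normalized Routing Load
        double avgDelayMs = 1000.0 * totalDelaySec / dataReceived; // Average End-to-End Delay (ms)
        double lossPct = 100.0 * (dataSent - dataReceived) / dataSent;

        System.out.printf("PDF=%.1f%%  NRL=%.3f  delay=%.2f ms  loss=%.1f%%%n",
                pdf, nrl, avgDelayMs, lossPct);
    }
}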

4 Simulation results
The results of our simulations are presented in this section. First we discuss the results of the AODV, DSDV and ZRP protocols for the different metrics, and then we compare the three protocols.

4.1 Pause Time Model Results
This test studied the effect of increasing pause time on the performance of the three routing protocols. As pause time increases, mobility in terms of changes in direction (movement) decreases. When a pause time occurs, a node stops for a while and selects another direction to travel. If the speed is defined as constant, then for every occurrence of pause time the speed of the node remains constant. In this model the pause time changes from 0 s to 400 s while the other parameters (nodes = 50, speed = 10 m/s, data sending rate = 16 kbps, and number of CBR flows = 10) are constant.


Fig.3(a): Varying pause time vs packets delivery fraction (%)


Fig. 3(b) Varying pause time vs average network end-to-end delay (in seconds)


Fig. 3(c): Varying pause time vs routing cost (in packets)

Figures 3(a), 3(b) and 3(c) show the packet delivery fraction, average network delay and routing cost when the pause time varies from 0 s to 400 s. Figure 3(a) shows the difference in the packet delivery fractions of the protocols. The performance of AODV is almost 100%; we recorded an average of 99% packet delivery for AODV during the whole simulation. DSDV was close behind AODV and showed the second-best performance: with smaller pause times (higher node movement) DSDV delivered 90% of data packets successfully, and as the pause time increased (node movement decreased) the DSDV packet delivery ratio also increased, giving performance similar to AODV at pause times of 300 s and 400 s. The same happened with ZRP. At a pause time of 0 s, a packet delivery fraction of 80% was recorded. We observed a slightly lower packet delivery fraction for ZRP at a pause time of 100 s, although the value at this point should have been higher than the previous one; we checked the NAM file but did not find anything going wrong. One possible reason could be the far placement of sources and destinations before the pause time of 100 s occurred.
Figure 3(b) shows the average end-to-end network delay. Under high node movement, the delay of ZRP was recorded as 0.1 s. As node movement slowed down toward a pause time of 400 s, the delay offered by ZRP also moved down and approached that of AODV, as shown in Fig. 3(b). DSDV and AODV showed nearly similar performance in terms of delay, but DSDV was a bit smoother and offered lower delay compared to AODV: an average of 0.011 s was recorded for DSDV, AODV held the second-best position with an average delay of 0.014 s, while ZRP offered an average delay of 0.4 s.



4.2 Speed Model Simulation Results


Fig. 4(a) Varying speed vs packets delivery fraction (%)


Fig. 4(b) Varying speed vs average network end-to-end delay (in seconds)
Figure 4(b) shows the average end-to-end network delay. We did not see much difference between the delay values of AODV and DSDV, but DSDV performed slightly better than AODV and showed consistent performance with an average delay of 0.01 s. Although AODV behaved similarly to DSDV, at the maximum speed of 50 m/s its delay increased from 0.017 s to 0.05 s. Comparatively, ZRP showed high delay values: at a speed of 20 m/s the delay went down slightly and then increased again as node speed increased. ZRP maintained an average delay of 0.1 s.



Fig. 4(c): Varying speed vs routing cost (in packets)

Figure 4(c) illustrates the routing cost introduced in the network. DSDV maintained an average of 12 control packets per data packet throughout the simulation. As speed increased, the routing overhead of AODV also increased, reaching up to 54 control packets per data packet. ZRP showed a high routing overhead: the maximum recorded routing load at high mobility was 2280 control packets.



4.3 Network Model Simulation Results



Fig.5(a) Varying nodes vs packets delivery fraction (%)



Fig. 5(b) Varying nodes vs average network end-to-end delay



Fig. 5(c): Varying nodes vs routing cost (in packets)
Figures 5(a), 5(b) and 5(c) show the protocols' performance in the network model. We recorded consistent packet delivery fraction values for AODV across different network sizes. In contrast, ZRP achieved consistent packet delivery up to a network size of 30 nodes, with an average delivery ratio of 96%. At a network size of 40 nodes, ZRP's packet delivery fraction fell from 95% to 91%, and at a network size of 50 nodes its lowest value (69%) was recorded. DSDV showed the third-best performance in the network model in terms of packet delivery fraction: as the size of the network increased, DSDV's packet delivery fraction also increased, reaching up to 91%. The packet delivery fraction comparison of the protocols can be seen in Figure 5(a). In terms of delay, Figure 5(b), DSDV showed fairly consistent performance with an average delay of 0.01 s, while the delay of AODV varied between 0.012 s and 0.026 s during the whole simulation. ZRP, on the other hand, gave the lowest delay compared to AODV and DSDV up to a network size of 30 nodes. From 30 to 40 nodes we saw a slight increase in the delay of ZRP, and from 40 to 50 nodes there was a drastic increase; the maximum delay we calculated for ZRP at this point was 0.095 s.
Figure 5(c) demonstrates the routing cost of the protocols. From the figure it is quite visible that the routing load of ZRP is much higher than that of AODV and DSDV: as the network became fully dense, the routing load of ZRP reached up to 1915 control packets per data packet. AODV and DSDV showed the same general behaviour; however, DSDV gave a comparatively low routing load, with an increase of only 3 to 4 control packets as the network size increased. AODV seemed to approach DSDV when the network size was 20 nodes, but just after this point its load rose, reaching up to 22 control packets. After a network size of 40 nodes we saw consistent performance from AODV.
4.4 Load Model Simulation Results
In this testing model the varying parameter is the data sending rate. With 10 CBR sources, we offered different workloads: the load increased from 4 to 20 data packets/second while the pause time is zero, node speed is 10 m/s and the number of nodes is 50.



Figures 6(a), 6(b) and 6(c) highlight the relative performance of the three protocols in the load model. As seen in Figure 6(a), the packet delivery fractions of all protocols are affected as the data sending rate increases. DSDV stayed close to AODV: both maintained consistent delivery ratios up to a rate of 8 data packets/s, and as the sending rate increased beyond that point both protocols started dropping data packets. At a sending rate of 20 packets/s, AODV and DSDV gave their lowest packet delivery fractions, i.e., 63% and 66% respectively. ZRP suffered badly when the load increased and gave the worst packet delivery fractions at sending rates of 8, 12, 16 and 20 packets/s.






ZRP delivered only 18% of data packets at a sending rate of 20 packets/s. Network delay can be found in Figure 6(b): as the figure highlights, ZRP maintained an average delay of 0.3 s against increasing load. AODV and DSDV initially showed small delay values under low sending rates, but as the offered load increased from 8 packets/s onward, both reported high delay values. AODV, however, showed a rapid increase in delay and reported the highest delay value of 1.064 s when the transmission rate was 16 packets/s. The routing cost of the protocols in the load model is presented in Figure 6(c). As shown in the figure, the routing cost of DSDV is lower than that of AODV; as the load in the network increases, DSDV generates fewer routing packets. AODV gave slightly higher overhead than DSDV from an offered load of 4 packets/s to 8 packets/s, and finally, at the maximum applied load, AODV generated 10 control packets. ZRP in this model again generated a high number of control packets, but this time, compared with Figures 5(c) and 4(c), ZRP showed variation in its routing load: from a sending rate of 8 to 16 packets/s ZRP generated an average of 1540 control packets, and at the highest sending rate of 20 packets/s it generated 1756 control packets.
4.5 Flow Model Simulation Results
In this testing model each CBR flow generated 16 kbps of traffic, and the number of flows (connections) varied from 5 to 25. This model evaluates the strength of the protocols under various numbers of source connections.
Figures 7(a), 7(b) and 7(c) show the results we drew after simulation. As shown in Figure 7(a), the packet delivery fraction of ZRP is lower than that of the other two protocols: as the number of flows increased from 5 to 25 sources, the packet delivery fraction of ZRP suffered and fell rapidly. For 5 sources, both ZRP and DSDV delivered almost the same number of packets to the destinations, but as the number of CBR sources increased, DSDV maintained its packet delivery (an average of 90%) continuously until the end of the simulation while ZRP started dropping packets; finally, for 25 CBR sources, ZRP delivered only 38% of data packets. AODV outperformed the others here and delivered 99% of data packets against an increasing number of CBR sources. The average network delay is shown in Figure 7(b). AODV and DSDV both showed small and almost identical delay values up to 20 CBR sources; only a slight increase in delay (near 0.1 s) occurred for both protocols at 25 CBR sources. From start to end, the delay of ZRP continuously moved up as the number of CBR sources increased, reaching its highest value of 0.543 s; ZRP offered high delay compared to AODV and DSDV.











The routing cost of all the protocols reduced as the number of CBR sources increased, as shown in Figure 7(c). Looking at AODV and DSDV: initially, for 5 sources, AODV generated 18 control packets while DSDV generated 23 control packets; as the number of CBR sources changed from 5 to 25, both protocols generated small numbers of control packets.
Fig. 7(c): Varying number of flows (connections) vs routing cost (in packets)
The performance of DSDV was more satisfactory, as it generated an average of 9 control packets while AODV generated an average of 15 control packets. For ZRP the routing cost is very high (Figure 7(c)): for 5 CBR sources ZRP generated the maximum number of routing packets, 2646. The routing overhead decreased as the number of sources increased, reaching its lowest value of 1364 routing packets for 24 CBR sources, but the routing load of ZRP is still very much higher than that of DSDV and AODV.
6 Future work:
In this paper four random mobility models have been compared using the AODV, DSDV and ZRP protocols. This work can be extended in the following directions:
- Investigation of other MANET mobility models using different protocols under different types of traffic, such as CBR.
- Different numbers of nodes and different node speeds.

REFERENCES:
[1] E.M. Royer & C.E. Perkins, "An Implementation Study of the AODV Routing Protocol," Proceedings of the IEEE Wireless Communications and Networking Conference, Chicago, IL, September 2000.
[2] B.C. Lesiuk, "Routing in Ad Hoc Networks of Mobile Hosts," Available Online: http://phantom.me.uvic.ca/clesiuk/thesis/reports/adhoc/adhoc.html#E16E2
[3] Andrea Goldsmith, Wireless Communications, Cambridge University Press, 2005.
[4] Bing Lin and I. Chlamtac, Wireless and Mobile Network Architectures, Wiley, 2000.
[5] S.K. Sarkar, T.G. Basawaraju and C. Puttamadappa, Ad hoc Mobile Wireless Networks: Principles, Protocols and Applications, Auerbach Publications, pp. 1, 2008.
[6] C.E. Perkins, E.M. Royer & S. Das, "Ad Hoc On Demand Distance Vector (AODV) Routing," IETF Internet draft, draft-ietf-manet-aodv-08.txt, March 2001.
[7] C.E. Perkins & E.M. Royer, "Ad-hoc On-Demand Distance Vector Routing," Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications, New Orleans, LA, February 1999, pp. 90-100.
[8] E.M. Royer & C.K. Toh, "A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks," IEEE Personal Communications Magazine, April 1999, pp. 46-55.
[9] D. Comer, Internetworking with TCP/IP Volume 1, Prentice Hall, 2000.



Analysis of Thick Beam Bending Problem by Using a New Hyperbolic Shear Deformation Theory
Vaibhav B. Chavan1, Dr. Ajay G. Dahake2
1Research Scholar (PG), Department of Civil Engineering, Shreeyash College of Engineering and Technology, Aurangabad (MS), India
2Associate Professor, Department of Civil Engineering, Shreeyash College of Engineering and Technology, Aurangabad (MS), India
E-mail- vaibhav.chavan25@yahoo.com
Abstract: A new hyperbolic shear deformation theory for the bending of deep beams, in which the number of variables is the same as in the hyperbolic shear deformation theory, is developed. The noteworthy feature of the theory is that the transverse shear stresses can be obtained directly and efficiently from the constitutive relations, satisfying the shear-stress-free condition on the top and bottom surfaces of the beam; hence the theory obviates the need for a shear correction factor. A fixed-fixed isotropic beam subjected to varying load is examined using the present theory. The governing differential equation and boundary conditions are obtained using the principle of virtual work. The results obtained are discussed critically in comparison with those of other theories.

Keywords: thick beam, new hyperbolic shear deformation, principle of virtual work, equilibrium equations, displacement.

I. INTRODUCTION
1.1 Introduction
It is well known that the elementary theory of beam bending based on the Euler-Bernoulli hypothesis disregards the effects of shear deformation and stress concentration. The theory is suitable for slender beams but not for thick or deep beams, since it is based on the assumption that sections normal to the neutral axis before bending remain so during and after bending, implying that the transverse shear strain is zero. Since the theory neglects transverse shear deformation, it underestimates deflections for thick beams, where shear deformation effects are significant. Thick beams and plates, either isotropic or anisotropic, basically form two- and three-dimensional problems of elasticity theory. Reduction of these problems to corresponding one- and two-dimensional approximate problems for analysis has always been the main objective of research workers. As a result, numerous refined theories of beams and plates have been formulated in the last three decades which approximate the three-dimensional solutions with reasonable accuracy.
1.2 Literature survey
Rayleigh [9] and Timoshenko [10] were the pioneering investigators to include refined effects such as rotatory inertia and shear deformation in beam theory. Timoshenko showed that the effect of transverse shear is much greater than that of rotatory inertia on the response of transverse vibration of prismatic bars. This theory is now widely referred to in the literature as Timoshenko beam theory, or first-order shear deformation theory (FSDT) [11]. In this theory the transverse shear strain distribution is assumed to be constant through the beam thickness, and a shear correction factor is thus required to appropriately represent the strain energy of deformation. Cowper [3] has given refined expressions for the shear correction factor for different cross-sections of the beam.
Heyliger and Reddy [6] presented higher-order shear deformation theories for static and free vibration analyses. Theories based on trigonometric and hyperbolic functions to represent the shear deformation effects through the thickness form another class of refined theories; however, with these theories the shear-stress-free boundary conditions are not satisfied at the top and bottom surfaces of the beam. This discrepancy was removed by Ghugal and Shimpi [4], who developed a variationally consistent refined trigonometric shear deformation theory for the flexure and free vibration of thick isotropic beams. Ghugal and Sharma [5] developed a variationally consistent hyperbolic shear deformation theory for the flexure analysis of thick beams and obtained the displacements, stresses and fundamental frequencies of the flexure mode and thickness-shear modes from the free vibration of simply supported beams.
In this paper, a variationally consistent hyperbolic shear deformation theory of the kind previously developed by Ghugal and Sharma [5] for thick beams is used to obtain general bending solutions for thick isotropic beams. The theory is applied to uniform isotropic solid beams of rectangular cross-section in static flexure with various boundary and loading conditions. A refined theory containing trigonometric sine and cosine functions of the thickness coordinate in the displacement field is termed here a trigonometric shear deformation theory (TSDT). The trigonometric functions involving the thickness coordinate are associated with
transverse shear deformation effects and the shear stress distribution through the thickness of the beam. This is another class of refined theories in which the number of displacement variables in the simplest form can be the same as in FSDT. The results are compared with those of elementary and refined beam theories to verify the credibility of the present shear deformation theory.
In this paper the development of the theory and its application to a thick fixed beam is presented.

II. DEVELOPMENT OF THEORY
The beam under consideration, shown in Fig. 1, occupies the following region in the 0-x-y-z Cartesian coordinate system:

$0 \le x \le L; \quad 0 \le y \le b; \quad -\dfrac{h}{2} \le z \le \dfrac{h}{2}$    (1)

where x, y and z are Cartesian coordinates, L and b are the length and width of the beam in the x and y directions respectively, and h is the thickness of the beam in the z direction. The beam is made of a homogeneous, linearly elastic, isotropic material.





Fig. 1 Beam under bending in x-z plane
2.1 The displacement field
The displacement field of the present beam theory is of the form:

$u(x,z) = -z\,\dfrac{dw}{dx} + \left[ z\cosh\left(\dfrac{1}{2}\right) - h\sinh\left(\dfrac{z}{h}\right) \right]\phi(x), \qquad w(x,z) = w(x)$    (2)

where u is the axial displacement in the x direction and w is the transverse displacement in the z direction of the beam. The hyperbolic function is assigned according to the shear stress distribution through the thickness of the beam. The function $\phi$ represents the rotation of the beam at the neutral axis, which is an unknown function to be determined. The normal and shear strains obtained within the framework of the linear theory of elasticity using the displacement field given by Eqn. (2) are as follows.

Shear strain:
$\gamma_{zx} = \dfrac{\partial u}{\partial z} + \dfrac{dw}{dx} = \phi\cos\left(\dfrac{\pi z}{h}\right)$    (3)

The stress-strain relationships used are as follows:

$\sigma_x = E\varepsilon_x = -Ez\dfrac{d^2 w}{dx^2} + \dfrac{Eh}{\pi}\sin\left(\dfrac{\pi z}{h}\right)\dfrac{d\phi}{dx}, \qquad \tau_{zx} = G\gamma_{zx} = G\phi\cos\left(\dfrac{\pi z}{h}\right)$    (4)
2.2 Governing equations and boundary conditions
Using the expressions for strains and stresses, Eqns. (2) through (4), and the principle of virtual work, variationally consistent governing differential equations and boundary conditions for the beam under consideration can be obtained. The principle of virtual work applied to the beam leads to:

$\int_{x=0}^{x=L}\int_{z=-h/2}^{z=+h/2}\left(\sigma_x\,\delta\varepsilon_x + \tau_{zx}\,\delta\gamma_{zx}\right) b\,dx\,dz \;-\; \int_{x=0}^{x=L} q(x)\,\delta w\,dx = 0$    (5)
where the symbol $\delta$ denotes the variational operator. Employing Green's theorem in Eqn. (5) successively, we obtain the coupled Euler-Lagrange equations, which are the governing differential equations and the associated boundary conditions of the beam. The governing differential equations obtained are as follows:

$EI\,\dfrac{d^4 w}{dx^4} - \dfrac{24}{\pi^3}\,EI\,\dfrac{d^3\phi}{dx^3} = q(x)$    (6)

$\dfrac{24}{\pi^3}\,EI\,\dfrac{d^3 w}{dx^3} - \dfrac{6}{\pi^2}\,EI\,\dfrac{d^2\phi}{dx^2} + \dfrac{GA}{2}\,\phi = 0$    (7)

The associated consistent natural boundary conditions obtained are of the following form. At the ends x = 0 and x = L:

$V_x = EI\,\dfrac{d^3 w}{dx^3} - \dfrac{24}{\pi^3}\,EI\,\dfrac{d^2\phi}{dx^2} = 0$  or w is prescribed    (8)

$M_x = EI\,\dfrac{d^2 w}{dx^2} - \dfrac{24}{\pi^3}\,EI\,\dfrac{d\phi}{dx} = 0$  or dw/dx is prescribed    (9)

$M_a = \dfrac{24}{\pi^3}\,EI\,\dfrac{d^2 w}{dx^2} - \dfrac{6}{\pi^2}\,EI\,\dfrac{d\phi}{dx} = 0$  or $\phi$ is prescribed    (10)
2.3 The general solution of the governing equilibrium equations of the beam
The general solution for the transverse displacement w(x) and the warping function $\phi(x)$ is obtained from Eqns. (6) and (7) by the method of solution of linear differential equations with constant coefficients. Integrating and rearranging the first governing equation (6), we obtain

$\dfrac{d^3 w}{dx^3} = \dfrac{Q(x)}{EI} + \dfrac{24}{\pi^3}\,\dfrac{d^2\phi}{dx^2}$    (11)

where Q(x) is the generalized shear force for the beam, given by $Q(x) = \int_0^x q\,dx + C_1$.
Now the second governing equation (7) is rearranged in the form

$\dfrac{d^3 w}{dx^3} = \dfrac{\pi}{4}\,\dfrac{d^2\phi}{dx^2} - \beta\phi$    (12)
A single equation in terms of $\phi$ is now obtained using Eqns. (11) and (12):

$\dfrac{d^2\phi}{dx^2} - \lambda^2\phi = \dfrac{Q(x)}{\alpha EI}$    (13)

where the constants $\alpha$, $\beta$ and $\lambda$ appearing in Eqns. (11)-(13) are

$\alpha = \dfrac{\pi}{4} - \dfrac{24}{\pi^3}, \quad \beta = \dfrac{\pi^3}{48}\,\dfrac{GA}{EI}, \quad \lambda = \sqrt{\dfrac{\beta}{\alpha}}$
The general solution of Eqn. (13) is

$\phi(x) = C_2\cosh\lambda x + C_3\sinh\lambda x - \dfrac{Q(x)}{\beta EI}$    (14)

The equation of the transverse displacement w(x) is obtained by substituting the expression for $\phi(x)$ into Eqn. (12) and then integrating three times with respect to x. The general solution for w(x) is

$EI\,w(x) = \iiiint q\,dx\,dx\,dx\,dx + \dfrac{C_1 x^3}{6} + \left(\dfrac{\pi\lambda^2}{4} - \beta\right)\dfrac{EI}{\lambda^3}\left(C_2\sinh\lambda x + C_3\cosh\lambda x\right) + \dfrac{C_4 x^2}{2} + C_5 x + C_6$    (15)

where $C_1, C_2, \dots, C_6$ are arbitrary constants that can be obtained by imposing the boundary conditions of the beam.
III. ILLUSTRATIVE EXAMPLE
In order to prove the efficacy of the present theory, the following numerical example is considered. The material properties used for the beam are E = 210 GPa, μ = 0.3 and ρ = 7800 kg/m³, where E is the Young's modulus, ρ is the density, and μ is the Poisson's ratio of the beam material. The kinematic and static boundary conditions associated with the various beam bending problems depend upon the type of support.
Fixed end:  $w = \dfrac{dw}{dx} = \phi = 0$  at x = 0 and x = L

Fig. 2: A fixed beam with linearly varying load $q(x) = q_0 x / L$
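As a quick numeric check of the constants appearing in Eqns. (11)-(13), the following Java sketch (ours, not from the paper) evaluates α, β and λ for the material data above; the 0.1 m x 0.1 m rectangular cross-section, chosen to give aspect ratio L/h = 4, is a purely illustrative assumption:

public class BeamConstants {
    public static void main(String[] args) {
        double E = 210e9;                // Young's modulus, Pa (Section III)
        double mu = 0.3;                 // Poisson's ratio
        double G = E / (2 * (1 + mu));   // shear modulus of the isotropic material

        double b = 0.1, h = 0.1;         // assumed rectangular section, m
        double L = 4 * h;                // aspect ratio AR = L/h = 4
        double A = b * h;                // cross-sectional area
        double I = b * h * h * h / 12;   // second moment of area

        double alpha = Math.PI / 4 - 24 / Math.pow(Math.PI, 3);
        double beta = Math.pow(Math.PI, 3) / 48 * (G * A) / (E * I);
        double lambda = Math.sqrt(beta / alpha);
        System.out.printf("alpha=%.4f  beta=%.1f 1/m^2  lambda=%.1f 1/m  lambda*L=%.1f%n",
                alpha, beta, lambda, lambda * L);
    }
}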
For this fixed beam under the linearly varying load $q(x) = q_0 x/L$, imposing the above end conditions on Eqns. (14) and (15) determines the constants $C_1$-$C_6$ and yields closed-form general expressions for the transverse displacement w(x), the axial displacement u, and the transverse shear stress obtained both from the constitutive relations ($\tau_{zx}^{CR}$) and from the equilibrium equations ($\tau_{zx}^{EE}$). These expressions are lengthy and are not reproduced here; the extreme values evaluated from them are reported in Table I.
IV. RESULTS AND DISCUSSION
The results for the maximum transverse displacement and the maximum transverse shear stresses are presented in the following non-dimensional form:

$\bar{u} = \dfrac{Ebu}{qh}, \quad \bar{w} = \dfrac{10\,Ebh^3 w}{qL^4}, \quad \bar{\sigma}_x = \dfrac{b\sigma_x}{q}, \quad \bar{\tau}_{zx} = \dfrac{b\tau_{zx}}{q}$
TABLE I
Non-dimensional axial displacement ($\bar{u}$) at (x = 0.75L, z = h/2), axial stress ($\bar{\sigma}_x$) at (x = 0, z = h/2), and maximum transverse shear stresses $\bar{\tau}_{zx}^{CR}$ and $\bar{\tau}_{zx}^{EE}$ at (x = 0.01L, z = 0) of the fixed beam subjected to varying load, for aspect ratio 4

Source | Model | ū | σ̄x | τ̄zx (CR) | τ̄zx (EE)
Present | NHPSDT | -2.3243 | 4.5932 | -0.7303 | 3.2118
Ghugal and Sharma [5] | HPSDT | -2.2480 | 6.5984 | -1.1052 | 0.5229
Dahake and Ghugal [12] | TSDT | -2.2688 | 5.1300 | -0.7546 | 0.4426
Timoshenko [11] | FSDT | -1.5375 | 3.2000 | 0.9000 | 0.0962
Bernoulli-Euler | ETB | -1.5375 | 3.2000 | 0.9000 | -

Fig. 4(a): Variation of maximum axial displacement (ū) through the thickness (z/h), for the present NHPSDT, HPSDT, TSDT, FSDT and ETB

Fig. 4(b): Variation of maximum axial stress (σ̄x) through the thickness (z/h), for the present NHPSDT, HPSDT, TSDT, FSDT and ETB



Fig. 4(c): Variation of transverse shear stress (τ̄zx) through the thickness (z/h)

Fig. 4(d): Variation of maximum transverse displacement (w̄) of the fixed beam at (x = 0.75L, z = 0) when subjected to varying load, plotted against aspect ratio AR
V. DISCUSSION OF RESULTS
The results obtained by the present new hyperbolic shear deformation theory are compared with those of the elementary theory of beam bending (ETB), the FSDT of Timoshenko, the HPSDT of Ghugal and Sharma, and the TSDT of Dahake and Ghugal. It is to be noted that exact results from the theory of elasticity are not available for the problems analyzed in this paper. A comparison of the maximum non-dimensional transverse displacement and shear stresses for aspect ratio 4 is presented in Table I for the beam subjected to varying load. The values given by the present theory are in excellent agreement with those of the other refined theories for aspect ratio 4, the exceptions being the classical beam theory (ETB) and the FSDT of Timoshenko.

VI. CONCLUSIONS
The variationally consistent theoretical formulation of the theory, together with the general solution technique for the governing differential equations, has been presented. The general solutions for a beam with varying load are obtained for the case of a thick fixed beam. The displacements and shear stresses obtained by the present theory are in excellent agreement with those of other equivalent refined and higher-order theories. The present theory yields a realistic variation of the transverse displacement and of the shear stresses through the thickness of the beam. The validity of the present theory is thus established.

ACKNOWLEDGEMENT
I am forever indebted to my guide, Dr. A. G. Dahake, Associate Professor, Shreeyash College of Engineering and Technology, Aurangabad, for his continuous encouragement, support, ideas, most constructive suggestions, valuable advice and confidence in me. I sincerely thank Prof. M. K. Sawant, Shreeyash Polytechnic, Aurangabad, for his encouragement, kind support and stimulating advice.

REFERENCES:
[1] Baluch, M. H., Azad, A. K. and Khidir, M. A., "Technical theory of beams with normal strain," ASCE J. of Engineering Mechanics, 1984, 110(8), p. 1233-37.
[2] Bhimaraddi, A. and Chandrashekhara, K., "Observations on higher order beam theory," ASCE J. of Aerospace Engineering, 1993, 6(4), p. 408-413.
[3] Cowper, G. R., "On the accuracy of Timoshenko beam theory," ASCE J. Engineering Mechanics Division, 1968, 94(EM6), p. 1447-53.
[4] Ghugal, Y. M. and Shimpi, R. P., "A review of refined shear deformation theories for isotropic and anisotropic laminated beams," J. Reinforced Plastics and Composites, 2001, 20(3), p. 255-72.
[5] Ghugal, Y. M. and Sharma, R., "A hyperbolic shear deformation theory for flexure and vibration of thick isotropic beams," International J. of Computational Methods, 2009, 6(4), p. 585-604.
[6] Heyliger, P. R. and Reddy, J. N., "A higher order beam finite element for bending and vibration problems," J. Sound and Vibration, 1988, 126(2), p. 309-326.
[7] Krishna Murthy, A. V., "Towards a consistent beam theory," AIAA Journal, 1984, 22(6), p. 811-16.
[8] Levinson, M., "A new rectangular beam theory," J. Sound and Vibration, 1981, 74(1), p. 81-87.
[9] Lord Rayleigh, J. W. S., The Theory of Sound, Macmillan Publishers, London, 1877.
[10] Timoshenko, S. P., "On the correction for shear of the differential equation for transverse vibrations of prismatic bars," Philosophical Magazine, 1921, 41(6), p. 742-46.
[11] Timoshenko, S. P. and Goodier, J. N., Theory of Elasticity, McGraw-Hill, 3rd Edition, Singapore, 1970.
[12] Dahake, A. G. and Ghugal, Y. M., "A Trigonometric Shear Deformation Theory for Thick Beam," Journal of Computing Technologies, 2012, 1(7), pp. 31-37.







Mobile Tracing Software for Android Phone
Anuradha Sharma1, Jyoti Sharma4, Dipesh Monga2, Ratul Aggarwal3
1Information Technology, College of Technology and Engineering, Udaipur
2Electronic and Communication, College of Technology and Engineering, Udaipur
3Electronic and Communication, Vellore Institute of Technology, Vellore, Tamil Nadu
4Information Technology, Vellore Institute of Technology, Vellore, Tamil Nadu
E-mail- anuradha9462@gmail.com

ABSTRACT: The goal of this document is to describe how to use the "Mobile Security System (MSS)" (release 1.0). It gives complete information about the functional and non-functional requirements of the system. The Mobile Security System is security software to recover missing mobile phones or tablet PCs. The main purposes behind this project are to reduce some of the vulnerabilities in existing security systems, to provide user-friendly authentication mechanisms (for organizations' resources such as domains, networks and so on) and to provide location identification. This will be useful for business organizations, and for individuals as well, to keep track of their mobile devices: for example, strategies, stability and configurations can be considered confidential information. The system can also provide some management capabilities. The project is carried out in two phases.
Phase 1: a client application, which will be installed on any mobile device.
Phase 2: an admin application, which will be installed on any server or mobile.
(2) Introduction
The objective of this project is to develop an Android application which provides location-tracking functionality for Android devices using SMS. The project supports the Android OS only and communicates with the phone through SMS messages only. The architecture, security and the accuracy of the tracking unit itself are within the scope of this project.
(3) Abbreviations

(4) Existing System
1. Ringer
A silent phone can be extremely tricky to find. If you are in the habit of losing a silent cell phone, you may wish to invest in a phone sensor, also known as a phone detector. These are tools that, when placed near a cell phone, will pick up the call signal and make sounds to indicate that the phone is somewhere within proximity. If the phone is lost, all you need to do is have someone call it as you walk around with the sensor until the device begins to indicate that a call signal is nearby. When you hear the signal, you then have a basic idea of where to start looking for your cell phone.
2. Phone Tracking Using the IMEI Number: Every phone comes with a unique International Mobile Equipment Identity (IMEI) number which can be used to track it in case of loss or theft. This number can be accessed by dialing *#06#, and it is advisable to make a note of it as soon as you purchase your handset. In case the phone gets stolen, file an FIR with the police and give them the identity number. Pass a copy of the FIR and the IMEI number to your service provider, who will then be able to track your handset. With its IMEI number, a device can be traced even if it is being used with another SIM. Once the handset is located, request your service provider to block it from being used until you are able to get your hands on it again.
3. Proposed System
Using simple SMS commands so you can ring your Android Device even if it is in silent mode and thus locate your
device locally.


(5) Software Requirement Specification
Introduction:
The Software Requirement Specification (SRS) document states in precise and explicit language those functions and capabilities a software system (i.e., a software application, an e-commerce web site, etc.) must provide, as well as any required constraints by which the system must abide.
The SRS contains the functional and non-functional requirements.
(6) Functional Requirements
a. Be able to recognize the attention word received through SMS.
b. Be able to handle the phone state to ring automatically.
c. Be able to detect the current location of Android device.
d. Be able to retrieve the device, sim card & location details.
e. Be able to send retrieved details through SMS
(7) Non-functional Requirements
a. Functionality
b. Performance
c. Environment
d. Usability

e. Other Constraints
(8) Software & Hardware Requirements
a. Hardware Requirements:
Processor: Pentium IV or above
RAM: 2 GB or more
Hard disk space: minimum of 40 GB
GPS-enabled Android 4.0 device
b. Software Requirements:
Microsoft Windows (XP or later)
The Android SDK starter package
Java Development Kit (JDK) 5 or 6
Eclipse (Indigo)
(9) State Diagram

(10) Use Case Diagram


10.1 Use case related to installation
Use case 1: Installation
Primary Actor: Admin App / Client App
Pre-condition: Android device with an Internet connection
Main scenario:
1. User initiates the installation project.
2. System asks the Admin for the home directory in which all the working files will be created.
3. Admin specifies the home directory and username/password.
4. Client App asks for the admin code for authentication.

10.2 Use case related to system authorization
Use case 2: Login
Primary Actor: Super Admin, Admin, User
Pre-condition: User needs to be pre-registered with an Admin
Main scenario:
1. Start the application. User is prompted for login and password.
2. User gives the login and password.
3. System does authentication.
4. Main screen is displayed.
Alternate scenario:
1. Prompt the user that the entered password and username are wrong.
2. Allow the user to re-enter the password and username. Give 3 chances.
10.3 Use case related to change password
Use case 3: Change password
Primary Actor: User
Pre-condition: User logged in
Main scenario:
1. User initiates the password-change command.
2. User is prompted for the old password, the new password and confirmation of the new password.
3. User gives the old password, the new password and confirmation of the new password.
4. System does authentication.
5. New password is registered with the system.
Alternate scenario:
1. Prompt the user that the entered password is wrong.
2. Allow the user to re-enter the password. Give 3 chances.

10.4 Use case related to Admin
Use case 4: Manage devices
Primary Actor: Admin
Pre-condition: Online connection or SIM or GPS connection
Main scenario:
1. Admin initiates the manage-user-device option.
2. Admin can add, edit or delete any client mobile device.

10.5 Use case related to Admin
Use case 5: Search for a lost client device
Primary Actor: Admin
Pre-condition: Internet connection or GPS or SIM
Main scenario:
1. Admin initiates the Search function.
2. System asks the Admin to select its registered device.
3. System displays the report of the found device with location and other information.
10.6 Use case related to Super Admin
Use case 6: Super Admin
Primary Actor: Super Admin
Pre-condition: Internet connection
Main scenario:
1. Super Admin logs in and initiates the list-of-users function.
2. It can search for any client device in range.
3. It can manage each device.

(11) Implementation
a. Implementation is the stage in the project where the theoretical design is turned into a working system. The implementation phase constructs, installs and operates the new system. The most crucial requirement for a new system to succeed is that it works effectively and efficiently.
b. Implementation is the process of assuring that the information system is operational and then allowing users to take over its operation for use and evaluation.
(12) IMPLEMENTATION OF MODULES
1. Broadcast receiver that alerts the application when each new SMS arrives.
a. Step 1: START
b. Step 2: SMS received.
c. Step 3: Check the attention word.
d. Step 4: If the attention word matches "Add Client added by admin", then start the tracing activity and abort broadcasting.
e. Step 5: If the attention word matches "getlocation", then start the ringing activity and abort broadcasting.
f. Step 6: If the attention word does not match, allow broadcasting.
g. Step 7: End (see the sketch below)
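The paper does not list source code, so the following is only a minimal Java sketch of Steps 1-7; the receiver class name is hypothetical, and registration for the android.provider.Telephony.SMS_RECEIVED ordered broadcast in the manifest (with the RECEIVE_SMS permission) is assumed:

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.telephony.SmsMessage;

// Sketch of Steps 1-7: inspect every incoming SMS for an attention word.
public class AttentionWordReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (intent.getExtras() == null) return;
        Object[] pdus = (Object[]) intent.getExtras().get("pdus"); // Step 2
        if (pdus == null) return;
        for (Object pdu : pdus) {
            SmsMessage sms = SmsMessage.createFromPdu((byte[]) pdu);
            String body = sms.getMessageBody();                    // Step 3
            if (body.contains("Add Client")) {
                // Step 4: start the tracing activity here, then stop
                // other applications from seeing this SMS.
                abortBroadcast();
            } else if (body.contains("getlocation")) {
                // Step 5: start the ringing activity here.
                abortBroadcast();
            }
            // Step 6: no attention word -> the broadcast continues.
        }
    }
}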
2. Enable device ringing and acknowledge the user.
a. Step 1: START
b. Step 2: Check whether the device is in silent or vibrate mode.
c. Step 3: If it is in silent or vibrate mode, then set the device to ringing mode.
d. Step 4: Enable device ringing.
e. Step 5: Acknowledge the user that the device is ringing by sending the device status information to the user.
f. Step 6: End (see the sketch below)
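Again as an illustrative sketch only (the helper class name is hypothetical), Steps 2-4 map onto the standard AudioManager and RingtoneManager APIs:

import android.content.Context;
import android.media.AudioManager;
import android.media.Ringtone;
import android.media.RingtoneManager;
import android.net.Uri;

// Sketch of Steps 2-4: leave silent/vibrate mode and ring at full volume.
public class Ringer {
    public static void ringLoud(Context context) {
        AudioManager am =
                (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        if (am.getRingerMode() != AudioManager.RINGER_MODE_NORMAL) { // Step 2
            am.setRingerMode(AudioManager.RINGER_MODE_NORMAL);       // Step 3
        }
        am.setStreamVolume(AudioManager.STREAM_RING,
                am.getStreamMaxVolume(AudioManager.STREAM_RING), 0);
        Uri tone = RingtoneManager.getDefaultUri(RingtoneManager.TYPE_RINGTONE);
        Ringtone ringtone = RingtoneManager.getRingtone(context, tone);
        ringtone.play();                                             // Step 4
        // Step 5: the acknowledgement SMS can be sent with SmsManager,
        // as in the location module below.
    }
}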

3. Get location and acknowledge the user.
Step 1: START
Step 2: Check whether the Internet is available.
Step 3: If the Internet is available, then get location details from the network provider.
Step 4: If the Internet is not available, then check whether GPS is turned on.
Step 5: If GPS is available, then get location details.
Step 6: Send the location information to the user.
Step 7: End (see the sketch below)
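A minimal Java sketch of Steps 2-6 (illustrative only; the class name is hypothetical, and requester, the sender's phone number, is assumed to have been extracted from the incoming SMS):

import android.content.Context;
import android.location.Location;
import android.location.LocationManager;
import android.telephony.SmsManager;

// Sketch of Steps 2-6: prefer the network provider, fall back to GPS,
// then SMS the coordinates back to the requesting phone.
public class Locator {
    public static void sendLocation(Context context, String requester) {
        LocationManager lm = (LocationManager)
                context.getSystemService(Context.LOCATION_SERVICE);
        String provider =
                lm.isProviderEnabled(LocationManager.NETWORK_PROVIDER)
                        ? LocationManager.NETWORK_PROVIDER   // Steps 2-3
                        : LocationManager.GPS_PROVIDER;      // Steps 4-5
        Location loc = lm.getLastKnownLocation(provider);
        if (loc != null) {
            String msg = "Lat: " + loc.getLatitude()
                    + ", Lon: " + loc.getLongitude();
            SmsManager.getDefault()
                    .sendTextMessage(requester, null, msg, null, null); // Step 6
        }
    }
}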
(13) DATA FLOW DIAGRAM
The data flow diagram (DFD) is a graphical representation that depicts information flow and the transforms that are applied as data moves from input to output. The DFD may be used to represent a system or software at any level of abstraction. In fact, the DFD may be partitioned into levels that represent increasing information flow and functional detail.
Level 0

Level 1




Level 2

Entity Relationship Modeling
P. P. Chen introduced the E-R model. Entity-relationship modeling is a detailed logical representation of the entities, associations and data elements for an organization or business area.

Entities
An entity is a person, place, thing or event of interest to the organization and about which data are captured, stored or processed.

Attributes
The various types of data items that describe an entity are known as attributes.

Relationship
An association of several entities in an entity-relationship model is called a relationship.

Entity Relationship Diagram
The overall logical structure of a database can be expressed graphically by an entity-relationship diagram.
ER DIAGRAM
It is an abstract and conceptual representation of the data. Entity-relationship modeling is a database modeling method used to produce a type of conceptual schema. Entities:



(14) Testing
1. Unit Testing
a. Try to detect whether all application functions work correctly individually.
2. Integration Testing
a. Try to detect whether all these functions are accessible in our application and are properly integrated.
3. Integration test scenario:
a. Application starts on SMS receipt.
b. Contents of the SMS are read and matched with the attention word.
c. The phone status is acknowledged to the requesting phone through SMS.
d. If it is the GPS attention word, then the current location details are retrieved and sent back to the requesting phone without the knowledge of the device user.
e. Application stops.

(15) Snapshots


(16) DEPLOYMENT
Software deployment comprises all of the activities that make a software system available for use. An Android application can be deployed in multiple ways:
a. If you are using Eclipse, first create an Android Virtual Device in the AVD Manager, then right-click on your project and choose Run As > Android Application.
b. You can export your package to your Android device and then browse to it to install it.

(17) Future Enhancements
a. SMS/call filtering.
b. Allowing the user to specify his own attention words (database connectivity).
c. Locking the device and wiping memory to keep private data safe.
d. Controlling the Android device remotely via a web-based interface through DroidLocator.
(18) Conclusion
The lost Android mobile phone tracker is a unique and efficient application which is used to track a lost or misplaced Android phone.
All the features work on an SMS basis; therefore the incoming SMS format plays a vital role. Our Android application running in the cell monitors all the incoming messages. If an SMS is meant for the application, it reads the same and performs the expected task.
We have created features which enhance the existing cell-tracking system. The application stands apart from the existing system because it does not rely only on GPS values; it also works over GSM text-messaging services, which makes the application a simple and unique one.

REFERENCES:
a. E. Burnette, Hello Android, The Pragmatic Programmers (2009).
b. R. Meier, Professional Android 2 Application Development, Wiley (2010).
c. M. Murphy, Beginning Android 2, Apress (2010).
d. Android Developer Guide: http://developer.android.com/guide/index.html.
e. Android API: http://developer.android.com/reference/packages.html
f. V-Model: http://en.wikibooks.org/wiki/Introduction_to_Software_Engineering/Process/V-Model


















VLSI Based Fluid Flow Measurement Using Constant Temperature Hot
Wire Anemometer
Anuruddh Singh1, Pramod Kumar Jain1
1Research Scholar (M.Tech), SGSITS
E-mail- anuruddh.singh@yahoo.co.in
Abstract— The performance of a hot-wire anemometer configuration is affected by variation in the fluid temperature. The classical temperature compensation techniques in such anemometers employ two sensors. The performance of a temperature-compensated hot-wire anemometer configuration using a single sensor alternating between two operating temperatures, originally proposed for constant fluid velocity, is investigated under conditions of time-varying fluid velocity. The measurement error introduced is quantified and can be practically eliminated using a low-pass digital filter.
Keywords— Electrical equivalence, fluid temperature compensation, hot-wire anemometer, thermoresistive sensor, measurement error, op-amp, CMRR.
INTRODUCTION
The constant temperature hot-wire anemometer (CTA) circuit, based on a feedback self-balanced Wheatstone bridge containing a thermoresistive sensor, is known to exhibit a relatively wide bandwidth [1]. Compensation for the effect of the fluid temperature T_f is usually done by employing an independent fluid temperature sensor [1]–[5] or two similar feedback bridge circuits with two identical sensors operating at two different constant temperatures [6]. The finite nonzero amplifier input offset voltage does not permit the sensor temperature to remain constant with varying fluid velocity [7]. This offset voltage also affects the dynamic response of the feedback circuit: the temporal response is slower for a higher offset voltage. Further, it has been shown that when the amplifier input offset voltage is zero, or below a critical value, the circuit becomes oscillatory.
Thermal anemometry is the most common method used to measure instantaneous fluid velocity. The technique depends on the convective heat loss to the surrounding fluid from an electrically heated sensing element or probe. If only the fluid velocity varies, then the heat loss can be interpreted as a measure of that variable.
Thermal anemometry enjoys its popularity because the technique involves the use of very small probes that offer very high spatial resolution and excellent frequency response characteristics. The basic principles of the technique are relatively straightforward, and the probes are difficult to damage if reasonable care is taken. Most sensors are operated in the constant temperature mode.
PRINCIPLE OF OPERATION
The operation is based on convective heat transfer from a heated sensing element possessing a temperature coefficient of resistance.
Hot-wire anemometers have been used for many years in the study of laminar, transitional and turbulent boundary layer flows and
much of our current understanding of the physics of boundary layer transition has come solely from hot-wire measurements.
Thermal anemometers are also ideally suited to the measurement of unsteady flows such as those that arise behind rotating blade
rows when the flow is viewed in the stationary frame of reference. By a transformation of co-ordinates, the time-history of the
flow behind a rotor can be converted into a pitch-wise variation in the relative frame so that it is possible to determine the structure
of the rotor relative exit flow. Until the advent of laser anemometry or rotating frame instrumentation, this was the only available
technique for the acquisition of rotating frame data.



Fig.1- Block Diagram of Fluid Flow Hot Wire Sensor
3. HOT WIRE EQUATION
To examine the behaviour of the hot wire, the general hot wire equation must first be derived. This equation will be used to examine both the steady-state response of the hot wire, discussed here, and its frequency response, discussed later. By considering a small circular element of the hot wire (Figure 2), an energy balance can be performed, assuming a uniform temperature over its cross-section:

$$\frac{I^{2}\chi_{w}}{A_{w}} \;=\; \pi d\,h\,(T_{w}-T_{a}) \;-\; k_{w}A_{w}\frac{\partial^{2}T_{w}}{\partial x^{2}} \;+\; \rho_{w}c_{w}A_{w}\frac{\partial T_{w}}{\partial t} \;+\; \pi d\,\sigma\varepsilon\,\big(T_{w}^{4}-T_{a}^{4}\big) \qquad (1)$$

per unit length, where $\chi_{w}$ is the wire resistivity, $A_{w}$ the cross-sectional area, $d$ the wire diameter, $h$ the convective heat-transfer coefficient, $k_{w}$ the wire thermal conductivity, and $\rho_{w}c_{w}$ the density and specific heat of the wire material.

This can be simplified (Højstrup et al., 1976) to give the general hot wire equation

$$\frac{\partial T_{w}}{\partial t} \;=\; a_{1}\frac{\partial^{2}T_{w}}{\partial x^{2}} \;+\; a_{2}\,T_{w} \;+\; a_{3} \qquad (2)$$

if radiation is neglected. The constants are given by:

$$a_{1} = \frac{k_{w}}{\rho_{w}c_{w}}, \qquad (3)$$

$$a_{2} = \frac{1}{\rho_{w}c_{w}A_{w}}\left(\frac{I^{2}\chi_{a}\alpha}{A_{w}} - \pi d\,h\right), \qquad (4)$$

$$a_{3} = \frac{I^{2}\chi_{a}}{\rho_{w}c_{w}A_{w}^{2}} - a_{2}\,T_{a}, \qquad (5)$$

with the resistivity written in terms of its ambient value $\chi_{a}$ and the temperature coefficient of resistance $\alpha$:

$$\chi_{w} = \chi_{a}\,\big[\,1 + \alpha\,(T_{w}-T_{a})\,\big]. \qquad (6)$$



Fig.2-Heat Balance for an Incremental Element

A heat balance can then be performed over the whole wire, assuming that the flow conditions are uniform over the wire:

$$I^{2}R_{w} \;=\; \dot{Q}_{c} \;+\; \dot{Q}_{k}. \qquad (7)$$
The two heat transfer components, convection from the wire surface and conduction to the prongs, can be found from the flow conditions and the wire temperature distribution (this is the causal chain depicted in Fig. 1: the flow rate varies, so the convective heat-transfer coefficient h varies, and hence the heat transfer from the filament varies):

$$\dot{Q}_{c} = \pi d\,h\,l\,(T_{m}-T_{a}), \qquad (8)$$

$$\dot{Q}_{k} = -2\,k_{w}A_{w}\left.\frac{dT_{w}}{dx}\right|_{x=l/2}, \qquad (9)$$

to give the steady-state heat transfer equation:

$$I^{2}R_{w} = \pi d\,l\,h_{c}\,(T_{m}-T_{a}), \qquad (10)$$

where $T_{m}$ is the mean wire temperature and $h_{c}$ the convective heat-transfer coefficient.
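Equation (10) leaves the velocity dependence inside $h_{c}$. As an illustration only (a sketch assuming a King's-law calibration, which this paper does not state explicitly; velocity calibration relationships are surveyed in [4]), the heat-transfer coefficient is often modelled as $h_{c} = A + B\sqrt{U}$, giving

$$I^{2}R_{w} = \big(A' + B'\sqrt{U}\big)\,(T_{m}-T_{a}),$$

where $U$ is the fluid velocity and $A'$, $B'$ are constants obtained by calibration; inverting this relation converts the measured wire current (or bridge voltage) into velocity.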

HOT WIRE ANEMOMETER DESIGN

A fluid flow measurement circuit using a constant temperature hot wire anemometer is shown in Fig. 5. The input stage consists of M1 and M2; the biasing current is provided by M3 and M4, and the dc biasing current is 1 nA. The output port Vout is connected to the M10 and M12 transistors.



Fig.3-Schematic of CTA


Table 1 shows the dimensions of each transistor in the circuit. The input transistors M1 and M2 are drawn with identical sizes, with width-to-length ratio (W/L)1; similarly, transistors M8 and M9 have the same size (W/L)8. The PMOS current transistors M3, M4, M5 and M8 have the same size (W/L)3.

Table 1: W/L of the CTA transistors (µm/µm)
M1, M2: 50/0.18
M3, M4, M5, M8, M9: 4/0.18
M6, M7: 1/0.18
M10: 30/0.18
M11: 50/0.18
M12: 45/0.18





Fig.4- Common mode supply
Fig.5- Fluid flow measurement using constant temperature hot wire anemometer

SIMULATION AND RESULTS

Fig.6- Output of common mode input supply




Fig.7- Output of differential mode input in dB

Fig.8- Output result
CONCLUSION

In this paper, a fluid flow measurement circuit using a constant temperature hot wire anemometer in 0.18 µm technology is proposed. The input can vary in the range of microvolts. The simulation results give a gain of 64.2 dB, a CMRR of 70 dB and a bandwidth of 400 Hz. These results demonstrate that the proposed circuit can be used to develop an integrated circuit. The output obtained is in millivolts and is then amplified.



REFERENCES:
[1] Anderson, C. S., Semercigil, S. E. and Turan, Ö. F., 2003, "Local Structural Modifications for Hot-Wire Probe Design", Experimental Thermal and Fluid Science, Vol. 27, pp. 193-198.

[2] Bruun, H. H., 1971, "Linearization and hot wire anemometry", Journal of Physics E: Scientific Instruments, Vol. 4, pp. 815-820.

[3] Bruun, H. H., 1995, "Hot-Wire Anemometry: Principles and Signal Analysis", Oxford Science Publications, New York.

[4] Bruun, H. H., Khan, M. A., Al-Kayiem, H. H. and Fardad, A. A., 1988, "Velocity Calibration Relationships for Hot-Wire Anemometry", Journal of Physics E: Scientific Instruments, Vol. 21, pp. 225-232.

[5] Citriniti, J. H., Taulbee, K. D. and Woodward, S. H., 1994, "Design of Multiple Channel Hot Wire Anemometers", Fluid Measurement and Instrumentation, Vol. 183, USA, pp. 67-73.

[6] Eguti, C. S. A., Woiski, E. R. and Vieira, E. D. R., 2002, "A Laboratory Class for Introducing Hot-wire Anemometry in a Mechanical Engineering Course", Proceedings (in CD-ROM) of ENCIT 2002, VIII Brazilian Congress of Thermal Science and Engineering, Paper code CIT02-0411, October 15-18, Caxambu, MG.

[7] Goldstein, R. J., 1983, "Fluid Mechanics Measurements", Hemisphere Publishing Corp., 630 p.; Gonçalves, H. C., 2001, "Determinação Experimental da Freqüência de Emissão de Vórtices de Corpos Rombudos", Master of Science dissertation, Unesp – Ilha Solteira, 200 p.

[8] Lekakis, I., 1996, "Calibration and Signal Interpretation for Single and Multiple Hot-wire/Hot-film Probes", Measurement Science and Technology, Vol. 7, pp. 1313-1333.

[9] Lomas, C. G., 1986, "Fundamentals of Hot-wire Anemometry", Cambridge University Press.

[10] Menut, P. M., 1998, "Anemometria de Fio-quente", Proceedings of the First Spring School of Transition and Turbulence (A. P. S. Freire, ed.), Rio de Janeiro, pp. 235-272.

[11] Möller, S. V., 2000, "Experimentação em turbulência", Proceedings of the Second Spring School of Transition and Turbulence (A. Silveira Neto, ed.), Uberlândia, MG, pp. 63-97.

[12] Perry, A. E., 1982, "Hot-Wire Anemometry", Oxford University Press, New York, 185 p.

[13] Persen, L. N. and Saetran, L. R., 1984, "Hot-film Measurements in a Water Tunnel", Journal of Hydraulic Research, Vol. 21, No. 4, pp. 379-387.

[14] Sasayama, T., Hirayama, T., Amano, M., Sakamoto, S., Miki, M., Nishimura, Y. and Ueno, S., 1983, "A New Electronic Engine Control Using a Hot-wire Airflow Sensor", SAE Paper 820323.

[15] Vieira, E. D. R., 2000, "A Laboratory Class for Introducing Hot-Wire Anemometry", Proceedings of ICECE 2000, International Conference on Engineering and Computer Education, August 27-30, 2000, São Paulo, SP.

[16] Weidman, P. D. and Browand, F. K., 1975, "Analysis of a Simple Circuit for Constant Temperature Anemometry", Journal of Physics E: Scientific Instruments, Vol. 8, pp. 553-560.






Canonical Cosine Transform: Novel Tools in Signal Processing
S. B. Chavhan1
1Yeshwant Mahavidyalaya, Nanded-431602, India
E-mail- chavhan_satish49@yahoo.in

Abstract: In this paper a theory of the distributional two-dimensional (2-D) canonical cosine transform is developed using the Gelfand-Shilov technique; some operators on these spaces are defined, and the topological structure of some of the S-type spaces of the distributional two-dimensional canonical cosine transform is studied.
Keywords: 2-D canonical transforms, generalized function, testing function space, S-type spaces, canonical cosine transform.

1. INTRODUCTION:
The linear canonical transform is a useful tool for optical analysis and signal processing. Fourier analysis is undoubtedly one of the most valuable and powerful tools in signal processing, image processing and many other branches of engineering. The fractional Fourier transform, a special case of the linear canonical transform, has been studied from different angles. Almeida [1], [2] introduced it and proved many of its properties. Namias [5] opened the way to defining the fractional transform through the eigenvalues, as in the case of the fractional Fourier transform. The conventional canonical cosine transform is defined as

$$\{CCT\,f(t)\}(s) \;=\; \sqrt{\frac{1}{2\pi i b}}\; e^{\frac{i}{2}\frac{d}{b}s^{2}} \int_{-\infty}^{\infty} \cos\!\left(\frac{st}{b}\right) e^{\frac{i}{2}\frac{a}{b}t^{2}}\, f(t)\, dt .$$

It is easily seen that for each $s \in R^{n}$ the function $K_{c}(t,s)$ belongs to $E(R^{n})$ as a function of $t$, where

$$K_{c}(t,s) = \sqrt{\frac{1}{2\pi i b}}\; e^{\frac{i}{2}\frac{d}{b}s^{2}}\, e^{\frac{i}{2}\frac{a}{b}t^{2}} \cos\!\left(\frac{st}{b}\right).$$

Hence the canonical cosine transform of $f \in E'(R^{n})$ can be defined by

$$\{CCT\,f(t)\}(s) = \langle f(t),\, K_{c}(t,s)\rangle,$$

where the right-hand side has a meaning as the application of $f \in E'$ to $K_{c}(t,s) \in E$.
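As a quick consistency check (not worked out in the paper), setting the parameters to $a = d = 0$ and $b = 1$ collapses both chirp factors to unity, so the canonical cosine transform reduces to the classical Fourier cosine transform up to a constant factor:

$$\{CCT\,f(t)\}(s)\Big|_{a=d=0,\ b=1} = \sqrt{\frac{1}{2\pi i}}\int_{-\infty}^{\infty}\cos(st)\,f(t)\,dt .$$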

As compared to the one-dimensional case, the canonical cosine transform has a considerably richer structure in two dimensions. The definition of the distributional two-dimensional canonical cosine transform is given in Section 2. S-type spaces using the Gelfand-Shilov technique are developed in Section 3. Section 4 is devoted to operators on the above spaces. In Section 5 we discuss results on the topological structures of some of these spaces. The notation and terminology follow Zemanian [6], [7] and Gelfand-Shilov [3], [4].
2. DEFINITION OF THE TWO-DIMENSIONAL (2D) CANONICAL COSINE TRANSFORM:
Let $E'(R \times R)$ denote the dual of $E(R \times R)$. The generalized canonical cosine-cosine transform of $f(t,x) \in E'(R \times R)$ is defined as

$$2DCCCT\{f(t,x)\}(s,w) = \langle f(t,x),\; K_{C_1}(t,s)\,K_{C_2}(x,w)\rangle$$

$$= \sqrt{\frac{1}{2\pi i b_{1}}}\sqrt{\frac{1}{2\pi i b_{2}}}\; e^{\frac{i}{2}\frac{d_{1}}{b_{1}}s^{2}} e^{\frac{i}{2}\frac{d_{2}}{b_{2}}w^{2}} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \cos\!\left(\frac{st}{b_{1}}\right)\cos\!\left(\frac{wx}{b_{2}}\right) e^{\frac{i}{2}\frac{a_{1}}{b_{1}}t^{2}}\, e^{\frac{i}{2}\frac{a_{2}}{b_{2}}x^{2}}\, f(t,x)\, dx\, dt,$$

where

$$K_{C_1}(t,s) = \sqrt{\frac{1}{2\pi i b_{1}}}\; e^{\frac{i}{2}\frac{d_{1}}{b_{1}}s^{2}}\, e^{\frac{i}{2}\frac{a_{1}}{b_{1}}t^{2}} \cos\!\left(\frac{st}{b_{1}}\right) \quad \text{when } b_{1} \neq 0,$$

$$K_{C_1}(t,s) = \sqrt{d_{1}}\; e^{\frac{i}{2}c_{1}d_{1}s^{2}}\,\delta(t - d_{1}s) \quad \text{when } b_{1} = 0,$$

and

$$K_{C_2}(x,w) = \sqrt{\frac{1}{2\pi i b_{2}}}\; e^{\frac{i}{2}\frac{d_{2}}{b_{2}}w^{2}}\, e^{\frac{i}{2}\frac{a_{2}}{b_{2}}x^{2}} \cos\!\left(\frac{wx}{b_{2}}\right) \quad \text{when } b_{2} \neq 0,$$

$$K_{C_2}(x,w) = \sqrt{d_{2}}\; e^{\frac{i}{2}c_{2}d_{2}w^{2}}\,\delta(x - d_{2}w) \quad \text{when } b_{2} = 0,$$

and the kernel satisfies

$$\gamma_{E,k}\big(K_{C_1}(t,s)\,K_{C_2}(x,w)\big) = \sup_{\substack{-\infty<t<\infty \\ -\infty<x<\infty}} \big| D_{t}^{k} D_{x}^{l}\, K_{C_1}(t,s)\,K_{C_2}(x,w) \big| < \infty .$$
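Because the two-dimensional kernel is the product $K_{C_1}(t,s)\,K_{C_2}(x,w)$, the transform separates for product functions; as a small illustration (an observation that follows directly from the definition above), if $f(t,x) = f_{1}(t)\,f_{2}(x)$ then

$$2DCCCT\{f_{1}(t)f_{2}(x)\}(s,w) = \big(CCT\,f_{1}\big)(s)\cdot\big(CCT\,f_{2}\big)(w),$$

so the two-dimensional transform can be computed as two successive one-dimensional canonical cosine transforms.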
3. VARIOUS TESTING FUNCTION SPACES:
In this section several spaces consisting of infinitely differentiable functions are defined on the first and second quadrants of the coordinate plane.
3.1 The space $CC_{\gamma}^{a,b}$: It is given by

$$CC_{\gamma}^{a,b} = \Big\{\phi \in E_{+} \;:\; \zeta_{l,k,q}(\phi) = \sup_{t,x \in I_{1}} \big| t^{l} D_{t}^{k} D_{x}^{q}\,\phi(t,x) \big| \le C_{k,q}\, A^{l}\, l^{l\gamma} \Big\}. \quad (3.1)$$

The constants $C_{k,q}$ and $A$ depend on $\phi$.
3.2 The space $CC^{a,b,\beta}$:

$$CC^{a,b,\beta} = \Big\{\phi \in E_{+} \;:\; \rho_{l,k,q}(\phi) = \sup_{t,x \in I_{1}} \big| t^{l} D_{t}^{k} D_{x}^{q}\,\phi(t,x) \big| \le C_{l,q}\, B^{k}\, k^{k\beta} \Big\}. \quad (3.2)$$

The constants $C_{l,q}$ and $B$ depend on $\phi$.
3.3 The space $CC_{\gamma}^{a,b,\beta}$:
This space is formed by combining the conditions (3.1) and (3.2):

$$CC_{\gamma}^{a,b,\beta} = \Big\{\phi \in E_{+} \;:\; \xi_{l,k,q}(\phi) = \sup_{t,x \in I_{1}} \big| t^{l} D_{t}^{k} D_{x}^{q}\,\phi(t,x) \big| \le C\, A^{l}\, l^{l\gamma}\, B^{k}\, k^{k\beta} \Big\}, \quad (3.3)$$

$l, k, q = 0, 1, 2, \ldots$, where $A$, $B$, $C$ depend on $\phi$.
Next we introduce subspaces of each of the above spaces; these are used in defining the inductive limits of the spaces.
3.4 The space $CC_{\gamma,m}^{a,b}$: It is defined as

$$CC_{\gamma,m}^{a,b} = \Big\{\phi \in E_{+} \;:\; \sup_{t,x \in I_{1}} \big| t^{l} D_{t}^{k} D_{x}^{q}\,\phi(t,x) \big| \le C_{k,q}\,(m+\rho)^{l}\, l^{l\gamma} \Big\} \quad (3.4)$$

for any $\rho > 0$, where $m$ is a constant depending on the function $\phi$.
3.5 The space $CC_{n}^{a,b,\beta}$: This space is given by

$$CC_{n}^{a,b,\beta} = \Big\{\phi \in E_{+} \;:\; \sup_{t,x \in I_{1}} \big| t^{l} D_{t}^{k} D_{x}^{q}\,\phi(t,x) \big| \le C_{l,q}\,(n+\sigma)^{k}\, k^{k\beta} \Big\} \quad (3.5)$$

for any $\sigma > 0$, where $n$ is a constant depending on the function $\phi$.
3.6 The space $CC_{\gamma,m}^{a,b,\beta,n}$:
This space is defined by combining the conditions in (3.4) and (3.5):

$$CC_{\gamma,m}^{a,b,\beta,n} = \Big\{\phi \in E_{+} \;:\; \sup_{t,x \in I_{1}} \big| t^{l} D_{t}^{k} D_{x}^{q}\,\phi(t,x) \big| \le C\,(m+\rho)^{l}\,(n+\sigma)^{k}\, l^{l\gamma}\, k^{k\beta} \Big\} \quad (3.6)$$

for any $\rho > 0$, $\sigma > 0$ and for given $m > 0$, $n > 0$. Unless specified otherwise, the spaces introduced in (3.1) through (3.6) will henceforth be considered equipped with their natural, Hausdorff, locally convex topologies, denoted respectively by

$$T_{\gamma}^{a,b},\quad T^{a,b,\beta},\quad T_{\gamma}^{a,b,\beta},\quad T_{\gamma,m}^{a,b},\quad T_{n}^{a,b,\beta},\quad T_{\gamma,m}^{a,b,\beta,n}.$$

These topologies are respectively generated by the total families of seminorms appearing in (3.1)–(3.6).

4. SOME BOUNDED OPERATORS IN S-TYPE SPACES:
This section is devoted to the study of different types of linear operators, namely the shifting (translation) operator, the differentiation operator and the scaling operator, on the space $CC_{\gamma}^{a,b,\beta}$. These operators are found to be bounded (and continuous) on $CC_{\gamma}^{a,b,\beta}$.

Proposition 4.1: If $\phi(t,x) \in CC_{\gamma}^{a,b,\beta}$ and $\lambda$ is a fixed real number, then $\phi(t+\lambda,x) \in CC_{\gamma}^{a,b,\beta}$ for $t+\lambda > 0$.
Proof: Consider

$$\zeta_{l,k,q}\big(\phi(t+\lambda,x)\big) = \sup_{I_{1}} \big| t^{l} D_{x}^{k} D_{t}^{q}\,\phi(t+\lambda,x) \big| = \sup_{I_{1}} \big| (t'-\lambda)^{l} D_{x}^{k} D_{t'}^{q}\,\phi(t',x) \big|, \quad \text{where } t' = t+\lambda,$$

$$\le C\, A^{l}\, l^{l\gamma}\, B^{k}\, k^{k\beta}.$$

Thus $\phi(t+\lambda,x) \in CC_{\gamma}^{a,b,\beta}$ for $t+\lambda > 0$.

Proposition 4.2: The translation (shifting) operator $T \colon \phi(t,x) \to \phi(t+\lambda,x)$ is a topological automorphism on $CC_{\gamma}^{a,b,\beta}$ for $t+\lambda > 0$.

Proposition 4.3: If $\phi(t,x) \in CC_{\gamma}^{a,b,\beta}$ and $\rho$ is a strictly positive number, then $\phi(\rho t,x) \in CC_{\gamma}^{a,b,\beta}$.
Proof: Consider

$$\zeta_{l,k,q}\big(\phi(\rho t,x)\big) = \sup_{I_{1}} \big| t^{l} D_{t}^{k} D_{x}^{q}\,\phi(\rho t,x) \big| = \sup_{I_{1}} \Big| \Big(\tfrac{T}{\rho}\Big)^{\!l} D_{T}^{k} D_{x}^{q}\,\phi(T,x) \Big|, \quad \text{where } T = \rho t,$$

$$\le C_{l}\, \sup_{I_{1}} \big| T^{l} D_{T}^{k} D_{x}^{q}\,\phi(T,x) \big|, \quad \text{where } C_{l} \text{ is a constant depending on } \rho,$$

$$\le C_{1} C_{2}\, A^{l}\, l^{l\gamma}\, B^{k}\, k^{k\beta} = C\, A^{l}\, l^{l\gamma}\, B^{k}\, k^{k\beta}, \quad \text{where } C = C_{1} C_{2}.$$

Thus $\phi(\rho t,x) \in CC_{\gamma}^{a,b,\beta}$ for $\rho > 0$.

Proposition 4.4: If $\rho > 0$ and $\phi(t,x) \in CC_{\gamma}^{a,b,\beta}$, then the scaling operator $R_{\rho} \colon CC_{\gamma}^{a,b,\beta} \to CC_{\gamma}^{a,b,\beta}$ defined by $R_{\rho}\phi = \psi$, where $\psi(t,x) = \phi(\rho t,x)$, is a topological automorphism.
Proposition 4.5: The operator $\phi(t,x) \to D_{t}\,\phi(t,x)$ is defined on the space $CC_{\gamma}^{a,b,\beta}$ and transforms this space into itself.
Proof: Let $\phi(t,x) \in CC_{\gamma}^{a,b,\beta}$. If $\psi(t,x) = D_{t}\,\phi(t,x)$, we have

$$\zeta_{l,k,q}(\psi) = \sup_{I_{1}} \big| t^{l} D_{x}^{k} D_{t}^{q}\,\psi(t,x) \big| = \sup_{I_{1}} \big| t^{l} D_{x}^{k} D_{t}^{q+1}\,\phi(t,x) \big|$$

$$\le C\, A^{l}\, l^{l\gamma}\, B^{k+1}\,(k+1)^{(k+1)\beta}.$$

Hence $\psi(t,x) \in CC_{\gamma}^{a,b,\beta}$.


5. TOPOLOGICAL PROPERTIES OF THE $CC_{\gamma}^{a,b}$ SPACE:
This section is devoted to results on the topological structures of some of the spaces and to results exhibiting their relationships. Attention is also paid to strict inductive limits of some of these spaces.
Theorem 5.1: $(CC_{\gamma}^{a,b}, T_{\gamma}^{a,b})$ is a Fréchet space.
Proof: As the family $\Lambda_{\gamma}^{a,b}$ of seminorms $\{\zeta_{l,k,q}^{a,b}\}_{l,k,q=0}^{\infty}$ generating $T_{\gamma}^{a,b}$ is countable, it suffices to prove the completeness of the space $(CC_{\gamma}^{a,b}, T_{\gamma}^{a,b})$.
Let us consider a Cauchy sequence $\{\phi_{n}\}$ in $CC_{\gamma}^{a,b}$. Hence for a given $\epsilon > 0$ there exists an $N = N_{l,k,q}$ such that for $m, n > N$

$$\zeta_{l,k,q}^{a,b}(\phi_{m} - \phi_{n}) = \sup_{I_{1}} \big| t^{l} D_{x}^{q} D_{t}^{k}\,(\phi_{m} - \phi_{n}) \big| < \epsilon. \quad (5.1)$$

In particular, for $l = k = q = 0$ and $m, n > N$,

$$\sup_{I_{1}} \big| \phi_{m}(t,x) - \phi_{n}(t,x) \big| < \epsilon. \quad (5.2)$$

Consequently, for fixed $t$ in $I_{1}$, $\{\phi_{m}(t,x)\}$ is a numerical Cauchy sequence. Let $\phi(t,x)$ be the pointwise limit of $\{\phi_{m}(t,x)\}$. Using (5.2) we can easily deduce that $\{\phi_{m}(t,x)\}$ converges to $\phi$ uniformly on $I_{1}$. Thus $\phi$ is continuous; moreover, repeated use of (5.1) for different values of $l, k, q$ yields that $\phi$ is smooth, i.e., $\phi \in E_{+}$. Further, from (5.1) we get

$$\zeta_{l,k,q}^{a,b}(\phi_{m}) \le \zeta_{l,k,q}^{a,b}(\phi_{n}) + \epsilon \le C_{k,q}\, A^{l}\, l^{l\gamma} + \epsilon \quad \forall\, m, n > N.$$

Taking $m \to \infty$, and since $\epsilon$ is arbitrary, we get

$$\zeta_{l,k,q}^{a,b}(\phi) = \sup_{I_{1}} \big| t^{l} D_{x}^{q} D_{t}^{k}\,\phi(t,x) \big| \le C_{k,q}\, A^{l}\, l^{l\gamma}.$$

Hence $\phi \in CC_{\gamma}^{a,b}$ and it is the $T_{\gamma}^{a,b}$ limit of $\phi_{m}$ by (5.1). This proves the completeness of $CC_{\gamma}^{a,b}$, and $(CC_{\gamma}^{a,b}, T_{\gamma}^{a,b})$ is a Fréchet space.
Proposition 5.2: If $m_{1} < m_{2}$ then $CC_{\gamma,m_{1}}^{a,b} \subset CC_{\gamma,m_{2}}^{a,b}$. The topology of $CC_{\gamma,m_{1}}^{a,b}$ is equivalent to the topology induced on $CC_{\gamma,m_{1}}^{a,b}$ by $CC_{\gamma,m_{2}}^{a,b}$, i.e., $T_{\gamma,m_{1}}^{a,b} \sim T_{\gamma,m_{2}}^{a,b}\big/ CC_{\gamma,m_{1}}^{a,b}$.
Proof: For $\phi \in CC_{\gamma,m_{1}}^{a,b}$,

$$\big| t^{l} D_{t}^{k} D_{x}^{q}\,\phi(t,x) \big| \le C_{k,q}\,(m_{1}+\rho)^{l}\, l^{l\gamma} \le C_{k,q}\,(m_{2}+\rho)^{l}\, l^{l\gamma};$$

thus $CC_{\gamma,m_{1}}^{a,b} \subset CC_{\gamma,m_{2}}^{a,b}$. The second part follows directly from the definition of the topologies of these spaces. The space $CC_{\gamma}^{a,b}$ can be expressed as a union of countably normed spaces.
6. CONCLUSION:
In this paper the two-dimensional canonical cosine transform is generalized in the distributional sense; some operators on these spaces are proved to be bounded, and the topological structure of some of the S-type spaces is discussed.



REFERENCES:

[1] Almeida, Luís B., "The fractional Fourier transform and time-frequency representations," IEEE Trans. on Signal Processing, Vol. 42, No. 11, Nov. 1994.
[2] Almeida, Luís B., "An introduction to the angular Fourier transform," IEEE, 1993.
[3] Gelfand, I. M. and Shilov, G. E., "Generalized Functions," Volume I, Academic Press, New York, 1964.
[4] Gelfand, I. M. and Shilov, G. E., "Generalized Functions," Volume II, Academic Press, New York, 1967.
[5] Namias, Victor, "The fractional order Fourier transform and its application to quantum mechanics," J. Inst. Maths Applics, Vol. 25, pp. 241-265, 1980.
[6] Zemanian, A. H., "Distribution theory and transform analysis," McGraw Hill, New York, 1965.
[7] Zemanian, A. H., "Generalized integral transformations," Inter Science Publishers, New York, 1968.




















Students' and Teachers' Perception of the Causes of Poor Academic Performance in General and Further Mathematics in Sierra Leone: A Case Study of Bo District, Southern Province
Gegbe B.1, Koroma J.M.1
1Department of Mathematics and Statistics, School of Technology, Njala University
E-mail- bgegbe@njala.edu

ABSTRACT: The essential basis for the economic and social well-being of any country lies in its people's grasp of basic mathematical, scientific and technological knowledge. This concern is not just about those expected to continue into further studies and professions related to mathematics, science, technology and economics; a mathematically competent population forms a basis for national growth and development. A number of different forces have led to strong concern about the low quality of mathematical knowledge, skills, values and performance among students over the last few decades in Bo City. Over the years, many different perceptions about General Mathematics and Further Mathematics have been held. This study examines the poor performance of senior secondary students at the West African Senior School Certificate Examination level in Bo City, Sierra Leone. The target population of the study included one hundred (100) students and seventy-five (75) teachers randomly selected from five (5) secondary schools in Bo City. Questionnaires were used to collect relevant data for the study. Chi-square tests were used to analyse the research questions; other data are presented in the form of percentages. Teacher qualification and student environment did not influence students' poor performance, but teaching methods have influenced the poor performance of students in General Mathematics and Further Mathematics. Teachers should encourage and motivate students to take to mathematics-related subjects, and students must develop a positive attitude towards the teacher and the subject matter.
KEYWORDS: Academic performance; perception; qualifications; student and teacher
ACKNOWLEDGMENT
I owe a debt of gratitude to God Almighty through Jesus for giving me knowledge, wisdom and understanding throughout my academic pursuit.
My sincere thanks go to Miss Marian Johnson, who worked assiduously as a typist to see this work through to completion. I am particularly grateful to my wife for her architectural role in my academic activities. Thanks and appreciation go to my mother and late father, who nurtured me to the level I am at today.
INTRODUCTION
The essential basis for the economic and social well-being of any country lies in its people's grasp of basic mathematical, scientific and technological knowledge. A number of different forces have led to strong concern about the low quality of mathematical knowledge, skills, values and performance among students in the last few decades in Bo City. The concern is not just about those expected to continue into further studies and professions related to mathematics, science, technology and economics; a mathematically competent population forms a basis for national growth and development. This concern has often revolved around how mathematically literate students are to survive in the rapidly advancing scientific and technological world we live in today.
In Sierra Leone, for instance, the differential scholastic achievement of students has been, and still remains, a source of concern and research interest to educators, government and parents. This is so because of the great importance that education has for the national development of the country. Also, there is a consensus of opinion about the fallen standard of education; parents and governments are in total agreement that their investment in education is not yielding the desired dividend. Teachers have also continued to complain of students' low performance at both internal and external examinations as a result of peer group influence.

General Mathematics being a prerequisite for admission to all science and technology related subjects at university, such results are worrisome. The situation is almost the same for Further Mathematics, for which the average pass rate was 3.7%. Looking at the analysis, it is not surprising that the Gbamanja Commission recommended the following:
Awarding grants-in-aid to all female students who had gained admission to a tertiary institution to study science courses such as Mathematics, Physics, Chemistry, Biology and Engineering options.
Recruiting another four thousand (4,000) teachers (2008).

Education at secondary school level is supposed to be the bedrock and the foundation for higher knowledge in tertiary institutions. It is an investment as well as an instrument that can be used to achieve more rapid economic, social, political, technological, scientific and cultural development in the country. The national policy on education (2004) stipulated that secondary education is an instrument for national development that fosters the general development of society and equality of educational opportunity for all Sierra Leonean children, irrespective of any real or marginal disabilities. The role of secondary education is to lay the foundation for further education, and if a good foundation is laid at this level, there are likely to be no problems at subsequent levels.
However, different people at different times have passed the blame for poor performance in secondary school to students because of their low retention, parental factors, association with wrong peers, low achievement motivation and the like (Aremu & Sokan, 2003; Aremu & Oluwole, 2001).
Morakinyo (2003) believed that the falling level of academic achievement is attributable to teachers' own use of verbal reinforcement strategy. Others found that the attitude of some teachers to their job is reflected in their poor attendance at lessons, lateness to school, unsavoury comments about students' performance that could damage their ego, poor methods of teaching and the like, all of which affect pupils' academic performance.
This research is geared towards students' and teachers' perception of the causes of poor academic performance in General and Further Mathematics in Sierra Leone.
STATEMENT OF THE PROBLEM
According to the Audit Service Sierra Leone Report 2009, external examination results remained poor in 2009. Out of the total of three billion, eighty million, one hundred and sixty thousand Leones (Le 3,080,160,000) paid by government for West African Senior School Certificate Examination (WASSCE) fees, the sum of two billion, seven hundred and eighty-one million, three hundred and forty-eight thousand Leones (Le 2,781,348,000) was for candidates whose credit passes were so low that they could not qualify for entry to any tertiary institution of learning in Sierra Leone. The poor performance of pupils in the 2008 Basic Education Certificate Examination (BECE) and West African Senior School Certificate Examination (WASSCE) in Sierra Leone prompted His Excellency the President to set up the Professor Gbamanja Commission of Enquiry to investigate the reasons for such dismal performance. The tables below show WASSCE results in Sierra Leone for 2007, 2008 and 2009 in General Mathematics and Further Mathematics.
Table 1: General Mathematics
Year | Total Number of Candidates | Credit (A1–C6) % | Failed (above C6) %
2007 | 18397 | 4 | 96
2008 | 23799 | 4 | 96
2009 | 2922 | 5 | 95
Data source: WASSCE

Table 2: Further Mathematics
Year | Total Number of Candidates | Credit (A1–C6) % | Failed (above C6) %
2007 | 384 | 4 | 96
2008 | 2770 | 4 | 96
2009 | 2084 | 5 | 96
Data source: WASSCE
From the above tables, for the 2007–2009 academic years students' performance in the above subjects proved to be absolutely poor: on average only 4.3% of the total number of students offering General Mathematics at WASSCE level obtained a credit or better. Considering that General Mathematics is a prerequisite for admission to all science and technology related subjects at university level, such results are worrisome. The situation is almost the same for Further Mathematics, for which the average percentage pass was 3.7%, so the above analysis is not surprising.
JUSTIFICATION
All over the country there is a consensus of opinion about the fallen standards of education in Sierra Leone; parents and government are in total agreement in the opinion that their huge investment in education is not yielding the desired dividend. Teachers also complain of students' low performance at both internal and external examinations.
The West African Senior School Certificate Examination (WASSCE) results released by the West African Examinations Council (WAEC) justify the problematic nature and generality of poor secondary school students' performance in different school subjects.
The question, as stated earlier, is: what is the cause of this fallen standard and poor academic performance of students? Is the fault entirely that of teachers, or of students, or of both? Is it that students of today are non-achievers because they have a low intelligence quotient and lack a good neural mechanism to be able to act purposefully, think rationally and deal effectively with academic tasks? Or is it because teachers are no longer putting in as much commitment as before? Or does it lie in teachers' methods of teaching and interaction with pupils? Or is the poor performance of students caused by parents' neglect, separation and poverty? The present study therefore sought to find out students' and teachers' perception of the causes of poor academic performance among secondary school students in Bo City.

THE PURPOSE OF THE STUDY
The purpose of this study is, among other things, to find out whether there is a significant difference between methods of teaching and academic performance, between teachers' qualifications and academic performance, and between students' environment and poor academic performance.
RESEARCH QUESTIONS
This research will attempt to answer the following questions:
i. What is the perception of teachers on students' poor performance and teachers' qualification?
ii. What is students' perception of teachers' qualification and students' poor academic performance?
iii. What is the perception of teachers on students' poor performance and teachers' method of teaching?
iv. What is the students' perception of their academic performance and teachers' methods of teaching?
v. What is teachers' perception of students' environment and students' poor performance?
vi. What is the students' perception of students' environment and poor academic performance?

RESEARCH OBJECTIVES
The specific objectives of the study are:
a) To identify demographic information of respondents and gender.
b) To use the chi-square test to determine:
i. The perception of teachers on students' poor performance and teachers' qualification.
ii. The students' perception of teachers' qualification and students' poor academic performance.
iii. The perception of teachers on students' poor academic performance and teachers' method of teaching.
iv. The students' perception of their poor academic performance and teachers' method of teaching.
v. The teachers' perception of students' environment and students' poor performance.
vi. The students' perception of students' environment and poor academic performance.

STUDY AREA
This study was carried out in five (5) randomly selected senior secondary schools in Bo City. The schools selected for the purpose of the study were:
i. Ahmadiyya Muslim Secondary School (AMSS)
ii. Bo Government Secondary School (Bo School)
iii. Christ the King College (CKC)
iv. Queen of the Rosary Secondary School (QRS)
v. Saint Andrew's Secondary School (UCC)

SCOPE OF THE STUDY
The study investigates students' and teachers' perception of the causes of poor academic performance in five (5) randomly selected senior secondary schools in the central part of Bo City.
RESEARCH DESIGN
The researcher randomly selected five (5) senior secondary schools in Bo City, to consult with the teachers and students about some of the problems underlying students' and teachers' perception of the causes of poor academic performance in General Mathematics and Further Mathematics teaching at school. The study adopted a descriptive survey design. This is because the researcher is only interested in determining the influence of the independent variables on the dependent variables without manipulating any of the variables. The variables identified in the study for the research questions and the data collection instrument were:
i. Students' poor academic performance and teachers' qualifications
ii. Students' poor academic performance and teachers' method of teaching
iii. Students' environment and poor academic performance.

This study of poor performance in General Mathematics and Further Mathematics was carried out using qualitative methods. A questionnaire was used in which students and teachers responded on a Likert scale, to gather the data for the quantitative aspect of the study. The information was analysed for correlations between the variables in the study. The questionnaire results were grouped into themes for reporting. The researcher attempted to record students' and teachers' reactions to the impact of attitude on classroom performance. In addition to the Likert-scale questions, the subjects were asked to qualify their answers with a brief explanation or comment. The students and teachers were notified of the study, and the participants maintained complete anonymity in the study. The surveys were returned to the researcher through a person-to-person process. The researcher collected all of the surveys, and the data were subjected to statistical analysis.
SAMPLING PROCEDURE AND SAMPLE SIZE
Simple random sampling was used to select five (5) major secondary schools in Bo City. The standard of the schools was also taken into consideration for a better yield of results.
INSTRUMENTATION
The main instrument designed for the study is a self-designed questionnaire on the perception of students' poor academic performance. The questionnaire contained two (2) sections:
A – contains demographic information;
B – requires responses chosen from alternative options, ranging from strongly agree to strongly disagree.
The researcher used the following instruments in the study:
i. A well-structured questionnaire, which helped the researcher to attain a high response rate.
ii. An informal interview, used to complement the effect of the questionnaire; this was done in the form of conversational discussion.
iii. Secondary data from the examinations office, obtained formally.

DATA COLLECTION
At the various schools, the researcher introduced himself to the principal, class teachers and students, and briefed the school authorities and the students about the purpose of his visit and study. The researcher equally explained to the subjects what their role would be during the exercise. The researcher then randomly selected the number of students needed for the study, gave them the questionnaire and explained to them how to respond to it.
The process of responding to the questionnaire was explained to the students to ensure that valid data were collected. The researcher printed and administered one hundred and seventy-five (175) questionnaires to both students and teachers: one hundred (100) were administered to students and seventy-five (75) to the teachers.
Eighty-five (85) questionnaires were collected from students and seventy (70) from the teachers; therefore a total of one hundred and fifty-five (155) out of one hundred and seventy-five (175) were collected from both teachers and students. Primary data were collected from the students and teachers to determine the performance of students in General Mathematics and Further Mathematics at WASSCE. The data obtained were analysed using frequency counts and chi-square statistical analysis with the formula:

$$\chi^{2} = \sum \frac{(O-E)^{2}}{E},$$

where χ² = chi-square, O = observed frequency and E = expected frequency.
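For reproducibility, a small illustrative Java program (not the authors' software; the class name and the sample table are hypothetical) that computes the expected frequencies from the row and column totals, then the chi-square statistic and its degrees of freedom:

// Computes chi2 = sum over all cells of (O - E)^2 / E, with the expected
// count of each cell taken from its row total, column total and grand total.
public class ChiSquare {

    static double chiSquare(double[][] observed) {
        int rows = observed.length, cols = observed[0].length;
        double[] rowTotal = new double[rows];
        double[] colTotal = new double[cols];
        double grand = 0.0;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                rowTotal[r] += observed[r][c];
                colTotal[c] += observed[r][c];
                grand += observed[r][c];
            }
        }
        double chi2 = 0.0;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                double expected = rowTotal[r] * colTotal[c] / grand;
                double diff = observed[r][c] - expected;
                chi2 += diff * diff / expected;
            }
        }
        return chi2;
    }

    public static void main(String[] args) {
        double[][] observed = { { 20, 30, 10 }, { 10, 25, 25 } }; // hypothetical
        int df = (observed.length - 1) * (observed[0].length - 1); // (r-1)(c-1)
        System.out.println("chi2 = " + chiSquare(observed) + ", df = " + df);
        // Decision rule used below: reject the null hypothesis when the
        // calculated chi2 exceeds the table value at the 5% level for df.
    }
}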

TREATMENT OF DATA
The data collected were compiled, organized and interpreted. This led to the computation of percentages of teachers' and students' responses to the questionnaire and interviews based on their perception of the problems. Inferential statistics (analysis of frequency counts and chi-square) were the two methods used to analyse the collected data.

HYPOTHESES
In attempting to reach decisions, it is useful to make assumptions about the populations involved. Such assumptions, which may or may not be true, are called statistical hypotheses. They are generally statements about the probability distributions of the populations. The chi-square test was used to test the null and alternative hypotheses.
1) H0: Teachers perceive that teachers' qualification does not affect poor academic performance among secondary school students.
H1: Teachers perceive that teachers' qualification does affect poor academic performance among secondary school students.
2) H0: Students perceive that teachers' qualification does not have an impact on their academic performance.
H1: Students perceive that teachers' qualification does have an impact on their academic performance.
3) H0: Teachers' method of teaching and learning materials does not influence students' academic performance.
H1: Teachers' method of teaching and learning materials does influence students' academic performance.
4) H0: Teachers do not perceive students' environment as influencing their academic performance.
H1: Teachers do perceive students' environment as influencing their academic performance.
5) H0: Students perceive that teachers' methods of teaching and learning materials do not influence students' academic performance.
H1: Students perceive that teachers' methods of teaching and learning materials do influence students' academic performance.
6) H0: Students do not perceive students' environment as influencing their academic performance.
H1: Students do perceive students' environment as influencing their academic performance.

THE RESULTS OF PRIMARY DATA
This chapter presents the results of the study, in the context of the research questions in chapter one. The results of the analysis are presented as follows:

DEMOGRAPHIC INFORMATION OF RESPONDENTS
Table 3: Gender of respondents
Respondents | Male No. (%) | Female No. (%) | Total (%)
Teachers | 50 (71.4) | 20 (28.6) | 100
Students | 52 (62.4) | 32 (37.6) | 100

Table 3 shows the gender of the respondents. Seventy teachers were given the questionnaire; fifty males (71.4%) and twenty females (28.6%) responded to it. Eighty-four students returned the questionnaire, of whom fifty-two were male (62.4%) and thirty-two were female (37.6%).
Table 4: Teachers' perception of students' poor academic performance and teachers' qualification (observed frequencies, with expected frequencies in parentheses)
Item | Variable | SA | A | UU | D | SD | Row total
1 | Lack of quality teachers has an adverse effect on the poor performance of students | 23 (8) | 32 (21) | 4 (2) | 17 (36) | 6 (15) | 82
2 | Most teachers do not have adequate knowledge of their subject matter | 2 (16) | 22 (42) | 4 (4) | 85 (73) | 50 (29) | 163
3 | Teachers' extreme dependence on textbooks can lead to poor academic performance | 4 (16) | 17 (43) | 1 (4) | 115 (75) | 31 (30) | 168
4 | Seminars, workshops and in-service courses are not organized for teachers | 15 (11) | 40 (43) | 3 (3) | 42 (53) | 18 (21) | 118
5 | Inadequate teaching skill | 6 (13) | 38 (36) | 3 (3) | 70 (62) | 22 (25) | 139
6 | The poor status of teachers together with economic stress has drained the motivation of the teachers | 22 (8) | 45 (21) | 2 (2) | 6 (36) | 8 (15) | 83

HYPOTHESIS:
H0: Teachers perceive that teachers' qualification does not affect poor academic performance among secondary school students.
H1: Teachers perceive that teachers' qualification does affect poor academic performance among secondary school students.
CALCULATION
At the 5% level of significance, degrees of freedom: (r-1)(c-1) = (6-1)(5-1) = 5 × 4 = 20

$$\chi^{2}(\text{table}) = 31.41$$

$$\chi^{2}(\text{calculated}) = \frac{(23-8)^{2}}{8} + \frac{(32-21)^{2}}{21} + \frac{(4-2)^{2}}{2} + \cdots + \frac{(6-36)^{2}}{36} + \frac{(8-15)^{2}}{15} = 228.5$$

DISCUSSION
Since the calculated χ² (228.5) is greater than the table χ² (31.41), we reject the null hypothesis and accept the alternative hypothesis.

CONCLUSION
Teachers perceive that teachers' qualification affects poor academic performance among secondary school students.
Table 5: Students' perception of their poor academic performance and teachers' qualification (observed frequencies, with expected frequencies in parentheses)
Item | Variable | SA | A | UU | D | SD | Row total
1 | Lack of quality teachers has an adverse effect on the poor performance of students | 75 (27) | 50 (43) | 4 (18) | 52 (78) | 41 (56) | 222
2 | Most teachers do not have adequate knowledge of their subject matter | 30 (40) | 59 (62) | 4 (29) | 142 (113) | 87 (81) | 322
3 | Teachers' extreme dependence on textbooks can lead to poor academic performance | 28 (30) | 62 (58) | 115 (18) | 80 (135) | 102 (125) | 387
4 | Seminars, workshops and in-service courses are not organized for teachers | 30 (45) | 58 (71) | 18 (30) | 135 (129) | 125 (92) | 366
5 | Inadequate teaching skill | 33 (41) | 54 (64) | 13 (27) | 160 (117) | 72 (81) | 332
6 | The poor status of teachers together with economic stress has drained the motivation of the teachers | 42 (38) | 91 (60) | 6 (26) | 113 (109) | 60 (72) | 312

HYPOTHESIS:
H0: Students perceive that teachers' qualification does not have an impact on their academic performance.
H1: Students perceive that teachers' qualification does have an impact on their academic performance.
CALCULATION
At the 5% level of significance, degrees of freedom: (r-1)(c-1) = (6-1)(5-1) = 20

$$\chi^{2}(\text{table}) = 31.41$$

$$\chi^{2}(\text{calculated}) = \frac{(75-27)^{2}}{27} + \frac{(50-43)^{2}}{43} + \frac{(4-18)^{2}}{18} + \cdots + \frac{(113-109)^{2}}{109} + \frac{(60-72)^{2}}{72} = 459.6$$

DISCUSSION
Since the calculated χ² (459.6) is greater than the table χ² (31.41), we reject the null hypothesis and accept the alternative hypothesis.

CONCLUSION
Students perceive teachers' qualification as having an impact on their academic performance.
Table 6: Teachers' perception of the influence of teachers' method of teaching and learning materials on students' poor academic performance (observed frequencies, with expected frequencies in parentheses)
Item | Variable | SA | A | UU | D | SD | Row total
1 | Large numbers of students accommodated in a classroom prevent the teacher from exercising classroom management | 22 (6) | 49 (16) | 0 (1) | 10 (41) | 0 (18) | 81
2 | Teachers are not innovative in methodology | 4 (12) | 18 (32) | 0 (2) | 112 (85) | 33 (36) | 167
3 | Instructional materials are not provided for the teachers to use in teaching various subjects; teachers never organize inter-class and inter-school debates for the students | 15 (9) | 20 (25) | 2 (1) | 70 (66) | 22 (28) | 129
4 | Inadequate supervision by the inspectors in secondary schools | 6 (10) | 22 (26) | 4 (1) | 75 (169) | 28 (29) | 135
5 | Teachers do not plan their lessons adequately | 2 (13) | 14 (36) | 1 (2) | 93 (96) | 79 (41) | 189
6 | There are no adequate textbooks in schools | 15 (9) | 51 (24) | 2 (1) | 34 (63) | 22 (27) | 124

HYPOTHESIS
H0: Teachers' method of teaching and learning materials does not influence students' academic performance.
H1: Teachers' method of teaching and learning materials does influence students' academic performance.
CALCULATION
At the 5% level of significance, degrees of freedom: (r-1)(c-1) = (5-1)(7-1) = 4 × 6 = 24

$$\chi^{2}(\text{table}) = 36.41$$

$$\chi^{2}(\text{calculated}) = \frac{(22-6)^{2}}{6} + \frac{(49-16)^{2}}{16} + \frac{(0-1)^{2}}{1} + \cdots + \frac{(34-63)^{2}}{63} + \frac{(22-27)^{2}}{27} = 329.03$$

DISCUSSION
Since the calculated χ² (329.03) is greater than the table χ² (36.41), we reject the null hypothesis and accept the alternative hypothesis.

CONCLUSION
Teachers' method of teaching and learning materials influences students' academic performance.

Table 7: Students' perception of the influence of teachers' method of teaching and learning materials on students' poor performance (observed frequencies, with expected frequencies in parentheses)
Item | Variable | SA | A | UU | D | SD | Row total
1 | Large numbers of students accommodated in a classroom prevent the teacher from exercising classroom management | 61 (52) | 133 (16) | 4 (7) | 58 (127) | 32 (78) | 288
2 | Teachers are not innovative in methodology | 18 (30) | 37 (66) | 31 (9) | 218 (16) | 60 (98) | 364
3 | Instructional materials are not provided for the teachers to use in teaching various subjects; teachers never organize inter-class and inter-school debates for the students | 29 (36) | 61 (73) | 9 (9) | 158 (178) | 145 (108) | 402
4 | Inadequate supervision by the inspectors in secondary schools | 21 (28) | 58 (61) | 2 (8) | 159 (149) | 97 (91) | 337
5 | Teachers do not plan their lessons adequately | 18 (36) | 39 (79) | 4 (10) | 289 (191) | 132 (116) | 432
6 | Teachers are dedicated to their teaching subjects | 26 (31) | 56 (69) | 7 (9) | 172 (169) | 116 (102) | 372
7 | There are no adequate textbooks in schools | 39 (30) | 83 (66) | 3 (8) | 129 (160) | 108 (97) | 362
HYPOTHESIS
H0: Students perceive that teachers' method of teaching and learning materials does not influence students' academic performance.
H1: Students perceive that teachers' method of teaching and learning materials does influence students' academic performance.

CALCULATION
At 5% level of significance
Degree of freedom: (r-1)(c-1) = (5-1)(7-1) = 4 x 6 = 24
χ² (table) = 36.41
χ² (cal.) = (61-24)²/24 + (133-52)²/52 + (4-7)²/7 + … + (3-8)²/8 + (129-160)²/160 + (108-97)²/97
χ² (cal.) = 446.6
DISCUSSION
Since the calculated χ² (446.6) is greater than the table value χ² (36.41), we reject the null hypothesis and accept the alternative hypothesis.
CONCLUSION
Students perceive that teachers' method of teaching and learning materials does influence students' academic performance.
Table 8: Perception of teachers on students' environment and their poor academic performance (observed frequencies, with expected frequencies in parentheses)

Item | Variable | SA | A | UU | D | SD | Row Total
1 | Students have no negative attitude to their studies | 17 (21) | 40 (45) | 4 (2) | 22 (14) | 6 (7) | 89
2 | Most students' background/environment does not stimulate learning or studies | 24 (22) | 41 (46) | 0 (2) | 14 (15) | 13 (7) | 92
3 | The level of the parents' education affects their children's academic performance | 20 (20) | 41 (46) | 2 (2) | 21 (15) | 7 (7) | 91
4 | Peer group influence affects students | 20 (22) | 63 (49) | 3 (2) | 2 (15) | 7 (7) | 95
5 | Divorce among parents affects the academic performance of students | 25 (20) | 43 (43) | 1 (2) | 14 (14) | 2 (7) | 85
HYPOTHESIS
H0: Teachers do not perceive students' environment as influencing their academic performance.
H1: Teachers do perceive students' environment as influencing their academic performance.

CALCULATION
At 5% level of significance
Degree of freedom: (r-1)(c-1) = (5-1)(5-1) = 4 x 4 = 16
χ² (table) = 26.3
χ² (cal.) = (17-21)²/21 + (40-45)²/45 + (4-2)²/2 + … + (1-2)²/2 + (14-14)²/14 + (2-7)²/7
χ² (cal.) = 39.46
DISCUSSION
Since the calculated χ² (39.46) is greater than the table value χ² (26.3), we reject the null hypothesis and accept the alternative hypothesis.
CONCLUSION
Teachers do perceive students' environment as influencing their academic performance.
Table 9: Perception of students on students' environment and their poor academic performance (observed frequencies, with expected frequencies in parentheses)

Item | Variable | SA | A | UU | D | SD | Row Total
1 | Students have no negative attitude to their studies | 46 (53) | 99 (108) | 13 (9) | 118 (94) | 71 (83) | 347
2 | Most students' background/environment does not stimulate learning or studies | 41 (49) | 79 (101) | 3 (8) | 102 (88) | 98 (77) | 323
3 | The level of the parents' education affects their children's academic performance | 40 (49) | 82 (100) | 2 (8) | 92 (87) | 104 (77) | 320
4 | Peer group influence affects students | 44 (40) | 90 (81) | 7 (7) | 77 (71) | 42 (62) | 260
5 | Divorce among parents affects the academic performance of students | 63 (50) | 131 (90) | 13 (7) | 29 (79) | 53 (69) | 289
HYPOTHESIS
H0: Students perceive that their environment does not affect their academic performance.
H1: Students perceive that their environment does affect their academic performance.

CALCULATION
At 5% level of significance
Degree of freedom: (r-1)(c-1) = (5-1)(5-1) = 4 x 4 = 16
χ² (table) = 26.3
χ² (cal.) = (46-53)²/53 + (99-108)²/108 + (13-9)²/9 + … + (13-7)²/7 + (29-79)²/79 + (53-69)²/69
χ² (cal.) = 117.6
DISCUSSION
Since the calculated χ² (117.6) is greater than the table value χ² (26.3), we reject the null hypothesis and accept the alternative hypothesis.
CONCLUSION
Students perceive that their environment does affect their academic performance.
SUMMARY OF FINDINGS
1. Perception of teachers on students' poor academic performance and teachers' qualification: since the calculated χ² (228.5) is greater than the table value χ² (31.41), we reject the null hypothesis and accept the alternative hypothesis.
2. Perception of students on their poor academic performance and teachers' qualification: since the calculated χ² (459.6) is greater than the table value χ² (31.41), we reject the null hypothesis and accept the alternative hypothesis.
3. Perception of teachers on the influence of teachers' method of teaching and learning materials on students' poor academic performance: since the calculated χ² (329.03) is greater than the table value χ² (36.41), we reject the null hypothesis and accept the alternative hypothesis.
4. Perception of students on the influence of teachers' method of teaching and learning materials on students' poor academic performance: since the calculated χ² (446.6) is greater than the table value χ² (36.41), we reject the null hypothesis and accept the alternative hypothesis.
5. Perception of teachers on the students' environment and their poor performance: since the calculated χ² (39.46) is greater than the table value χ² (26.3), we reject the null hypothesis and accept the alternative hypothesis.
6. Perception of students on students' environment and their poor academic performance: since the calculated χ² (117.6) is greater than the table value χ² (26.3), we reject the null hypothesis and accept the alternative hypothesis.
7. Moreover, it has been revealed that students face enormous constraints in learning General Mathematics; these constraints involve difficulty in understanding the concepts taught, the cost of learning materials, and the inefficiency of teachers.
8. Furthermore, the findings show that departments are not given the much-needed motivation to kindle efficiency in the teaching of General Mathematics and Further Mathematics. Finally, it has been revealed that the teachers in the department face great constraints in the teaching of General and Further Mathematics; some of these constraints include inadequate and inappropriate class sizes for individual attention, the limited amount of time allocated to the teaching of the subject, and the shortage of teachers in the schools.
DISCUSSION OF FINDINGS
The purpose of this study was to determine whether there is a correlation between students' and teachers' perceptions of the causes of poor academic performance in General and Further Mathematics in the classroom.
A five-point Likert scale survey was used to assess attitudes toward General Mathematics and Further Mathematics among eighty-five (85) students and seventy (70) teachers in five (5) randomly selected secondary schools. The first three questions in the survey gathered general demographic information about the respondents: their name, gender and age. The remaining eighteen questions collected the respondents' opinions, with options ranging from strongly agree to strongly disagree.
For research questions one and two, teachers believed that students' poor academic performance is not influenced by teachers' qualification, while students perceived that teachers' qualification affects their academic performance. The difference in perception could be because students have high expectations of the teachers who teach them and therefore believe that any teacher who does not meet such expectations will not aid their academic performance. From the conclusions above, however, students' perception is that their poor academic performance is influenced by teachers' qualification.
Also, only teachers perceive that teachers' method of teaching and learning materials influences students' academic performance; that is, the falling level of academic achievement is attributed to teachers' non-use of verbal reinforcement strategies. Students' disagreement with this may be because they perceive that students' personal factors affect their academic performance more than teachers' method of teaching and the learning environment.
CONCLUSION
Based on the findings, the following conclusions were arrived at:
1. Teachers perceive that teachers' qualification affects poor academic performance among secondary school students.
2. Students perceive teachers' qualification as having an impact on their academic performance.
3. Teachers' method of teaching and learning materials influences students' academic performance.
4. Students perceive that teachers' method of teaching and learning materials does influence their academic performance.
5. Teachers do perceive students' environment as influencing their academic performance.
6. Students perceive that their environment does affect their academic performance.
Quality in Design and Architecture - A Comprehensive Study
Ruqia Bibi¹, Munazza Jannisar Khan¹, Muhammad Nadeem Majeed¹
¹University of Engineering and Technology, Taxila, Pakistan
E-mail: ruqia.kibria@yahoo.com
Abstract— Design quality holds a decisive influence on the success of a product. Our work is a comprehensive study of software metrics for evaluating quality in the design phase, which is beneficial for finding and repairing design problems and saves a large amount of potential expenditure. This paper evaluates the employment of several design methods, such as Robust Engineering Design and Failure Mode and Effect Analysis, to ensure quality and minimize variation. It also covers the use of emerging technologies, new materials and the adoption of a simultaneous approach to design, and it introduces a quality-attribute-driven perspective on software architecture reconstruction. It is essential to select and follow architectures that fulfill specific concerns or required properties with a certain degree of confidence, as architecture and design models together signify the functional behavior of the system.
Keywords— FMEA (Failure Mode and Effect Analysis); QMOOD (Quality Model for Object Oriented Design); DEQUALITE (Design Enhanced Quality Evaluation); EBS (Engineering Breakdown Structure); OBS (Objective Breakdown Structure); SOA (Service Oriented Architecture); EJB (Enterprise JavaBeans).
INTRODUCTION
Quality of a system is vital and is considered a conditional, perceptual and often subjective attribute. It must always be handled carefully during system production and expressed in a quantified manner. It is not just a marketing phrase, nor is it created by control; it is a function that must be designed and synthesized into the evolution and development of a product [10]. Software quality is closely tied to classes, their organization and, most importantly, their design. Quality in the design of a system rests on a set of lucid and correct decisions in the design process, and to a great extent it is determined by the level of the designer's decision-making skill. Designers should concurrently bring design quality factors under consideration throughout the product's life cycle [1]. The quality of design is influenced by several factors, which include, inter alia, the designers or design team involved in the project; the design techniques, tools and methods employed during the design process; the quality and level of available technical knowledge; the quality of the management of the design process; and the nature of the environment under which the design process is carried out. These factors, in one way or another, have a significant influence on the quality of both the design process and the resulting product. Designing quality into a product requires the adoption of a planned and controlled approach to design, which can be accomplished by using a methodical or systematic design process [2]. The on-line parameter design mode involves optimization of the controllable variables with respect to the expected levels of the outcome quality parameters; the identification phase establishes a strategy, a model, which relates quality response characteristics to the controllable and uncontrollable variables [4]. It is an accepted claim that systematic, well-defined process control and evaluation of each phase of development improves the overall quality of a software product [7].
Our research work comprises the analysis and illustration of software metrics that embed quality in design and architecture, along with prior approaches proposed and followed. The design of systems is essential in producing product quality. A dig-deep strategy is applied to each approach, and every implicated methodology is discussed. The paper is structured as follows: Section 2 discusses quality in design and architecture techniques and is divided into subsections comprising a separate analysis of each prior research work; Section 3 presents the analysis stage of our work, in which three tables summarize and depict the evaluation parameters; and Section 4 gives the conclusions drawn from the above study.
QUALITY IN DESIGN AND ARCHITECTURE TECHNIQUES
Quality is one of the most important issues in software development. A developed software product results in customer dissatisfaction if it does not meet quality standards. The quality of a software product relies on a complete understanding and evaluation of the underlying design and architecture. Previous studies [2][6] mentioned that high-level design descriptions help in predicting the quality of a product and are thus used as an important quality assurance technique. Problems left unnoticed in the design phase penetrate later development stages, where even a minute change costs much more. These factors point to the need for methods or techniques that can reduce design-phase issues and hence contribute to the overall quality of a system; the issues stated above have a vital impact on the product's output. Our study surveys various approaches that have been applied or proposed to deal with these concerns. Several techniques, such as MQI [1], On-line Parameter Design [3] and the Factor-Strategy model [4], have been proposed by researchers; this paper surveys these techniques for achieving quality in design and architecture.
A MODEL OF MANUFACTURING QUALITY INFORMATION (MQI)
In previous studies, quality information regarding product and design had not been a matter of consideration for designers and the other persons accountable for a project, because of the difficulty they faced in capturing quality information. To cover the aspects left uncovered, the researchers presented a model named MQI (Manufacturing Quality Information) that helps in making design-phase decisions by managing quality information through a layered approach. The quality information is divided into three layers: the application, logical and physical layers. IDEF0 diagrams are used to demonstrate the design decisions supported by MQI. The proposed model not only shortens the development life cycle but also reduces cost dramatically.
DESIGN FUNCTION DEPLOYMENT SYSTEM
The authors of this paper mentioned that quality is a function that must be designed and synthesized into the evolution and development of a product and/or process at the early stages of engineering design. They developed a Design Function Deployment (DFD) system, built by expanding Quality Function Deployment and integrating other important aspects of design [2].
APPROACH FOR INTELLIGENT QUALITY CONTROLLERS
During the production or operation phase, some uncontrollable factors are left unnoticed which, if observed, would reveal significant improvements in quality. The researchers proposed a methodology called on-line parameter design that uses this extra information about uncontrollable factors. The methodology has two distinct modes: an identification mode and an on-line parameter design mode. For modeling the quality response characteristics, feed-forward neural networks are recommended. A plasma etching manufacturing process is tested against the proposed quality controllers [3].

Figure 1: Proposed framework for performing on-line parameter design [3]
FACTOR-STRATEGY TECHNIQUE
As we move from one software product to another, assessment against external quality attributes becomes harder because of the increasing complexity and variation of design properties. This research analyzes the effect of object-oriented design on top-level quality factors and, to quantify the impact, proposes a novel approach called detection strategies. The proposed Factor-Strategy model has two major characteristics: an intuitive construction and a direct link between causes/problems and the design level [4].


Figure 2: The Factor-Strategy model concept [4]
For automation, the researchers developed the Pro-Detection toolkit, which inspects code quality by utilizing detection strategies.
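By way of illustration, a detection strategy is a logical composition of metric comparisons against thresholds. The sketch below shows a God Class rule of the form popularized in Marinescu's line of work; the metric names (ATFD, WMC, TCC) come from that research, while the threshold values used here are illustrative assumptions, not values taken from this paper:

    # A detection strategy: a metrics-based rule that flags suspect classes.
    # Thresholds (FEW = 3, VERY_HIGH = 47, ONE_THIRD = 1/3) are illustrative.
    def is_god_class(atfd, wmc, tcc):
        # atfd: accesses to foreign (other classes') data
        # wmc:  weighted method count (summed method complexity)
        # tcc:  tight class cohesion, in [0, 1]
        return atfd > 3 and wmc >= 47 and tcc < 1 / 3

    print(is_god_class(atfd=12, wmc=60, tcc=0.2))   # True: class flagged for inspection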

QUALITY ATTRIBUTE DRIVEN SOFTWARE ARCHITECTURE (QADSAR)
During the development phase, architects need to look back at existing systems to inspect methodical obstacles to incorporating new technology approaches. The researchers proposed the Quality Attribute Driven Software Architecture Reconstruction (QADSAR) approach to support reasoning and to elicit the information needed to link an organization's goals to the gained information. Several activities can be improved by using software architecture reconstruction: understanding and improving the architecture of existing systems, assessing quality characteristics, and improving the documentation of the overall system. QADSAR proved to be an important contribution when the system's types and its quality attributes were studied in detail.

Figure 3: The QADSAR steps [5]
ANALYSIS OF QUALITY MODELS TO DESIGN SOFTWARE ARCHITECTURE
The success of an architecture depends on the quality of its design. A quality software product can be achieved if each development stage is evaluated and controlled in a well-defined process, and the choice of quality model plays a vital role in establishing the quality of a software architecture. The researchers discussed three approaches based on quality models: ISO 9126, ABAS (Attribute Based Architectural Styles) and Dromey. These approaches are useful for introducing design-related quality issues into the development process. The analysis pointed out the lack of a unified language and the shared fact that the high-level characteristics of a software product must be quantified [6].
ANALYSIS OF SOFTWARE QUALITY FROM DESIGN AND ARCHITECTURE'S PERSPECTIVE
The integration of reusable components has proved beneficial for the evolution of software products, but it also demands a complete understanding of the previous version of the software; for that, understanding the code is not sufficient, and other descriptions, such as design and architecture descriptions, are also necessary. The paper focuses on a cognitive approach based on previous knowledge and experience. Since design and architecture primarily express functional aspects, an experiment was conducted to identify whether it is possible to represent some non-functional aspects as well. The research concluded that incorporating these representations in design and architecture is worthwhile, helping developers maintain and evolve complex software systems [7].

QMOOD (QUALITY MODEL FOR OBJECT ORIENTED DESIGN)
The rapid upsurge in environmental changes introduces various business challenges for organizations. The issue is highlighted and addressed by presenting a hierarchical model for quality assessment in service-oriented architectures. The suggested model recognizes design problems before they flow into the implementation phase of the system.

Figure 4: SOA design components [8]

A design problem that flows into a later stage makes the defects more difficult to resolve and consumes more resources. The research approach extends the QMOOD model for object-oriented software. Metrics that evaluate design quality in the product design provide organizations with an opportunity to save the large expenditure of problem resolution [8].
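Concretely, QMOOD expresses each quality attribute as a weighted sum of normalized design-property values. The sketch below shows that shape; the property values are invented, and the reusability weights follow commonly cited QMOOD coefficients, which should be checked against the original QMOOD publication before any real use:

    # QMOOD-style quality attribute: weighted sum of normalized design properties.
    def quality_attribute(props, weights):
        return sum(weights[name] * value for name, value in props.items())

    design_props = {"coupling": 0.4, "cohesion": 0.7, "messaging": 0.5, "design_size": 0.6}  # invented values
    reusability_weights = {"coupling": -0.25, "cohesion": 0.25, "messaging": 0.5, "design_size": 0.5}
    print(quality_attribute(design_props, reusability_weights))   # higher means more reusable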

The Crucial Factors for Product Quality Improvement
Authors [9] targeted the design quality concerns often introduced for industrial practitioners by the large distances between manufacturing and design departments in supply chains. Design quality holds a crucial prominence, and the early involvement of the manufacturer in the supply chain is essential. To justify this importance, a case study of Chinese-made toys is brought under consideration. The study illustrates a model named the design-manufacture chain model. The paper also presents a quality relationship model between design quality, product quality and manufacturing quality by elucidating a conceptual framework. The outcome can sensitize the industrial domain to the intended actors and their relationships with product quality and the design process.

Figure 5: Quality relationship model [9]
DEQUALITE (DESIGN ENHANCED QUALITY EVALUATION) APPROACH
The authors' [10] work presents quality models that take into account the design of systems, specifically antipatterns, design patterns and code smells. Quality models are presented to calculate the quality of object-oriented embedded systems. Diverse methodologies aimed at enhancing the design of systems with good quality features are presented in the prior work, and an approach called DEQUALITE (Design Enhanced Quality Evaluation) that builds such quality models is projected. The method focuses on the internal attributes of object-oriented systems and their design, and measures quality. Quality-assurance personnel, practitioners, developers and managers can use this technique, which is being implemented as a working tool that can evaluate the quality of their systems [10].

RELIABILITY & QUALITY IN DESIGN
The authors [11] open a discussion about quality improvement and the annual reliability plan for design and for the way designing is carried out. The work presents an overview of the design process from the quality and reliability perspective and shows the two major approaches to design: transactions and transformations. In the transaction approach, creativity in design is encouraged according to a strategy that emphasizes the project's output performance and value relative to the time and cost factors. The transformation approach improves the methodology followed in design, carrying it throughout design production. A systematic approach is used in the design process improvement effort, and the improvement strategy should be built on the input and output activities of the design process. The reliability attribute of systems is shown according to the organization's chart and is of two kinds: one is field-failure analysis and the second is predictive reliability. The paper relates the problem to the diverse perspectives in an organization that are not identified [11].

DESIGN QUALITY MANAGEMENT APPROACH
Design defects that flow into the construction and operating stages cause large resource expenditures; it has been proposed that 40% of quality problems are caused by flaws in product design. The authors [12] present a project life cycle that introduces design quality management. A questionnaire-based survey is used to collect and investigate diverse opinions from the relevant departmental personnel, and EBS-OBS based design quality matrices are applied in the case study. Communication among all the personnel is considered important. Based on the survey results, the authors assessed design quality management over the project life cycle as essential.


Figure 6: Information interaction in the project life cycle [12]

DESIGN AND IMPLEMENTATION OF AN IMPROVED CHANNELIZED ARCHITECTURE
Authors [13] in their research work illustrated a digital channelized receiver architecture, covering the theory of arithmetic and the implementation for real-time signal processing. The proposed architecture's performance conforms in quality with the conventional architecture strategy. The study analyses the convolution in the non-blind-spot digital channelized receiver, and the filter bank structure is achieved using two modules. The research work concluded that the suggested architecture is beneficial in solving processor resource issues [13].

QUALITY MEASUREMENT IN OBJECT-ORIENTED DESIGN
The authors of this research work [14] showed that, with adequate quantification strategies, quality can be calculated in object-oriented software systems. Metrics alone do not portray enough information for reaching a verdict about the code transformations that can help to enhance quality, so a mechanism known as factor-strategy is recommended. Goodness of design is expressed in terms of metrics, confirming the design quality. The work concludes that the detection methodology is beneficial: it finds design problems, and heuristics are enumerated in metrics-based rules.

STUDY OF MINING PATTERNS TO SUPPORT SOFTWARE ARCHITECTURE EVALUATION
Authors [15] have illustrated an approach that depicts the software architecture evaluation process. The approach relies on the systematic extraction of architecturally essential data from software design and architecture patterns; the patterns used are EJB architecture usage patterns. Any benefit claimed for a pattern can only be achieved by applying the same tactics. The paper also examines the published validation patterns. The major research objective presented by the authors is to distill quality-attribute-sensitive scenarios and improve SA design. The study suggests that software patterns are a helpful and important source of information about architecture; this information is later extracted and documented systematically to improve the SA evaluation process.


ANALYSIS
As the sections above depict, our survey encompasses fifteen approaches and uses sixteen parameters for evaluation. Table 2 shows the results of the analysis against the evaluation parameters defined in the evaluation criteria of Table 1. Through the analysis of Table 2, it is found that almost all the techniques provide tool support, including [2, 4]. All the techniques have the quality parameter of reusability, showing integration with components that conform to their specifications and are clearly defined and verified. The research work in [5, 6, 7, 12] caters for behavior specification. Most of these methodologies have the robustness parameter; robustness testing is a quality assurance methodology focused on testing the robustness of software, and the term has also been used to describe the process of verifying the robustness (i.e. correctness) of test cases in a test process. Xianlong Xu [1] uses a case study of heavy vehicle transmission design, Jie Ding [3] a case study of on-line parameter design, and Christoph Stoermer [5] a case study of automotive body components, while Yanmei Zhu's [9] research relates to a case study on Chinese-made toys, focusing on design quality from an industrial perspective and addressing manufacturing flaws. Many of the techniques do not address testability, although testability is an important quality characteristic of software: a lack of testability contributes to higher test and maintenance effort. The testability of a product needs to be measured throughout the life cycle, starting with testability requirements, i.e. requirements related to the testability of the software product. Stakeholders for testability requirements include the customers and software users, since testability is important for shortening maintenance cycles and locating residual errors. The research previously stated in [4, 6, 8, 10, 14] provides language interoperability, whereas the rest of the approaches do not.


Table 1: Evaluation criteria for quality in design and architecture

Evaluation Parameter | Meaning | Possible Values
Tool support | A tool is produced for the proposed design. | Yes, No
Performance | In terms of responsiveness and stability. | Yes, No
Language interoperability | Language translator for real-time implementation. | Yes, No
Behavior specification | Functional decomposition and representation of the problem. | Yes, No, State chart, Other modeling notation
Maintainability | Whether it can be restored to a specified condition within a specified period of time. | Yes, No
Usability | The user interface in software development should provide usability for its intended audience. | Yes, No
Testability | Whether the design being proposed is testable. | Yes, No
Security | Whether the software is able to withstand hostile acts and influences. | Encryption algorithm, No
Case study | Support of examples. | Yes, No
Reliability | Probability that the system will perform its intended function without failure for a specified time interval. | Yes, No
Correctness | Whether the required functions are performed accurately. | Yes, No
Robustness | Whether it is able to operate under stress or tolerate unpredictable or invalid input. | Yes, No
Reusability | The ability to add further features with slight or no modification. | Yes, No
Timing constraint | Quality specification through timing. | Yes, No
UML compliant | Whether the UML standard has been followed. | Yes, No
Extensibility | New capability can be added to the software without changes to the underlying architecture. | Yes, No
Table 2: Analysis of parameters of quality attributes in design and architecture

S# | Technique | Correctness | Reliability | Case Study | Testability | Maintainability | Language Interoperability
1 | Xianlong Xu et al., 2007 | Yes | Yes | Case study of heavy vehicle transmission design | No | Yes | No
2 | S. Sivaloganathan et al., 1997 | Yes | Yes | No | No | Yes | No
3 | Jie Ding et al., 2000 | Yes | No | Plasma etching process modeling and on-line parameter design | Yes | Yes | No
4 | Ratiu et al., 2004 | Yes | Yes | Yes | Yes | Yes | C++ and Java
5 | Christoph Stoermer et al., 2000 | Yes | Yes | Related to automotive body components | Yes | Yes | No
6 | Francisca Losavio et al., 2001 | Yes | Yes | No | Yes | Yes | Object Oriented
7 | Lars Bratthall et al., 2002 | Yes | Yes | No | Yes | Yes | No
8 | Bingu Shim et al., 2005 | Yes | Yes | No | No | Yes | Yes
9 | Yanmei Zhu et al., 2008 | Yes | Yes | Chinese-made toys | No | Yes | No
10 | Foutse Khomh et al., 2009 | Yes | Yes | No | No | Yes | OOP
11 | W.A. Golomski & Associates, 1995 | Yes | Yes | No | No | Yes | No
12 | Luo Yan et al., 2009 | Yes | Yes | Yes | No | Yes | No
13 | Xu Shichao et al., 2009 | Yes | No | No | No | Yes | No
14 | Radu Marinescu, 2005 | Yes | Yes | Yes | Yes | Yes | OOP interoperability
15 | Muhammad Ali Babar et al., 2004 | Yes | Yes | No | No | Yes | No
CONCLUSIONS
The approaches presented in the sections above can be greatly advantageous for introducing design quality issues into the development process. There should be a way to show quality-related attributes clearly in a system's design and architecture. A good product design that covers its users' needs generates a quality product; it defines the product's congenital, inherent quality. The improvement of product design quality depends on a set of rational and right decisions in the design process, and the evaluation and control of each stage of development in a well-defined process will improve the overall quality of the final software product.
As noted in the introduction, the quality of design is influenced by several factors, including, inter alia, the designers or design team involved in the project, the design techniques, tools and methods employed during the design process, the quality and level of available technical knowledge, the quality of the management of the design process, and the nature of the environment under which the design process is carried out. The quality practices that link the internal attributes of a system to its external features are limited to fault-proneness and do not consider the system's design; this makes it hard to differentiate between a well-structured system and a poorly designed one, even though their respective designs are the first things that maintainers see. Design flaws later cause large expenditures in the construction and operation stages, so quality in design has a great influence on the life-cycle quality of the project. There is rarely a perfect software design: the process of producing a software design is error-prone and makes no exception. Defects in a system's design have an adverse effect on quality attributes such as flexibility or maintainability; thus, the identification and detection of these design problems is essential for evaluating and producing a product of improved quality.
REFERENCES:
[1] Xianlong Xu and Shurong Tong, "A model of manufacturing quality information supporting design," International Conference on Industrial Engineering and Engineering Management, IEEE, 2007.
[2] Evbuomwan, Sivaloganathan and S. Jebb, "Design function deployment - a design for quality system," Customer Driven Quality in Product Design, IEEE, 1997.
[3] Chinnam R.B., Jie Ding and May G.S., "Intelligent quality controllers for on-line parameter design," IEEE Transactions on Semiconductor Manufacturing, 2000.
[4] Marinescu and Ratiu, "Quantifying the quality of object-oriented design: the factor-strategy model," Proceedings of the 11th Working Conference on Reverse Engineering, 2004.
[5] Christoph Stoermer and Liam O'Brien, "Moving Towards Quality Attribute Driven Software Architecture Reconstruction," Robert Bosch Corporation, Software Engineering Institute, Carnegie Mellon University, USA, 2000.
[6] Francisca Losavio and Ledis Chirinos, "Quality models to design software architectures," Proceedings of Technology of Object-Oriented Languages and Systems, 7 August 2001.
[7] Lars Bratthall and Claes Wohlin, "Understanding Some Software Quality Aspects from Architecture and Design Models," Dept. of Communication Systems, Lund University, 2002.
[8] Bingu Shim, Siho Choue, Suntae Kim and Sooyoung Park, "A Design Quality Model for Service-Oriented Architecture," 15th Asia-Pacific Software Engineering Conference, 3 December 2005.
[9] Yanmei Zhu, Jianxin You and Alard, "Design Quality: The Crucial Factor for Product Quality Improvement in International Production Networks," 4th International Conference on Wireless Communications, Networking and Mobile Computing, February 2008.
[10] Khomh, "Software Quality Understanding through the Analysis of Design," WCRE 16th Working Conference on Reverse Engineering, 2009.
[11] Golomski, W.A., "Reliability and quality in design," IEEE, 1995.
[12] Luo Yan, Mao Peng and Chen Qun, "Innovation of Design Quality Management Based on Project Life Cycle," International Conference on Management and Service Science, December 2009.
[13] Xu Shichao, Gao Meiguo and Liu Guoman, "Design and implementation of an improved channelized architecture," International Conference on Computer Science and Information Technology, August 2009.
[14] Radu Marinescu, "Measurement and quality in object-oriented design," Proceedings of the 21st IEEE International Conference on Software Maintenance, IEEE, 2005.
[15] Liming Zhu, Muhammad Ali Babar and Ross Jeffery, "Mining Patterns to Support Software Architecture Evaluation," 2004.







A Review of Different Content Based Image Retrieval Techniques
Amit Singh¹, Parag Sohoni¹, Manoj Kumar¹
¹Department of Computer Science & Engineering, LNCTS Bhopal, India
E-Mail: amit.singh5683@gmail.com

Abstract: The extraction of features and their representation from a large database is the major issue in content-based image retrieval (CBIR). Image retrieval is an interesting and rapidly developing methodology in many fields, and an effective and well-organized approach for retrieving images. In a CBIR system the images are stored in the form of low-level visual information, so a direct correlation with high-level semantics is absent; several methodologies have been developed to bridge this gap between high-level and low-level semantics. For image retrieval, the features of the stored images are first extracted and then trained; after this preprocessing is complete, they are compared with the query image. In this paper, a study of the different approaches is presented.
Keywords: CBIR, Extraction, Semantic gap, DWT, SVM, Relevance Feedback, EHD, Color model.
INTRODUCTION
With the development of computer technologies and the advent of the internet, there has been an explosion in the amount and complexity of digital data being produced, stored, conveyed, analyzed, and accessed. Much of this information is multimedia in nature, comprising digital images, audio, video, graphics, and text. In order to make use of this enormous amount of data, proficient and valuable techniques to retrieve multimedia information based on its content need to be developed. Among all multimedia features, the image is the prime factor.
Image retrieval techniques are split into two categories: text-based and content-based. The text-based approach relies on special words such as keywords. Keywords and annotations must be assigned to each image when the images are stored in a database. The annotation operation is time-consuming and tedious; in addition, it is subjective. Furthermore, annotations are sometimes incomplete, and it is possible that some image features may not be mentioned in the annotations [1]. In a CBIR system, images are automatically indexed by their visual contents through extracted low-level features, such as shape, texture, color and size [1, 2]. However, extracting all visual features of an image is a difficult task, and there is the problem of the semantic gap: presenting high-level visual concepts using low-level visual features is very hard. In order to alleviate these limitations, some researchers use both techniques together with different features; this combination improves the performance compared to each technique separately [3, 4].
In this paper, there are two steps for answering a query to retrieve an image. First, some keywords are used to retrieve similar images; after that, some special visual features such as color and texture are extracted, i.e., in the second step, CBIR is applied. Color moments are computed for the color feature, and a co-occurrence matrix is computed for the extraction of texture features. This paper is organized as follows: the next section focuses on related work in the field; Section 3 explains content-based image retrieval systems; Section 4 explains different CBIR techniques; and the last section concludes the paper.
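As a concrete illustration of the color feature just mentioned, the sketch below computes color moments (mean, standard deviation and skewness per RGB channel); it is an illustrative implementation, not the authors' code:

    # Color moments per channel: mean, standard deviation, skewness.
    # `img` is an H x W x 3 RGB array; the image here is a random stand-in.
    import numpy as np

    def color_moments(img):
        feats = []
        for c in range(3):                                  # R, G, B channels
            ch = img[..., c].astype(float).ravel()
            mean, std = ch.mean(), ch.std()
            skew = np.cbrt(((ch - mean) ** 3).mean())       # cube root of the third central moment
            feats.extend([mean, std, skew])
        return np.array(feats)                              # 9-dimensional color feature vector

    img = np.random.randint(0, 256, size=(64, 64, 3))
    print(color_moments(img))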
CONTENT BASED IMAGE RETRIEVAL
A typical CBIR system automatically extracts the visual attributes (color, shape, texture and spatial information) of each image in the database based on its pixel values and stores them in a separate database within the system called the feature database [5, 6]. The feature data for each visual attribute of each image is very much smaller in size than the image data. The feature database contains an abstraction of the images in the image database; each image is represented by a compact representation of its contents (color, texture, shape and spatial information) in the form of a fixed-length, real-valued, multi-component feature vector or signature. The user usually prepares a query image and presents it to the system. The system extracts the visual attributes of the query image in the same mode as it does for each database image, identifies the images in the database whose feature vectors match those of the query image, and sorts the best matching objects according to their similarity value. During operation the system processes compact feature vectors rather than the large image data, which makes CBIR cheap, speedy and proficient compared with text-based retrieval. A CBIR system can be used in one of two ways: first, precise image matching, i.e., matching an example image against an image in the image database; second, approximate image matching, which finds images that very closely match a query image [7].

Fig. 1. Block diagram of semantic image retrieval
Basically, CBIR uses two approaches for retrieving images from the image database:
Text-based approach (indexes images using keywords): text-based methods take keyword descriptions as input and return similar types of images as output; examples include Google and Lycos [14].
Content-based approach (indexes images using images): the content-based approach uses an image as the input query and generates similar types of images as output [14].
RELATED WORK
Various methods have been proposed to extract image features from very large databases. In this section, several image retrieval algorithms are discussed:
a) Jisha K. P., Thusnavis Bella Mary I. and Dr. A. Vasuki [8] proposed a semantic-based image retrieval system using the Gray Level Co-occurrence Matrix (GLCM) for texture attribute extraction. On the basis of the texture features, a semantic explanation is given to the extracted textures. Images are retrieved according to user satisfaction, thereby lessening the semantic gap between low-level and high-level features.
b) Swati Agarwal, A. K. Verma and Preetvanti Singh [9] presented an algorithm for image retrieval based on shape and texture features, not only on color information. First, the input image is decomposed into wavelet coefficients, which broadly capture the horizontal, vertical and diagonal features of the image. Following the wavelet transform (WT), the Edge Histogram Descriptor (EHD) is applied to the selected wavelet coefficients to gather information about the foremost edge orientations. The combination of the DWT and EHD methods increases the performance of the image retrieval system for shape- and texture-based retrieval. The performance of diverse wavelets is also compared to find the appropriateness of a particular wavelet function for image retrieval. The proposed algorithm was trained and examined on a large image database; the retrieval results are reported in terms of precision and recall and compared with other proposed schemes to show the supremacy of the scheme.
c) Xiang-Yang Wang, Hong-Ying Yang and Dong-Ming Li [10] proposed a new content-based image retrieval technique using color and texture information, which achieves higher retrieval effectiveness. Initially, the image is converted from RGB space to opponent chromaticity space, and the individuality of the color contents of the image is captured using Zernike chromaticity distribution moments in the chromaticity space. Next, the texture attributes are extracted using a rotation-invariant and scale-invariant image descriptor in the contourlet domain, which provides an efficient and flexible approximation of early processing in the human visual system. Lastly, the amalgamation of the color and texture information provides a vigorous feature set for color image retrieval. The experimental results reveal that the proposed color image retrieval is more accurate and efficient in retrieving user-interested images.
d) S. Manoharan and S. Sathappan [11] implemented high-level filtering using Anisotropic Morphological Filters, a hierarchical Kalman filter and a particle filter, proceeding with a feature extraction method based on color and gray-level features, after which the results were normalized.
e) Heng Chen and Zhicheng Zhao [12] described a relevance feedback method for image retrieval. Relevance feedback (RF) is an efficient approach for content-based image retrieval (CBIR) and a realistic step toward shortening the semantic gap between low-level visual features and high-level perception. An SVM-based RF algorithm is proposed to improve the performance of image retrieval: in classifier training, a sample-expanding method is adopted to balance the proportion of positive and negative samples, and a fusion method for multiple classifiers based on adaptive weighting is then used to vote on the final query results.
f) Monika Daga and Kamlesh Lakhwani [13] proposed a new CBIR classification developed using the negative selection algorithm (NSA) of artificial immune systems (AIS). MATLAB functionalities are used to build a fresh CBIR system that has reduced complexity and whose retrieval effectiveness increases by a percentage depending on the image type.
g) S. Nandagopalan, Dr. B. S. Adiga and N. Deepak [15] proposed a novel technique for generalized image retrieval based on semantic contents, grouping three feature extraction methods: color, texture, and the edge histogram descriptor. There is scope to include new features in the future for better retrieval efficiency, and any combination of these techniques that suits the application can be used for retrieval; this is presented through a User Interface (UI) in the form of relevance feedback. The image properties analyzed in this work use computer vision and image processing algorithms: for color, the histograms of the images are calculated; for texture, co-occurrence-matrix-based entropy, energy, etc. are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is found. For the retrieval of images, a new idea based on a greedy approach is developed to lessen the computational complexity.
h) G. Pass [16] proposed a novel method to describe spatial features in a more precise way; moreover, the model is invariant to scaling, rotation and shifting. In the proposed method, the images are segmented into several pieces and the ROI (Region of Interest) technique is applied to extract the ROI region and enhance user interaction.
i) Yamamoto [17] proposed a content-based image retrieval system which takes account of the spatial information of colors by using multiple histograms. The proposed system roughly captures the spatial information of colors by dividing an image into two rectangular sub-images recursively. The method divides an image into two dominant regions using a vertical or horizontal straight line, even when the image has three or more color regions and the shape of each region is not rectangular. In each sub-image, the division process continues recursively until each region has a homogeneous color distribution or the size of each region becomes smaller than a given threshold value. As a result, a binary tree which roughly represents the color distribution of the image is derived; the tree structure facilitates the evaluation of similarity among images.
DIFFERENT IMAGE RETRIEVAL TECHNIQUES
Various techniques have been proposed to retrieve images effectively and efficiently from large sets of image data; some of these methods are described below.

Relevance Feedback:
Every user's needs are different and time-varying. A typical scenario for relevance feedback in content-based image retrieval is as follows [19] (a minimal sketch of such a loop is given after the steps):
Step 1: The machine provides initial retrieval results.
Step 2: The user provides an opinion on the currently exhibited images, judging whether they are relevant or irrelevant to his/her request.
Step 3: The machine learns from the user's judgment and searches again for images matching the query; go to Step 2.
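One common way to realize Step 3 is a Rocchio-style query update, which moves the query feature vector toward images marked relevant and away from those marked irrelevant. The sketch below is written under that assumption; the papers surveyed here use more elaborate learners (e.g., SVMs), and the vectors and weights are invented:

    # Rocchio-style relevance feedback: shift the query vector toward relevant
    # examples and away from irrelevant ones (illustrative sketch).
    import numpy as np

    def update_query(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.25):
        q = alpha * query
        if len(relevant):
            q = q + beta * np.mean(relevant, axis=0)      # pull toward relevant images
        if len(irrelevant):
            q = q - gamma * np.mean(irrelevant, axis=0)   # push away from irrelevant ones
        return q

    query = np.array([0.2, 0.5, 0.1])                     # initial query in feature space
    relevant = np.array([[0.3, 0.6, 0.1], [0.25, 0.55, 0.2]])
    irrelevant = np.array([[0.9, 0.1, 0.7]])
    print(update_query(query, relevant, irrelevant))      # refined query for the next round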
Gaussian Mixture Models:
Gaussian mixture models are density models which include a number of component Gaussian functions; these components are combined with different weights to form a multi-modal density. Gaussian mixture models are semi-parametric and can be used instead of non-parametric histograms (which can also be used to approximate densities); they offer high flexibility and precision in modeling the underlying distribution of sub-band coefficients.
Consider N texture classes labeled by n ∈ {1, …, N}, related to different entities. In order to classify a pixel, the neighborhood of that pixel must be considered: features of S×S sub-image blocks can be computed and classes assigned to these blocks [20]. The set of blocks is represented by B. The neighborhood of a block b is called its patch P(b), defined as the group of blocks in a larger T×T sub-image with b at its centre. D_b denotes the data associated with block b, and v_b ∈ {1, …, N} is the classification of b. The classification can be done based on the rule in Equation (1):

v_b = argmax_n ∏_{b' ∈ P(b)} Pr(D_{b'} | v_{b'} = n)    (1)

Thus, all the blocks in P(b) receive the class n that maximizes the probability of the data in P(b), which reduces the computation time needed to classify the texture. The data D_b linked with each block is a vector of features. For each texture class, a probability distribution that represents the feature statistics of a block of that class must be selected; the probability obtained is a convex combination of M Gaussian densities, Equation (2):

Pr(D_b | v_b = n) = Σ_{i=1}^{M} w_{n,i} N(D_b; μ_{n,i}, Σ_{n,i})    (2)

where N(·; μ_{n,i}, Σ_{n,i}) is a Gaussian of mean μ_{n,i} and covariance Σ_{n,i}; the parameters for a given class n are thus {w_{n,i}, μ_{n,i}, Σ_{n,i}}, i ∈ {1, …, M}.
A GMM is the natural model when a texture class contains a number of distinct subclasses. Thus, using a Gaussian mixture model to retrieve the texture properties of the image gives the desired accuracy.
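A minimal sketch of this idea follows: one Gaussian mixture is fitted per texture class on block feature vectors, and a new block is assigned to the class whose mixture gives the highest likelihood. The features and class data are random stand-ins, not taken from any dataset in this paper:

    # Per-class GMMs for texture block classification (illustrative sketch).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    class_features = {
        0: rng.normal(0.0, 1.0, size=(300, 4)),   # stand-in features for texture class 0
        1: rng.normal(3.0, 1.0, size=(300, 4)),   # stand-in features for texture class 1
    }
    models = {n: GaussianMixture(n_components=3, random_state=0).fit(F)
              for n, F in class_features.items()}

    block = rng.normal(3.0, 1.0, size=(1, 4))     # feature vector of one block to classify
    label = max(models, key=lambda n: models[n].score(block))   # highest log-likelihood wins
    print("predicted texture class:", label)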
Semantic template:
This technique is not so widely used. Semantic templates are generated to support high-level image retrieval; a semantic template is usually defined as the "representative" feature of a concept, calculated from a collection of sample images [8].
Wavelet Transform:
Wavelet transforms are based on diminutive waves, called wavelets, of varying frequency and limited duration. The discrete wavelet transform decomposes the image into four parts in a 1-level decomposition: a high-frequency part (HH), a high-low part (HL), a low-high part (LH) and a low-frequency part (LL); moments of each frequency part are then computed, stored and used as features to retrieve the images. Texture entropy, contrast and coarseness are the most used properties. Statistical features of grey levels are one of the efficient ways to classify texture. The Grey Level Co-occurrence Matrix (GLCM) is used to extract second-order statistics from an image, and GLCMs have been used very profitably for texture calculations: all the features are computed from the GLCM and stored in the database. The GLCM provides good results, but it operates in the spatial domain and is therefore more error-prone. The CCH (Contrast Context Histogram) finds the features of the query image and of the other images stored in the database; CCH is in the spatial domain and presents a global distribution. MPEG descriptors such as the Edge Histogram Descriptor have been used for texture; the edge histogram differentiates edges according to their direction [20].
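The sketch below combines the two steps just described: a 1-level DWT decomposition followed by GLCM statistics. It is illustrative only and assumes the PyWavelets and scikit-image packages (function names as in recent scikit-image releases):

    # DWT sub-band moments plus GLCM texture statistics (illustrative sketch).
    import numpy as np
    import pywt
    from skimage.feature import graycomatrix, graycoprops

    img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in grey image

    # 1-level DWT: LL approximation and LH/HL/HH detail sub-bands.
    LL, (LH, HL, HH) = pywt.dwt2(img, "haar")
    subband_feats = [np.abs(sb).mean() for sb in (LL, LH, HL, HH)]   # one simple moment per sub-band

    # Second-order statistics from the grey-level co-occurrence matrix.
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
    texture_feats = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "energy", "homogeneity")]

    feature_vector = np.array(subband_feats + texture_feats)
    print(feature_vector)                                            # stored as the image's texture signature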
Gabor filter:
Gabor filters are widely used for texture analysis because of their similar characteristics to human perception. A two-dimensional Gabor function g(x, y) consists of a sinusoidal plane wave of some frequency and orientation (the carrier), modulated by a two-dimensional translated Gaussian envelope. Gabor filters have one mother filter from which the other filter banks are generated; their features are calculated and stored in the database (a sketch of such a filter bank is given after Fig. 2). The structure of the different types of edges is shown in Fig. 2 [20].


Fig. 2. Different types of edges
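To make the filter-bank idea concrete, the sketch below builds a small Gabor filter bank and extracts mean/variance responses as texture features. It is an illustrative sketch using scikit-image and SciPy, not code from the surveyed papers; the orientations and frequencies are arbitrary choices:

    # Gabor filter bank features (illustrative sketch).
    import numpy as np
    from scipy import ndimage
    from skimage.filters import gabor_kernel

    img = np.random.rand(64, 64)                              # stand-in grey image

    feats = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):    # four orientations
        for frequency in (0.1, 0.3):                          # two carrier frequencies
            kernel = np.real(gabor_kernel(frequency, theta=theta))
            response = ndimage.convolve(img, kernel, mode="wrap")
            feats.extend([response.mean(), response.var()])   # per-filter statistics
    print(np.array(feats))                                    # 16-dimensional texture descriptor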
Support Vector Machine
Support vector machine (SVM) is a supervised learning technique that analyzes data and identifies patterns for classification. It takes a set of inputs and learns the desired output form for each input [21]; this process is known as classification, and when the output is continuous, regression is performed. To construct a maximum-margin separating hyper-plane, SVM maps the input vectors into a higher-dimensional feature space. The feature space is an input space in which similarity is measured with the help of a kernel function; it is a high-dimensional space in which linear separation becomes much easier than in the input space [22]. Raw data are transformed into fixed-length sample vectors. Two terms are used in connection with the feature space: feature values and feature vectors. The features of an image are called feature values, and these values presented to the machine as a vector are known as a feature vector. The kernel function used in a kernel method performs operations such as classification and clustering on different categories of data (text documents, sequences, vectors, groups of points, images, graphs, etc.); it maps the input data into a higher-dimensional feature space in which the data can be more easily separated or better structured [23]. The points in the feature space that lie closest to the separating hyper-plane are called support vectors, and they determine the location of the separator. The distance from the decision surface to the closest data point determines the margin of the classifier.
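A minimal sketch of such a classifier (assuming scikit-learn; the feature vectors X and labels y are hypothetical stand-ins for pre-extracted image features):

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 64))       # 100 images, 64-D feature vectors
    y = rng.integers(0, 2, size=100)     # two image classes

    # The RBF kernel maps features into a higher-dimensional space implicitly.
    clf = SVC(kernel='rbf', C=1.0, gamma='scale').fit(X, y)
    print(clf.predict(X[:5]))            # predicted classes for five query images
    print(clf.support_vectors_.shape)    # the support vectors found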

Fig. 3. Linear separating hyper-planes for two class separation


Color Histogram:
It is a standard representation of the color characteristic in CBIR systems. It is very efficient at describing both local and global color features. It computes the chromatic information of an image and is invariant to translation and rotation about the view axis; however, when histograms are computed over a large-scale image database their efficiency is not satisfactory, and the joint histogram technique was introduced to overcome this. Color histograms are a fundamental technique for retrieving images and are extensively used in CBIR systems. The color space is segmented, and for every segment the pixels whose color falls within its bandwidth are counted, which yields the relative frequencies of the counted colors. We use the RGB color space for the histograms; only minor differences have been observed with other color spaces. The color histogram H(m) is a discrete probability function of the image color, used to determine the joint probability of the intensities of the three color channels. More formally, the color histogram is defined as

$h_{a,b,c} = N \cdot \mathrm{prob}(a, b, c)$

where a, b, c index the three color channels (RGB), and

$H(m) = [h_1, h_2, \ldots, h_n], \qquad H_k = n_k / N, \quad k = 1, 2, \ldots, n$

where N is the number of pixels in image M and $n_k$ is the number of pixels with value k.
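A minimal NumPy sketch of this binned joint RGB histogram (the bin count is an illustrative choice):

    import numpy as np

    def rgb_histogram(img, bins=8):
        # img: (H, W, 3) uint8 RGB image. Returns a normalized joint
        # histogram with bins**3 entries, so that H_k = n_k / N.
        q = (img.astype(np.int32) * bins) // 256              # quantize each channel
        idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
        counts = np.bincount(idx.ravel(), minlength=bins**3)
        return counts / counts.sum()                          # relative frequencies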

2D Dual-Tree Discrete Wavelet Transform:
The DDWT was developed to overcome two main drawbacks of the DWT: shift variance and poor directional selectivity [24]. With carefully designed filter banks, the DDWT mainly has the following advantages: approximate shift invariance, directional selectivity, limited redundancy, and computational efficiency comparable to the DWT. Either the real part or the imaginary part of the DDWT [24] yields perfect reconstruction and can thus be employed as a stand-alone transform. We use the magnitudes of the sub-bands to determine the feature vector. The implementation of the DDWT is very simple. An input image is decomposed by two sets of filter banks, $\{h_0, h_1\}$ and $\{g_0, g_1\}$, separately, filtering the image horizontally and then vertically just as the conventional 2D DWT does. Eight sub-bands are then acquired: $LL_a$, $HL_a$, $LH_a$, $HH_a$ and $LL_b$, $HL_b$, $LH_b$, $HH_b$.
Each high-pass sub-band from one filter bank is combined with the corresponding sub-band of the other filter bank by simple linear operations: averaging or differencing. The size of each sub-band is the same as that of the 2D DWT at the same level, but there are six high-pass sub-bands instead of three at each level. The two low-pass sub-bands, $LL_a$ and $LL_b$, are recursively decomposed up to a desired level within each branch. The basis functions of the 2D DDWT and the 2D DWT are shown in Fig. 4(a) and Fig. 4(b) respectively. Each DDWT basis function is oriented at a specific direction among ±15°, ±45°, and ±75°; conversely, the basis function of the HH sub-band of the 2D DWT mixes the ±45° directions together.
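A minimal sketch of sub-band feature extraction using the ordinary 2-D DWT via PyWavelets (the dual tree runs two such filter-bank trees in parallel; a full DDWT implementation is beyond this sketch):

    import numpy as np
    import pywt

    def dwt_subband_features(img, wavelet='db2', levels=2):
        # Mean magnitude of each detail sub-band (LH, HL, HH) per level.
        feats = []
        ll = np.asarray(img, dtype=float)
        for _ in range(levels):
            ll, (lh, hl, hh) = pywt.dwt2(ll, wavelet)  # one filter-bank stage
            feats += [np.abs(s).mean() for s in (lh, hl, hh)]
        return np.array(feats)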


Fig. 4. 2-D discrete wavelet transform sub-bands: (a) 2D DDWT basis functions, (b) 2D DWT basis functions, (c) sub-band layout
CONCLUSION AND FUTURE WORK
Past research in content-based image retrieval (CBIR) has emphasized image processing, low-level feature extraction, and related topics. Extensive experiments on CBIR systems demonstrate that low-level image features cannot always describe the high-level semantic concepts in users' minds. It is believed that CBIR systems should provide maximum support in bridging the 'semantic gap' between low-level visual features and the richness of human semantics. This paper has discussed the literature on different content-based retrieval methods, such as SVM-based retrieval, SVM with relevance feedback, and DWT-based methods; some of these methods shorten the semantic gap between images more effectively than others, so future work needs to develop techniques that reduce the semantic gap and increase the information gain more efficiently and effectively.

REFERENCES:
[1] H. Mohamadi, A. Shahbahrami, J. Akbari, "Image retrieval using the combination of text-based and content-based algorithms", Journal of AI and Data Mining, published online 20 February 2013.
[2] Pabboju, S. and Gopal, R. (2009). A Novel Approach For Content-Based Image Global and Region Indexing and Retrieval System Using Features. International Journal of Computer Science and Network Security, 9(2), 15-21.
[3] Li, X., Shou, L., Chen, G., Hu, T. and Dong, J. (2008). Modelling Image Data for Effective Indexing and Retrieval in Large General Image Databases. IEEE Transactions on Knowledge and Data Engineering, 20(11), 1566-1580.
[4] Demerdash, O., Kosseim, L. and Bergler, S. (2008). CLaC at ImageCLEFphoto 2008, ImageCLEF Working Notes.
[5] K. C. Sia and Irwin King, "Relevance feedback based on parameter estimation of target distribution", in IEEE International Joint Conference on Neural Networks, pages 1974-1979, 2002.
[6] Simon Tong and Edward Chang, "Support vector machine active learning for image retrieval", in Proceedings of the Ninth ACM International Conference on Multimedia, pages 107-118, 2001.
[7] M. E. J. Wood, N. W. Campbell, and B. T. Thomas, "Iterative refinement by relevance feedback in content-based digital image retrieval", in ACM Multimedia 98, pages 13-20, ACM, 1998.
[8] Jisha K. P., Thusnavis Bella Mary I., Dr. A. Vasuki, "An Image Retrieval Technique Based On Texture Features Using Semantic Properties", International Conference on Signal Processing, Image Processing and Pattern Recognition (ICSIPR), 2013.
[9] Swati Agarwal, A. K. Verma, Preetvanti Singh, "Content Based Image Retrieval using Discrete Wavelet Transform and Edge Histogram Descriptor", International Conference on Information Systems and Computer Networks, proceedings in IEEE Xplore, 2013.
[10] Xiang-Yang Wang, Hong-Ying Yang, Dong-Ming Li, "A new content-based image retrieval technique using color and texture information", Computers & Electrical Engineering, Volume 39, Issue 3, April 2013, Pages 746-761.
[11] S. Manoharan, S. Sathappan, "A Novel Approach For Content Based Image Retrieval Using Hybrid Filter Techniques", 8th International Conference on Computer Science & Education (ICCSE 2013), April 26-28, 2013, Colombo, Sri Lanka.
[12] Heng Chen, Zhicheng Zhao, "An effective relevance feedback algorithm for image retrieval", 978-1-4244-6853-9/10, 2010 IEEE.
[13] Monika Daga, Kamlesh Lakhwani, "A Novel Content Based Image Retrieval Implemented By NSA Of AIS", International Journal of Scientific & Technology Research, Volume 2, Issue 7, July 2013, ISSN 2277-8616.
[14] Patheja P. S., Waoo Akhilesh A. and Maurya Jay Prakash, "An Enhanced Approach for Content Based Image Retrieval", International Science Congress Association, Research Journal of Recent Sciences, ISSN 2277-2502, Vol. 1 (ISC-2011), 415-418, 2012.
[15] S. Nandagopalan, Dr. B. S. Adiga, and N. Deepak, "A Universal Model for Content-Based Image Retrieval", World Academy of Science, Engineering and Technology, Vol. 2, 2008-10-29.
[16] Weiwen Zou, Guocan Feng, "ROI Image Retrieval Based on the Spatial Structure of Objects", Mathematics and Computational School, Sun Yat-sen University, Guangzhou, China, 510275, paper 05170290.
[17] H. Yamamoto, H. Iwasa, N. Yokoya, and H. Takemura, "Content-Based Similarity Retrieval of Images Based on Spatial Color Distributions", ICIAP '99, Proceedings of the 10th International Conference on Image Analysis and Processing.
[18] Patil, P. B. and M. B. Kokare, "Relevance feedback in content based image retrieval: A review", J. Appli. Comp. Sci. Math., 10: 41-47.
[19] Shanmugapriya, N. and R. Nallusamy, "A new content based image retrieval system using GMM and relevance feedback", Journal of Computer Science 10 (2): 330-340, 2014, ISSN: 1549-3636.
[20] Mit Patel, Keyur Brahmbhatt, Kanu Patel, "Feature based Image retrieval based on clustering and non-clustering techniques using low level image features", International Journal of Advance Engineering and Research Development (IJAERD), Volume 1, Issue 3, April 2014, e-ISSN: 2348-4470, print-ISSN: 2348-6406.
[21] Sandeep Kumar, Zeeshan Khan, Anurag Jain, "A Review of Content Based Image Classification using Machine Learning Approach", International Journal of Advanced Computer Research, ISSN (print): 2249-7277, ISSN (online): 2277-7970, Volume 2, Number 3, Issue 5, September 2012.
[22] T. Jyothirmayi, Suresh Reddy, "An Algorithm for Better Decision Tree", (IJCSE) International Journal on Computer Science and Engineering, Vol. 02, No. 09, 2010, 2827-2830.
[23] Sunkari Madhu, "Content Based Image Retrieval: A Quantitative Comparison between Query by Color and Query by Texture", Journal of Industrial and Intelligent Information, Vol. 2, No. 2, June 2014.
[24] N. S. T. Sai, R. C. Patil, "Image Retrieval Using 2D Dual-Tree Discrete Wavelet Transform", International Journal of Computer Applications (0975-8887), Volume 14, No. 6, February 2011.

Evaluating the Efficiency of Bilateral Filter
Harsimran Kaur¹, Neetu Gupta²
¹Research Scholar (M.Tech), ECE Deptt, GIMET
²Asst. Prof., ECE Deptt, GIMET
E-Mail- er.harsimrankaur@gmail.com

Abstract- Bilateral filtering is a simple, non-iterative scheme for texture removal and edge-preserving, noise-reducing smoothing. The intensity value at each pixel in an image is replaced by a weighted average of intensity values from nearby pixels, with weights based on a Gaussian distribution. Thus noise is averaged out while signal strength is preserved. The performance parameters of the bilateral filter have been evaluated; the design and implementation are done in MATLAB using the Image Processing Toolbox. The comparison has shown that the bilateral filter is quite effective for random and Gaussian noise.
Keywords- Filtering, noise, Gaussian noise, texture, GUI, artifact, compression
INTRODUCTION
A bilateral filter is a non-linear, edge-preserving and noise-reducing smoothing filter [1]. The intensity value at each pixel in an image is replaced by a weighted average of intensity values from nearby pixels; this weight can be based on a Gaussian distribution. Sharp edges are preserved by systematically looping through each pixel and adjusting the weights of the adjacent pixels accordingly [2]. The bilateral filter is defined as:

$I^{\text{filtered}}(x) = \frac{1}{W_p} \sum_{x_i \in \Omega} I(x_i)\, f_r(\lVert I(x_i) - I(x) \rVert)\, g_s(\lVert x_i - x \rVert)$

where:
- $I^{\text{filtered}}$ is the filtered image;
- $I$ is the original input image to be filtered;
- $x$ are the coordinates of the current pixel to be filtered;
- $\Omega$ is the window centered in $x$;
- $f_r$ is the range kernel for smoothing differences in intensities (this function can be a Gaussian function);
- $g_s$ is the spatial kernel for smoothing differences in coordinates (this function can be a Gaussian function);
- $W_p$ is the normalization factor, the sum of all the weights.
Gaussian low-pass filtering computes a weighted average of pixel values in the neighbourhood, in which the weights decrease with distance from the neighbourhood centre. However, such averaging consequently blurs the image. How can we prevent averaging across edges while still averaging within smooth regions? Bilateral filtering is a simple, non-iterative scheme for edge-preserving smoothing. The basic idea underlying bilateral filtering is to do in the range of an image what traditional filters do in its domain [7],[10].
THE GAUSSIAN CASE
A simple and important case of bilateral filtering is shift-invariant Gaussian filtering, in which both the closeness function c and the similarity function s are Gaussian functions of the Euclidean distance between their arguments [4]. More specifically, c is symmetric:

$c(\xi, x) = e^{-\frac{1}{2}\left(\frac{d(\xi, x)}{\sigma_d}\right)^2}$

where $d(\xi, x) = \lVert \xi - x \rVert$ is the Euclidean distance between $\xi$ and $x$.
METHODOLOGY USED
The following flowchart gives the procedure of the bilateral filtering algorithm with an image f(x,y).

Fig. 1 Bilateral filter algorithm. The flowchart proceeds as follows: (1) input image f(x,y); (2) define w = half-width of the window, sigma_d = Gaussian distance (spatial) weight and sigma_r = Gaussian intensity (range) weight; (3) add Gaussian noise to the input image f(x,y); (4) calculate the Gaussian distance weights, G = exp(-(X.^2+Y.^2)/(2*sigma_d^2)); (5) calculate the intensity weights, H = exp(-(I-A(i,j)).^2/(2*sigma_r^2)); (6) apply the filtering values to the noisy image, F = H.*G((iMin:iMax)-i+w+1,(jMin:jMax)-j+w+1); (7) obtain the resultant filtered image as the final output.
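A minimal NumPy sketch of this brute-force algorithm (the variable names mirror the flowchart; the [0, 1] intensity range and border handling are assumptions):

    import numpy as np

    def bilateral_filter(img, w=5, sigma_d=3.0, sigma_r=0.1):
        # img: 2-D grayscale array scaled to [0, 1].
        X, Y = np.meshgrid(np.arange(-w, w + 1), np.arange(-w, w + 1))
        G = np.exp(-(X**2 + Y**2) / (2 * sigma_d**2))   # Gaussian distance weights
        out = np.zeros_like(img)
        rows, cols = img.shape
        for i in range(rows):
            for j in range(cols):
                i0, i1 = max(i - w, 0), min(i + w, rows - 1)
                j0, j1 = max(j - w, 0), min(j + w, cols - 1)
                patch = img[i0:i1 + 1, j0:j1 + 1]
                H = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))  # intensity weights
                F = H * G[i0 - i + w:i1 - i + w + 1, j0 - j + w:j1 - j + w + 1]
                out[i, j] = (F * patch).sum() / F.sum()  # normalized weighted average
        return out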
TEST BED
The following table shows the experimental images related to the project with their size and type of format.
TABLE X
IMAGES USED IN SIMULATION
S.No. TITLE OF THE IMAGE SIZE FORMAT
1. Colg1 2.05 MB JPG
2. 2 83.3 KB JPG
3. Sim1 2.14 MB JPG
4. Mandrill 31.6 KB JPG
5. fruits 988 KB JPG

PERFORMANCE PARAMETERS
A good objective quality measure should reflect the distortion of an image due to, for example, blurring, noise, compression, or sensor inadequacy. Such measures could be instrumental in predicting the performance of vision-based algorithms for tasks such as feature extraction, image-based measurement, detection, tracking, and segmentation. Quantitative measures of image quality can be classified according to two criteria:
1. the number of images used in the measurement;
2. the nature or type of measurement.
According to the first criterion, the measures divide into two classes: univariate and bivariate. A univariate measure uses a single image, whereas a bivariate measure is a comparison between two images. A number of measures have been defined to determine the closeness of the degraded and original image fields. On this basis, the following measures are studied and analysed:
1. Pixel-difference-based measures (e.g. the Mean Square Error and Maximum Difference).
2. Correlation-based measures: a variant of correlation-based measures can be obtained by considering absolute mean and variance statistics (e.g. Structural Correlation/Content, Normalized Cross-Correlation) [1],[5].

A. Mean Square Error
In the image coding and computer vision literature, the most frequently used measures are deviations between the original and coded images, of which the mean square error (MSE) and signal-to-noise ratio (SNR) are the most common. The reasons for the widespread popularity of these metrics are their mathematical tractability and the fact that it is often straightforward to design systems that minimize the MSE; however, the MSE cannot capture artifacts like blur or blocking. The effectiveness of the coder is optimised by having the minimum MSE at a particular compression, and MSE is computed using the following equation:
$\mathrm{MSE} = \frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\left[f(x,y) - \hat{f}(x,y)\right]^2$

where $f(x,y)$ is the original image, $\hat{f}(x,y)$ is the filtered image, and $M \times N$ is the image size.

B. Peak Signal-to-Noise Ratio
Larger SNR and PSNR indicate a smaller difference between the original (noise-free) and reconstructed images. The main advantage of this measure is its ease of computation, but it does not reflect perceptual quality. An important property of PSNR is that a slight spatial shift of an image can cause a large numerical distortion with no visual distortion, and conversely a small average distortion can result in a damaging visual artifact if all the error is concentrated in a small but important region [12]. This metric neglects global and composite errors. PSNR is calculated using the following equation:

$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{255^2}{\mathrm{MSE}}\right) \ \mathrm{dB}$
C. Average Difference
A lower value of Average Difference (AD) gives a "cleaner" image, as more noise is reduced; it is computed using the following equation:

$\mathrm{AD} = \frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\left[f(x,y) - \hat{f}(x,y)\right]$
D. Maximum Difference
Maximum difference (MD) is calculated using the equation below. It has a good correlation with MOS for all tested compression techniques, so it is preferred as a very simple reference measure of compressed picture quality across different compression systems. A large value of MD means that the image is of poor quality.

$\mathrm{MD} = \max\left|f(x,y) - \hat{f}(x,y)\right|$
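A minimal NumPy sketch of these four measures (the peak value of 255 assumes 8-bit images):

    import numpy as np

    def quality_metrics(f, g):
        # f: original image, g: filtered image (same shape).
        err = f.astype(float) - g.astype(float)
        mse = np.mean(err ** 2)                    # Mean Square Error
        psnr = 10 * np.log10(255.0 ** 2 / mse)     # Peak Signal-to-Noise Ratio (dB)
        ad = np.mean(err)                          # Average Difference
        md = np.max(np.abs(err))                   # Maximum Difference
        return mse, psnr, ad, md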
SIMULATION RESULTS
The bilateral filtering algorithm is applied to the experimental images, which are displayed in a GUI (Graphical User Interface). It has a 'LOAD' button to browse for the image, an 'APPLY' button to apply the filtering action, and a 'CLOSE' button for closing the GUI window. The following snapshots show the simulation results.

Fig. 2 Applying filtering algorithm

Fig. 3 Filtered image after applying bilateral filtering
Efficiency parameters, i.e. the values obtained from the bilateral filtering action applied to image no. 1 from the table above:
Fig. 4 performance parameters
On comparison with the other (median) filter, the values obtained for the same image are:

Fig. 5 (contd.) performance parameters
TABLE XII
EFFICIENCY PARAMETERS

S.No.  Parameter           Bilateral filter  Median filter
1.     Mean square error   0.0050            15296
2.     Peak SNR            71.1815           6.2849
3.     Average difference  -0.0046           115.1108
4.     Maximum difference  0.4749            254.1485

Evaluation: The values obtained for the different parameters are given in the tables above. The study shows that the bilateral filter has greater efficiency; for example, for image no. 1 the peak SNR is higher for the bilateral filter than for the median filter [9],[11]. Similarly, the other parameter values depict the higher efficiency of the bilateral filter.
VII. CONCLUSION
This work has presented a detailed study of the bilateral filtering technique. The survey shows that the bilateral filter is based on the concept of the Gaussian distribution. The bilateral filter is a non-linear filter that reduces noise
in such a way that the edges are preserved. The survey has shown that the bilateral filter is quite effective for random noise, which is why it is preferred over others. The design and implementation are done in MATLAB using the Image Processing Toolbox. The comparison has shown that the bilateral filter is quite effective for random and Gaussian noise.

FUTURE SCOPE
- To enhance the visibility of digital images.
- To reduce random noise in images.
- To remove fog or haze from images.
- To filter in such a way that the edges are preserved.
Since the bilateral filter is unable to remove salt-and-pepper noise, in the near future we will extend this research work by integrating the bilateral filter with the median filter, which can remove salt-and-pepper noise.

REFERENCES:
[1] Zhengguo Li, Jinghong Zhen, Zijian Zhu, Shiqian Wu, Susanto Rahardja, "A Bilateral Filter in Gradient Domain", Signal Processing Department, Institute for Infocomm Research, 1 Fusionopolis Way, Singapore, ICASSP 2012, p. 1113, ©2012 IEEE.
[2] Bahadir K. Gunturk, "Fast Bilateral Filter with Arbitrary Range and Domain Kernels", IEEE.
[3] Chao-Chung Cheng, Chung-Te Li, Po-Sen Huang, Tsung-Kai Lin, Yi-Min Tsai, and Liang-Gee Chen, "A Block-Based 2D-to-3D Conversion System with Bilateral Filter", Graduate Institute of Electronics Engineering, National Taiwan University, Taiwan, R.O.C.
[4] Chih-Hsing Lin, Jia-Shiuan Tsai, and Ching-Te Chiu, "Switching Bilateral Filter with a Texture/Noise Detector for Universal Noise Removal", IEEE Transactions on Image Processing, Vol. 19, No. 9, September 2010, p. 2307.
[5] Chao Zuo, Qian Chen, Guohua Gu, and Weixian Qian, "New Temporal High-Pass Filter Non-Uniformity Correction Based on Bilateral Filter", 440 Lab, JGMT, EEOT, Nanjing University of Science and Technology, Nanjing 210094, China (received September 26, 2010; accepted December 27, 2010).

Investigation of SMAW Joints By Varying Concentration of Rutile (TiO2) in Electrode Flux
Chirag Sharma¹, Amit Sharma², Pushpinder Sharma²
¹Scholar, Mechanical Department, Geeta Engineering College, Panipat, Haryana, India
²Assistant Professor, Mechanical Department, Geeta Engineering College, Panipat, Haryana, India
E-Mail- chirag485@gmail.com

Abstract— Our aim is to investigate SMAW joints by varying the concentration of rutile (TiO2) in the flux composition and studying its effect on the various characteristics of metal-cored coated electrodes, with the purpose of developing efficient and better rutile electrodes for structural mild steel. In this work five rutile metal-cored coated electrodes were prepared by increasing rutile (TiO2) at the expense of cellulose and Si-bearing components like mica and calcite in the fluxes. Mechanical properties such as micro hardness, tensile properties and impact toughness were measured, and metallographic studies were undertaken. Qualitative measurements of operational properties like porosity, slag detachability and arc stability were also carried out.
Keywords— Rutile (TiO2), composition of flux in various electrodes, hardness test, tensile test, impact test, slag detachability, porosity, microstructure of weld bead.
INTRODUCTION
Welding is a fabrication process that joins materials permanently, usually similar or dissimilar metals, by the use of heat causing fusion, with or without the application of pressure. SMAW is the arc welding process known even to a layman and can be considered a roadside welding process. When an arc is struck between an electrode and the workpiece, the electrode core wire and its coating melt; the latter provides a gas shield to protect the molten weld pool and the tip of the electrode from the ill effects of the atmospheric gases. The diameter of electrodes usually varies between 3.15 and 12.50 mm; the length varies between 350 and 450 mm.
EXPERIMENTAL METHOD
The accomplishment of the experiment includes the production of electrodes, extrusion of electrodes, micro hardness testing, tensile strength testing, impact strength testing, and microstructure examination of the weld beads produced by the five newly developed electrodes.
Process of Electrode Production
The ingredients in dry form were weighed and mixed for around 10-15 minutes to obtain a homogeneous dry flux mixture. A liquid silicate binder was then added to the mixed dry flux, followed by mixing for a further 10 minutes; this process is also known as wet mixing. The binder consists of a complex mixture of different alkali silicates with a wide range of viscosities. The flux coating ingredients commonly used are rutile, aluminite, CaCO3·MgCO3, cellulose, ferromanganese, quartz, calcite, china clay, mica, iron powder, talcum powder, and binding agents (sodium silicate).
The flux was then extruded onto a 3.15 mm diameter mild steel core wire with a flux coating thickness of 0.4-0.55 mm; the final diameter after coating is approximately 3.7 mm, giving a coating factor of 1.18, where the coating factor represents the ratio of the final electrode diameter to the core wire diameter.
The electrodes were baked after extrusion; the baking cycle consisted of 90 minutes at 140-150 °C. The electrodes were tested by laying weld beads on plates, and finally five types of flux coating composition were obtained by varying the rutile (TiO2) from 27 to 42 wt% at the expense of calcium fluoride, cellulose, calcite, and Si-bearing raw materials in the dry mix.

Extrusion of Electrodes
The final five electrodes were extruded using an injection moulding machine. All the electrodes were produced with the same wire and different powder raw material batches.
The coating dry-mix composition, with the corresponding weight percentage of the components, is shown in the following table. Coating composition (wt %):

Constituents     27% TiO2  31% TiO2  34.5% TiO2  39% TiO2  42% TiO2
Aluminite        16.47     16.47     16.42       16.39     16.32
CaCO3·MgCO3      7.6       7.5       7.3         7         7
Cellulose        5.43      5.2       4.6         3.9       3.6
Ferromanganese   6         5.5       5.5         5.5       5.5
Quartz           6.52      6.2       5.9         5.2       4.8
Calcite          9.8       8.3       7.6         7.1       6.3
China Clay       9.8       8.8       7.8         6.6       5.8
Mica             8.7       8.4       7.7         6.7       6.12
Talcum Powder    1.6       1.6       1.6         1.6       1.6
Iron Powder      1.6       1.1       1.1         1.1       1.1


RESULTS AND DISCUSSION
Slag Properties
The slag produced by all of the flux coatings is of good quality, i.e. all of them covered the bead completely. The bead was in good shape and clean after the removal of slag. The slag produced by the 31% TiO2 flux was observed to interfere with the weld pool under both current conditions, i.e. DCEP and DCEN. On the other hand, the 27%, 34.5%, 39% and 42% TiO2 slags did not interfere with the weld pool, and the weld beads obtained with these electrodes were smooth and clean.
Spatter
More spatter was observed in DCEP welding than in DCEN welding. Further, in DCEP the 27%, 31% and 34.5% TiO2 electrodes produced more spatter than the other electrodes. In general, the spatters were easy to remove and of medium size.

Fig. Weld beads obtained on welding with DCEN and DCEP
Operational Properties
In general, the arc stability in DCEN welding was better than in DCEP for all types of electrodes. The slag produced by the 27%, 31%, 34.5% and 39% TiO2 electrodes was thicker than that of the 42% TiO2 electrodes.
Slag detachability is good in DCEN welding for all electrodes; the slag was more difficult to detach in DCEP, especially with the 39% and 42% TiO2 electrodes.
The slag for all electrodes presented porosity, but it was more prominent in DCEP, especially with the 42% TiO2 electrodes.
Observations of porosity, arc stability and slag detachability during welding:
Coating     Current Type  Arc Stability  Slag Detachability  Porosity
27% TiO2    DCEP          Good           Good                Present
            DCEN          Good           Good                Present
31% TiO2    DCEP          Medium         Medium              Present
            DCEN          Good           Good                Present
34.5% TiO2  DCEP          Medium         Medium              Present
            DCEN          Good           Good                Present
39% TiO2    DCEP          Good           Medium              Present
            DCEN          Good           Good                Highly Present
42% TiO2    DCEP          Excellent      Good                Present
            DCEN          Excellent      Good                Highly Present

Micro hardness Measurements— The micro hardness was measured at five points on each side of the weld bead, counting the weld bead itself, on each specimen.
Micro Hardness Test Results (MVH):

[Chart] Microhardness (MVH) at the weld bead vs %age of TiO2 (base metal, 27%, 31%, 34.5%, 39%, 42%) for DCEP and DCEN.

Microhardness (MVH) variation along the test coupons, tabulated from the charts (distance from the weld bead in mm):

Electrode (current)  -12    -9     -6     -3     0      +3     +6     +9     +12
27% TiO2 (DCEP)      26.40  27.26  28.30  30.06  29.40  30.05  29.20  27.58  26.44
31% TiO2 (DCEP)      26.28  27.24  27.30  29.76  29.58  29.78  27.60  27.60  26.32
34.5% TiO2 (DCEP)    26.34  26.94  28.55  29.79  29.56  29.79  28.50  27.64  26.36
39% TiO2 (DCEP)      26.20  27.48  30.36  32.46  32.37  32.45  30.24  27.26  26.32
42% TiO2 (DCEP)      26.92  28.26  32.94  33.60  32.56  33.65  33.08  27.96  26.88
27% TiO2 (DCEN)      26.36  28.82  30.70  32.20  33.20  31.90  29.20  28.64  26.18
31% TiO2 (DCEN)      26.28  28.24  29.30  31.50  32.60  30.78  28.60  27.96  26.22
34.5% TiO2 (DCEN)    26.24  28.50  30.15  31.79  33.70  31.83  31.56  28.34  26.31
39% TiO2 (DCEN)      26.16  28.48  30.40  31.60  33.70  31.45  30.24  28.14  26.26
42% TiO2 (DCEN)      26.90  29.96  32.54  33.65  34.60  33.39  33.08  33.06  26.80

Tensile Properties Test Results
The results of the tensile property measurements, recovered from the histograms, are recorded below for DCEP and DCEN currents respectively. The elongation decreases with decreasing tensile strength.

Tensile properties vs %age of TiO2 (DCEP):
Specimen    Elongation (mm)  Load (kN)  Tensile Strength (N/mm²)
Base Metal  22               92         528
27% TiO2    16               106        323
31% TiO2    21               123        426
34.5% TiO2  18               131        431
39% TiO2    15               133        373
42% TiO2    19               143        352

Tensile properties vs %age of TiO2 (DCEN):
Specimen    Elongation (mm)  Load (kN)  Tensile Strength (N/mm²)
Base Metal  24               98         531
27% TiO2    19               116        423
31% TiO2    18               144        429
34.5% TiO2  20               134        439
39% TiO2    15               128        393
42% TiO2    14               126        382
Charpy V Notch Impact Test Results
Charpy V-notch test samples were prepared for the impact strength measurements. The test coupons were dipped in liquid nitrogen to lower their temperature from room temperature to -30 °C, -20 °C, -10 °C and 0 °C by varying the dipping time of the test coupons in the liquid nitrogen.
The results showed that the toughness of the weld coupon increases as the percentage of TiO2 is increased, for both types of current conditions. The variations of impact energy with temperature for DCEP and DCEN are shown in the figures below. Toughness is related to the hardness and tensile properties of the material: the toughness of the weld metal increased with a reduction in the tensile strength of the weld coupon, and an increment in toughness is also observable with an increment in the micro hardness of the weld coupon.

Energy Vs Temperature graph of impact Test Results (DCEP)

Energy Vs Temperature graph of impact Test Results (DCEN)
Microstructure Test Results
The microstructure of the base metal shows ferrite grains and small quantities of pearlite at the grain boundaries.
In the electrodes having 27% TiO2 and 31% TiO2, a small quantity of grain-boundary ferrite is observed, whereas acicular ferrite is prominently present. Pearlite is also observable, together with a minute quantity of martensite, which results in a small increment in micro hardness.
For the electrodes having 34.5%, 39% and 42% TiO2, acicular ferrite is much less abundant than in the 27% and 31% TiO2 microstructures. Along with pearlite, aggregates of cementite are observable, and precipitates of aligned martensite are also noticeable. The presence of cementite and aligned martensite results in increased micro hardness. This type of microstructure renders the weld metal with low ductility and increased toughness. The presence of martensite is more prominent in DCEN welding than in DCEP.


CONCLUSION
Penetration and bead width increased with increasing TiO2 percentage, in general for all types of electrodes and for DCEN current conditions.
The bead geometry produced in DCEN welding was better than that produced in DCEP.
Arc stability was observed to be good for the 42% TiO2 electrodes. The smoke level seemed to be reduced at higher percentages of TiO2. Slag detachability was generally good for the 34.5%, 39% and 42% TiO2 electrodes.
An overall increase in micro hardness at the weld bead was observed with increasing TiO2; the micro hardness increases due to the increase in percentage and migration of carbon and silicon.
An overall decrease in tensile strength was observed with increasing TiO2; the increment in silicon and carbon resulted in a reduction in the tensile strength of the weld metal.


An Analytical Study on Integration of Multibiometric Traits at Matching Score Level using Transformation Techniques
Santosh Kumar¹, Vikas Kumar¹, Arjun Singh¹
¹Asst. Professor, ECE Deptt, Invertis University, Bareilly
E-Mail- Santosh.v@invertis.org
Abstract— Biometrics is one of those emerging technologies exploited for identifying a person on the basis of physiological and behavioral characteristics. However, unimodal biometric systems face problems of lack of individuality, spoof attacks, non-universality, limited degrees of freedom, etc., which make these systems less precise and more error-prone. In order to overcome these problems, multibiometrics has become the favorite choice for verifying an individual and declaring him an imposter or genuine. The fusion or integration of multiple biometric traits can be done at any one of the four modules of a general multibiometric system; achieving fusion at the matching score level is preferable due to the sufficient amount of information available there. In this paper we present a comparative study of normalization methodologies, which are used to convert the different feature vectors of the individual traits into a common domain in order to combine them into a single feature vector.
Keywords— Biometric, Multibiometric, Normalization, Unimodal, Unsupervised Learning Rules, Imposter, Genuine.
1. INTRODUCTION
A biometric system is fundamentally a pattern recognition system. 'Biometric' is a Greek-derived word in which 'bio' stands for life and 'metric' for measurement. Biometrics has long been used in the science that studies living organisms for data analysis problems [1]. Different kinds of traits are used for authentication of individuality, such as fingerprint recognition, hand geometry, facial recognition, iris recognition, keystroke recognition, signature recognition, gait recognition, DNA (deoxyribonucleic acid), voice recognition and palm print [2]. In the conventional approach to security, the many password-cracking techniques in use today and the complexity required of passwords make such systems a less preferable choice. Further, it is easy for an application programmer to crack someone's password, and if an identity proof (token card or password) that an individual carries is lost, it might be used by an imposter and create problems. To overcome all these limitations, we use biometric techniques. In a unimodal system, only one trait is used for identification [3]. In fingerprint recognition, the user places his/her finger on the fingerprint sensor and is identified as genuine or as an imposter; but if residue from the previous user remains on the sensor, it may produce false results and the right identity of the individual will not be measured. In addition, facial recognition is highly dependent on the quality of the image, which is generally affected when a low-quality camera is used or by environmental factors; many a time, facial recognition systems fail in the verification of identical twins, or of father and son. In multimodal biometric systems, more than one physiological or behavioral characteristic is used for the enrollment, verification, or identification process. Multimodal biometric systems have some unique advantages over unimodal biometrics in terms of accuracy, enrolment rates, and susceptibility to spoofing attacks [4]. In order to design a multibiometric system, the features of different biometric modalities are integrated at different modules of a general multibiometric system.
In a multimodal biometric system the information can be combined at any one of four levels, namely the sensor level, feature extraction level, matching score level and decision level [5]. However, it is beneficial to fuse the information at the level where the maximum amount of information can be accessed with ease; due to the presence of a sufficient amount of information at the matching score level, it is best suited for fusion. In this paper we briefly explore the problem of, and better solutions for, choosing sensors and fusion methods, and present a case study describing their impact on biometric systems.

2. NORMALIZATION
Normalization is a procedure intended to coordinate the fields and mutual exclusiveness of the information in a database in which relations among entries are explicitly defined as accessible dimensions, so as to minimize redundancy and dependency. The goal is to separate the data so that additions, deletions, and changes of a field can be made in just one table and then propagated through the rest of the database via the fixed relationships. The objective of data normalization is thus to reduce and evenly eliminate data redundancy. It is useful for minimizing discontinuities when revising the database structure, for making the data model more informative to users, and for avoiding bias towards any particular form of querying. Here, we use two normalization methods to bring the matching scores obtained from the fingerprint and the face into a common domain [6].

3. QUANTILE NORMALIZATION
Quantile normalization is a technique for making distributions identical in statistical properties [7]. To quantile-normalize a test distribution to a reference distribution of the same length, sort the test distribution and sort the reference distribution; the highest entry in the test distribution then takes the value of the highest entry in the reference distribution, the next highest takes the next highest value in the reference distribution, and so on, until the test distribution is a permutation of the reference distribution. To quantile-normalize two or more distributions to each other, without a reference distribution, sort as before and then set each rank to the average over the distributions: the highest value in all cases becomes the mean of the highest values, the second highest value becomes the mean of the second highest values, and so on. Quantile normalization is frequently used in microarray data analysis. Extending this procedure to N dimensions gives a technique for determining a common statistical distribution from multiple biometric modalities in the following steps:
1. A two-dimensional M×N matrix X of matching scores is formed from the N databases of length M each, obtained from the different identifiers.
2. Configure p = (1/N, ..., 1/N).
3. Sort each column of the M×N matrix X of matching scores to obtain X_sort.
4. Project each row of X_sort onto p to get X'_sort.
5. Finally, X'_sort is rearranged into the same order as the original X to obtain X_norm.
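A minimal NumPy sketch of these steps (the rank-wise mean implements the projection of step 4; the function name is illustrative):

    import numpy as np

    def quantile_normalize(X):
        # X: (M, N) matrix of matching scores, one column per identifier.
        X = np.asarray(X, dtype=float)
        order = np.argsort(X, axis=0)                    # step 3: sort each column
        X_sort = np.take_along_axis(X, order, axis=0)
        rank_means = X_sort.mean(axis=1, keepdims=True)  # step 4: rank-wise projection
        X_norm = np.empty_like(X)                        # step 5: restore original order
        np.put_along_axis(X_norm, order,
                          np.broadcast_to(rank_means, X.shape), axis=0)
        return X_norm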

4. DELTA NORMALIZATION
Delta normalization is a novel approach to bringing data into a common domain; it spreads the whole statistical distribution over a fixed range between 0 and 1, i.e. the minimum values approach 0 and the maximum values approach the upper end of the range [8]. This method is both effective and robust in nature, as it does not estimate the statistical distribution and it reduces the influence of outliers. If δ is the original matching score, then the normalized score δ' is given by

$\delta' = \frac{\delta}{2\sqrt{\delta^2 + \alpha}}$

Here, α is a smoothing constant which removes infrequent and uncorrelated data from the statistical distribution. Usually we take the value of α approximately equal to 100 or more, as it gives better accuracy for higher values of α.
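A one-line NumPy sketch of this mapping (the form shown reproduces the normalized values of Tables 5 and 6 for α = 100):

    import numpy as np

    def delta_normalize(scores, alpha=100.0):
        # alpha is the smoothing constant that damps outliers.
        s = np.asarray(scores, dtype=float)
        return s / (2.0 * np.sqrt(s**2 + alpha))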

5. FUSION
Fusion is the method for combining information from various single-modality systems. The process of integrating the information from a number of evidences to build up a multibiometric system is called fusion; the information can be integrated or fused at any level. In this step, information from different domains is transformed into a common domain [6].

5.1. Sum Rule
The sum rule helps in eliminating the problem of ambiguity during assortment of the database. After normalization, the finger and face data are summed to acquire the fused score. Here, the input pattern is assigned to the class c such that

$c = \arg\max_{j} \sum_{i=1}^{R} P(\omega_j \mid x_i)$

where $P(\omega_j \mid x_i)$ is the posterior probability of class $\omega_j$ given the score $x_i$ of the i-th matcher, and R is the number of matchers.
5.2. Product Rule
The product rule gives more sensitive results than the sum rule, as it rests on the assumption of statistical independence of the feature vectors. The input pattern is assigned to the class c given by

$c = \arg\max_{j} \prod_{i=1}^{R} P(\omega_j \mid x_i)$

5.3. Min Rule

The min rule of fusion operates by considering the minimum posterior probability accumulated over all classifiers. Therefore, the stimulus pattern is assigned to the class c such that [9]

$c = \arg\max_{j} \min_{i} P(\omega_j \mid x_i)$

5.4. Max rule
The max rule of fusion operates by considering the maximum posterior probability accumulated over all classifiers. Therefore, the stimulus pattern is assigned to the class c such that [9]

$c = \arg\max_{j} \max_{i} P(\omega_j \mid x_i)$
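A minimal NumPy sketch of the four rules (P holds hypothetical per-matcher posteriors, one row per matcher and one column per class):

    import numpy as np

    def fuse(P, rule="sum"):
        # P[i, j] = P(w_j | x_i); returns the index of the winning class c.
        combined = {"sum": P.sum(axis=0),
                    "product": P.prod(axis=0),
                    "min": P.min(axis=0),
                    "max": P.max(axis=0)}[rule]
        return int(np.argmax(combined))

    P = np.array([[0.7, 0.2, 0.1],    # e.g. fingerprint matcher
                  [0.5, 0.4, 0.1]])   # e.g. face matcher
    print([fuse(P, r) for r in ("sum", "product", "min", "max")])  # all pick class 0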



6. MATCHING SCORE & DATABASE
To assess the performance of the normalization techniques with the fusion rules, the NIST Biometric Scores Set - Release 1 (BSSR1) biometric database has been utilized. This database contains a large number of matching scores for faces and fingerprints, derived particularly for fusion procedures.

7. FINGERPRINT MATCHING SCORE
Matching scores for the fingerprints of 10 users have been considered for the experimental study.

Table 1. Matching scores of fingerprint of 10 users

Users  A   B   C   D   E    F   G   H   I    J
1      29  4   6   4   4    7   5   6   6    9
2      7   26  12  4   11   9   4   9   6    5
3      8   5   63  6   7    5   9   6   7    8
4      8   5   10  73  9    8   12  6   16   6
5      11  5   12  6   175  6   9   8   8    10
6      8   4   6   3   4    10  6   5   6    3
7      9   3   6   5   5    5   11  5   4    5
8      8   4   10  5   9    10  8   38  8    5
9      6   6   5   7   11   4   11  6   142  6
10     3   5   8   4   14   6   6   10  6    163

8. FACE MATCHING SCORE
Matching scores for the faces of 10 users have been considered for the experimental study.

Table 2. Matching scores of face of 10 users

Users  A    B    C    D    E    F    G    H    I    J
1      .57  .53  .52  .55  .54  .54  .55  .55  .58  .52
2      .56  .78  .51  .51  .51  .52  .51  .54  .56  .52
3      .45  .52  .81  .49  .51  .54  .53  .50  .54  .58
4      .51  .53  .49  .82  .47  .51  .53  .51  .51  .52
5      .50  .55  .54  .50  .59  .54  .54  .52  .52  .51
6      .45  .49  .52  .52  .49  .67  .52  .47  .51  .52
7      .53  .57  .52  .53  .49  .50  .67  .52  .55  .52
8      .54  .54  .48  .53  .57  .49  .52  .77  .49  .51
9      .52  .53  .52  .53  .54  .50  .52  .50  .69  .52
10     .50  .52  .50  .55  .57  .52  .52  .60  .54  .58


9. NORMALIZED MATCHING SCORE
The matching scores considered above were passed through quantile and delta normalization, and the following tables were evaluated.
Table 3. Normalized matching scores of fingerprint of 10 users through Quantile normalization

Users  A      B       C       D       E       F       G       H       I       J
1      0.937  -0.467  -0.354  -0.467  -0.467  -0.298  -0.411  -0.354  -0.354  -0.411
2      0.263  -0.354  -0.017  -0.467  -0.074  -0.186  -0.467  -0.186  -0.354  -0.13
3      -0.24  -0.411  2.8469  -0.354  -0.298  -0.411  -0.186  -0.354  -0.298  -0.354
4      -0.24  -0.411  -0.13   3.4085  -0.186  -0.242  -0.017  -0.354  0.2072  -0.13
5      -0.07  -0.411  -0.017  -0.354  9.1371  -0.354  -0.186  -0.242  -0.242  -0.074
6      -0.24  -0.467  -0.354  -0.523  -0.467  -0.13   -0.354  -0.411  -0.354  -0.354
7      -0.19  -0.523  -0.354  -0.411  -0.411  -0.411  -0.074  -0.411  -0.467  -0.411
8      -0.24  -0.467  -0.13   -0.411  -0.186  -0.13   -0.242  1.4428  -0.242  0.3757
9      -0.35  -0.354  -0.411  -0.298  -0.074  -0.467  -0.074  -0.354  7.2837  -0.523
10     -0.24  -0.411  -0.242  -0.467  0.0949  -0.354  -0.354  -0.13   -0.354  1.2182
Table 4. Normalized matching scores of face of 10 users through Quantile normalization
Users  A      B       C       D      E       F       G      H      I      J
1      1.202  -0.269  -0.64   0.426  -0.142  -0.004  0.235  0.547  1.438  -0.685
2      0.843  9.3888  -1.007  -1.31  -0.987  -0.637  -1.23  0.136  0.903  -0.885
3      -3.32  -0.812  10.52   -1.96  -0.994  0.104   -0.29  -1.45  0.183  1.6488
4      -1.31  -0.472  -1.967  11.12  -2.655  -1.247  -0.46  -1.29  -1.08  -0.615
5      -1.37  0.2182  -0.036  -1.37  1.7858  0.04    0.173  -0.67  -0.88  -1.25
6      -3.47  -2.045  -0.867  -0.92  -2.111  5.12    -0.63  -2.61  -1.2   -0.931
7      -0.47  1.1465  -0.735  -0.31  -1.983  -1.503  4.936  -0.77  0.284  -0.893
8      0.158  -0.162  -2.274  -0.38  1.0883  -1.752  -0.78  9.14   -1.77  -0.997
9      -0.6   -0.306  -0.93   -0.51  0.0726  -1.511  -0.95  -1.71  6.058  -0.85
10     -1.45  -0.737  -1.42   0.577  1.3527  -0.628  -0.81  2.517  0.142  1.7597

Table 5. Normalized matching scores of fingerprint of 10 users through delta normalization
Users A B C D E F G H I J
1 0.473 0.186 0.257 0.186 0.186 0.287 0.224 0.257 0.257 0.224
2 0.431 0.257 0.384 0.186 0.37 0.334 0.186 0.334 0.257 0.354
3 0.312 0.224 0.494 0.257 0.287 0.224 0.334 0.257 0.287 0.257
4 0.312 0.224 0.354 0.495 0.334 0.312 0.384 0.257 0.424 0.354
5 0.37 0.224 0.384 0.257 0.499 0.257 0.334 0.312 0.312 0.37
6 0.312 0.186 0.257 0.144 0.186 0.354 0.257 0.224 0.257 0.257
7 0.334 0.144 0.257 0.224 0.224 0.224 0.37 0.224 0.186 0.224
8 0.312 0.186 0.354 0.224 0.334 0.354 0.312 0.484 0.312 0.442
9 0.257 0.257 0.224 0.287 0.37 0.186 0.37 0.257 0.499 0.144
10 0.312 0.224 0.312 0.186 0.407 0.257 0.257 0.354 0.257 0.48


Table 6. Normalized matching scores of face of 10 users through delta normalization

Users  A       B       C       D       E       F       G       H       I       J
1      0.0287  0.0269  0.0264  0.0277  0.027   0.0272  0.0275  0.0279  0.029   0.0263
2      0.0283  0.0391  0.0259  0.0255  0.0259  0.0264  0.0256  0.0274  0.0284  0.0261
3      0.023   0.0262  0.0406  0.0247  0.0259  0.0273  0.0268  0.0254  0.0274  0.0293
4      0.0255  0.0266  0.0247  0.0413  0.0238  0.0256  0.0266  0.0256  0.0258  0.0264
5      0.0255  0.0275  0.0272  0.0255  0.0295  0.0273  0.0274  0.0264  0.0261  0.0256
6      0.0228  0.0246  0.0261  0.026   0.0245  0.0337  0.0264  0.0239  0.0257  0.026
7      0.0266  0.0287  0.0263  0.0268  0.0247  0.0253  0.0335  0.0262  0.0276  0.0261
8      0.0274  0.027   0.0243  0.0267  0.0286  0.025   0.0262  0.0388  0.0249  0.0259
9      0.0264  0.0268  0.026   0.0265  0.0273  0.0253  0.026   0.025   0.0349  0.0261
10     0.0254  0.0263  0.0254  0.0279  0.0289  0.0264  0.0262  0.0304  0.0274  0.0294

10. FUSED SCORE

The normalized tables are fused together to get the fused scores, which are evaluated as follows.
Table 7. Fused scores of 10 users using sum rule fusion through Quantile normalization

Users  A       B       C       D       E       F       G       H       I       J
1      2.139   -0.735  -0.995  -0.041  -0.609  -0.302  -0.176  0.193   1.084   -1.096
2      1.107   9.034   -1.024  -1.777  -1.061  -0.823  -1.698  -0.050  0.548   -1.014
3      -3.566  -1.222  13.367  -2.310  -1.292  -0.306  -0.480  -1.804  -0.115  1.294
4      -1.552  -0.883  -2.097  14.531  -2.841  -1.489  -0.481  -1.641  -0.873  -0.745
5      -1.439  -0.192  -0.054  -1.726  10.923  -0.315  -0.013  -0.910  -1.126  -1.324
6      -3.714  -2.512  -1.221  -1.440  -2.578  4.990   -0.984  -3.021  -1.550  -1.286
7      -0.652  0.624   -1.089  -0.720  -2.394  -1.914  4.862   -1.180  -0.182  -1.303
8      -0.084  -0.628  -2.404  -0.793  0.902   -1.882  -1.024  10.583  -2.016  -0.621
9      -0.957  -0.660  -1.341  -0.813  -0.001  -1.978  -1.019  -2.065  13.342  -1.373
10     -1.688  -1.148  -1.662  0.110   1.448   -0.982  -1.161  2.387   -0.212  2.978

Table 8. Fused scores of 10 users using product rule fusion through Quantile Normalization

Users  A       B       C       D       E       F       G       H       I       J
1      1.126   0.125   0.227   -0.199  0.066   0.001   -0.096  -0.194  -0.510  0.281
2      0.222   -3.327  0.018   0.612   0.073   0.118   0.575   -0.025  -0.320  0.115
3      0.805   0.333   29.949  0.693   0.296   -0.043  0.055   0.514   -0.055  -0.584
4      0.317   0.194   0.255   37.912  0.494   0.302   0.008   0.456   -0.224  0.080
5      0.100   -0.090  0.001   0.486   16.317  -0.014  -0.032  0.162   0.214   0.092
6      0.840   0.955   0.307   0.479   0.985   -0.664  0.223   1.072   0.424   0.330
7      0.087   -0.599  0.261   0.127   0.814   0.617   -0.363  0.316   -0.133  0.367
8      -0.038  0.076   0.295   0.157   -0.202  0.227   0.189   13.187  0.429   -0.375
9      0.214   0.108   0.382   0.